Media and Graphics APIs
1. HTMLMediaElement (Audio/Video) Control API
| Property | Type | Description | Read-Only |
|---|---|---|---|
| src | string | URL of media resource. Can be set to load new media. | No |
| currentTime | number | Current playback position in seconds. Set to seek. | No |
| duration | number | Total duration in seconds. NaN if unknown. | Yes |
| paused | boolean | true if media is paused. Use play()/pause() to change. | Yes |
| ended | boolean | true if media has finished playing. | Yes |
| volume | number | Volume level 0.0 to 1.0. Throws error if out of range. | No |
| muted | boolean | Whether audio is muted. Doesn't affect volume property. | No |
| playbackRate | number | Playback speed. 1.0 is normal, 2.0 is 2x, 0.5 is half speed. | No |
| readyState | number | 0=HAVE_NOTHING, 1=HAVE_METADATA, 2=HAVE_CURRENT_DATA, 3=HAVE_FUTURE_DATA, 4=HAVE_ENOUGH_DATA | Yes |
| networkState | number | 0=NETWORK_EMPTY, 1=NETWORK_IDLE, 2=NETWORK_LOADING, 3=NETWORK_NO_SOURCE | Yes |
| buffered | TimeRanges | Time ranges of buffered media data. | Yes |
| seekable | TimeRanges | Time ranges the user can seek to. | Yes |
| loop | boolean | Whether to restart playback when ended. | No |
| autoplay | boolean | Whether to start playing automatically. Restricted by browser policies. | No |
| Method | Returns | Description |
|---|---|---|
| play() | Promise | Starts playback. Returns a Promise that resolves when playback starts. May reject if autoplay is blocked. |
| pause() | void | Pauses playback. Sets paused to true. |
| load() | void | Resets media element and reloads source. Aborts ongoing loading. |
| canPlayType(type) | string | Returns "probably", "maybe", or "" for MIME type support detection. |
| Event | When Fired | Use Case |
|---|---|---|
| loadedmetadata | Metadata loaded (duration, dimensions available) | Get video dimensions, duration |
| loadeddata | First frame loaded | Display thumbnail, enable controls |
| canplay | Enough data to play (but may still buffer) | Enable play button |
| canplaythrough | Can play to end without buffering | Start autoplay |
| play | Playback started/resumed | Update UI state |
| pause | Playback paused | Update UI state |
| ended | Playback reached end | Show replay button, load next |
| timeupdate | currentTime changed (fires ~4 times/second) | Update progress bar, time display |
| seeking | Seek operation started | Show loading indicator |
| seeked | Seek operation completed | Hide loading indicator |
| volumechange | volume or muted changed | Update volume UI |
| error | Error occurred loading or playing | Display error message |
| waiting | Playback stopped due to buffering | Show buffering spinner |
| playing | Playback resumed after buffering | Hide buffering spinner |
Example: Basic video controls
const video = document.querySelector("video");
// Play video with error handling
async function playVideo() {
try {
await video.play();
console.log("Playing");
} catch (error) {
console.error("Autoplay blocked:", error);
}
}
// Pause video
function pauseVideo() {
video.pause();
}
// Seek to specific time
function seekTo(seconds) {
video.currentTime = seconds;
}
// Skip forward/backward
function skip(seconds) {
video.currentTime += seconds;
}
// Set volume (0.0 to 1.0)
function setVolume(level) {
video.volume = Math.max(0, Math.min(1, level));
}
// Toggle mute
function toggleMute() {
video.muted = !video.muted;
}
// Set playback speed
function setSpeed(rate) {
video.playbackRate = rate; // 0.5, 1.0, 1.5, 2.0
}
// Toggle fullscreen
async function toggleFullscreen() {
if (document.fullscreenElement) {
await document.exitFullscreen();
} else {
await video.requestFullscreen();
}
}
Example: Event handling and progress tracking
const video = document.querySelector("video");
const progressBar = document.querySelector(".progress");
const timeDisplay = document.querySelector(".time");
// Get duration when metadata loads
video.addEventListener("loadedmetadata", () => {
console.log(`Duration: ${video.duration} seconds`);
console.log(`Dimensions: ${video.videoWidth}x${video.videoHeight}`);
});
// Update progress bar
video.addEventListener("timeupdate", () => {
const percent = (video.currentTime / video.duration) * 100;
progressBar.style.width = `${percent}%`;
// Format time display
const current = formatTime(video.currentTime);
const total = formatTime(video.duration);
timeDisplay.textContent = `${current} / ${total}`;
});
function formatTime(seconds) {
const mins = Math.floor(seconds / 60);
const secs = Math.floor(seconds % 60);
return `${mins}:${secs.toString().padStart(2, "0")}`;
}
// Handle playback state
video.addEventListener("play", () => {
console.log("Video playing");
});
video.addEventListener("pause", () => {
console.log("Video paused");
});
video.addEventListener("ended", () => {
console.log("Video ended");
});
// Handle buffering
video.addEventListener("waiting", () => {
console.log("Buffering...");
});
video.addEventListener("playing", () => {
console.log("Playback resumed");
});
// Handle errors
video.addEventListener("error", () => {
const error = video.error;
console.error("Media error:", error.code, error.message);
});
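The buffered and seekable properties from the table above return TimeRanges objects, which none of the examples read. A minimal sketch (assuming the same video element) of using buffered to report download progress:
// progress fires as media data is downloaded
video.addEventListener("progress", () => {
  const ranges = video.buffered; // TimeRanges: length, start(i), end(i)
  if (ranges.length > 0 && video.duration) {
    // End of the last buffered range as a percentage of the total duration
    const bufferedEnd = ranges.end(ranges.length - 1);
    const bufferedPercent = (bufferedEnd / video.duration) * 100;
    console.log(`Buffered: ${bufferedPercent.toFixed(1)}%`);
  }
});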
Note: play() returns a Promise - use async/await or .then()/.catch() for error handling. Autoplay is restricted by browsers - user interaction is often required. Use canPlayType() to check format support before loading. timeupdate fires frequently - throttle expensive operations.
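Two quick sketches of the advice in the note, assuming the same video element (the movie.mp4 URL is hypothetical): checking format support with canPlayType() before setting src, and throttling work done in timeupdate handlers.
// canPlayType returns "probably", "maybe", or ""
const support = video.canPlayType('video/mp4; codecs="avc1.42E01E"');
if (support !== "") {
  video.src = "movie.mp4"; // hypothetical URL
}
// Throttle timeupdate work (it fires ~4 times/second)
let lastUpdate = 0;
video.addEventListener("timeupdate", () => {
  const now = performance.now();
  if (now - lastUpdate < 500) return; // at most twice per second
  lastUpdate = now;
  // ...expensive DOM updates here
});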
2. MediaStream and getUserMedia for Camera/Microphone
| Method | Syntax | Description | Browser Support |
|---|---|---|---|
| getUserMedia | navigator.mediaDevices.getUserMedia(constraints) | Requests camera/microphone access. Returns Promise<MediaStream>. Requires HTTPS or localhost. | All Browsers |
| getDisplayMedia | navigator.mediaDevices.getDisplayMedia(constraints) | Captures screen/window/tab. Returns Promise<MediaStream>. Requires user gesture. | Modern Browsers |
| enumerateDevices | navigator.mediaDevices.enumerateDevices() | Lists available media input/output devices. Returns Promise<MediaDeviceInfo[]>. | All Browsers |
| Constraint | Type | Description | Values |
|---|---|---|---|
| video | boolean \| object | Request video track. true for default, object for constraints. | true, false, { width, height, facingMode } |
| audio | boolean \| object | Request audio track. true for default, object for constraints. | true, false, { sampleRate, echoCancellation } |
| width | number \| object | Video width. Can specify ideal, min, max, exact. | 1280, { ideal: 1920, min: 1280 } |
| height | number \| object | Video height. Can specify ideal, min, max, exact. | 720, { ideal: 1080, min: 720 } |
| facingMode | string | Camera direction on mobile devices. | "user" (front), "environment" (back) |
| frameRate | number \| object | Frames per second. Can specify ideal, min, max, exact. | 30, { ideal: 60, min: 30 } |
| echoCancellation | boolean | Audio echo cancellation. Useful for video calls. | true, false |
| noiseSuppression | boolean | Audio noise reduction. | true, false |
| MediaStream Method | Description | Returns |
|---|---|---|
| getTracks() | Gets all tracks (audio and video) in stream. | MediaStreamTrack[] |
| getVideoTracks() | Gets only video tracks. | MediaStreamTrack[] |
| getAudioTracks() | Gets only audio tracks. | MediaStreamTrack[] |
| addTrack(track) | Adds track to stream. | void |
| removeTrack(track) | Removes track from stream. | void |
| MediaStreamTrack Method/Property | Description |
|---|---|
| stop() | Stops track permanently. Releases camera/microphone. Cannot be restarted. |
| enabled | Boolean property. Set to false to mute (temporary). Set to true to unmute. |
| getSettings() | Returns actual settings (width, height, frameRate, etc.). |
| getCapabilities() | Returns supported capabilities (min/max width, height, frameRate, etc.). |
| applyConstraints(constraints) | Updates constraints dynamically. Returns Promise. |
Example: Getting camera and microphone
// Basic usage
async function startCamera() {
try {
const stream = await navigator.mediaDevices.getUserMedia({
"video": true,
"audio": true
});
// Display in video element
const video = document.querySelector("video");
video.srcObject = stream;
await video.play();
console.log("Camera started");
} catch (error) {
console.error("Camera access denied:", error);
}
}
// With constraints
async function startHDCamera() {
const stream = await navigator.mediaDevices.getUserMedia({
"video": {
"width": { "ideal": 1920, "min": 1280 },
"height": { "ideal": 1080, "min": 720 },
"frameRate": { "ideal": 60, "min": 30 },
"facingMode": "user"
},
"audio": {
"echoCancellation": true,
"noiseSuppression": true,
"sampleRate": 48000
}
});
const video = document.querySelector("video");
video.srcObject = stream;
return stream;
}
// Screen capture
async function captureScreen() {
try {
const stream = await navigator.mediaDevices.getDisplayMedia({
"video": {
"width": { "ideal": 1920 },
"height": { "ideal": 1080 }
},
"audio": false
});
const video = document.querySelector("video");
video.srcObject = stream;
return stream;
} catch (error) {
console.error("Screen capture cancelled:", error);
}
}
Example: Device enumeration and track control
// List available devices
async function listDevices() {
const devices = await navigator.mediaDevices.enumerateDevices();
const videoInputs = devices.filter((d) => d.kind === "videoinput");
const audioInputs = devices.filter((d) => d.kind === "audioinput");
const audioOutputs = devices.filter((d) => d.kind === "audiooutput");
console.log("Cameras:", videoInputs);
console.log("Microphones:", audioInputs);
console.log("Speakers:", audioOutputs);
return { videoInputs, audioInputs, audioOutputs };
}
// Select specific device
async function useSpecificCamera(deviceId) {
const stream = await navigator.mediaDevices.getUserMedia({
"video": {
"deviceId": { "exact": deviceId }
}
});
return stream;
}
// Stop all tracks
function stopStream(stream) {
stream.getTracks().forEach((track) => {
track.stop();
console.log(`Stopped ${track.kind} track`);
});
}
// Mute/unmute video
function toggleVideo(stream, enabled) {
stream.getVideoTracks().forEach((track) => {
track.enabled = enabled;
});
}
// Mute/unmute audio
function toggleAudio(stream, enabled) {
stream.getAudioTracks().forEach((track) => {
track.enabled = enabled;
});
}
// Get track settings
function getTrackInfo(stream) {
const videoTrack = stream.getVideoTracks()[0];
if (videoTrack) {
const settings = videoTrack.getSettings();
console.log("Video settings:", settings);
// { width: 1920, height: 1080, frameRate: 30, ... }
const capabilities = videoTrack.getCapabilities();
console.log("Video capabilities:", capabilities);
// { width: { min: 640, max: 1920 }, ... }
}
}
// Change constraints dynamically
async function changeResolution(stream, width, height) {
const videoTrack = stream.getVideoTracks()[0];
await videoTrack.applyConstraints({
"width": { "ideal": width },
"height": { "ideal": height }
});
}
Note: getUserMedia requires HTTPS (or localhost for development). The user must grant permission - handle denial gracefully. Always stop() tracks when done to release the camera/microphone. Setting enabled = false mutes temporarily; stop() releases permanently.
Warning: Screen capture (getDisplayMedia) requires a user gesture - it can't be called on page load. Don't request more permissions than needed - ask for audio OR video separately if possible. Some constraints are mandatory (exact) and will fail if not supported - use ideal for preferences.
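One way to handle denial and other failures mentioned above is to branch on the DOMException name (a sketch; the log messages are illustrative):
async function startCameraSafely() {
  try {
    return await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  } catch (error) {
    if (error.name === "NotAllowedError") {
      console.warn("Permission denied by the user or browser policy");
    } else if (error.name === "NotFoundError") {
      console.warn("No camera or microphone found");
    } else if (error.name === "OverconstrainedError") {
      console.warn("Constraints cannot be satisfied:", error.constraint);
    } else {
      console.error("getUserMedia failed:", error);
    }
    return null;
  }
}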
3. MediaRecorder API for Recording Media
| Constructor | Description | Browser Support |
|---|---|---|
| MediaRecorder | new MediaRecorder(stream, options) - Creates recorder for a MediaStream. Options include mimeType, audioBitsPerSecond, videoBitsPerSecond. | All Browsers |
| Method | Description | When to Use |
|---|---|---|
| start(timeslice) | Starts recording. Optional timeslice in ms for data chunks. | Begin recording |
| stop() | Stops recording. Fires dataavailable and stop events. | End recording |
| pause() | Pauses recording without ending. Can resume(). | Temporary pause |
| resume() | Resumes paused recording. | Continue after pause |
| requestData() | Forces dataavailable event with current buffer. | Get data during recording |
| Property | Type | Description |
|---|---|---|
| state | string | "inactive", "recording", or "paused". Read-only. |
| mimeType | string | MIME type being used (e.g., "video/webm"). Read-only. |
| videoBitsPerSecond | number | Video encoding bitrate. Read-only. |
| audioBitsPerSecond | number | Audio encoding bitrate. Read-only. |
| Event | When Fired | Event Data |
|---|---|---|
| dataavailable | Data chunk ready (at timeslice or stop) | event.data - Blob with recorded data |
| start | Recording started | - |
| stop | Recording stopped | - |
| pause | Recording paused | - |
| resume | Recording resumed | - |
| error | Recording error occurred | event.error - DOMException |
Example: Recording camera/microphone
let mediaRecorder;
let recordedChunks = [];
async function startRecording() {
// Get camera and microphone
const stream = await navigator.mediaDevices.getUserMedia({
"video": true,
"audio": true
});
// Create recorder
mediaRecorder = new MediaRecorder(stream, {
"mimeType": "video/webm;codecs=vp9",
"videoBitsPerSecond": 2500000
});
// Collect data chunks
mediaRecorder.ondataavailable = (event) => {
if (event.data.size > 0) {
recordedChunks.push(event.data);
}
};
// Handle recording complete
mediaRecorder.onstop = () => {
const blob = new Blob(recordedChunks, {
"type": "video/webm"
});
// Download file
const url = URL.createObjectURL(blob);
const a = document.createElement("a");
a.href = url;
a.download = "recording.webm";
a.click();
// Cleanup
URL.revokeObjectURL(url);
recordedChunks = [];
};
// Start recording
mediaRecorder.start();
console.log("Recording started");
}
function stopRecording() {
if (mediaRecorder && mediaRecorder.state !== "inactive") {
mediaRecorder.stop();
// Stop camera/microphone
mediaRecorder.stream.getTracks().forEach((track) => track.stop());
console.log("Recording stopped");
}
}
function pauseRecording() {
if (mediaRecorder && mediaRecorder.state === "recording") {
mediaRecorder.pause();
}
}
function resumeRecording() {
if (mediaRecorder && mediaRecorder.state === "paused") {
mediaRecorder.resume();
}
}
Example: Recording screen with chunked data
async function recordScreen() {
const stream = await navigator.mediaDevices.getDisplayMedia({
"video": { "width": 1920, "height": 1080 },
"audio": true
});
// Check supported MIME types
const options = [
"video/webm;codecs=vp9",
"video/webm;codecs=vp8",
"video/webm"
];
const mimeType = options.find((type) => MediaRecorder.isTypeSupported(type));
console.log("Using MIME type:", mimeType);
const recorder = new MediaRecorder(stream, {
"mimeType": mimeType,
"videoBitsPerSecond": 5000000
});
const chunks = [];
// Capture data every second
recorder.ondataavailable = (event) => {
chunks.push(event.data);
console.log(`Chunk ${chunks.length}: ${event.data.size} bytes`);
};
recorder.onstop = async () => {
const blob = new Blob(chunks, { "type": mimeType });
// Preview recorded video
const videoUrl = URL.createObjectURL(blob);
const video = document.createElement("video");
video.src = videoUrl;
video.controls = true;
document.body.appendChild(video);
// Or upload to server
const formData = new FormData();
formData.append("video", blob, "screen-recording.webm");
await fetch("/upload", {
"method": "POST",
"body": formData
});
};
// Start with 1 second chunks
recorder.start(1000);
return recorder;
}
// Check format support
function checkMediaRecorderSupport() {
const types = [
"video/webm",
"video/webm;codecs=vp9",
"video/webm;codecs=vp8",
"video/mp4",
"audio/webm",
"audio/webm;codecs=opus"
];
types.forEach((type) => {
const supported = MediaRecorder.isTypeSupported(type);
console.log(`${type}: ${supported}`);
});
}
Note: MediaRecorder support varies by browser - use MediaRecorder.isTypeSupported(mimeType) to check. Common formats: video/webm (Chrome/Firefox), video/mp4 (Safari). Use the timeslice parameter in start() to get data in chunks for streaming or progress tracking.
Warning: Recording consumes memory - collect chunks in dataavailable events. Large recordings can cause memory issues - consider chunked uploads. Always stop tracks when done to release the camera/microphone. Some browsers limit recording duration or file size.
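As suggested in the warning, one way to avoid holding a large recording in memory is to upload each chunk as it arrives instead of accumulating them. A sketch (the /upload-chunk endpoint is hypothetical and would need to reassemble chunks server-side):
function startStreamingUpload(stream) {
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  let chunkIndex = 0;
  recorder.ondataavailable = async (event) => {
    if (event.data.size === 0) return;
    const formData = new FormData();
    formData.append("chunk", event.data, `chunk-${chunkIndex++}.webm`);
    await fetch("/upload-chunk", { method: "POST", body: formData }); // hypothetical endpoint
  };
  recorder.start(5000); // emit a chunk every 5 seconds
  return recorder;
}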
4. Web Audio API for Audio Processing
| Node Type | Purpose | Common Use |
|---|---|---|
| AudioContext | Main audio processing graph. All nodes connect through context. | Initialize audio system |
| AudioBufferSourceNode | Plays audio from buffer. One-shot playback. | Sound effects, samples |
| MediaElementAudioSourceNode | Audio source from an <audio> or <video> element. | Process music/video audio |
| MediaStreamAudioSourceNode | Audio source from a MediaStream (microphone). | Process live audio input |
| OscillatorNode | Generates waveform (sine, square, sawtooth, triangle). | Synthesizers, beeps, tones |
| GainNode | Controls volume (gain). Multiply by 0.0 to 1.0+. | Volume control, fading |
| BiquadFilterNode | Frequency filter (lowpass, highpass, bandpass, etc.). | EQ, bass boost, noise filter |
| ConvolverNode | Reverb and spatial effects using impulse response. | Room reverb, 3D audio |
| DelayNode | Delays audio signal by time in seconds. | Echo, chorus effects |
| AnalyserNode | Provides frequency/time domain data. Doesn't modify audio. | Visualizations, meters |
| StereoPannerNode | Pan audio left/right (-1 to 1). | Stereo positioning |
| ChannelMergerNode | Combines multiple mono into multichannel. | Mixing channels |
| ChannelSplitterNode | Splits multichannel into separate mono. | Process channels separately |
Example: Basic audio playback and synthesis
// Create audio context
const audioContext = new (window.AudioContext || window.webkitAudioContext)();
// Play audio file
async function playAudioFile(url) {
// Fetch audio data
const response = await fetch(url);
const arrayBuffer = await response.arrayBuffer();
// Decode audio data
const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
// Create source
const source = audioContext.createBufferSource();
source.buffer = audioBuffer;
// Connect to destination (speakers)
source.connect(audioContext.destination);
// Play
source.start();
return source;
}
// Generate tone
function playTone(frequency, duration) {
const oscillator = audioContext.createOscillator();
const gainNode = audioContext.createGain();
oscillator.type = "sine"; // sine, square, sawtooth, triangle
oscillator.frequency.value = frequency; // Hz
// Fade in and out
gainNode.gain.setValueAtTime(0, audioContext.currentTime);
gainNode.gain.linearRampToValueAtTime(0.5, audioContext.currentTime + 0.01);
gainNode.gain.linearRampToValueAtTime(0, audioContext.currentTime + duration);
// Connect: oscillator -> gain -> destination
oscillator.connect(gainNode);
gainNode.connect(audioContext.destination);
// Play
oscillator.start(audioContext.currentTime);
oscillator.stop(audioContext.currentTime + duration);
}
// Play 440Hz (A note) for 1 second
playTone(440, 1);
// Play from <audio> element with processing
function processAudioElement(audioElement) {
const source = audioContext.createMediaElementSource(audioElement);
const gainNode = audioContext.createGain();
// Connect: source -> gain -> destination
source.connect(gainNode);
gainNode.connect(audioContext.destination);
// Control volume
gainNode.gain.value = 0.7;
return { source, gainNode };
}
Example: Audio effects and filtering
// Create audio graph with effects
async function createAudioChain(url) {
const response = await fetch(url);
const arrayBuffer = await response.arrayBuffer();
const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
// Create nodes
const source = audioContext.createBufferSource();
const filter = audioContext.createBiquadFilter();
const gainNode = audioContext.createGain();
const panner = audioContext.createStereoPanner();
source.buffer = audioBuffer;
// Configure filter (lowpass at 1000Hz)
filter.type = "lowpass"; // lowpass, highpass, bandpass, notch, etc.
filter.frequency.value = 1000;
filter.Q.value = 1;
// Configure gain (volume)
gainNode.gain.value = 0.8;
// Configure panning (center)
panner.pan.value = 0; // -1 (left) to 1 (right)
// Connect: source -> filter -> gain -> panner -> destination
source.connect(filter);
filter.connect(gainNode);
gainNode.connect(panner);
panner.connect(audioContext.destination);
return { source, filter, gainNode, panner };
}
// Dynamic filter sweep
function filterSweep(filterNode, startFreq, endFreq, duration) {
const now = audioContext.currentTime;
filterNode.frequency.setValueAtTime(startFreq, now);
filterNode.frequency.exponentialRampToValueAtTime(endFreq, now + duration);
}
// Fade in/out
function fadeIn(gainNode, duration) {
const now = audioContext.currentTime;
gainNode.gain.setValueAtTime(0, now);
gainNode.gain.linearRampToValueAtTime(1, now + duration);
}
function fadeOut(gainNode, duration) {
const now = audioContext.currentTime;
gainNode.gain.setValueAtTime(gainNode.gain.value, now);
gainNode.gain.linearRampToValueAtTime(0, now + duration);
}
// Create delay/echo effect
function createEcho(delayTime, feedback) {
const delay = audioContext.createDelay();
const feedbackGain = audioContext.createGain();
const wetGain = audioContext.createGain();
delay.delayTime.value = delayTime; // seconds
feedbackGain.gain.value = feedback; // 0.0 to 1.0
wetGain.gain.value = 0.5;
// Create feedback loop
delay.connect(feedbackGain);
feedbackGain.connect(delay);
delay.connect(wetGain);
return { input: delay, output: wetGain };
}
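The createEcho() helper above returns an input and output node but never wires them to anything. A sketch of how it might be connected, mixing the dry signal with the delayed (wet) signal; it assumes a source node such as one created with audioContext.createBufferSource():
const echo = createEcho(0.3, 0.4);
source.connect(audioContext.destination); // dry (unprocessed) path
source.connect(echo.input);               // wet path through the delay
echo.output.connect(audioContext.destination);
source.start(); // assumes a one-shot source like AudioBufferSourceNode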
Example: Audio visualization with AnalyserNode
// Create analyzer for visualization
function createVisualizer(sourceNode) {
const analyser = audioContext.createAnalyser();
analyser.fftSize = 2048; // 256, 512, 1024, 2048, 4096, etc.
sourceNode.connect(analyser);
analyser.connect(audioContext.destination);
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
return { analyser, dataArray };
}
// Draw waveform
function drawWaveform(analyser, dataArray, canvas) {
const ctx = canvas.getContext("2d");
const width = canvas.width;
const height = canvas.height;
function draw() {
requestAnimationFrame(draw);
// Get time domain data
analyser.getByteTimeDomainData(dataArray);
// Clear canvas
ctx.fillStyle = "rgb(0, 0, 0)";
ctx.fillRect(0, 0, width, height);
// Draw waveform
ctx.lineWidth = 2;
ctx.strokeStyle = "rgb(0, 255, 0)";
ctx.beginPath();
const sliceWidth = width / dataArray.length;
let x = 0;
for (let i = 0; i < dataArray.length; i++) {
const v = dataArray[i] / 128.0;
const y = v * height / 2;
if (i === 0) {
ctx.moveTo(x, y);
} else {
ctx.lineTo(x, y);
}
x += sliceWidth;
}
ctx.lineTo(width, height / 2);
ctx.stroke();
}
draw();
}
// Draw frequency bars
function drawFrequencyBars(analyser, dataArray, canvas) {
const ctx = canvas.getContext("2d");
const width = canvas.width;
const height = canvas.height;
function draw() {
requestAnimationFrame(draw);
// Get frequency data
analyser.getByteFrequencyData(dataArray);
ctx.fillStyle = "rgb(0, 0, 0)";
ctx.fillRect(0, 0, width, height);
const barWidth = (width / dataArray.length) * 2.5;
let x = 0;
for (let i = 0; i < dataArray.length; i++) {
const barHeight = (dataArray[i] / 255) * height;
ctx.fillStyle = `rgb(${dataArray[i] + 100}, 50, 50)`;
ctx.fillRect(x, height - barHeight, barWidth, barHeight);
x += barWidth + 1;
}
}
draw();
}
Note: AudioContext may be suspended on page load - resume it with a user gesture: audioContext.resume(). All timing uses audioContext.currentTime (high precision). Source nodes (oscillators, buffer sources) are one-time use - create new nodes for each playback. Use AudioParam methods (setValueAtTime, linearRampToValueAtTime) for smooth parameter changes.
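A minimal sketch of the resume-on-gesture pattern from the note, assuming a #start button exists in the page and the audioContext and playTone() defined above:
document.querySelector("#start").addEventListener("click", async () => {
  // Browsers create the context "suspended" until a user gesture unlocks it
  if (audioContext.state === "suspended") {
    await audioContext.resume();
  }
  console.log("AudioContext state:", audioContext.state); // "running"
  playTone(440, 1); // safe to start audio now
});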
5. Canvas API for 2D Graphics and Drawing
| Method Category | Methods | Purpose |
|---|---|---|
| Rectangles | fillRect, strokeRect, clearRect | Draw filled/outlined rectangles, clear area |
| Paths | beginPath, moveTo, lineTo, arc, arcTo, quadraticCurveTo, bezierCurveTo, closePath | Create complex shapes with lines and curves |
| Path Drawing | fill, stroke, clip | Render paths as filled, outlined, or clipping region |
| Text | fillText, strokeText, measureText | Draw and measure text |
| Images | drawImage | Draw images, video frames, or other canvases |
| Transformations | translate, rotate, scale, transform, setTransform, resetTransform | Move, rotate, scale drawing operations |
| State | save, restore | Save/restore drawing state (styles, transforms) |
| Pixel Data | getImageData, putImageData, createImageData | Direct pixel manipulation |
| Style Property | Type | Description | Example |
|---|---|---|---|
| fillStyle | string \| CanvasGradient \| CanvasPattern | Fill color/gradient/pattern for shapes and text | "#ff0000", "rgb(255,0,0)" |
| strokeStyle | string \| CanvasGradient \| CanvasPattern | Stroke color/gradient/pattern for outlines | "blue", "rgba(0,0,255,0.5)" |
| lineWidth | number | Line thickness in pixels | 2, 5.5 |
| lineCap | string | Line end style: "butt", "round", "square" | "round" |
| lineJoin | string | Line corner style: "miter", "round", "bevel" | "round" |
| font | string | Text font (CSS syntax) | "16px Arial", "bold 20px serif" |
| textAlign | string | "left", "right", "center", "start", "end" | "center" |
| textBaseline | string | "top", "middle", "bottom", "alphabetic", "hanging" | "middle" |
| globalAlpha | number | Global transparency 0.0 to 1.0 | 0.5 |
| globalCompositeOperation | string | Blending mode: "source-over", "multiply", "screen", etc. | "multiply" |
| shadowColor | string | Shadow color | "rgba(0,0,0,0.5)" |
| shadowBlur | number | Shadow blur radius in pixels | 10 |
| shadowOffsetX | number | Horizontal shadow offset | 5 |
| shadowOffsetY | number | Vertical shadow offset | 5 |
Example: Basic shapes and text
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");
// Rectangle
ctx.fillStyle = "#ff0000";
ctx.fillRect(10, 10, 100, 50);
// Outlined rectangle
ctx.strokeStyle = "#00ff00";
ctx.lineWidth = 3;
ctx.strokeRect(120, 10, 100, 50);
// Circle
ctx.fillStyle = "#0000ff";
ctx.beginPath();
ctx.arc(75, 150, 40, 0, Math.PI * 2);
ctx.fill();
// Triangle
ctx.fillStyle = "#ffff00";
ctx.beginPath();
ctx.moveTo(175, 110);
ctx.lineTo(225, 190);
ctx.lineTo(125, 190);
ctx.closePath();
ctx.fill();
// Text
ctx.fillStyle = "#000000";
ctx.font = "20px Arial";
ctx.textAlign = "center";
ctx.textBaseline = "middle";
ctx.fillText("Hello Canvas!", 150, 250);
// Outlined text
ctx.strokeStyle = "#ff00ff";
ctx.lineWidth = 2;
ctx.strokeText("Outlined", 150, 280);
// Clear area
ctx.clearRect(50, 50, 50, 50);
Example: Paths, gradients, and patterns
// Complex path
ctx.beginPath();
ctx.moveTo(50, 50);
ctx.lineTo(150, 50);
ctx.quadraticCurveTo(200, 75, 150, 100);
ctx.bezierCurveTo(150, 120, 100, 140, 50, 100);
ctx.closePath();
ctx.fillStyle = "#ff6600";
ctx.fill();
ctx.strokeStyle = "#000000";
ctx.lineWidth = 2;
ctx.stroke();
// Linear gradient
const linearGrad = ctx.createLinearGradient(0, 0, 200, 0);
linearGrad.addColorStop(0, "red");
linearGrad.addColorStop(0.5, "yellow");
linearGrad.addColorStop(1, "blue");
ctx.fillStyle = linearGrad;
ctx.fillRect(10, 200, 200, 50);
// Radial gradient
const radialGrad = ctx.createRadialGradient(150, 350, 10, 150, 350, 50);
radialGrad.addColorStop(0, "white");
radialGrad.addColorStop(1, "black");
ctx.fillStyle = radialGrad;
ctx.beginPath();
ctx.arc(150, 350, 50, 0, Math.PI * 2);
ctx.fill();
// Pattern from image
const img = new Image();
img.onload = () => {
const pattern = ctx.createPattern(img, "repeat"); // repeat, repeat-x, repeat-y, no-repeat
ctx.fillStyle = pattern;
ctx.fillRect(250, 200, 150, 150);
};
img.src = "pattern.png";
// Shadows
ctx.shadowColor = "rgba(0, 0, 0, 0.5)";
ctx.shadowBlur = 10;
ctx.shadowOffsetX = 5;
ctx.shadowOffsetY = 5;
ctx.fillStyle = "#00ff00";
ctx.fillRect(300, 50, 100, 100);
// Reset shadows
ctx.shadowColor = "transparent";
ctx.shadowBlur = 0;
ctx.shadowOffsetX = 0;
ctx.shadowOffsetY = 0;
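globalAlpha and globalCompositeOperation appear in the style table but not in the examples; a short sketch of both on the same ctx (coordinates are arbitrary), showing how new drawings blend with pixels already on the canvas:
// Semi-transparent overlapping circles
ctx.globalAlpha = 0.5;
ctx.fillStyle = "red";
ctx.beginPath();
ctx.arc(100, 450, 40, 0, Math.PI * 2);
ctx.fill();
ctx.fillStyle = "blue";
ctx.beginPath();
ctx.arc(140, 450, 40, 0, Math.PI * 2);
ctx.fill();
ctx.globalAlpha = 1.0;
// Blend a new rectangle with existing pixels
ctx.globalCompositeOperation = "multiply";
ctx.fillStyle = "yellow";
ctx.fillRect(80, 420, 100, 60);
ctx.globalCompositeOperation = "source-over"; // reset to default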
Example: Transformations and animations
// Save/restore state
ctx.save();
// Translate (move origin)
ctx.translate(100, 100);
ctx.fillStyle = "red";
ctx.fillRect(0, 0, 50, 50); // Drawn at (100, 100)
// Rotate (radians)
ctx.rotate(Math.PI / 4); // 45 degrees
ctx.fillStyle = "blue";
ctx.fillRect(0, 0, 50, 50);
// Restore original state
ctx.restore();
// Scale
ctx.save();
ctx.scale(2, 2); // 2x size
ctx.fillRect(10, 10, 50, 50); // Drawn 2x larger
ctx.restore();
// Animation loop
let angle = 0;
function animate() {
// Clear canvas
ctx.clearRect(0, 0, canvas.width, canvas.height);
// Save state
ctx.save();
// Move to center
ctx.translate(canvas.width / 2, canvas.height / 2);
// Rotate
ctx.rotate(angle);
// Draw rotating rectangle
ctx.fillStyle = "purple";
ctx.fillRect(-25, -25, 50, 50);
// Restore state
ctx.restore();
// Update angle
angle += 0.02;
// Continue animation
requestAnimationFrame(animate);
}
animate();
// Draw image
const image = new Image();
image.onload = () => {
// Draw whole image
ctx.drawImage(image, 0, 0);
// Draw scaled
ctx.drawImage(image, 0, 0, 100, 100);
// Draw cropped and positioned
// drawImage(img, sx, sy, sw, sh, dx, dy, dw, dh)
ctx.drawImage(image, 50, 50, 100, 100, 200, 200, 150, 150);
};
image.src = "photo.jpg";
Example: Pixel manipulation
// Get pixel data
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const pixels = imageData.data; // Uint8ClampedArray [r,g,b,a, r,g,b,a, ...]
// Grayscale filter
for (let i = 0; i < pixels.length; i += 4) {
const r = pixels[i];
const g = pixels[i + 1];
const b = pixels[i + 2];
const gray = 0.299 * r + 0.587 * g + 0.114 * b;
pixels[i] = gray; // Red
pixels[i + 1] = gray; // Green
pixels[i + 2] = gray; // Blue
// pixels[i + 3] is alpha (unchanged)
}
// Put modified pixels back
ctx.putImageData(imageData, 0, 0);
// Invert colors
for (let i = 0; i < pixels.length; i += 4) {
pixels[i] = 255 - pixels[i]; // Red
pixels[i + 1] = 255 - pixels[i + 1]; // Green
pixels[i + 2] = 255 - pixels[i + 2]; // Blue
}
ctx.putImageData(imageData, 0, 0);
// Increase brightness
const brightness = 50;
for (let i = 0; i < pixels.length; i += 4) {
pixels[i] += brightness; // Red
pixels[i + 1] += brightness; // Green
pixels[i + 2] += brightness; // Blue
}
ctx.putImageData(imageData, 0, 0);
Note: Canvas operations are immediate mode - there is no scene graph. Use requestAnimationFrame for smooth animations. save() and restore() manage the state stack (styles, transforms). Canvas is raster-based and scales poorly; use CSS for display size and set canvas.width/height for resolution.
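A common way to apply the sizing advice in the note is to match the drawing buffer to devicePixelRatio while CSS controls the displayed size. A sketch (the helper name is illustrative):
function setupHiDPICanvas(canvas, cssWidth, cssHeight) {
  const dpr = window.devicePixelRatio || 1;
  // CSS controls the displayed size
  canvas.style.width = `${cssWidth}px`;
  canvas.style.height = `${cssHeight}px`;
  // width/height control the drawing buffer resolution
  canvas.width = cssWidth * dpr;
  canvas.height = cssHeight * dpr;
  const ctx = canvas.getContext("2d");
  ctx.scale(dpr, dpr); // draw using CSS pixel coordinates
  return ctx;
}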
6. WebGL API for 3D Graphics Rendering
| WebGL Concept | Description | Purpose |
|---|---|---|
| Context | canvas.getContext("webgl") or "webgl2" | Initialize WebGL rendering context |
| Shader | GLSL programs (vertex and fragment shaders) | GPU-side rendering logic |
| Program | Linked vertex + fragment shader pair | Complete rendering pipeline |
| Buffer | GPU memory for vertex data (positions, colors, UVs) | Store geometry data |
| Attribute | Per-vertex data (position, normal, UV) | Vertex shader inputs |
| Uniform | Global shader variables (MVP matrix, time, color) | Shader parameters |
| Texture | Image data on GPU for mapping onto geometry | Surface details, materials |
Example: Basic WebGL setup and triangle
const canvas = document.querySelector("canvas");
const gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
if (!gl) {
console.error("WebGL not supported");
}
// Vertex shader (GLSL)
const vertexShaderSource = `
attribute vec2 a_position;
void main() {
gl_Position = vec4(a_position, 0.0, 1.0);
}
`;
// Fragment shader (GLSL)
const fragmentShaderSource = `
precision mediump float;
uniform vec4 u_color;
void main() {
gl_FragColor = u_color;
}
`;
// Create shader
function createShader(gl, type, source) {
const shader = gl.createShader(type);
gl.shaderSource(shader, source);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
console.error("Shader compile error:", gl.getShaderInfoLog(shader));
gl.deleteShader(shader);
return null;
}
return shader;
}
// Create program
function createProgram(gl, vertexShader, fragmentShader) {
const program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
console.error("Program link error:", gl.getProgramInfoLog(program));
gl.deleteProgram(program);
return null;
}
return program;
}
// Compile shaders
const vertexShader = createShader(gl, gl.VERTEX_SHADER, vertexShaderSource);
const fragmentShader = createShader(gl, gl.FRAGMENT_SHADER, fragmentShaderSource);
// Create program
const program = createProgram(gl, vertexShader, fragmentShader);
// Get attribute/uniform locations
const positionLocation = gl.getAttribLocation(program, "a_position");
const colorLocation = gl.getUniformLocation(program, "u_color");
// Triangle vertices
const positions = new Float32Array([
0.0, 0.5, // Top
-0.5, -0.5, // Bottom left
0.5, -0.5 // Bottom right
]);
// Create buffer
const positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
// Clear canvas
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
// Use program
gl.useProgram(program);
// Enable attribute
gl.enableVertexAttribArray(positionLocation);
// Bind buffer
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
// Tell attribute how to get data
gl.vertexAttribPointer(
positionLocation,
2, // 2 components per iteration
gl.FLOAT, // data type
false, // normalize
0, // stride
0 // offset
);
// Set color uniform
gl.uniform4f(colorLocation, 1.0, 0.0, 0.0, 1.0); // Red
// Draw
gl.drawArrays(gl.TRIANGLES, 0, 3);
Example: Textured quad with WebGL
// Vertex shader with UVs
const vertexShaderSource = `
attribute vec2 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
void main() {
gl_Position = vec4(a_position, 0.0, 1.0);
v_texCoord = a_texCoord;
}
`;
// Fragment shader with texture
const fragmentShaderSource = `
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_texture;
void main() {
gl_FragColor = texture2D(u_texture, v_texCoord);
}
`;
// Quad vertices (2 triangles)
const positions = new Float32Array([
-0.5, 0.5, // Top left
-0.5, -0.5, // Bottom left
0.5, 0.5, // Top right
0.5, -0.5 // Bottom right
]);
// Texture coordinates
const texCoords = new Float32Array([
0.0, 0.0, // Top left
0.0, 1.0, // Bottom left
1.0, 0.0, // Top right
1.0, 1.0 // Bottom right
]);
// Create texture
function loadTexture(gl, url) {
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// Temporary 1x1 blue pixel
gl.texImage2D(
gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0,
gl.RGBA, gl.UNSIGNED_BYTE,
new Uint8Array([0, 0, 255, 255])
);
// Load image
const image = new Image();
image.onload = () => {
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
// Set filtering
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
};
image.src = url;
return texture;
}
// Draw textured quad
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
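As written, the quad won't show the texture: the a_texCoord attribute and u_texture sampler are never wired up. A sketch of the missing setup, run before the drawArrays call above; it assumes the program linked from these shaders is in use, the positions are bound as in the triangle example, and texture came from loadTexture() (the image URL there is illustrative):
// Upload texture coordinates and point a_texCoord at them
const texCoordBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
gl.bufferData(gl.ARRAY_BUFFER, texCoords, gl.STATIC_DRAW);
const texCoordLocation = gl.getAttribLocation(program, "a_texCoord");
gl.enableVertexAttribArray(texCoordLocation);
gl.vertexAttribPointer(texCoordLocation, 2, gl.FLOAT, false, 0, 0);
// Bind the texture to texture unit 0 and point the sampler uniform at it
const textureLocation = gl.getUniformLocation(program, "u_texture");
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.uniform1i(textureLocation, 0);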
Note: WebGL is low-level - consider libraries like Three.js or Babylon.js for easier 3D. WebGL 2 has better features but less browser support than WebGL 1. Always check for WebGL support: !!canvas.getContext("webgl"). Shaders are written in GLSL (OpenGL Shading Language).
Warning: WebGL can fail on old hardware or when the GPU is unavailable - always check context creation. Shader compilation can fail - always check gl.getShaderParameter and gl.getProgramParameter. WebGL state is global - be careful with state management in complex apps.
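Related to the warning: besides failing at creation, a WebGL context can be lost at runtime (GPU reset, driver crash, backgrounded mobile tab). A sketch of listening for the standard context events on the canvas:
canvas.addEventListener("webglcontextlost", (event) => {
  event.preventDefault(); // allow the context to be restored later
  console.warn("WebGL context lost - stop rendering");
});
canvas.addEventListener("webglcontextrestored", () => {
  console.log("WebGL context restored - recreate shaders, buffers, textures");
  // All GPU resources are gone; re-run the setup code from the examples above
});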
Media and Graphics API Best Practices
- Use play().catch() to handle autoplay failures gracefully
- Always stop() media tracks when done to release camera/microphone hardware
- Request minimal permissions - ask for audio OR video separately if possible
- Use MediaRecorder.isTypeSupported() to check format support before recording
- For Web Audio, resume the AudioContext with a user gesture: audioContext.resume()
- Connect audio nodes before starting playback - source nodes are one-shot and can't be restarted
- Use requestAnimationFrame for smooth Canvas and WebGL animations
- Save/restore Canvas state with ctx.save() and ctx.restore() for clean transforms
- For Canvas, set canvas.width and canvas.height for resolution; use CSS for display size
- Consider Three.js or Babylon.js instead of raw WebGL for complex 3D scenes
- Always check WebGL shader compilation and program linking for errors
- Use getUserMedia constraints wisely - ideal expresses a preference; exact is mandatory and can fail