ImagePal

How modern browsers compress images without a server

Most online image tools work by uploading your file to a server, processing it there, and sending the result back. That model made sense in 2010 because browsers couldn't reliably encode JPEG — let alone WebP or AVIF — without help. In 2026, that's no longer true. Modern browsers ship with the same image codec libraries that servers use, expose them through clean APIs, and run them fast enough to compete with server-side processing for most workloads.

This guide is the technical tour of how that works: the codec landscape, the API stack, what a real client-side compression pipeline looks like end-to-end, and where the limits actually are.

The browser codec landscape in 2026

Every modern browser ships with native implementations of:

  • JPEG: decode and encode. The encoder is libjpeg-turbo or a derivative.
  • PNG: decode and encode. Always lossless; there is no quality parameter.
  • WebP: decode (universal since 2020), encode (universal since ~2022).
  • AVIF: decode (universal since ~2022), encode (Chrome and Edge since 2024, Firefox progressively, Safari catching up).
  • GIF: decode universal; encoding requires a WASM library since browser support is read-only.

When the browser displays a JPEG on a webpage, it's running the same libjpeg-turbo routines that image libraries in PHP, Python, and Node.js use on the server. The encoder is exposed through Canvas APIs, which means any JavaScript code can ask the browser to re-encode pixel data into any of the supported formats.

The API stack

There are four browser APIs that together enable a complete image processing pipeline. They've been standardised over the past decade and are universally supported in 2026.

createImageBitmap and OffscreenCanvas

createImageBitmap is the fast path for decoding an image into a GPU-friendly representation. Unlike the older Image() approach, it can decode off the main thread and produces a transferable bitmap that can be passed to a Web Worker without copying pixel data.

OffscreenCanvas is a canvas that exists outside the DOM, so it can be created and rendered to from a Web Worker. Together, they let you decode and process images entirely off the main thread — the page UI stays responsive even while compressing a 20MB photo.
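A minimal sketch of that hand-off (the worker itself is assumed to exist already):

```javascript
// Sketch: decode a File on the main thread and hand the resulting
// bitmap to an existing Worker without copying pixel data.
async function decodeAndTransfer(worker, file) {
  // createImageBitmap decodes off the main thread where supported.
  const bitmap = await createImageBitmap(file);
  // Listing the bitmap in the transfer array moves it rather than
  // cloning it; this side's handle is detached afterwards.
  worker.postMessage({ bitmap }, [bitmap]);
}
```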

Web Workers for off-thread processing

Web Workers are isolated JavaScript threads that don't share memory with the main thread. For image processing, this matters because encoding a large image can take hundreds of milliseconds — long enough to freeze the UI if it ran on the main thread. By doing the work in a worker, the page remains responsive and the user can queue up multiple files in parallel (one worker per file, scaled to CPU core count).
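The one-worker-per-file pattern scaled to core count can be sketched like this (the worker script URL is an assumption; the sizing helper is pure so it runs anywhere):

```javascript
// Pure helper: how many workers to spawn for a batch of files.
function poolSize(fileCount, cores) {
  return Math.max(1, Math.min(fileCount, cores));
}

// Sketch: spawn up to one worker per file, capped at the core count.
// workerUrl is an assumed script implementing the compression step.
function processFiles(files, workerUrl) {
  const cores =
    (typeof navigator !== "undefined" && navigator.hardwareConcurrency) || 4;
  const workers = Array.from(
    { length: poolSize(files.length, cores) },
    () => new Worker(workerUrl)
  );
  // Round-robin files across the pool; each worker queues its own jobs.
  files.forEach((file, i) => workers[i % workers.length].postMessage({ file }));
  return workers;
}
```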

Canvas.toBlob for re-encoding

Once you've decoded an image and drawn it onto a canvas (regular or offscreen), Canvas.toBlob() asks the browser to re-encode it to a target format and quality. This is the workhorse of in-browser compression.

// Re-encode a canvas as JPEG at quality 75
canvas.toBlob(
  (blob) => {
    // blob is now a compressed JPEG you can download or process
  },
  "image/jpeg",
  0.75
);

The MIME type controls the format ('image/jpeg', 'image/webp', 'image/png', 'image/avif'); the third argument controls quality from 0 to 1 and applies only to lossy formats (PNG ignores it).
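One wrinkle worth coding around: when the requested type isn't supported, toBlob silently falls back to PNG rather than failing. A sketch of feature detection that exploits this — encode a 1×1 canvas and check what actually came back:

```javascript
// Sketch: detect whether the browser can encode a given MIME type.
// toBlob falls back to image/png for unsupported types, so encode a
// tiny canvas and inspect the type of the blob that comes back.
function canEncode(mimeType) {
  return new Promise((resolve) => {
    const canvas = document.createElement("canvas");
    canvas.width = canvas.height = 1;
    canvas.toBlob((blob) => {
      resolve(blob !== null && blob.type === mimeType);
    }, mimeType);
  });
}

// Usage: if (await canEncode("image/avif")) { ... } else fall back to WebP.
```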

WebCodecs for low-level control

The WebCodecs API exposes the browser's underlying encoders and decoders directly, without going through Canvas. It's overkill for most image work but useful for streaming pipelines, frame-by-frame video processing, and anything that needs fine control over encoder parameters (chroma subsampling, per-frame quality, and so on). Chrome and Edge fully support it; Firefox support is partial; Safari's is in progress as of 2026.
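A sketch of the still-image side of WebCodecs, assuming a browser where ImageDecoder is available (check with typeof ImageDecoder !== "undefined"):

```javascript
// Sketch: decode the first frame of an image via WebCodecs, without
// ever touching a canvas.
async function decodeFirstFrame(file) {
  const decoder = new ImageDecoder({
    data: file.stream(), // accepts a BufferSource or a ReadableStream
    type: file.type,     // e.g. "image/jpeg"
  });
  const { image } = await decoder.decode(); // image is a VideoFrame
  return image; // VideoFrame exposes copyTo() for raw pixel access
}
```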

A complete client-side compression pipeline

Here's the full flow for compressing a JPEG entirely in the browser, end-to-end:

  1. User drops a File object into the page (drag-and-drop or file input).
  2. The File is posted to a Web Worker via postMessage. (Files are structured-cloned by reference; the underlying bytes aren't copied.)
  3. The worker calls createImageBitmap(file) to decode the image into a bitmap.
  4. The worker creates an OffscreenCanvas at the desired output dimensions and draws the bitmap onto it (resize happens here if dimensions differ).
  5. The worker calls offscreenCanvas.convertToBlob({ type: 'image/jpeg', quality: 0.75 }) to re-encode.
  6. The resulting Blob is transferred back to the main thread, which creates an object URL for download.

The main thread doesn't block at any point. The user can drop in 20 files and they'll all process in parallel (capped at navigator.hardwareConcurrency workers, typically 4–16).

// Inside a Web Worker
self.onmessage = async (e) => {
  const { file, quality } = e.data;
  const bitmap = await createImageBitmap(file);
  const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
  const ctx = canvas.getContext("2d");
  ctx.drawImage(bitmap, 0, 0);
  bitmap.close(); // release the decoded pixels promptly
  const blob = await canvas.convertToBlob({
    type: "image/jpeg",
    quality,
  });
  // Blobs are passed by reference under structured clone, so this is cheap.
  self.postMessage({ blob });
};
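The main-thread counterpart might look like this (the worker script name is an assumption; wire it to wherever the worker code above lives):

```javascript
// Sketch: main-thread side of the pipeline. Spawns the worker, turns
// each compressed blob into a download, and returns a submit function.
function startCompressor(workerUrl = "compress-worker.js") {
  const worker = new Worker(workerUrl);
  worker.onmessage = ({ data }) => {
    // Wrap the compressed blob in an object URL for a download link.
    const url = URL.createObjectURL(data.blob);
    const a = document.createElement("a");
    a.href = url;
    a.download = "compressed.jpg";
    a.click();
    URL.revokeObjectURL(url);
  };
  return (file, quality = 0.75) => worker.postMessage({ file, quality });
}
```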

Performance: browser vs server benchmarks

Approximate numbers from internal testing on a representative 4MB JPEG (4000×3000 photograph), compressing to JPEG quality 75:

  • Server (libvips on a modern Linux VM): ~80ms compute time, plus 200–500ms upload + 50–100ms download = 330–680ms total.
  • Desktop browser (M1 Mac, Chrome): ~120ms compute time, no network = 120ms total.
  • Mid-range Android phone (2023, Chrome): ~280ms compute time, no network = 280ms total.
  • iPhone 14, Safari: ~140ms compute time = 140ms total.

The pattern: the server has slightly faster compute but loses the round trip for any reasonably sized image on a reasonable connection. End-to-end, client-side wins or ties on every device tested except the cheapest phones on the fastest networks, where the round trip is short enough that the server's compute advantage shows through.

AVIF encoding is the exception: AVIF is genuinely slow to encode, and a server with multiple cores and a GPU-accelerated AVIF encoder can finish in ~150ms what a phone takes 2–4 seconds to do. If AVIF encoding is on the critical path of your UX, server-side is still the right answer.

Limits and gotchas

Browser-based image processing is mature but not magical. The real-world limits to be aware of:

  • Memory caps. Mobile Safari aggressively kills tabs that allocate more than ~250MB at once. A single 100-megapixel image can hit that limit during decode. For photos this size, processing in tiles or downsampling first is necessary.
  • Canvas size limits. iOS Safari historically capped canvases at 16,777,216 pixels (4096×4096). Newer iOS versions are more generous, but processing very large source images may require tiled rendering.
  • AVIF encoding speed. Slow on every browser; very slow on phones. Acceptable for one-off use, painful for batch.
  • WASM startup cost. WASM-based decoders (HEIC, JPEG XL) add ~300–800KB of one-time download. Acceptable for tools where the user expects to do multiple operations.
  • Disk I/O. Reading a multi-gigabyte folder of images via the File System Access API is fast on desktop, much slower on mobile. UX should reflect this.
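The "downsample first" mitigation for the memory and canvas caps can be sketched with createImageBitmap's resize options, which shrink the image during decode so a full-resolution bitmap never has to exist in memory. The sizing helper is pure; the decode step assumes the source dimensions are already known (from metadata or a prior small decode):

```javascript
// Pure helper: clamp dimensions so the longest edge is at most maxEdge,
// never upscaling.
function fitWithin(width, height, maxEdge) {
  const scale = Math.min(1, maxEdge / Math.max(width, height));
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}

// Sketch: decode at a reduced size to stay under mobile memory caps.
async function decodeDownsampled(file, srcWidth, srcHeight, maxEdge = 4096) {
  const { width, height } = fitWithin(srcWidth, srcHeight, maxEdge);
  return createImageBitmap(file, {
    resizeWidth: width,
    resizeHeight: height,
    resizeQuality: "high",
  });
}
```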

Why all of this matters

Server-side image processing made architectural sense in 2010. In 2026 it's an inherited assumption that costs users privacy, costs operators money (every upload is bandwidth and CPU you pay for), and costs end-to-end performance (the network round trip usually outweighs the server's compute advantage). The browser APIs have caught up, the codecs are universal, and the performance is competitive for everything except the most CPU-hungry encodes (AVIF on phones).

If you're building an image tool today, the default should be client-side. Server-side is the special case, justified only when the operation genuinely needs server resources — collaborative editing, persistent storage, or batch encoding at scale where the user can't supply the compute.

The bottom line

Browsers in 2026 are a fully capable image processing platform. The codec libraries are the same ones servers use. The APIs (createImageBitmap, OffscreenCanvas, Canvas.toBlob, Web Workers, WebCodecs) are mature, fast, and universally supported. The only meaningful performance gap is AVIF encoding on low-end phones. For everything else, the right architectural default has flipped: client-side first, server-side only when you need it.

See it in action
ImagePal is a working implementation of the techniques in this article.

Frequently asked questions

How does in-browser image compression actually work?
The browser decodes the image into raw pixel data using its built-in codec, then re-encodes those pixels through Canvas.toBlob() or createImageBitmap + OffscreenCanvas. The same codec libraries used by servers (libjpeg, libwebp, libavif) ship with every modern browser.
Is in-browser compression as fast as server-side?
On a desktop, comparable. On a mid-range phone, often 2–3× slower than a fast server. The tradeoff is no network round-trip, so end-to-end the user experience is usually faster — especially on slow connections.
What are the main browser APIs for image processing?
createImageBitmap for fast decoding, OffscreenCanvas for off-thread rendering, Canvas.toBlob() for re-encoding, and Web Workers to keep the main thread responsive. WebCodecs API provides lower-level control for advanced cases.
Can browsers encode AVIF and WebP, or only JPEG?
All major browsers can encode JPEG, PNG, and WebP via Canvas.toBlob(). AVIF encoding is available in Chrome and Edge as of 2024–2025; Firefox and Safari are catching up. For broad AVIF support, libavif compiled to WebAssembly is the fallback.
What are the main limitations of browser-based image processing?
Memory: large images (50+ megapixels) can exhaust available memory in mobile browsers. Performance: encoding speed depends on the user's CPU. Format support: niche formats (TIFF, JPEG XL, older raw formats) require additional WASM decoders.

Every ImagePal tool runs entirely in your browser. No upload, no account, no tracking.