How to build accelerated video uploads: chunking, resumability, and completion handling in 2026

April 17, 2026
7 Min
Video Engineering

It's 9:47 pm. A creator on your platform has been uploading a 3.2 GB 4K screen recording for eleven minutes. At 94% complete, the subway tunnel kills their connection. The progress bar vanishes. They start the upload over from zero.

This is the failure mode single-PUT uploads produce at video scale. A "video upload" in 2026 is rarely a small file over a stable network. It is a multi-gigabyte asset, captured on mobile, uploaded over flaky links, and expected to survive tunnels and radio handoffs without user intervention.

Accelerated uploads are how that creator's 94% does not disappear. This guide walks through the upload handshake, chunking patterns, resumable uploads with TUS, retry logic, completion handling, and how we built the FastPix resumable upload SDK around these primitives.

TL;DR

Accelerated video uploads stand on three patterns: signed URLs instead of direct backend ingestion, chunked transfer so no single request owns the whole file, and resumable offsets so a dropped connection costs one chunk, not the whole upload. The TUS protocol standardizes the offset handshake and is being pushed toward an IETF spec. A well-built resumable SDK handles chunk sizing, retry with exponential backoff, offset queries, and progress events. Client code gets smaller, and failed uploads become rare enough that your support queue notices the drop.

Why single-PUT uploads fail at video scale

A single-PUT upload treats the whole file as one request body. For a 3 GB file on a 20 Mbps uplink, that request stays open for roughly 20 minutes. Any network event in that window kills the entire upload.

Mobile uploads make this worse. Carrier networks switch radios, IPs rotate, corporate VPNs drop, and TCP keepalives time out. A 20-minute single request has roughly zero chance of surviving a commute.

The fix is structural. Split the file. Track progress per chunk. Let the client query "where did I leave off?" and continue from there.
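The splitting step itself is small. As a sketch (the 8 MB default here is an arbitrary example value, not a recommendation):

```javascript
// Compute [start, end) byte ranges for a file of `size` bytes.
// In the browser, each range maps to one request body via file.slice(start, end).
function chunkRanges(size, chunkSize = 8 * 1024 * 1024) {
  const ranges = [];
  for (let start = 0; start < size; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, size)]);
  }
  return ranges;
}
```

Each range becomes one request. When a request fails, only that range is retried, and the last confirmed range is the resume point.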

The upload handshake: signed URLs, chunking, completion

A production upload flow has four moving parts: the client, an upload API that issues signed URLs, object storage that accepts the chunks, and a webhook consumer that reacts once the asset is committed. The sequence matters because each participant has a different job and a different scaling profile.

The client never holds secrets. The upload API issues a short-lived signed URL, scoped to one asset, with an expiry of a few minutes. The client writes chunks to that URL. The backend receives a webhook once the asset is committed and can then kick off encoding, transcription, or CRM sync.

Here's a minimal signed URL request against the FastPix on-demand endpoint:

curl -X POST https://api.fastpix.io/v1/on-demand \
  -u "$FASTPIX_TOKEN_ID:$FASTPIX_SECRET" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [{"type": "video"}],
    "metadata": {"source": "web-recorder"},
    "cors_origin": "https://yourapp.com"
  }'

The response returns an upload URL and an asset ID. The client uses the upload URL for chunked PATCHes. The asset ID is what you store in your database to correlate the upload with downstream webhook events.
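A minimal sketch of that correlation on the backend. The field names (`uploadUrl`, `assetId`, `event.type`) are assumptions used to illustrate the bookkeeping, not the exact FastPix payload shapes:

```javascript
// Map assetId -> local context so later webhook events can be routed.
const pendingUploads = new Map();

// Called after POST /v1/on-demand returns. Response field names are illustrative.
function registerUpload(apiResponse, userId) {
  const { uploadUrl, assetId } = apiResponse;
  pendingUploads.set(assetId, { userId, status: 'uploading' });
  return uploadUrl; // hand this to the client; never the API credentials
}

// Called by the webhook consumer once the asset is committed.
function onWebhookEvent(event) {
  const record = pendingUploads.get(event.assetId);
  if (record) record.status = event.type; // e.g. 'video.asset.ready'
  return record;
}
```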

Implementing resumable uploads with TUS

TUS is an open HTTP-based protocol for resumable file uploads. A client creates an upload, sends chunks with a PATCH request and an Upload-Offset header, and queries the current offset with HEAD whenever it needs to resume. The protocol is deliberately boring, which is exactly what you want in a transfer layer.

What matters for 2026: the TUS working group and the IETF are driving toward a standardized resumable upload spec (Standardizing Resumable Uploads with the IETF, tus.io blog). Once ratified, server-side behavior becomes predictable across clouds and SDKs stop needing per-provider adapters. Teams betting on TUS today are betting on tomorrow's default.

The three verbs you implement:

  • POST to create an upload and get back an upload URL
  • PATCH to write bytes at a given Upload-Offset
  • HEAD to query the current offset before resuming

Every chunk upload includes Upload-Offset and Content-Type: application/offset+octet-stream. The server responds with the new offset. If the client crashes or the network drops, the next session starts with a HEAD, reads the offset, and continues.
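Sketched in JavaScript, the resume path looks like this. `fetchFn` and `patchFn` are injectable stand-ins for the real transport so the flow stays visible; a production client would also handle retries inside `patchFn`:

```javascript
// HEAD tells us how many bytes the server has already committed.
async function currentOffset(uploadUrl, fetchFn = fetch) {
  const res = await fetchFn(uploadUrl, {
    method: 'HEAD',
    headers: { 'Tus-Resumable': '1.0.0' }
  });
  if (!res.ok) throw new Error(`HEAD failed: ${res.status}`);
  return Number(res.headers.get('Upload-Offset'));
}

// Resume from the server's offset and PATCH chunks until the file is done.
async function resumeUpload(uploadUrl, file, chunkSize, patchFn, fetchFn = fetch) {
  let offset = await currentOffset(uploadUrl, fetchFn);
  while (offset < file.size) {
    const chunk = file.slice(offset, offset + chunkSize);
    offset = await patchFn(uploadUrl, chunk, offset); // server returns the new offset
  }
  return offset;
}
```

Note that the client never guesses where it left off: the server's offset is the single source of truth, which is what makes the protocol safe across crashes.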

Handling network failures and retry logic

Retries are where naive upload code turns into a 2 am incident. Three rules keep this sane.

First, retry idempotent operations only. HEAD and PATCH with Upload-Offset are safe. Blindly retrying a POST that creates the upload is not. Second, use exponential backoff with jitter. Every client retrying on the same 1-second interval after an edge node blip becomes a self-inflicted DDoS. Third, cap total retry time per chunk, then escalate. A chunk that fails six times in 90 seconds is telling you something the client can't fix alone.

Here's a retry loop in Node.js using fetch. It handles 5xx and network errors, backs off with jitter, and bails after a cap:

async function patchChunk(url, chunk, offset, attempt = 0) {
  const maxAttempts = 6;
  try {
    const res = await fetch(url, {
      method: 'PATCH',
      headers: {
        'Upload-Offset': String(offset),
        'Content-Type': 'application/offset+octet-stream',
        'Tus-Resumable': '1.0.0'
      },
      body: chunk
    });
    if (res.status >= 500) throw new Error(`server ${res.status}`);
    if (!res.ok) {
      // 4xx is a client error; retrying the same request will not help
      const err = new Error(`fatal ${res.status}`);
      err.fatal = true;
      throw err;
    }
    return Number(res.headers.get('Upload-Offset'));
  } catch (err) {
    if (err.fatal || attempt + 1 >= maxAttempts) throw err;
    // exponential backoff capped at 16 s, plus up to 500 ms of jitter
    const delay = Math.min(2 ** attempt * 500, 16000) + Math.random() * 500;
    await new Promise(r => setTimeout(r, delay));
    return patchChunk(url, chunk, offset, attempt + 1);
  }
}

If a chunk keeps failing past the cap, surface the failure to the UI and write the upload state to local storage. The next time the user opens the app, a background worker can read that state, call HEAD to get the current offset, and resume from there without asking the user to pick the file again.
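A sketch of that persistence, with the storage injected so the same logic works against `localStorage` in a browser or any key-value store in a native shell. The key scheme and record shape are illustrative; note that on the web you can persist the upload URL and metadata, but re-acquiring the file bytes themselves may still require a platform-specific file handle.

```javascript
// Persist enough state to resume later: the upload URL plus file identity.
function saveUploadState(storage, assetId, state) {
  storage.setItem(`upload:${assetId}`, JSON.stringify({ ...state, savedAt: Date.now() }));
}

function loadUploadState(storage, assetId) {
  const raw = storage.getItem(`upload:${assetId}`);
  return raw ? JSON.parse(raw) : null;
}

function clearUploadState(storage, assetId) {
  storage.removeItem(`upload:${assetId}`);
}
```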

Progress reporting and client-side UX

Progress bars are the part of upload code most teams get wrong. The two common mistakes: reporting bytes queued for transmission instead of bytes confirmed by the server, and throwing progress events on every chunk regardless of UI frame rate.

Report confirmed bytes only. That's the Upload-Offset the server returns on each successful PATCH. Throttle progress events to roughly 10 per second. A React app re-rendering on every TCP ACK is how you lose frames on low-end Android.
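A throttle along these lines does the job (the clock is injectable purely so the behavior is easy to verify; 100 ms is roughly 10 events per second):

```javascript
// Wrap a progress callback so it fires at most once per `intervalMs`,
// but always fires on completion so the bar can reach 100%.
function throttleProgress(onProgress, intervalMs = 100, now = Date.now) {
  let last = -Infinity;
  return (confirmedBytes, totalBytes) => {
    const t = now();
    if (t - last < intervalMs && confirmedBytes < totalBytes) return;
    last = t;
    onProgress(Math.round((confirmedBytes / totalBytes) * 100));
  };
}
```

Feed it the server-confirmed offset after each successful PATCH, never the bytes you have merely queued.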

When a chunk fails and enters retry, show a "reconnecting" state instead of freezing the bar. Users read a still bar as a dead upload and cancel. A brief subdued pulse tells them the system is working the problem.

Single-PUT vs multipart vs TUS resumable

Three upload patterns dominate. They sit on a complexity-vs-reliability curve.

| Dimension | Single-PUT | Multipart (S3-style) | TUS resumable |
| --- | --- | --- | --- |
| Max practical file size | ~5 GB, network-dependent | 5 TB (S3) | Effectively unlimited |
| Resumability | No, full restart on failure | Yes, at part boundaries | Yes, at byte offset |
| Retry granularity | Whole file | Per part (typically 5-100 MB) | Per chunk (typically 5-16 MB) |
| Client complexity | Low | Medium, requires part coordination | Medium, SDK-provided |
| Server complexity | Low | High, part assembly + lifecycle rules | Medium, offset state |
| Best for | Files under 100 MB on stable networks | Large archival workloads | Any user-uploaded video in 2026 |

For user-uploaded video, TUS is the default answer. Multipart is fine when your uploaders are trusted services on reliable links. Single-PUT is fine when your files are tiny and your users are forgiving.

How Loom handles large-file video uploads

Loom processes a long tail of screen recordings, often multi-gigabyte 4K captures from sessions that run past the user's attention span. What's interesting isn't that they use chunked uploads. It's that they upload during the recording itself, so "stop recording" doesn't trigger a multi-minute upload wait.

The architecture streams chunks from the browser or desktop client to backend storage as the user records, with a background iframe hidden from the UI acting as a proxy to the Loom API (Loom Record SDK architecture). By the time the user hits stop, most of the bytes are already committed server-side. Processing and instant shareable link generation kick off against a file that is already 90%-plus uploaded.

The engineering lesson generalizes. Any upload UX where the user pays the wait twice (record, then wait for upload) is a UX you can shorten by moving the transfer earlier in the pipeline. Streamed chunking during capture, deferred finalization, and webhook-driven post-processing are the three primitives that make "instant" actually feel instant.
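The capture-time half of that can be sketched as a small queue: chunks are pushed as the recorder emits them (for example, from `MediaRecorder`'s `ondataavailable` in a browser) and drained by a single in-flight uploader, so hitting stop only waits for the tail of the queue. `uploadChunk` here is an assumed async transfer helper, not a specific SDK call:

```javascript
function createStreamingUploader(uploadChunk) {
  const queue = [];
  let draining = null;

  async function drain() {
    while (queue.length) await uploadChunk(queue.shift());
    draining = null;
  }

  return {
    // Push a chunk during capture; starts the drain loop if it is idle.
    push(chunk) {
      queue.push(chunk);
      draining ??= drain();
    },
    // Resolves once every queued chunk has been committed (call on stop).
    finalize() {
      return draining ?? Promise.resolve();
    }
  };
}
```

The single drain loop keeps chunks ordered, which matters for offset-based protocols where chunk N+1 cannot be written before chunk N is confirmed.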

What FastPix's resumable upload SDK gives you

We built the FastPix upload SDKs around these patterns so you don't rewrite them per platform. The web, iOS, and Android SDKs all speak the same resumable protocol, with automatic chunk sizing based on measured throughput, exponential backoff with jitter out of the box, and offset recovery across app restarts.

On the backend, a single POST /v1/on-demand call returns a signed upload URL plus an asset ID. You hand the URL to the client SDK and get back a progress stream you can pipe into any UI. No separate upload service, no S3 bucket to manage, no DIY multipart coordination. When the upload finishes, the asset enters encoding automatically and fires webhooks through the full lifecycle.

A quick sketch from the web SDK:

import { createUpload } from '@fastpix/upload'; 
 
const upload = createUpload({ 
  endpoint: signedUrl,  // from POST /v1/on-demand 
  file, 
  chunkSize: 'auto' 
}); 
 
upload.on('progress', p => setProgress(p.percent)); 
upload.on('error', err => showRetry(err)); 
upload.on('success', () => onUploadComplete(assetId)); 
 
upload.start();

This is the whole integration for a web uploader. The SDK handles chunk sizing, retries, offset recovery, and progress events.

Once the upload completes: where webhooks come in

Once uploads complete, you'll want webhooks to drive the next step in your pipeline: encoding triggers, CRM notifications, transcript generation. That's a topic on its own, and we covered it in our webhooks deep-dive.

FAQs

What is an accelerated video upload?

An accelerated video upload splits a large file into chunks, uploads them in parallel, and resumes from the last completed chunk if the connection drops. This avoids the single-PUT failure mode where a broken connection forces a full restart.

What chunk size should I use for video uploads?

Most teams settle on 5 to 16 MB chunks. Smaller chunks retry faster on flaky networks but add overhead. Larger chunks reduce request count but waste bandwidth when a chunk fails. Resumable SDKs typically auto-tune chunk size based on measured throughput.

What is the TUS protocol?

TUS is an open protocol for resumable file uploads over HTTP. A client creates an upload URL, PATCHes chunks with an Upload-Offset header, and queries the offset with HEAD to resume after a failure. TUS is being standardized through the IETF.

How do I know when a video upload is complete?

The upload API returns a final 200 or 204 once the last chunk is committed. For async processing, use webhook events like video.asset.created and video.asset.ready to drive the next step in your pipeline.

Should I build chunked upload from scratch or use an SDK?

Use an SDK unless you have a specific reason not to. A resumable upload SDK handles chunk sizing, retry with backoff, offset queries, and progress events. Writing all of that yourself is weeks of work that every upstream protocol change will invalidate.
