It's 9:47 pm. A creator on your platform has been uploading a 3.2 GB 4K screen recording for eleven minutes. At 94% complete, the subway tunnel kills their connection. The progress bar vanishes. They start the upload over from zero.
This is the failure mode single-PUT uploads produce at video scale. A "video upload" in 2026 is rarely a small file over a stable network. It is a multi-gigabyte asset, captured on mobile, uploaded over flaky links, and expected to survive tunnels and radio handoffs without user intervention.
Accelerated uploads are how that creator's 94% does not disappear. This guide walks through the upload handshake, chunking patterns, resumable uploads with TUS, retry logic, completion handling, and how we built the FastPix resumable upload SDK around these primitives.
Accelerated video uploads stand on three patterns: signed URLs instead of direct backend ingestion, chunked transfer so no single request owns the whole file, and resumable offsets so a dropped connection costs one chunk, not the whole upload. The TUS protocol standardizes the offset handshake and is being pushed toward an IETF spec. A well-built resumable SDK handles chunk sizing, retry with exponential backoff, offset queries, and progress events. Client code gets smaller. Failed uploads become rare enough that your support queue actually notices.
A single-PUT upload treats the whole file as one request body. For a 3 GB file on a 20 Mbps uplink, that request stays open for roughly 20 minutes. Any network event in that window kills the entire upload.
Mobile uploads make this worse. Carrier networks switch radios, IPs rotate, corporate VPNs drop, and TCP keepalives time out. A 20-minute single request has roughly zero chance of surviving a commute.
The fix is structural. Split the file. Track progress per chunk. Let the client query "where did I leave off?" and continue from there.
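The split step is only a few lines in the browser. A minimal sketch using Blob.slice; the 8 MB default here is an illustration, not a recommendation:

```javascript
// Minimal sketch: split a File/Blob into fixed-size chunks.
// slice() is zero-copy; each chunk is a view into the same Blob.
function sliceIntoChunks(file, chunkSize = 8 * 1024 * 1024) {
  const chunks = [];
  for (let offset = 0; offset < file.size; offset += chunkSize) {
    chunks.push({ offset, blob: file.slice(offset, offset + chunkSize) });
  }
  return chunks;
}
```

Each entry carries its own offset, which is exactly what the resume handshake later needs: a failed chunk retries alone, and an interrupted session restarts at the first unconfirmed offset.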
A production upload flow has four moving parts: the client, an upload API that issues signed URLs, object storage that accepts the chunks, and a webhook consumer that reacts once the asset is committed. The sequence matters because each participant has a different job and a different scaling profile.

The client never holds secrets. The upload API issues a short-lived signed URL, scoped to one asset, with an expiry of a few minutes. The client writes chunks to that URL. The backend receives a webhook once the asset is committed and can then kick off encoding, transcription, or CRM sync.
Here's a minimal signed URL request against the FastPix on-demand endpoint:
```bash
curl -X POST https://api.fastpix.io/v1/on-demand \
  -u "$FASTPIX_TOKEN_ID:$FASTPIX_SECRET" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [{"type": "video"}],
    "metadata": {"source": "web-recorder"},
    "cors_origin": "https://yourapp.com"
  }'
```
The response returns an upload URL and an asset ID. The client uses the upload URL for chunked PATCHes. The asset ID is what you store in your database to correlate the upload with downstream webhook events.
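That correlation is worth a few deliberate lines on the backend. A sketch, with `db` standing in for your datastore and the `event` field names as placeholders rather than the exact FastPix webhook shape:

```javascript
// Sketch: record the pending upload so a later webhook can be matched
// back to the user who started it.
function recordPendingUpload(db, assetId, userId) {
  db.set(assetId, { userId, status: 'uploading', createdAt: Date.now() });
}

// Sketch: on webhook delivery, look up the asset and advance its status.
// Returns false for unknown assets so the caller can ignore or alert.
function handleAssetWebhook(db, event) {
  const row = db.get(event.assetId);
  if (!row) return false;
  row.status = event.type; // e.g. 'video.asset.ready'
  return true;
}
```

The important design point is that the asset ID is the only join key you need: the client never reports completion itself, and the webhook is the source of truth.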
TUS is an open HTTP-based protocol for resumable file uploads. A client creates an upload, sends chunks with a PATCH request and an Upload-Offset header, and queries the current offset with HEAD whenever it needs to resume. The protocol is deliberately boring, which is exactly what you want in a transfer layer.
What matters for 2026: the TUS working group and the IETF are driving toward a standardized resumable upload spec (Standardizing Resumable Uploads with the IETF, tus.io blog). Once ratified, server-side behavior becomes predictable across clouds and SDKs stop needing per-provider adapters. Teams betting on TUS today are betting on tomorrow's default.
The three verbs you implement: POST creates the upload and returns its URL, HEAD returns the server's current Upload-Offset for that URL, and PATCH appends bytes at that offset.

Every chunk upload includes Upload-Offset and Content-Type: application/offset+octet-stream. The server responds with the new offset. If the client crashes or the network drops, the next session starts with a HEAD, reads the offset, and continues.
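Resuming after a crash is then mechanical: HEAD for the confirmed offset, PATCH from that byte. A sketch, assuming a `patchChunk` helper that uploads one chunk and returns the new server offset (the retry loop below is one such helper):

```javascript
// Sketch: resume a TUS upload from the server's confirmed offset.
// `patchChunk(url, chunk, offset)` is assumed to PATCH one chunk and
// resolve with the new Upload-Offset.
async function resumeUpload(url, file, patchChunk, chunkSize = 8 * 1024 * 1024) {
  const head = await fetch(url, {
    method: 'HEAD',
    headers: { 'Tus-Resumable': '1.0.0' }
  });
  let offset = Number(head.headers.get('Upload-Offset'));
  while (offset < file.size) {
    const chunk = file.slice(offset, offset + chunkSize);
    offset = await patchChunk(url, chunk, offset);
  }
  return offset;
}
```

Note that the client trusts the server's offset, not its own bookkeeping: bytes the client sent but the server never confirmed are re-sent, which is the safe direction to err in.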
Retries are where naive upload code turns into a 2 am incident. Three rules keep this sane.
First, retry idempotent operations only. HEAD and PATCH with Upload-Offset are safe. Blindly retrying a POST that creates the upload is not. Second, use exponential backoff with jitter. Every client retrying on the same 1-second interval after an edge node blip becomes a self-inflicted DDoS. Third, cap total retry time per chunk, then escalate. A chunk that fails six times in 90 seconds is telling you something the client can't fix alone.
Here's a retry loop in Node.js using fetch. It handles 5xx and network errors, backs off with jitter, and bails after a cap:
```javascript
async function patchChunk(url, chunk, offset, attempt = 0) {
  const maxAttempts = 6;
  try {
    const res = await fetch(url, {
      method: 'PATCH',
      headers: {
        'Upload-Offset': String(offset),
        'Content-Type': 'application/offset+octet-stream',
        'Tus-Resumable': '1.0.0'
      },
      body: chunk
    });
    if (res.status >= 500) throw new Error(`server ${res.status}`);
    if (!res.ok) throw new Error(`fatal ${res.status}`);
    return Number(res.headers.get('Upload-Offset'));
  } catch (err) {
    if (attempt >= maxAttempts) throw err;
    // Exponential backoff with jitter, capped at 16 s per wait.
    const delay = Math.min(2 ** attempt * 500, 16000) + Math.random() * 500;
    await new Promise(r => setTimeout(r, delay));
    // In production, HEAD the upload URL before replaying: a partially
    // applied PATCH advances the server offset, and resending the old
    // offset gets a 409.
    return patchChunk(url, chunk, offset, attempt + 1);
  }
}
```

If a chunk keeps failing past the cap, surface the failure to the UI and write the upload state to local storage. The next time the user opens the app, a background worker can read that state, call HEAD to get the current offset, and resume from there without asking the user to pick the file again.
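Persisting that state can be as small as two functions. A sketch, with `store` standing in for anything localStorage-shaped; note that a web app cannot stash the File object itself there, so pair this with IndexedDB or the File System Access API to rehydrate the file:

```javascript
// Sketch: persist enough state to resume without re-picking the file.
// `store` needs only getItem/setItem (localStorage-compatible).
function saveUploadState(store, assetId, uploadUrl, fileName, fileSize) {
  store.setItem(
    `upload:${assetId}`,
    JSON.stringify({ uploadUrl, fileName, fileSize, savedAt: Date.now() })
  );
}

// Returns the saved state, or null if there is nothing to resume.
function loadUploadState(store, assetId) {
  const raw = store.getItem(`upload:${assetId}`);
  return raw ? JSON.parse(raw) : null;
}
```

The upload URL is the critical field: as long as it has not expired, a HEAD against it is all the background worker needs to find the offset and continue.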
Progress bars are the part of upload code most teams get wrong. The two common mistakes: reporting bytes queued for transmission instead of bytes confirmed by the server, and throwing progress events on every chunk regardless of UI frame rate.
Report confirmed bytes only. That's the Upload-Offset the server returns on each successful PATCH. Throttle progress events to roughly 10 per second. A React app re-rendering on every TCP ACK is how you lose frames on low-end Android.
When a chunk fails and enters retry, show a "reconnecting" state instead of freezing the bar. Users read a still bar as a dead upload and cancel. A brief subdued pulse tells them the system is working the problem.
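Throttling is a few lines if progress flows through one chokepoint. A sketch, where `onProgress` is your UI callback and the 100 ms minimum interval gives the roughly-10-per-second rate above:

```javascript
// Sketch: emit at most ~10 progress events per second, reporting only
// server-confirmed bytes. The final event always fires so the bar
// reaches 100%.
function createProgressReporter(totalBytes, onProgress, minIntervalMs = 100) {
  let lastEmit = -Infinity;
  return function report(confirmedBytes, now = Date.now()) {
    const done = confirmedBytes >= totalBytes;
    if (!done && now - lastEmit < minIntervalMs) return false; // dropped
    lastEmit = now;
    onProgress({
      bytes: confirmedBytes,
      percent: (confirmedBytes / totalBytes) * 100
    });
    return true;
  };
}
```

Call `report(newOffset)` after every successful PATCH; the reporter decides whether the UI hears about it.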
Three upload patterns dominate: single-PUT, S3-style multipart, and TUS resumable. They sit on a complexity-vs-reliability curve.
For user-uploaded video, TUS is the default answer. Multipart is fine when your uploaders are trusted services on reliable links. Single-PUT is fine when your files are tiny and your users are forgiving.
Loom processes a long tail of screen recordings that are often multi-gigabyte 4K captures from sessions that run past the user's attention span. What's interesting isn't that they use chunked uploads. It's that they upload during the recording itself, so "stop recording" doesn't trigger a multi-minute upload wait.
The architecture streams chunks from the browser or desktop client to backend storage as the user records, with a background iframe hidden from the UI acting as a proxy to the Loom API (Loom Record SDK architecture). By the time the user hits stop, most of the bytes are already committed server-side. Processing and instant shareable link generation kick off against a file that is already 90%-plus uploaded.
The engineering lesson generalizes. Any upload UX where the user pays the wait twice (record, then wait for upload) is a UX you can shorten by moving the transfer earlier in the pipeline. Streamed chunking during capture, deferred finalization, and webhook-driven post-processing are the three primitives that make "instant" actually feel instant.
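The buffering half of that pattern is small enough to sketch. Here `uploadFn` stands in for whatever ships bytes upstream; the MediaRecorder wiring at the bottom is browser-only and shown as a comment:

```javascript
// Sketch: buffer capture output and flush once enough bytes accumulate,
// so transfer overlaps with recording. `uploadFn(chunks, offset)` is
// assumed to send the buffered pieces starting at `offset`.
function createCaptureQueue(uploadFn, flushThreshold = 4 * 1024 * 1024) {
  let buffered = [];
  let bufferedBytes = 0;
  let offset = 0;
  return {
    push(bytes) {
      buffered.push(bytes);
      bufferedBytes += bytes.length;
      if (bufferedBytes >= flushThreshold) this.flush();
    },
    flush() {
      if (bufferedBytes === 0) return;
      uploadFn(buffered, offset);
      offset += bufferedBytes;
      buffered = [];
      bufferedBytes = 0;
    }
  };
}
// Browser wiring (not runnable in Node):
//   recorder.ondataavailable = async e =>
//     queue.push(new Uint8Array(await e.data.arrayBuffer()));
//   recorder.onstop = () => queue.flush();
```

By the time `onstop` fires, everything except the last partial buffer is already in flight, which is the "most of the bytes are already committed" effect the Loom architecture relies on.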
We built the FastPix upload SDKs around these patterns so you don't rewrite them per platform. The web, iOS, and Android SDKs all speak the same resumable protocol, with automatic chunk sizing based on measured throughput, exponential backoff with jitter out of the box, and offset recovery across app restarts.
On the backend, a single POST /v1/on-demand call returns a signed upload URL plus an asset ID. You hand the URL to the client SDK and get back a progress stream you can pipe into any UI. No separate upload service, no S3 bucket to manage, no DIY multipart coordination. When the upload finishes, the asset enters encoding automatically and fires webhooks through the full lifecycle.
A quick sketch from the web SDK:
```javascript
import { createUpload } from '@fastpix/upload';

const upload = createUpload({
  endpoint: signedUrl, // from POST /v1/on-demand
  file,
  chunkSize: 'auto'
});

upload.on('progress', p => setProgress(p.percent));
upload.on('error', err => showRetry(err));
upload.on('success', () => onUploadComplete(assetId));

upload.start();
```

This is the whole integration for a web uploader. The SDK handles chunk sizing, retries, offset recovery, and progress events.
Once uploads complete, you'll want webhooks to drive the next step in your pipeline: encoding triggers, CRM notifications, transcript generation. That's a topic of its own, and we covered it in our webhooks deep-dive.
An accelerated video upload splits a large file into chunks, uploads them in parallel, and resumes from the last completed chunk if the connection drops. This avoids the single-PUT failure mode where a broken connection forces a full restart.
Most teams settle on 5 to 16 MB chunks. Smaller chunks retry faster on flaky networks but add overhead. Larger chunks reduce request count but waste bandwidth when a chunk fails. Resumable SDKs typically auto-tune chunk size based on measured throughput.
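One way to auto-tune is to target a fixed number of seconds of transfer per chunk and clamp to that band. A sketch; the 8-second target is an assumption for illustration, not a measured SDK default:

```javascript
// Sketch: pick the next chunk size from the last chunk's measured
// throughput, aiming for ~8 s per chunk, clamped to the 5-16 MB band.
function nextChunkSize(lastChunkBytes, elapsedMs, targetMs = 8000) {
  const MIN = 5 * 1024 * 1024;
  const MAX = 16 * 1024 * 1024;
  const throughput = lastChunkBytes / Math.max(elapsedMs, 1); // bytes/ms
  const ideal = throughput * targetMs;
  return Math.min(MAX, Math.max(MIN, Math.round(ideal)));
}
```

Fast links drift toward the 16 MB ceiling to cut request overhead; flaky mobile links drift toward the 5 MB floor so a failed chunk wastes less.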
TUS is an open protocol for resumable file uploads over HTTP. A client creates an upload URL, PATCHes chunks with an Upload-Offset header, and queries the offset with HEAD to resume after a failure. TUS is being standardized through the IETF.
The upload API returns a final 200 or 204 once the last chunk is committed. For async processing, use webhook events like video.asset.created and video.asset.ready to drive the next step in your pipeline.
Use an SDK unless you have a specific reason not to. A resumable upload SDK handles chunk sizing, retry with backoff, offset queries, and progress events. Writing all of that yourself is weeks of work that every upstream protocol change will invalidate.
