A team adds chapters when users start asking the same question over and over:
“Where does this part start?”
To answer it, someone opens the video, scrubs the timeline, finds the right moment, and adds a timestamp with a label. It takes a few minutes per video.
When you’re uploading one or two videos a week, that workflow is fine. When you’re uploading dozens, it stops working. Chapters are added inconsistently. Labels vary by person. Updated videos keep old timestamps. Some videos ship without chapters at all. Over time, no one is sure which chapters are correct.
We see this across edtech platforms, OTT libraries, and training tools. Chapters are expected by users, but manual workflows don’t scale, and basic automation produces unreliable results.
This guide explains how to automate video chapter creation using AI in a way that stays consistent and usable as your video catalog grows.

Traditional chaptering is painful. Either someone on your team watches the full video and manually timestamps sections, or you hack together a brittle pipeline of speech-to-text, keyword detection, and cut heuristics. It works… until it doesn’t.
Manual tagging: slow, inconsistent across editors, and impossible to keep up to date as the catalog grows.
Rule-based automation: speech-to-text plus keyword and cut heuristics is brittle and produces unreliable results.
What AI-generated chapters offer instead is something devs can actually build on top of:
[
  {
    "start": 0,
    "end": 42,
    "title": "Getting started with setup"
  },
  {
    "start": 43,
    "end": 110,
    "title": "Authenticating the SDK"
  }
]
This means you can plug chapter data directly into your player UI, CMS, search index, and analytics.
Whether you’re processing a few demo videos or an entire OTT library, the pipeline works the same. There’s no editor bottleneck, no manual QA, no stale chapter metadata when content changes.
The output is structured, predictable, and easy to work with, just like any other piece of content in your system. Chapters become real data you can use across your app: jump links, player markers, search indexes, even analytics.
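As a sketch of what "chapters as data" looks like in practice, here's one way to turn the chapter array above into a WebVTT chapters track, a common format for player markers. The function name and output format choice are illustrative, not part of the FastPix API:

```python
import json

def chapters_to_webvtt(chapters):
    """Convert a list of {start, end, title} chapters (in seconds)
    into a WebVTT chapters track that most web players can load."""
    def ts(seconds):
        # Format seconds as HH:MM:SS.mmm for WebVTT cue timings
        h, rem = divmod(int(seconds), 3600)
        m, s = divmod(rem, 60)
        return f"{h:02d}:{m:02d}:{s:02d}.000"

    cues = ["WEBVTT", ""]
    for i, ch in enumerate(chapters, start=1):
        cues.append(str(i))
        cues.append(f"{ts(ch['start'])} --> {ts(ch['end'])}")
        cues.append(ch["title"])
        cues.append("")
    return "\n".join(cues)

chapters = json.loads("""[
  {"start": 0, "end": 42, "title": "Getting started with setup"},
  {"start": 43, "end": 110, "title": "Authenticating the SDK"}
]""")
print(chapters_to_webvtt(chapters))
```

The same chapter objects could just as easily feed a list of jump links or a search index row; the point is that once chapters are structured data, the rendering layer is your choice.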
And that’s where a platform like FastPix comes in.
Instead of piecing together tools or tagging videos by hand, you can generate chapters automatically using FastPix’s built-in AI features.
There are two ways to add AI-generated chapters using FastPix: when uploading new media, or retroactively for content that’s already live in your system. In both cases, it’s just one API call. No extra tooling, no separate processing steps.
Let’s walk through both.
Option 1: Add chapters while uploading new media
If you’re uploading a new video (either by direct upload or via URL), you can enable chapter generation at the time of upload by adding "chapters": true in your request.
Via pull from URL:
POST /on-demand
{
  "inputs": [
    {
      "type": "video",
      "url": "https://static.fastpix.io/sample.mp4"
    }
  ],
  "chapters": true,
  "accessPolicy": "public"
}
Via direct upload:
POST /upload
{
  "corsOrigin": "*",
  "pushMediaSettings": {
    "chapters": true,
    "accessPolicy": "public"
  }
}
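If you're calling the pull-from-URL endpoint from code, a minimal sketch looks like the following. The request body comes straight from the example above; the base URL and the bearer-token auth scheme are assumptions, so check them against your FastPix dashboard before using this:

```python
import json
import urllib.request

API_BASE = "https://api.fastpix.io/v1"  # assumed base URL; verify in your dashboard

def build_pull_request(video_url, chapters=True, access_policy="public"):
    """Build the POST /on-demand payload with AI chapter generation enabled."""
    return {
        "inputs": [{"type": "video", "url": video_url}],
        "chapters": chapters,
        "accessPolicy": access_policy,
    }

def post_json(path, payload, token="YOUR_API_TOKEN"):
    """Send the request; the Authorization header format is an assumption."""
    req = urllib.request.Request(
        f"{API_BASE}{path}",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)  # returns the HTTP response

payload = build_pull_request("https://static.fastpix.io/sample.mp4")
print(json.dumps(payload, indent=2))
# To actually submit it: post_json("/on-demand", payload)
```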
Once uploaded, FastPix will process the video and generate chapters automatically.
You can then access these chapters from the media object or listen for the video.mediaAI.chapters.ready event when generation is complete.
Option 2: Generate chapters for existing media
Already have media uploaded to FastPix? You can generate chapters retroactively using the PATCH /on-demand/{mediaId}/chapters endpoint.
Here’s what that looks like:
PATCH /on-demand/your-media-id/chapters
{
  "chapters": true
}
That’s it. FastPix will begin processing the existing file and attach chapter data once ready.
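In code, the retroactive call is a one-liner around the PATCH endpoint shown above. As before, the base URL and auth header are assumptions to verify against your own setup:

```python
import json
import urllib.request

API_BASE = "https://api.fastpix.io/v1"  # assumed base URL; verify in your dashboard

def enable_chapters(media_id, token="YOUR_API_TOKEN"):
    """Sketch: PATCH /on-demand/{mediaId}/chapters to generate chapters
    for media that is already live. Auth scheme is an assumption."""
    url = f"{API_BASE}/on-demand/{media_id}/chapters"
    req = urllib.request.Request(
        url,
        data=json.dumps({"chapters": True}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="PATCH",
    )
    return url, req  # call urllib.request.urlopen(req) to send it

url, req = enable_chapters("your-media-id")
print(url)
```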
Accessing chapter data
Once chaptering is complete, you’ll get a webhook event:
{
  "type": "video.mediaAI.chapters.ready",
  "data": {
    "isChaptersGenerated": true,
    "chapters": [
      {
        "chapterNumber": 1,
        "title": "Introduction to Blockchain",
        "startTime": "00:00:00",
        "endTime": "00:02:30",
        "description": "Introduction to transactional problems and blockchain."
      },
      ...
    ]
  }
}
Each chapter includes a chapter number, a title, start and end timestamps, and a short description.
This JSON is structured, clean, and ready to use.
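For example, a webhook handler can turn the event payload into seek-ready data for your player. This is a minimal sketch; the helper names are illustrative, and the timestamps are parsed from the HH:MM:SS format shown in the event above:

```python
import json

def hms_to_seconds(ts):
    """Convert an 'HH:MM:SS' timestamp from the webhook into seconds."""
    h, m, s = (int(part) for part in ts.split(":"))
    return h * 3600 + m * 60 + s

def extract_chapters(event):
    """Pull seek-ready chapter data out of a chapters.ready webhook event."""
    return [
        {"title": c["title"], "seekTo": hms_to_seconds(c["startTime"])}
        for c in event["data"]["chapters"]
    ]

event = json.loads("""{
  "type": "video.mediaAI.chapters.ready",
  "data": {
    "isChaptersGenerated": true,
    "chapters": [
      {"chapterNumber": 1, "title": "Introduction to Blockchain",
       "startTime": "00:00:00", "endTime": "00:02:30",
       "description": "Introduction to transactional problems and blockchain."}
    ]
  }
}""")
print(extract_chapters(event))
```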
Check out our docs and guides to learn more.
Things to keep in mind
It’s one thing to generate chapters. It’s another to make them part of your product workflow.
With FastPix, you can take chapter metadata and wire it directly into your stack, across your CMS, player, analytics, and localization systems.
Once chapters are part of your video metadata, they stop being a passive feature and become a foundation for richer experiences across your app. Teams wire them into jump links, player markers, search indexes, and analytics. As structured data, chapters are a building block for smarter, more navigable, and more personal video experiences.
For a solo creator, chapters are a nice UX touch. For a developer building a real product (an OTT app, a learning platform, a video CMS), chapters are what separate a basic player from a truly navigable experience.
They help users find what they came for. They reduce drop-offs. They turn long-form content into something you can move through, not just watch.
FastPix gives you a clean, API-first way to make that happen. No manual timestamping. No custom logic. Just an automated, scalable pipeline you can plug directly into your stack. Sign up now and try it with $25 in free credit.
Frequently asked questions
Can you create chapters for live streams?
Yes, many platforms now support chapter creation for live streams, though this is often done after the stream ends. Some platforms use AI to automatically generate chapters based on the content of the stream, but you may need to edit or finalize them once the stream is over.
What challenges come with creating video chapters?
When creating video chapters, you might face challenges like ensuring accurate timestamps, handling overlapping chapter sections, and managing content updates. AI-generated chapters may also need a manual review to confirm that they align with the video content and make sense to viewers.
How do AI-generated chapters improve accessibility?
AI-generated chapters enhance video accessibility by providing clear, easy-to-navigate segments. Viewers can jump directly to the parts they’re interested in, making it easier to consume content, especially for longer videos. This improves user experience, especially for educational or tutorial videos.
Can AI-generated chapters be customized?
Yes, AI-generated chapters can be customized. Depending on the platform or tool you use, you can adjust chapter titles, timing, and descriptions to better fit your content and viewer preferences. Customization ensures that chapters align with the video’s intent and make navigation more intuitive for viewers.
What types of videos can AI-generated chapters be added to?
AI-generated chapters can be added to most types of videos, including tutorials, webinars, and long-form content. However, they work best with well-structured videos, where the AI can easily detect logical segments. For more complex videos, you might need to manually adjust the chapters for accuracy.
Do AI-generated chapters work with subtitles or closed captions?
Yes, AI-generated chapters can work alongside subtitles or closed captions. The subtitles can provide additional context for each chapter, making the content even more accessible for viewers who rely on them. This combination helps in enhancing both the usability and the accessibility of your videos.
How accurate are AI-generated chapters?
AI-generated chapters are generally quite accurate, but they might require some adjustments. The AI uses algorithms to detect natural breaks in the content, but it’s still recommended to review the chapters, especially for more complex videos, to ensure they align with the intended structure and meaning.
