How to detect unsafe or abusive content using APIs

February 20, 2026

One UGC platform we spoke to said their biggest growth bottleneck wasn’t engagement.

It was unsafe content.

They were scaling fast. Uploads were increasing every week. But so were the edge cases: explicit videos, abusive speech, content unsafe for minors. Manual review worked when they had a few hundred uploads a day. It completely collapsed at a few thousand.

That’s when the real question surfaced: Why is content moderation important for user-generated platforms?

Because risk scales faster than growth. Unsafe or abusive content doesn’t just harm users. It creates legal exposure, app store violations, advertiser pullback, and brand damage. And once problematic content is live, even for a few minutes, screenshots and screen recordings make it permanent.

This is why automated moderation for user-generated content is not a “feature.” It’s infrastructure. If moderation isn’t built into your upload and media processing pipeline, you’re relying on reaction instead of prevention. And at scale, reaction is always too slow.

TL;DR

If your platform allows user uploads, automated content moderation is not optional; it’s infrastructure.

As UGC scales, manual review breaks. Unsafe or abusive content can trigger legal risk, app store penalties, advertiser pullback, and long-term brand damage. The cost of reacting after content goes live is always higher than preventing it upfront. Moderation must be built into your upload and media processing pipeline and treated not as a separate feature but as a core engineering responsibility.


What is considered unsafe or abusive content?

When developers ask “what is unsafe content?” they’re usually not asking for a legal definition. They’re asking what their system needs to detect.

In video and audio platforms, unsafe or abusive content typically includes:

  • Sexually explicit or NSFW visuals
  • Graphic violence or disturbing imagery
  • Hate speech or targeted harassment
  • Profanity or abusive language in speech
  • Self-harm or dangerous behavior
  • Content inappropriate for minors

But the important nuance is this: unsafe content is contextual.

A kids learning app, a private enterprise training portal, and a public creator marketplace will enforce different standards. What one platform flags immediately, another may allow with restrictions. That’s why modern content moderation APIs do not simply return a yes or no result.

Instead, they return confidence scores across categories.

For example:

  • NSFW: 0.82
  • Violence: 0.14
  • Profanity: 0.67

These scores allow you to define your own enforcement thresholds. You might auto-block above a certain confidence level, send borderline content to human review, or allow low-confidence results. The API provides structured signals. Your platform applies policy.

That separation is what makes automated moderation usable in real-world systems.
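
To make that concrete, here’s a minimal policy layer sketch in Python. The category names, score shape, and thresholds below are illustrative assumptions for this post, not any specific provider’s schema:

# Minimal policy sketch: map moderation confidence scores to actions.
# Category names, score shape, and thresholds are illustrative
# assumptions, not a specific provider's response schema.

BLOCK_THRESHOLD = 0.80   # auto-block at or above this confidence
REVIEW_THRESHOLD = 0.50  # route to human review at or above this confidence

def decide(scores: dict[str, float]) -> str:
    """Return 'block', 'review', or 'allow' for a set of category scores."""
    top = max(scores.values(), default=0.0)
    if top >= BLOCK_THRESHOLD:
        return "block"
    if top >= REVIEW_THRESHOLD:
        return "review"
    return "allow"

# The example scores above would be auto-blocked (NSFW at 0.82):
print(decide({"nsfw": 0.82, "violence": 0.14, "profanity": 0.67}))  # block

The point is that the thresholds live in your code, not in the API, so different surfaces (a kids section versus a general feed) can apply different policies to the same scores.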

Real-world example: Using FastPix’s moderation API

If you’re building a video or audio platform, the real question isn’t whether to moderate; it’s when in your workflow to trigger it. FastPix provides NSFW detection and profanity filtering for audio and video content. The moderation system can be integrated directly into your media lifecycle.

You have three implementation paths, depending on how your platform handles uploads:

Enable moderation at the time of media creation (URL upload)

When creating media using a remote URL, moderation can be enabled directly in the media creation request.

Base media creation API

curl --request POST \
  --url https://api.fastpix.io/v1/on-demand \
  --header 'accept: application/json' \
  --header 'authorization: Basic <YOUR_AUTH_TOKEN>' \
  --header 'content-type: application/json' \
  --data '
{
  "inputs": [
    {
      "type": "video",
      "url": "https://static.fastpix.io/fp-sample-video.mp4"
    }
  ],
  "accessPolicy": "public",
  "maxResolution": "1080p",
  "mediaQuality": "standard"
}
'

Adding moderation

To enable moderation, include the following parameter:

"moderation": { "type": "av" }

What does type mean?

  • audio (default) – Moderates only the audio track to detect profanity or abusive speech.
  • video – Moderates only video frames to detect NSFW or unsafe visuals.
  • av – Moderates both audio and video together for full coverage.

Final API call with moderation enabled

curl --request POST \
  --url https://api.fastpix.io/v1/on-demand \
  --header 'accept: application/json' \
  --header 'authorization: Basic <YOUR_AUTH_TOKEN>' \
  --header 'content-type: application/json' \
  --data '
{
  "inputs": [
    {
      "type": "video",
      "url": "https://static.fastpix.io/fp-sample-video.mp4"
    }
  ],
  "accessPolicy": "public",
  "moderation": {
    "type": "av"
  },
  "maxResolution": "1080p",
  "mediaQuality": "standard"
}
'
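
If you’re calling the API from a backend service rather than a shell, the same request might look like this in Python. This is a sketch using the requests library, mirroring the curl call above; replace <YOUR_AUTH_TOKEN> with your credentials:

import requests

# Same request as the curl example above, expressed with requests.
url = "https://api.fastpix.io/v1/on-demand"
headers = {
    "accept": "application/json",
    "authorization": "Basic <YOUR_AUTH_TOKEN>",  # replace with your token
    "content-type": "application/json",
}
payload = {
    "inputs": [
        {"type": "video", "url": "https://static.fastpix.io/fp-sample-video.mp4"}
    ],
    "accessPolicy": "public",
    "moderation": {"type": "av"},  # moderate both audio and video
    "maxResolution": "1080p",
    "mediaQuality": "standard",
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json())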

Enable moderation at the time of media creation (Direct video upload)


If you’re uploading media directly from a device, moderation can still be enabled during the upload flow.

This is a two‑step process:

Step 1: Create an upload session with moderation

curl --request POST \
  --url https://api.fastpix.io/v1/on-demand/upload \
  --header 'accept: application/json' \
  --header 'authorization: Basic <YOUR_AUTH_TOKEN>' \
  --header 'content-type: application/json' \
  --data '
{
  "corsOrigin": "*",
  "pushMediaSettings": {
    "accessPolicy": "public",
    "maxResolution": "1080p",
    "mediaQuality": "standard",
    "moderation": {
      "type": "av"
    }
  }
}
'

If successful, the response includes an uploadId and a signed upload URL (example values shown below):

{
  "success": true,
  "data": {
    "uploadId": "<UPLOAD_ID_PLACEHOLDER>",
    "status": "waiting",
    "url": "<SIGNED_UPLOAD_URL_PLACEHOLDER>",
    "timeout": 14400
  }
}

Step 2: Upload the file using the signed URL

curl --request PUT \
  --url "<SIGNED_UPLOAD_URL_PLACEHOLDER>" \
  --header "Content-Type: video/mp4" \
  --data-binary "@/path/to/local/video.mp4"
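
Putting the two steps together, a backend-side sketch in Python might look like this. The uploadId and url fields come from the example response above; the file path is a placeholder:

import requests

UPLOAD_API = "https://api.fastpix.io/v1/on-demand/upload"
headers = {
    "accept": "application/json",
    "authorization": "Basic <YOUR_AUTH_TOKEN>",  # replace with your token
    "content-type": "application/json",
}

# Step 1: create an upload session with AV moderation enabled.
session = requests.post(UPLOAD_API, headers=headers, json={
    "corsOrigin": "*",
    "pushMediaSettings": {
        "accessPolicy": "public",
        "maxResolution": "1080p",
        "mediaQuality": "standard",
        "moderation": {"type": "av"},
    },
})
session.raise_for_status()
data = session.json()["data"]

# Step 2: PUT the raw file bytes to the signed URL from the response.
with open("/path/to/local/video.mp4", "rb") as f:  # placeholder path
    upload = requests.put(
        data["url"],
        data=f,
        headers={"Content-Type": "video/mp4"},
    )
upload.raise_for_status()
print("Uploaded, uploadId:", data["uploadId"])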

Viewing the moderation status

Once uploaded, you can fetch the media details (including moderation status) using:

curl --request GET \
  --url https://api.fastpix.io/v1/on-demand/<UPLOAD_ID_PLACEHOLDER> \
  --header 'accept: application/json' \
  --header 'authorization: Basic <YOUR_AUTH_TOKEN>'
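
Since processing is asynchronous (note the "waiting" status in the upload-session response above), in practice you’ll poll this endpoint or use a webhook until processing completes. A rough polling sketch in Python; the "waiting" value comes from the example above, and any other status values or moderation fields are assumptions, so confirm the exact schema in the FastPix docs:

import time
import requests

headers = {
    "accept": "application/json",
    "authorization": "Basic <YOUR_AUTH_TOKEN>",  # replace with your token
}

def wait_for_processing(media_id: str, interval: int = 10,
                        attempts: int = 60) -> dict:
    """Poll media details until the status leaves 'waiting', then return them.

    'waiting' comes from the upload-session example above; other status
    values and the shape of the moderation fields are assumptions.
    """
    url = f"https://api.fastpix.io/v1/on-demand/{media_id}"
    for _ in range(attempts):
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        media = response.json()
        if media.get("data", {}).get("status") != "waiting":
            return media
        time.sleep(interval)
    raise TimeoutError("media did not finish processing in time")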

Enable moderation on existing media (PATCH request)

Moderation can also be applied after media has already been created.

Step 1: Trigger moderation

curl --request PATCH \
  --url https://api.fastpix.io/v1/on-demand/<MEDIA_ID_PLACEHOLDER>/moderation \
  --header 'accept: application/json' \
  --header 'authorization: Basic <YOUR_AUTH_TOKEN>' \
  --header 'content-type: application/json' \
  --data '
{
  "moderation": {
    "type": "audio"
  }
}
'

Step 2: Check moderation results

curl --request GET \
  --url https://api.fastpix.io/v1/on-demand/<MEDIA_ID_PLACEHOLDER> \
  --header 'accept: application/json' \
  --header 'authorization: Basic <YOUR_AUTH_TOKEN>'
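
If you’re retrofitting moderation across an existing library, the same PATCH-then-GET flow can be scripted. A sketch, assuming you already have the list of media IDs you want to re-check:

import requests

headers = {
    "accept": "application/json",
    "authorization": "Basic <YOUR_AUTH_TOKEN>",  # replace with your token
    "content-type": "application/json",
}

# Hypothetical list of existing media IDs to re-moderate.
media_ids = ["<MEDIA_ID_1>", "<MEDIA_ID_2>"]

for media_id in media_ids:
    # Trigger audio-only moderation on each existing media item.
    response = requests.patch(
        f"https://api.fastpix.io/v1/on-demand/{media_id}/moderation",
        headers=headers,
        json={"moderation": {"type": "audio"}},
    )
    response.raise_for_status()

# Moderation runs asynchronously; fetch results later with a GET on
# /v1/on-demand/<media_id>, as in Step 2 above.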

For more details, see our Docs and Guides.

Conclusion

Automated moderation isn’t a feature you add later; it’s infrastructure you build in early.

If your platform handles user-generated video or audio, you need detection built directly into your upload and processing pipeline. FastPix makes that integration straightforward, whether at upload time, during ingestion, or retroactively.

Want to explore it?

Join our Slack.
Sign up and try the API.
Or contact us to review your setup.

