UGC Video Processing for Brands: Automate at Scale

March 20, 2026 · RenderIO

Why UGC video processing breaks at scale

Your brand receives 200 UGC videos per month from creators. Each video is different: shot on different phones, at different resolutions, in different aspect ratios, with different audio levels.

Before these videos can go on your social accounts, each one needs:

  1. Resolution normalized to your standard

  2. Aspect ratio adjusted for the target platform

  3. Audio levels normalized

  4. Brand intro/outro added

  5. Watermark/logo overlay

  6. Compressed for upload

Your social media manager processes 5 per day manually in Premiere. That's 100 per month. The other 100 sit in a Google Drive folder, unused.

The content exists. The bottleneck is processing.

The processing requirements

Input variety

Real UGC comes in every format:

| Source | Resolution | Aspect Ratio | Audio |
| --- | --- | --- | --- |
| iPhone 15 Pro | 4K (3840x2160) | 9:16 | AAC, variable levels |
| Samsung Galaxy | 1080p or 4K | 9:16 or 16:9 | Various |
| Webcam | 720p-1080p | 16:9 | Often poor |
| Screen recording | Variable | 16:9 | System audio |
| DSLR/mirrorless | 4K | 16:9 | External mic or none |

Output requirements

Depending on the target platform, every processed video needs:

  • 1080x1920 (9:16) for TikTok/Reels/Shorts

  • 1920x1080 (16:9) for YouTube/LinkedIn

  • Audio at -14 LUFS

  • Brand intro (2 seconds)

  • Logo watermark (top-right)

  • H.264, CRF 20, faststart
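The spec above is worth pinning down as a single source of truth before writing any commands. A sketch (the field names are ours, not any RenderIO schema):

```javascript
// Output spec driving the pipeline commands below.
// Field names are illustrative, not a RenderIO schema.
const OUTPUT_SPEC = {
  vertical: { width: 1080, height: 1920 },   // TikTok/Reels/Shorts
  horizontal: { width: 1920, height: 1080 }, // YouTube/LinkedIn
  audio: { targetLufs: -14, truePeakDb: -2 },
  intro: { durationSec: 2 },
  watermark: { position: "top-right", opacity: 0.4 },
  video: { codec: "libx264", crf: 20, faststart: true },
};
```

Keeping these values in one object means the per-platform commands later in this post can be generated instead of hand-edited.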

FFmpeg processing pipeline

Step 1: Normalize resolution and aspect ratio

For 9:16 output:

ffmpeg -i ugc-input.mp4 \
  -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920[v]" \
  -map "[v]" -map 0:a \
  -c:v libx264 -crf 20 -c:a aac \
  normalized.mp4

This handles any input aspect ratio: the clip is scaled until it covers the full 1080x1920 frame, then the overflow is center-cropped away. A 16:9 landscape video loses its left and right edges; a phone-shot 9:16 portrait video passes through with little or no cropping. The output is always exactly 1080x1920. For more resizing options for different platforms, including letterboxing and smart crop, check the resize guide.
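The geometry behind that filter chain can be sketched in a few lines (illustrative only; real FFmpeg does this internally and rounds to codec-friendly even dimensions):

```javascript
// Sketch of scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920:
// scale so BOTH target dimensions are covered, then center-crop the excess.
function scaleAndCrop(inW, inH, outW = 1080, outH = 1920) {
  const factor = Math.max(outW / inW, outH / inH); // "increase" = cover, never letterbox
  const scaledW = Math.round(inW * factor);
  const scaledH = Math.round(inH * factor);
  return {
    scaled: { w: scaledW, h: scaledH },
    croppedAway: { w: scaledW - outW, h: scaledH - outH }, // removed symmetrically
  };
}

// A 1920x1080 landscape clip scales to 3413x1920, losing 2333px of width to the crop.
console.log(scaleAndCrop(1920, 1080));
```

Because the crop is aggressive for landscape input, this is also the reason the creator brief later in this post asks for portrait footage.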

Step 2: Normalize audio

ffmpeg -i normalized.mp4 \
  -af "loudnorm=I=-14:TP=-2:LRA=7" \
  -c:v copy -c:a aac -b:a 128k \
  audio-fixed.mp4

Step 3: Add brand intro

ffmpeg -i brand-intro.mp4 -i audio-fixed.mp4 \
  -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" \
  -c:v libx264 -crf 20 -c:a aac -b:a 128k \
  with-intro.mp4

Step 4: Add brand outro

ffmpeg -i with-intro.mp4 -i brand-outro.mp4 \
  -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" \
  -c:v libx264 -crf 20 -c:a aac -b:a 128k \
  with-outro.mp4

Step 5: Add logo watermark

For more positioning options, animated watermarks, and text-based alternatives, see the FFmpeg watermark guide.

ffmpeg -i with-outro.mp4 -i brand-logo.png \
  -filter_complex "[1:v]scale=80:-1,format=rgba,colorchannelmixer=aa=0.4[logo];[0:v][logo]overlay=W-w-20:20[v]" \
  -map "[v]" -map 0:a \
  -c:v libx264 -crf 20 -c:a copy \
  branded.mp4

Step 6: Optimize for upload

The earlier steps already encoded at CRF 20, so this final pass remuxes with the faststart flag rather than re-encoding: moving the moov atom to the front of the file lets playback start before the download finishes. The video compression guide covers CRF values, presets, and two-pass encoding if you need tighter file size control.

ffmpeg -i branded.mp4 \
  -movflags +faststart \
  -c copy \
  final.mp4

Combined pipeline command

All steps (minus the intro/outro concatenation) in one command. Collapsing the pipeline also avoids re-encoding the video at every intermediate step, which would compound quality loss:

ffmpeg -i ugc-input.mp4 -i brand-logo.png \
  -filter_complex "\
    [0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920[scaled];\
    [1:v]scale=80:-1,format=rgba,colorchannelmixer=aa=0.4[logo];\
    [scaled][logo]overlay=W-w-20:20[v]" \
  -map "[v]" -map 0:a \
  -af "loudnorm=I=-14:TP=-2:LRA=7" \
  -c:v libx264 -crf 20 -preset medium \
  -c:a aac -b:a 128k \
  -movflags +faststart \
  processed.mp4

Automate with RenderIO API

Process a single UGC video

curl -X POST https://renderio.dev/api/v1/run-ffmpeg-command \
  -H "X-API-KEY: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "ffmpeg_command": "-i {{in_video}} -i {{in_logo}} -filter_complex \"[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920[scaled];[1:v]scale=80:-1,format=rgba,colorchannelmixer=aa=0.4[logo];[scaled][logo]overlay=W-w-20:20[v]\" -map \"[v]\" -map 0:a -af \"loudnorm=I=-14:TP=-2:LRA=7\" -c:v libx264 -crf 20 -c:a aac -b:a 128k -movflags +faststart {{out_video}}",
    "input_files": {
      "in_video": "https://storage.example.com/ugc/creator-video-001.mp4",
      "in_logo": "https://storage.example.com/brand/logo.png"
    },
    "output_files": { "out_video": "ugc-001-processed.mp4" }
  }'

Batch process all incoming UGC

async function processUGCBatch(ugcVideos) {
  const BRAND_LOGO = "https://storage.example.com/brand/logo.png";

  const UGC_PIPELINE = `-i {{in_video}} -i {{in_logo}} -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920[scaled];[1:v]scale=80:-1,format=rgba,colorchannelmixer=aa=0.4[logo];[scaled][logo]overlay=W-w-20:20[v]" -map "[v]" -map 0:a -af "loudnorm=I=-14:TP=-2:LRA=7" -c:v libx264 -crf 20 -c:a aac -b:a 128k -movflags +faststart {{out_video}}`;

  const jobs = ugcVideos.map((video, i) =>
    fetch("https://renderio.dev/api/v1/run-ffmpeg-command", {
      method: "POST",
      headers: {
        "X-API-KEY": process.env.RENDERIO_API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        ffmpeg_command: UGC_PIPELINE,
        input_files: {
          in_video: video.url,
          in_logo: BRAND_LOGO,
        },
        output_files: { out_video: `ugc-${video.creatorId}-processed.mp4` },
      }),
    }).then(r => r.json())
  );

  return Promise.all(jobs);
}

// Process 50 UGC videos at once
const videos = await getUnprocessedUGCVideos();
const results = await processUGCBatch(videos);

50 videos process in parallel. Total time: roughly the same as processing one.
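One caveat with firing everything through a single Promise.all: 200 simultaneous requests can trip rate limits on either side. A small chunking helper caps concurrency (a sketch; processOne stands in for the per-video fetch call in processUGCBatch above):

```javascript
// Run jobs in chunks of `limit` instead of all at once.
// Each chunk runs in parallel; chunks run sequentially.
async function processInChunks(videos, processOne, limit = 10) {
  const results = [];
  for (let i = 0; i < videos.length; i += limit) {
    const chunk = videos.slice(i, i + limit);
    results.push(...(await Promise.all(chunk.map(processOne))));
  }
  return results;
}
```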

Multi-platform output

Each UGC video needs multiple platform versions:

async function processForAllPlatforms(videoUrl, creatorId) {
  const LOGO = "https://storage.example.com/brand/logo.png";

  const platforms = [
    {
      name: "tiktok",
      command: `-i {{in_video}} -i {{in_logo}} -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920[s];[1:v]scale=80:-1,format=rgba,colorchannelmixer=aa=0.4[l];[s][l]overlay=W-w-20:20[v]" -map "[v]" -map 0:a -af "loudnorm=I=-14" -c:v libx264 -crf 22 -c:a aac -movflags +faststart {{out_video}}`,
    },
    {
      name: "youtube",
      command: `-i {{in_video}} -i {{in_logo}} -filter_complex "[0:v]scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:black[s];[1:v]scale=100:-1,format=rgba,colorchannelmixer=aa=0.3[l];[s][l]overlay=W-w-20:20[v]" -map "[v]" -map 0:a -af "loudnorm=I=-14" -c:v libx264 -crf 20 -c:a aac -movflags +faststart {{out_video}}`,
    },
    {
      name: "instagram",
      command: `-i {{in_video}} -i {{in_logo}} -filter_complex "[0:v]scale=1080:1080:force_original_aspect_ratio=increase,crop=1080:1080[s];[1:v]scale=80:-1,format=rgba,colorchannelmixer=aa=0.4[l];[s][l]overlay=W-w-15:15[v]" -map "[v]" -map 0:a -af "loudnorm=I=-14" -c:v libx264 -crf 22 -c:a aac -movflags +faststart {{out_video}}`,
    },
  ];

  const jobs = platforms.map(p =>
    fetch("https://renderio.dev/api/v1/run-ffmpeg-command", {
      method: "POST",
      headers: {
        "X-API-KEY": process.env.RENDERIO_API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        ffmpeg_command: p.command,
        input_files: { in_video: videoUrl, in_logo: LOGO },
        output_files: { out_video: `${creatorId}-${p.name}.mp4` },
      }),
    }).then(r => r.json())
  );

  return Promise.all(jobs);
}

Automation workflow

Trigger on Google Drive upload

When creators upload UGC to a shared Google Drive folder:

  1. n8n watches the Google Drive folder for new files

  2. New file detected: Get the public URL

  3. HTTP Request: POST to RenderIO with the UGC pipeline

  4. Wait: Poll for completion

  5. Download: Get processed video URLs

  6. Upload: Move to "Processed" folder or upload to scheduling tool
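Step 4's polling loop might look like the sketch below. The GET endpoint and the { status, output_files } response shape are assumptions for illustration, not documented RenderIO API; fetchFn is injectable so the loop can be exercised without a network:

```javascript
// Polling sketch for the "Wait: Poll for completion" step.
// Endpoint URL and response shape are assumed, not documented API.
async function waitForCompletion(commandId, fetchFn = fetch, intervalMs = 5000, maxAttempts = 60) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetchFn(`https://renderio.dev/api/v1/commands/${commandId}`, {
      headers: { "X-API-KEY": process.env.RENDERIO_API_KEY },
    });
    const { status, output_files } = await res.json();
    if (status === "SUCCESS") return output_files;
    if (status === "FAILED") throw new Error(`Command ${commandId} failed`);
    await new Promise(r => setTimeout(r, intervalMs)); // still running, wait and retry
  }
  throw new Error(`Timed out waiting for command ${commandId}`);
}
```

If your volume grows, skip polling entirely and use the webhook approach below.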

Webhook-based processing

Configure RenderIO webhooks for completion notifications:

app.post("/webhook/ugc-processed", async (req, res) => {
  const { command_id, status, output_files } = req.body;

  if (status === "SUCCESS") {
    // Move to content library
    await addToContentLibrary({
      commandId: command_id,
      outputUrl: output_files["processed.mp4"],
      processedAt: new Date(),
    });

    // Notify social media manager
    await sendSlackNotification(
      `New UGC video processed: ${output_files["processed.mp4"]}`
    );
  }

  res.sendStatus(200);
});

Cost comparison

| Approach | 200 UGC videos/month | Cost |
| --- | --- | --- |
| Manual editing (Premiere) | 40 hours @ $50/hr | $2,000 |
| Freelance editor | 200 videos @ $10/each | $2,000 |
| RenderIO API (3 platforms each) | 600 API calls | $29 |

At 600 API calls per month, this fits on the Growth plan at $29/month. The cost difference is significant, but the bigger win is speed: 200 videos process in minutes rather than accumulating in a backlog folder for weeks.

Quality control for processed UGC

Automated processing handles the mechanical work, but you still need quality checks. Not every UGC video is worth posting, and automation can't judge brand fit.

Pre-processing filters

Before running a video through the pipeline, check the basics programmatically:

  • Duration: Skip videos under 5 seconds or over 3 minutes. Too short has no value. Too long needs manual trimming.

  • Resolution: Flag anything under 720p. Upscaling a 480p webcam video to 1080x1920 looks terrible.

  • File format: Most phones output MP4 or MOV. If you get an MKV or AVI, the pipeline still works but you might want to check why the creator is using unusual formats.

// Decode check via the API: ffmpeg's null muxer decodes the whole file
// without writing output, so corrupt or truncated uploads fail here
async function checkVideoQuality(videoUrl) {
  const res = await fetch("https://renderio.dev/api/v1/run-ffmpeg-command", {
    method: "POST",
    headers: {
      "X-API-KEY": process.env.RENDERIO_API_KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      ffmpeg_command: "-i {{in_video}} -f null -",
      input_files: { in_video: videoUrl },
      output_files: {},
    }),
  });
  const data = await res.json();
  return data;
}
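Once you have the metadata, the three filters above reduce to a gate function. A sketch (the metadata object shape is ours; in practice you would fill it from ffprobe output):

```javascript
// Gate function for the pre-processing filters: duration, resolution, format.
// `meta` shape is illustrative: { durationSec, width, height, format }.
function shouldProcess(meta) {
  if (meta.durationSec < 5) return { ok: false, reason: "too short (<5s)" };
  if (meta.durationSec > 180) return { ok: false, reason: "too long (>3min), needs manual trim" };
  if (Math.min(meta.width, meta.height) < 720) {
    return { ok: false, reason: "below 720p, upscaling would look bad" };
  }
  // Unusual containers still process fine; flag them for a human look.
  const unusual = !["mp4", "mov"].includes(meta.format.toLowerCase());
  return { ok: true, reason: unusual ? "unusual container, review source" : null };
}
```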

Post-processing review

Set up a simple review queue. After automated processing, videos go into a "Ready for Review" folder. Your social media manager scans thumbnails and approves or rejects. This takes 2-3 minutes for 50 videos versus the hours of manual editing they replaced.

Creator guidelines

The better the input, the better the output. Send creators a brief:

  • Film in portrait (9:16) to avoid aggressive cropping

  • Good lighting, natural or ring light

  • Keep it under 60 seconds for TikTok/Reels

  • Speak clearly (the audio normalization handles volume, not clarity)

  • No heavy filters or edited text overlays (they interfere with your brand overlay)

A one-page brief prevents 90% of quality issues before they reach your processing pipeline.

FAQ

What video formats can the UGC pipeline handle?

FFmpeg accepts virtually every video format: MP4, MOV, AVI, MKV, WebM, FLV, and more. The pipeline normalizes everything to H.264 MP4 output regardless of input format. The most common UGC formats from phones are MP4 (Android) and MOV (iPhone), both of which process without issues.

How does audio normalization work?

The loudnorm filter in FFmpeg measures the integrated loudness of the entire audio track and adjusts it to the target level (-14 LUFS in this pipeline, which matches Spotify and YouTube's standard). It also limits true peaks to -2 dBTP so audio doesn't clip on playback. This means a quiet webcam recording and a loud iPhone video both come out at the same volume.
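The pipeline uses single-pass loudnorm, which is usually fine for UGC. For tighter accuracy, loudnorm also supports a two-pass mode: a measurement pass with print_format=json prints a flat JSON block of measured values to stderr, which you feed back into the second pass as measured_* parameters. A small sketch for extracting that block from captured stderr:

```javascript
// Extract the flat JSON block that loudnorm's print_format=json writes
// to stderr at the end of a measurement pass. Returns null if absent.
function parseLoudnormStats(stderrText) {
  const start = stderrText.lastIndexOf("{");
  const end = stderrText.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  return JSON.parse(stderrText.slice(start, end + 1));
}
```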

Can I add brand intros and outros through the API?

Yes. The pipeline examples show the concatenation approach: encode the intro, the UGC content, and the outro as separate files, then join them with FFmpeg's concat filter. For the concat filter to work, the segments need matching resolution (1080x1920 for 9:16 here) and pixel format, and their audio should share the same sample rate and channel layout. Keep intros under 3 seconds for social content.

How do I handle UGC videos with copyrighted music?

The pipeline processes whatever audio is in the source video. If a creator used copyrighted music, that's a content moderation issue, not a processing issue. You can strip audio entirely with -an in the FFmpeg command and add your own licensed track, or use the loudnorm filter only and accept the original audio after manual review.

What happens if a creator's video is too low quality to use?

The pre-processing check catches resolution issues automatically. For subjective quality (bad lighting, shaky footage, poor framing), you'll need the manual review step. The automated pipeline saves time on the mechanical processing, but editorial judgment still requires a human.