Runway Gen-3 Video to TikTok: The Complete Post-Processing Pipeline

March 14, 2026 · RenderIO

Runway output isn't TikTok-ready

Runway Gen-3 Alpha generates impressive video. But the raw output has problems for TikTok:

  • 16:9 aspect ratio instead of TikTok's 9:16

  • AI metadata embedded in the file

  • Too-clean visuals with zero sensor noise

  • 720p or 1080p landscape instead of 1080x1920 portrait

  • Variable audio levels (or no audio at all)

Upload raw Runway output to TikTok and you'll get black bars, low distribution, and potentially an AI content flag. Not what you want after spending generation credits.

The fix is a post-processing pipeline. Five FFmpeg steps, one API call. If you're using Kling instead, the Kling post-processing guide covers its specific artifacts (texture flickering, color drift) with tuned filter values.

The problems with Runway Gen-3 output

Aspect ratio mismatch

Runway Gen-3 outputs at 1280x768 (5:3) or 1920x1080 (16:9). TikTok displays at 1080x1920 (9:16), the aspect ratio rotated 90 degrees.

You can't just rotate the video. You need to either crop to the center (losing sides) or add a blurred background fill.

Metadata exposure

Runway embeds generation metadata:

encoder: Lavf/runway
comment: gen-3-alpha, seed=12345, guidance=7.5

Some platforms read this. Even if TikTok doesn't actively penalize it today, there's no reason to leave it in.

Visual sterility

Runway video is mathematically perfect. No noise, no grain, no lens imperfections. Human eyes are tuned to detect this. On a platform where nearly all content comes from phone cameras, Runway output looks alien.

Step-by-step FFmpeg pipeline

1. Strip metadata

ffmpeg -i runway-output.mp4 -map_metadata -1 -c copy clean.mp4

2. Convert to 9:16 with blurred background

The best approach for landscape-to-portrait conversion uses a blurred, zoomed version of the video as the background:

ffmpeg -i clean.mp4 \
  -filter_complex "\
    [0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,boxblur=25[bg];\
    [0:v]scale=1080:1920:force_original_aspect_ratio=decrease[fg];\
    [bg][fg]overlay=(W-w)/2:(H-h)/2[out]" \
  -map "[out]" -map 0:a? \
  -c:v libx264 -crf 20 -c:a aac \
  portrait.mp4

This creates a full-bleed 1080x1920 video with the original content centered and a blurred version filling the background.

3. Add grain and color correction

ffmpeg -i portrait.mp4 \
  -vf "noise=alls=16:allf=t,eq=brightness=0.01:contrast=1.04:saturation=0.96" \
  -c:v libx264 -crf 20 -c:a copy \
  graded.mp4

noise=alls=16 matches indoor phone camera grain. The slight desaturation and contrast bump simulate a phone camera's processing.

4. Normalize audio

ffmpeg -i graded.mp4 \
  -af "loudnorm=I=-14:TP=-2:LRA=7" \
  -c:v copy -c:a aac -b:a 128k \
  normalized.mp4

5. Optimize for mobile playback

ffmpeg -i normalized.mp4 \
  -movflags +faststart \
  -c copy \
  final.mp4

faststart moves the moov atom to the beginning so mobile playback starts immediately.

Single-command version

All five steps combined:

ffmpeg -i runway-output.mp4 \
  -map_metadata -1 \
  -filter_complex "\
    [0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,boxblur=25[bg];\
    [0:v]scale=1080:1920:force_original_aspect_ratio=decrease,noise=alls=16:allf=t,eq=brightness=0.01:contrast=1.04:saturation=0.96[fg];\
    [bg][fg]overlay=(W-w)/2:(H-h)/2[out]" \
  -map "[out]" -map 0:a? \
  -af "loudnorm=I=-14:TP=-2:LRA=7" \
  -c:v libx264 -crf 22 -preset medium \
  -c:a aac -b:a 128k \
  -movflags +faststart \
  tiktok-ready.mp4

Batch process with RenderIO API

One video is fine locally. But if you're generating 20-50 Runway clips per day for a content operation, you need automation.

curl -X POST https://renderio.dev/api/v1/run-ffmpeg-command \
  -H "X-API-KEY: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "ffmpeg_command": "-i {{in_video}} -map_metadata -1 -filter_complex \"[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,boxblur=25[bg];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease,noise=alls=16:allf=t,eq=brightness=0.01:contrast=1.04:saturation=0.96[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2[out]\" -map \"[out]\" -map 0:a? -af \"loudnorm=I=-14:TP=-2:LRA=7\" -c:v libx264 -crf 22 -preset medium -c:a aac -b:a 128k -movflags +faststart {{out_video}}",
    "input_files": { "in_video": "https://storage.example.com/runway-gen3-output.mp4" },
    "output_files": { "out_video": "tiktok-ready.mp4" }
  }'

Processing 100 videos in parallel

const RUNWAY_TO_TIKTOK = `-i {{in_video}} -map_metadata -1 -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,boxblur=25[bg];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease,noise=alls=16:allf=t,eq=brightness=0.01:contrast=1.04:saturation=0.96[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2[out]" -map "[out]" -map 0:a? -af "loudnorm=I=-14:TP=-2:LRA=7" -c:v libx264 -crf 22 -c:a aac -b:a 128k -movflags +faststart {{out_video}}`;

async function processRunwayBatch(videoUrls) {
  const jobs = videoUrls.map((url, i) =>
    fetch("https://renderio.dev/api/v1/run-ffmpeg-command", {
      method: "POST",
      headers: {
        "X-API-KEY": process.env.RENDERIO_API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        ffmpeg_command: RUNWAY_TO_TIKTOK,
        input_files: { in_video: url },
        output_files: { out_video: `tiktok-${i}.mp4` },
      }),
    }).then(r => r.json())
  );

  return Promise.all(jobs);
}

// Process 100 Runway outputs
const urls = Array.from({ length: 100 }, (_, i) =>
  `https://storage.example.com/runway-batch/${i}.mp4`
);
const results = await processRunwayBatch(urls);
console.log(`Started ${results.length} processing jobs`);

Each job runs independently on Cloudflare's edge network. 100 videos finish in roughly the time it takes to process 1-2 locally.

Set up webhooks for completion

Instead of polling for job status, configure a webhook on your RenderIO account so you're notified when processing finishes. The job submission itself is unchanged:

curl -X POST https://renderio.dev/api/v1/run-ffmpeg-command \
  -H "X-API-KEY: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "ffmpeg_command": "-i {{in_video}} -map_metadata -1 -filter_complex \"[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,boxblur=25[bg];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease,noise=alls=16:allf=t[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2[out]\" -map \"[out]\" -c:v libx264 -crf 22 -movflags +faststart {{out_video}}",
    "input_files": { "in_video": "https://example.com/runway.mp4" },
    "output_files": { "out_video": "tiktok.mp4" }
  }'

RenderIO supports webhook configuration with automatic retries and a dead letter queue. No lost notifications.

Cost comparison

Approach               Volume           Monthly cost                        Time
Manual in Premiere     100 videos/day   $0 (your time)                      8+ hours
Local FFmpeg script    100 videos/day   $0 (your CPU)                       2-3 hours
RenderIO API           100 videos/day   $49/mo (Pro plan, 5,000 commands)   Minutes

With the Business plan at $99/month for 20,000 commands, the math is straightforward: at full utilization each command costs under $0.005, and processing 100 Runway videos per day (about 3,000 per month) works out to roughly $0.03 per video. For smaller volumes, the Growth plan at $29/mo covers 1,000 commands.
