Remove AI Artifacts from Video with FFmpeg

March 13, 2026 · RenderIO

AI video artifacts are predictable

Every AI video generator produces artifacts. They differ by model, but they follow predictable patterns: Runway Gen-3 tends toward edge wobble, Kling produces texture flickering, and Pika shows motion inconsistency between frames.

You don't need to re-generate. FFmpeg has filters that target each artifact type directly.

Common AI video artifacts

Temporal flickering

Random brightness or color changes between frames. Most visible in large flat areas like walls, skies, or skin. Caused by the diffusion process generating each frame with slightly different noise patterns.

Edge wobble

Object edges shift by 1-3 pixels between frames. Especially visible on straight lines, text, and face boundaries. The model isn't temporally consistent at pixel level.

Texture inconsistency

Surface textures change frame-to-frame. A brick wall might have slightly different mortar patterns. Fabric texture shifts. Hair detail varies. The model hallucinates texture independently per frame.

Motion smearing

Fast movement produces ghosting or smearing instead of clean motion blur. The model interpolates poorly between poses.

Resolution artifacts

Upscaled areas show repeating patterns or overly smooth patches. Common when the generator upscales from a lower internal resolution.

FFmpeg filters for each artifact

Fix temporal flickering with nlmeans

The Non-Local Means denoiser is the best general-purpose fix for flickering:

ffmpeg -i input.mp4 \
  -vf "nlmeans=s=6:p=3:r=9" \
  -c:v libx264 -crf 18 output.mp4

Parameters:

  • s=6: Denoising strength. Higher values smooth more. Range 3-10 for AI video.

  • p=3: Patch size. Keep at 3 for most cases.

  • r=9: Research window. Larger windows find better matches but cost more CPU.

For heavy flickering, increase strength:

ffmpeg -i input.mp4 \
  -vf "nlmeans=s=10:p=5:r=15" \
  -c:v libx264 -crf 18 output.mp4

Fix edge wobble with deshake

ffmpeg -i input.mp4 \
  -vf "deshake=rx=16:ry=16" \
  -c:v libx264 -crf 18 output.mp4

The deshake filter stabilizes frame-to-frame jitter. rx and ry set the maximum correction range in pixels. For AI edge wobble, 16 pixels is usually enough.

For more precise stabilization, use the two-pass vidstab approach. Pass 1 writes motion data to transforms.trf in the current directory; pass 2 reads that file and applies the correction:

# Pass 1: Analyze motion
ffmpeg -i input.mp4 -vf "vidstabdetect=shakiness=5:accuracy=15" -f null -

# Pass 2: Apply stabilization
ffmpeg -i input.mp4 \
  -vf "vidstabtransform=smoothing=10:crop=black:zoom=2" \
  -c:v libx264 -crf 18 output.mp4

Fix texture inconsistency with temporal averaging

ffmpeg -i input.mp4 \
  -vf "tmix=frames=3:weights='1 2 1'" \
  -c:v libx264 -crf 18 output.mp4

tmix blends adjacent frames with weighted averaging. The center frame gets double weight, so you keep sharpness while smoothing temporal inconsistencies. This specifically targets frame-to-frame texture changes.
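As a quick illustration of what those weights compute per pixel (a sketch of the averaging math only, not FFmpeg's internal implementation):

```javascript
// Per-pixel math behind tmix=frames=3:weights='1 2 1'.
// Each output pixel is the weighted average of the same pixel in the
// previous, current, and next frames, normalized by the weight sum.
function tmixPixel(prev, curr, next, weights = [1, 2, 1]) {
  const sum = weights[0] + weights[1] + weights[2];
  return (prev * weights[0] + curr * weights[1] + next * weights[2]) / sum;
}

// A texture pixel that flickers 100 -> 120 -> 100 across frames is
// pulled back toward its stable value, while the double center weight
// keeps the current frame dominant:
const blended = tmixPixel(100, 120, 100); // (100 + 240 + 100) / 4 = 110
```

Stable regions pass through unchanged, which is why tmix smooths flicker without washing out static detail.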

Fix motion smearing with sharpening

ffmpeg -i input.mp4 \
  -vf "unsharp=5:5:1.0:5:5:0.5" \
  -c:v libx264 -crf 18 output.mp4

The unsharp mask sharpens edges that motion smearing has softened. Parameters are luma_msize_x:luma_msize_y:luma_amount:chroma_msize_x:chroma_msize_y:chroma_amount.

Fix resolution artifacts with subtle blur then sharpen

ffmpeg -i input.mp4 \
  -vf "smartblur=lr=1.0:ls=-0.5:lt=-3.0,unsharp=3:3:0.8" \
  -c:v libx264 -crf 18 output.mp4

smartblur with a negative luma strength sharpens instead of blurring, and the negative threshold restricts the effect to flatter areas; unsharp then adds back controlled detail. Together they break up the repeating upscale patterns.

Combined artifact removal pipeline

Most AI videos have multiple artifact types. This command addresses all of them:

ffmpeg -i input.mp4 \
  -vf "nlmeans=s=6:p=3:r=9,tmix=frames=3:weights='1 2 1',deshake=rx=8:ry=8,unsharp=3:3:0.6" \
  -c:v libx264 -crf 18 -preset medium \
  -c:a copy \
  output.mp4

Order matters. Denoise first (nlmeans), then temporal smooth (tmix), then stabilize (deshake), then sharpen (unsharp). Each filter works best on the output of the previous one.
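If you build this chain programmatically (for example, before sending it to an API), keeping the stages as an ordered list makes the denoise → smooth → stabilize → sharpen sequence explicit. A minimal sketch:

```javascript
// Ordered pipeline stages: denoise, temporal smooth, stabilize, sharpen.
// Reordering this array changes the result, since each filter
// operates on the previous filter's output.
const stages = [
  "nlmeans=s=6:p=3:r=9",
  "tmix=frames=3:weights='1 2 1'",
  "deshake=rx=8:ry=8",
  "unsharp=3:3:0.6",
];

// FFmpeg takes the chain as a single comma-separated -vf argument.
const vf = stages.join(",");
```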

Process through RenderIO API

These filters are CPU-intensive. A 30-second video takes 2-5 minutes to process locally depending on resolution. Offload to the API:

curl -X POST https://renderio.dev/api/v1/run-ffmpeg-command \
  -H "X-API-KEY: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "ffmpeg_command": "-i {{in_video}} -vf \"nlmeans=s=6:p=3:r=9,tmix=frames=3:weights=1_2_1,deshake=rx=8:ry=8,unsharp=3:3:0.6\" -c:v libx264 -crf 18 -preset medium -c:a copy {{out_video}}",
    "input_files": { "in_video": "https://example.com/ai-video.mp4" },
    "output_files": { "out_video": "cleaned.mp4" }
  }'

Batch processing for multiple videos

const videos = [
  "https://storage.example.com/runway-clip-1.mp4",
  "https://storage.example.com/runway-clip-2.mp4",
  "https://storage.example.com/kling-clip-1.mp4",
];

const ARTIFACT_REMOVAL = `-i {{in_video}} -vf "nlmeans=s=6:p=3:r=9,tmix=frames=3:weights=1_2_1,deshake=rx=8:ry=8,unsharp=3:3:0.6" -c:v libx264 -crf 18 -preset medium -c:a copy {{out_video}}`;

const jobs = videos.map((url, i) =>
  fetch("https://renderio.dev/api/v1/run-ffmpeg-command", {
    method: "POST",
    headers: {
      "X-API-KEY": "your_api_key",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      ffmpeg_command: ARTIFACT_REMOVAL,
      input_files: { in_video: url },
      output_files: { out_video: `cleaned-${i}.mp4` },
    }),
  })
);

await Promise.all(jobs);

Tuning per generator

Different AI tools need different filter strengths:

| Generator | nlmeans strength | tmix frames | deshake range | Notes |
|---|---|---|---|---|
| Runway Gen-3 | 4-6 | 3 | 12-16 | Edge wobble is main issue |
| Kling | 6-8 | 5 | 8 | Texture flickering is primary |
| Pika | 5-7 | 3 | 8-12 | Motion inconsistency |
| Sora | 3-5 | 3 | 6 | Fewer artifacts overall |
| Stable Video | 8-10 | 5 | 12 | Most artifacts |

Start with these values and adjust based on your specific outputs. Higher nlmeans strength means smoother video but less detail. Find the balance for your content.
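The table can be turned into per-generator presets. A sketch using hypothetical preset values (midpoints of the table's ranges; the generator keys, the helper name, and the 5-frame tmix weights are all illustrative assumptions, not from the article):

```javascript
// Per-generator starting points, taken from the midpoints of the
// ranges in the table above. Adjust per clip.
const PRESETS = {
  "runway-gen3": { s: 5, frames: 3, shake: 14 },
  "kling": { s: 7, frames: 5, shake: 8 },
  "pika": { s: 6, frames: 3, shake: 10 },
  "sora": { s: 4, frames: 3, shake: 6 },
  "stable-video": { s: 9, frames: 5, shake: 12 },
};

// Build the -vf chain for one generator.
function filterChain(generator) {
  const { s, frames, shake } = PRESETS[generator];
  // Center-weighted blends; the 5-frame weights here are an
  // illustrative assumption, not a recommendation from the article.
  const weights = frames === 5 ? "'1 2 3 2 1'" : "'1 2 1'";
  return [
    `nlmeans=s=${s}:p=3:r=9`,
    `tmix=frames=${frames}:weights=${weights}`,
    `deshake=rx=${shake}:ry=${shake}`,
    "unsharp=3:3:0.6",
  ].join(",");
}
```

This pairs naturally with the batch-processing snippet above: swap the fixed ARTIFACT_REMOVAL string for `filterChain(...)` keyed by source generator.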

When to remove artifacts vs. re-generate

Remove artifacts when:

  • The composition and content are good

  • Artifacts are mild to moderate

  • You need the video quickly

  • Re-generation would produce different content

Re-generate when:

  • Major structural problems (missing limbs, impossible geometry)

  • Artifacts cover the main subject

  • The motion is fundamentally broken

For most social media content, artifact removal is faster and cheaper than re-generation. Process 100 videos through RenderIO for under $1 total. Re-generating 100 videos costs $10-50 in generation credits and takes hours.

Fix the artifacts. Ship the content. Explore the full FFmpeg cloud API or get your API key to start cleaning up AI video.