FFmpeg Resize Video: Scale, Crop & Change Resolution (CLI + API)

April 1, 2026 · RenderIO

Resize a video in one command

ffmpeg -i input.mp4 -vf "scale=1280:720" output.mp4

That's the simplest way to resize video with FFmpeg — it takes input.mp4 and outputs it at 1280x720 pixels. If your source is 1920x1080, the aspect ratio matches and everything looks fine. If your source is 4:3 or vertical, the output gets stretched. Usually not what you want.

This guide covers how to resize without distortion, how to crop and pad for different platforms, which scaling algorithm to pick, and how to batch-process hundreds of files. There's also an API approach at the end for when you don't want FFmpeg on your server at all.

New to FFmpeg? The command line tutorial covers installation and basic syntax. Already comfortable? The FFmpeg cheat sheet has 50 commands organized by task, including several resize recipes.

The scale filter, properly explained

The scale filter is where all FFmpeg resize operations start. The syntax is scale=width:height, but the interesting part is what happens when you don't specify both.

Fixed dimensions (force exact size):

ffmpeg -i input.mp4 -vf "scale=1920:1080" output.mp4

This forces the output to exactly 1920x1080 regardless of the input aspect ratio. A 4:3 source will look horizontally stretched. A 9:16 vertical video will look squished. Only use exact dimensions when you know the source ratio matches.

Auto-calculate one dimension:

ffmpeg -i input.mp4 -vf "scale=1280:-2" output.mp4

Setting width to 1280 and height to -2 tells FFmpeg to figure out the height itself while keeping the aspect ratio. The -2 also ensures the result is divisible by 2. That last part matters because H.264 and H.265 both require even dimensions. Use -2 instead of -1 to avoid cryptic encoder errors like:

[libx264 @ 0x...] height not divisible by 2 (1280x719)
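The -2 arithmetic is easy to model. Here's a rough Python sketch (my own helper, not FFmpeg's exact code) of how the auto-calculated height gets bumped to an even value:

```python
def auto_height(src_w: int, src_h: int, target_w: int) -> int:
    """Approximate what scale=TARGET_W:-2 computes for the output height."""
    h = round(src_h * target_w / src_w)  # keep the aspect ratio
    return h if h % 2 == 0 else h + 1    # bump odd results to even

print(auto_height(1920, 1080, 1280))  # 720
print(auto_height(1920, 1079, 1280))  # 720 (a 719 height would be rejected by libx264)
```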

You can flip it around:

ffmpeg -i input.mp4 -vf "scale=-2:720" output.mp4

This locks the height at 720 and auto-calculates the width. Useful when your priority is vertical resolution (720p, 1080p) regardless of aspect ratio.

Scale by percentage:

ffmpeg -i input.mp4 -vf "scale=iw*0.5:ih*0.5" output.mp4

iw and ih are FFmpeg variables for input width and input height. This shrinks the video to 50% of its original size. Handy for generating preview thumbnails or reducing a 4K file for quick review.

Cap the size without upscaling:

ffmpeg -i input.mp4 -vf "scale='min(1920,iw)':'min(1080,ih)':force_original_aspect_ratio=decrease" output.mp4

This is the one I use most. It scales down videos larger than 1920x1080 but leaves smaller videos alone. No upscaling, no distortion. The force_original_aspect_ratio=decrease parameter ensures the output fits within 1920x1080 while keeping the original proportions.
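As a mental model, here's what that fit-within logic computes, sketched in Python (the function name and even-rounding are my own approximation, not FFmpeg internals):

```python
def fit_within(w: int, h: int, max_w: int = 1920, max_h: int = 1080):
    """Shrink (never enlarge) to fit inside max_w x max_h, keeping aspect ratio."""
    scale = min(max_w / w, max_h / h, 1.0)  # the 1.0 cap forbids upscaling
    return round(w * scale) // 2 * 2, round(h * scale) // 2 * 2  # even dims

print(fit_within(3840, 2160))  # (1920, 1080) -- 4K scaled down
print(fit_within(1280, 720))   # (1280, 720)  -- already small, left alone
```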

Scale with pixel format conversion:

ffmpeg -i input.mp4 -vf "scale=1280:-2,format=yuv420p" output.mp4

Adding format=yuv420p after the scale filter ensures the output uses the most compatible pixel format. Some sources (screen recordings, ProRes footage) use yuv444p or yuv422p, which won't play in many browsers or on mobile devices. Chaining format=yuv420p after your scale avoids playback issues.

FFmpeg crop video: cut out what you don't need

Cropping removes pixels from the edges. The filter syntax is crop=width:height:x:y, where x and y are the top-left corner of the crop rectangle. When you omit x and y, FFmpeg centers the crop automatically.

Center crop to square (Instagram-style):

ffmpeg -i input.mp4 -vf "crop=ih:ih" output.mp4

This uses the input height as both crop dimensions, cutting equally from the left and right. Works for turning landscape footage into square posts. For sources that might be vertical, use crop='min(iw,ih)':'min(iw,ih)' instead, so the crop never exceeds the frame.

Crop a specific region:

ffmpeg -i input.mp4 -vf "crop=640:480:100:50" output.mp4

Grabs a 640x480 rectangle starting 100 pixels from the left and 50 from the top.

Crop then scale (the combo you actually need):

ffmpeg -i input.mp4 -vf "crop=ih*16/9:ih,scale=1920:1080" output.mp4

First crops to 16:9 aspect ratio (useful if you have a wider source), then scales to 1920x1080. Filters chain left to right, separated by commas. The crop happens on the original pixels, and the scale operates on the cropped result.

Remove black bars automatically:

ffmpeg -i input.mp4 -vf "cropdetect" -f null -

Run this first. FFmpeg analyzes the video and prints crop parameters to the terminal. You'll see lines like:

[Parsed_cropdetect_0 @ 0x...] x1:0 x2:1919 y1:140 y2:939 w:1920 h:800 ...
crop=1920:800:0:140

Copy those values into your actual crop command:

ffmpeg -i input.mp4 -vf "crop=1920:800:0:140" output.mp4

One thing to watch: cropdetect analyzes frame by frame, and the detected crop area can fluctuate if the black bars aren't perfectly uniform (common with older DVD rips). Run it for a few seconds and pick the most consistent values from the output.
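If you're scripting this, one approach is to parse the suggestions and keep the most frequent one. A Python sketch (the regex assumes the crop=W:H:X:Y lines shown above; the helper name is mine):

```python
import re
from collections import Counter

def most_common_crop(cropdetect_log: str) -> str:
    """Pick the crop=W:H:X:Y suggestion that cropdetect printed most often."""
    crops = re.findall(r"crop=\d+:\d+:\d+:\d+", cropdetect_log)
    return Counter(crops).most_common(1)[0][0]

log = "crop=1920:800:0:140\ncrop=1920:800:0:140\ncrop=1920:802:0:138\n"
print(most_common_crop(log))  # crop=1920:800:0:140
```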

Pad and letterbox: add space instead of removing it

Sometimes you need to fit a video into a frame without cropping anything. Padding adds black bars (or any color) around the video.

Fit a 4:3 video into a 16:9 frame:

ffmpeg -i input.mp4 -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:black" output.mp4

This scales the video to fit inside 1920x1080 (keeping aspect ratio), then pads the remainder with centered black bars: a 4:3 source gets bars on the sides, and a wider-than-16:9 source gets bars on the top and bottom. The expressions (ow-iw)/2 and (oh-ih)/2 center the video within the padded frame. Don't scale to the full 1920 width first: a 4:3 source scaled to 1920 wide is 1440 pixels tall, which overflows the 1080 frame and makes pad fail.
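The centering expressions reduce to simple integer math. A quick Python mirror (the helper name is mine):

```python
def pad_offsets(frame_w: int, frame_h: int, video_w: int, video_h: int):
    """What (ow-iw)/2 and (oh-ih)/2 evaluate to: the video's top-left position."""
    return (frame_w - video_w) // 2, (frame_h - video_h) // 2

print(pad_offsets(1920, 1080, 1440, 1080))  # (240, 0): bars on the sides
print(pad_offsets(1920, 1080, 1920, 800))   # (0, 140): bars top and bottom
```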

Pillarbox a vertical video into landscape:

ffmpeg -i vertical.mp4 -vf "scale=-2:1080,pad=1920:1080:(ow-iw)/2:0:black" output.mp4

Scales the height to 1080, then adds black bars on the left and right to fill a 1920x1080 frame. You see this on YouTube whenever someone uploads a phone recording.

You can change the bar color:

# White bars
ffmpeg -i input.mp4 -vf "scale=-2:1080,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:white" output.mp4

# Blurred background (more complex, but looks professional)
ffmpeg -i input.mp4 -filter_complex \
  "[0:v]scale=1920:1080,boxblur=20:20[bg];[0:v]scale=-2:1080[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2" \
  output.mp4

The blurred background approach scales the original video to fill the frame (blurry), then overlays the sharp version on top. TikTok compilations use this trick constantly.

Platform presets: copy-paste commands for social media

Every platform wants something different. Here are the commands I keep in a script file:

YouTube 1080p (16:9):

ffmpeg -i input.mp4 -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2" \
  -c:v libx264 -crf 18 -preset slow -c:a aac -b:a 192k -movflags +faststart output_yt.mp4

CRF 18 because YouTube re-encodes everything anyway. Give it the best source you can. The -movflags +faststart moves the moov atom to the beginning of the file, which means the video can start playing before the entire file downloads. The compression guide explains CRF tuning in more detail.

TikTok / Instagram Reels (9:16, 1080x1920):

ffmpeg -i input.mp4 -vf "scale=1080:1920:force_original_aspect_ratio=decrease,pad=1080:1920:(ow-iw)/2:(oh-ih)/2" \
  -c:v libx264 -crf 20 -c:a aac -b:a 128k output_tiktok.mp4

If your source is landscape, this letterboxes it vertically. For a better result with landscape source footage, crop to the center first:

ffmpeg -i landscape.mp4 -vf "crop=ih*9/16:ih,scale=1080:1920" \
  -c:v libx264 -crf 20 -c:a aac output_tiktok.mp4

TikTok's upload limit is 10 minutes and 4GB as of early 2026. They re-encode aggressively on their end, so there's no benefit in going below CRF 18 or uploading at 4K. 1080x1920 at CRF 20 is the sweet spot for quality-to-size ratio.

Instagram Square (1:1, 1080x1080):

ffmpeg -i input.mp4 -vf "crop=min(iw\,ih):min(iw\,ih),scale=1080:1080" \
  -c:v libx264 -crf 20 -c:a aac output_ig.mp4

Crops to a square from the center, then scales to 1080x1080.

Twitter/X (16:9, 1280x720):

ffmpeg -i input.mp4 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" \
  -c:v libx264 -crf 23 -c:a aac -b:a 128k output_twitter.mp4

Twitter compresses aggressively on their end. No point going above 720p.

LinkedIn (16:9, 1920x1080, max 10 min):

ffmpeg -i input.mp4 -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2" \
  -c:v libx264 -crf 22 -preset medium -c:a aac -b:a 128k -t 600 -movflags +faststart output_linkedin.mp4

LinkedIn caps video at 10 minutes, so -t 600 trims to that limit. If you need more precise trimming, combine with -ss for start-point control.
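All of these presets share the same fit-and-pad filter, so if you generate commands programmatically, a tiny template helps. A sketch (hypothetical helper, not a RenderIO or FFmpeg API):

```python
def fit_and_pad(w: int, h: int) -> str:
    """Build the scale+pad -vf string used by the platform presets above."""
    return (f"scale={w}:{h}:force_original_aspect_ratio=decrease,"
            f"pad={w}:{h}:(ow-iw)/2:(oh-ih)/2")

print(fit_and_pad(1080, 1920))  # the TikTok / Reels filter string
```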

Scaling algorithms: when default isn't good enough

FFmpeg defaults to bilinear scaling. It's fast and fine for downscaling. For upscaling or when quality matters, you have options.

# Lanczos (sharpest, best for downscaling)
ffmpeg -i input.mp4 -vf "scale=1920:1080:flags=lanczos" output.mp4

# Bicubic (good balance, slight sharpening)
ffmpeg -i input.mp4 -vf "scale=1920:1080:flags=bicubic" output.mp4

# Bilinear (default, fastest)
ffmpeg -i input.mp4 -vf "scale=1920:1080:flags=bilinear" output.mp4

# Spline (smooth, good for upscaling)
ffmpeg -i input.mp4 -vf "scale=1920:1080:flags=spline" output.mp4

Here's the practical difference: lanczos preserves more fine detail when shrinking video. Text, edges, and textures stay sharper. Bilinear tends to soften everything slightly. For most content the difference is subtle, but it's visible on text-heavy screencasts or footage with fine patterns. Spline produces the smoothest interpolation when scaling up, which means fewer visible artifacts on upscaled footage, though it won't add real detail.

Use lanczos when downscaling (1080p to 720p, 4K to 1080p). Use spline when upscaling (720p to 1080p). Bilinear is fine for quick previews where sharpness doesn't matter.

The speed difference is negligible. Lanczos adds maybe 2-3% to encode time compared to bilinear. Not worth worrying about unless you're processing thousands of files. If encoding speed is your bottleneck, GPU acceleration with NVENC will make a bigger difference than any filter flag.

Rotate and flip: fix orientation problems

Phone videos sometimes have wrong rotation metadata. Or you need to flip footage for a mirror effect.

# Rotate 90 degrees clockwise
ffmpeg -i input.mp4 -vf "transpose=1" output.mp4

# Rotate 90 degrees counter-clockwise
ffmpeg -i input.mp4 -vf "transpose=2" output.mp4

# Rotate 180 degrees
ffmpeg -i input.mp4 -vf "transpose=1,transpose=1" output.mp4

# Flip horizontally (mirror)
ffmpeg -i input.mp4 -vf "hflip" output.mp4

# Flip vertically
ffmpeg -i input.mp4 -vf "vflip" output.mp4

The transpose values: 0 = 90° counter-clockwise + vertical flip, 1 = 90° clockwise, 2 = 90° counter-clockwise, 3 = 90° clockwise + vertical flip. I never remember these either, which is why I keep the cheat sheet open.
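If remembering the numbers bothers you, encode the mapping once in whatever wrapper you use. A Python sketch (recent FFmpeg builds also accept named values like transpose=clock, which are easier to read):

```python
def rotate_filter(degrees_clockwise: int) -> str:
    """Translate a clockwise rotation into a transpose filter chain."""
    return {90: "transpose=1",
            180: "transpose=1,transpose=1",
            270: "transpose=2"}[degrees_clockwise]

print(rotate_filter(180))  # transpose=1,transpose=1
```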

Fix metadata rotation without re-encoding:

ffmpeg -i input.mp4 -c copy -metadata:s:v rotate=0 output.mp4

Some players read the rotation metadata and apply it automatically. This strips the rotation flag without touching the video data. Use this when the video plays sideways in some apps but looks correct in others. One caveat: newer FFmpeg versions store rotation as display-matrix side data rather than a rotate tag, so if the metadata option has no effect, try -display_rotation 0 placed before -i instead.

Combine rotation with resize:

ffmpeg -i portrait.mp4 -vf "transpose=1,scale=1920:1080" output.mp4

Rotates first, then scales. Order matters in the filter chain. If you're also adding a watermark, put the overlay filter last: transpose=1,scale=1920:1080,overlay=....

Batch resize: process an entire folder

Single file commands are fine until you have 200 files to resize. Here's a shell script that handles an entire directory:

#!/bin/bash
INPUT_DIR="./raw"
OUTPUT_DIR="./resized"
mkdir -p "$OUTPUT_DIR"

for f in "$INPUT_DIR"/*.mp4; do
  filename=$(basename "$f")
  ffmpeg -i "$f" -vf "scale=1280:-2:flags=lanczos" \
    -c:v libx264 -crf 23 -c:a aac -b:a 128k \
    "$OUTPUT_DIR/$filename" -y
done
echo "Done. Processed $(ls "$OUTPUT_DIR" | wc -l) files."

Multi-resolution batch (generate multiple sizes at once):

#!/bin/bash
for f in ./input/*.mp4; do
  name=$(basename "$f" .mp4)
  ffmpeg -i "$f" \
    -vf "scale=1920:-2:flags=lanczos" -c:v libx264 -crf 22 "./output/${name}_1080p.mp4" \
    -vf "scale=1280:-2:flags=lanczos" -c:v libx264 -crf 23 "./output/${name}_720p.mp4" \
    -vf "scale=854:-2:flags=lanczos" -c:v libx264 -crf 25 "./output/${name}_480p.mp4"
done

This decodes each input once and writes three encoded outputs (1080p, 720p, 480p) from it. Useful for creating multiple quality tiers for an adaptive video player.
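Building those per-rendition argument lists in a wrapper script is mostly string assembly. A sketch (the tiers mirror the script above; the helper name and output layout are my own):

```python
RENDITIONS = [(1920, 22), (1280, 23), (854, 25)]  # (width, crf) tiers

def rendition_args(name: str) -> list[str]:
    """ffmpeg output-side arguments for every rendition of one input."""
    args = []
    for width, crf in RENDITIONS:
        args += ["-vf", f"scale={width}:-2:flags=lanczos",
                 "-c:v", "libx264", "-crf", str(crf),
                 f"./output/{name}_{width}.mp4"]
    return args

print(rendition_args("clip")[-1])  # ./output/clip_854.mp4
```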

Parallel processing with xargs:

find ./raw -name "*.mp4" -print0 | xargs -0 -P 4 -I {} bash -c \
  'ffmpeg -i "$1" -vf "scale=1280:-2:flags=lanczos" -c:v libx264 -crf 23 "./resized/$(basename "$1")" -y' _ {}

The -P 4 flag runs 4 FFmpeg processes simultaneously. Adjust based on your CPU cores: on an 8-core machine, -P 4 is a safe bet that leaves headroom for the OS, and going higher risks thrashing. Passing each filename to bash as a positional argument ($1) rather than splicing it into the command string keeps paths with spaces or quotes from breaking, and -print0 with -0 handles unusual filenames safely.

For heavier workloads, the RenderIO API can process files in parallel across multiple machines without tying up yours. More on that below.

FFmpeg resize video via API

Running FFmpeg locally works until you're processing video in production. Then you deal with scaling workers, security patches, and the fact that a 4K resize can pin a CPU core for minutes. The hosted vs self-hosted tradeoffs break down the cost math at different volumes.

With RenderIO, you send the same FFmpeg command over HTTP:

curl -X POST https://renderio.dev/api/v1/run-ffmpeg-command \
  -H "Content-Type: application/json" \
  -H "X-API-KEY: ffsk_your_api_key" \
  -d '{
    "input_files": {
      "in_video": "https://example.com/source.mp4"
    },
    "output_files": {
      "out_video": "resized.mp4"
    },
    "ffmpeg_command": "ffmpeg -i {{in_video}} -vf \"scale=1280:-2:flags=lanczos\" -c:v libx264 -crf 23 -c:a aac {{out_video}}"
  }'
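If you're calling this from code rather than curl, the request body is plain JSON. A Python sketch of assembling it (field names copied from the curl example; the helper itself is mine):

```python
import json

def resize_job(source_url: str, vf: str = "scale=1280:-2:flags=lanczos") -> str:
    """JSON body for the run-ffmpeg-command endpoint shown above."""
    return json.dumps({
        "input_files": {"in_video": source_url},
        "output_files": {"out_video": "resized.mp4"},
        "ffmpeg_command": (
            'ffmpeg -i {{in_video}} -vf "' + vf + '" '
            '-c:v libx264 -crf 23 -c:a aac {{out_video}}'
        ),
    })

print(json.loads(resize_job("https://example.com/source.mp4"))["output_files"])
```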

The API returns a command_id. Poll GET /api/v1/commands/:commandId until the status is SUCCESS, then grab the output URL from the response:

# Check status
curl https://renderio.dev/api/v1/commands/cmd_abc123 \
  -H "X-API-KEY: ffsk_your_api_key"

# Response when complete:
# {
#   "command_id": "cmd_abc123",
#   "status": "SUCCESS",
#   "output_files": {
#     "out_video": {
#       "storage_url": "https://storage.renderio.dev/files/...",
#       "filename": "resized.mp4",
#       "size_mbytes": 1.24
#     }
#   }
# }

The FFmpeg syntax is identical to what you'd run locally. No new API to learn. The same crop, pad, and rotate commands from this guide all work, just wrapped in JSON. Get an API key and try it with one of the commands above. For language-specific examples, see the Python and Node.js API guides.

If you're building automation workflows, the n8n resize video guide and Zapier resize video for TikTok guide walk through connecting RenderIO to those platforms step by step.

Common pitfalls (and how to avoid them)

Odd dimensions crash the encoder. H.264 and H.265 require width and height divisible by 2. Some codecs need divisibility by 4 or even 16. Always use -2 instead of -1 in scale expressions, or add a safety-net filter:

-vf "scale=1280:-2,pad=ceil(iw/2)*2:ceil(ih/2)*2"

Upscaling doesn't add detail. Scaling a 480p video to 1080p doesn't improve quality. You're just interpolating between existing pixels. FFmpeg won't warn you about this. If you must upscale, use flags=spline and accept that the output will look softer than native 1080p. Check the source resolution first with ffprobe.

Aspect ratio distortion is silent. FFmpeg won't warn you when you stretch a 4:3 video into 16:9. It does exactly what you ask. Always use -2 for auto-calculation or force_original_aspect_ratio=decrease with pad to avoid squished output.

Filter order matters. In -vf "crop=640:480,scale=1280:720", the crop happens first on the original resolution, then the scale operates on the cropped 640x480 result. Swap them and you get a completely different output. Think of the filter chain as a pipeline, left to right.

Filters always mean re-encoding. Any filter operation triggers a decode/re-encode cycle, so you cannot combine -c copy with scale, crop, or pad. If you're also trimming or compressing, chain everything into one command to avoid encoding the video twice.

CRF matters more after resize. Resizing changes the bitrate requirements. A video resized from 4K to 720p needs far fewer bits per frame. If you copy your CRF value from a 4K workflow, you might be overspending on file size. CRF 23 is a safe default for 1080p. Push to 25-28 for 720p web delivery. The compression guide goes deeper on tuning CRF per resolution.

Container format compatibility. After resizing, make sure your output container supports the codec you're using. MP4 with H.264 works everywhere. MKV is more flexible but won't play natively in browsers. The formats reference has a compatibility table for every major codec-container combination.

FAQ

How do I resize a video without losing quality?

You can't avoid some quality loss when resizing, because any scale or crop filter forces FFmpeg to decode and re-encode the video. The goal is to minimize it. Use flags=lanczos for the sharpest scaling, set CRF to 18 or lower for near-lossless quality, and use -preset slow to give the encoder more time to optimize. If you're only changing the container or metadata (not the pixel dimensions), you can use -c copy to avoid re-encoding entirely.

What's the difference between scale and crop in FFmpeg?

scale changes the dimensions of the entire frame. Every pixel gets resized. crop cuts a rectangle out of the frame and discards the rest. Use scale when you want to keep the full picture at a different size. Use crop when you want to remove unwanted edges or change the aspect ratio by trimming content.

Why does FFmpeg give me "width/height not divisible by 2"?

H.264 and H.265 encode video in macroblocks that require even pixel dimensions. If your scale expression produces an odd number (like 1281x719), the encoder throws this error. Fix it by using -2 instead of -1 in your scale filter: scale=1280:-2 instead of scale=1280:-1. With -2, FFmpeg adjusts the auto-calculated dimension to the nearest even number automatically.

Can I resize video with FFmpeg without re-encoding?

No. Resizing requires decoding each frame, applying the scale filter, and re-encoding the result. There's no way around this because the pixel data itself changes. The only FFmpeg operations that avoid re-encoding are container changes, stream copying, and metadata edits. If encoding speed is a concern, consider hardware-accelerated encoding or offloading to an FFmpeg API.

How do I resize a batch of videos to the same resolution?

Use a bash for loop: for f in *.mp4; do ffmpeg -i "$f" -vf "scale=1280:-2" "resized_$f"; done. For parallel processing, pipe through xargs -P 4. For large-scale batch jobs (hundreds or thousands of files), the RenderIO API handles concurrency and storage so you don't have to manage worker processes.

What resolution should I use for TikTok, Instagram, or YouTube?

TikTok and Instagram Reels: 1080x1920 (9:16 vertical). Instagram feed square: 1080x1080. YouTube: 1920x1080 (16:9). Twitter/X: 1280x720. The platform presets section above has ready-to-copy commands for each one, including proper aspect ratio handling and codec settings.

Quick reference

Here's everything from this guide condensed:

# Resize to exact dimensions
ffmpeg -i in.mp4 -vf "scale=1280:720" out.mp4

# Resize with auto-height (keep aspect ratio)
ffmpeg -i in.mp4 -vf "scale=1280:-2" out.mp4

# Downscale only (no upscaling)
ffmpeg -i in.mp4 -vf "scale='min(1920,iw)':'min(1080,ih)':force_original_aspect_ratio=decrease" out.mp4

# Center crop to square
ffmpeg -i in.mp4 -vf "crop=ih:ih" out.mp4

# Letterbox into 16:9
ffmpeg -i in.mp4 -vf "scale=-2:1080,pad=1920:1080:(ow-iw)/2:(oh-ih)/2" out.mp4

# TikTok vertical from landscape
ffmpeg -i in.mp4 -vf "crop=ih*9/16:ih,scale=1080:1920" out.mp4

# Rotate 90° clockwise
ffmpeg -i in.mp4 -vf "transpose=1" out.mp4

# Batch resize folder
for f in *.mp4; do ffmpeg -i "$f" -vf "scale=1280:-2:flags=lanczos" -c:v libx264 -crf 23 "resized_$f"; done

Want the full list of common FFmpeg operations? The cheat sheet has 50 commands. Need to change codecs while resizing? The transcoding guide covers H.264, H.265, AV1, and VP9 conversions. Working with container formats? The formats reference explains which codecs fit inside which containers.