FFmpeg Options and Flags: A Practical Reference

April 2, 2026 · RenderIO

Why another FFmpeg options reference?

The output of ffmpeg -h full is over 7,000 lines. A GitHub gist of the same dump has 469 stars, which tells you two things: developers need this reference, and nobody wants to read raw help output.

This guide organizes the FFmpeg options and flags you'll actually use into categories, with syntax, real examples, and notes on when each flag matters. It covers the options that show up in 95% of real commands. Not exhaustive on purpose. That's what the official docs are for.

If you're new to FFmpeg entirely, start with the command line tutorial for installation and basic syntax. If you want full command recipes instead of individual flags, the cheat sheet has 50 copy-paste commands organized by task.

How FFmpeg options work

FFmpeg options follow a strict ordering that trips people up constantly.

ffmpeg [global_options] {[input_options] -i input} ... {[output_options] output} ...

Three rules:

  1. Global options go first, before any -i.

  2. Input options go directly before the -i they apply to.

  3. Output options go after all inputs but before the output filename.

Get the order wrong and FFmpeg either ignores your flag or applies it to the wrong file. The -ss flag is the classic example: put it before -i and it seeks the input (fast, approximate). Put it after -i and it trims the output (slow, frame-accurate). The trim guide covers this in detail.

You can verify what FFmpeg options and codecs your build supports:

# List all supported options
ffmpeg -h full

# List just encoders
ffmpeg -encoders

# List format-specific options
ffmpeg -h encoder=libx264

That last one is useful when you can't remember the exact name of a codec option. It dumps every flag the encoder accepts with descriptions.

Global options

These apply to the entire FFmpeg process, not to any specific input or output.

-y — overwrite without asking

ffmpeg -y -i input.mp4 output.mp4

Without -y, FFmpeg prompts for confirmation if the output file exists. Add this to any script or automation pipeline. Forgetting it is a common reason cron jobs and CI pipelines hang forever: FFmpeg sits waiting for a confirmation that never comes.

-n — never overwrite

ffmpeg -n -i input.mp4 output.mp4

Opposite of -y. If the output exists, FFmpeg exits immediately with code 1. Useful for batch jobs where you want to skip already-processed files without wasting cycles.

-v / -loglevel — control output verbosity

ffmpeg -v error -i input.mp4 output.mp4
ffmpeg -loglevel quiet -i input.mp4 output.mp4

Levels, from most to least verbose: debug, verbose, info (default), warning, error, fatal, quiet. Use -v error in scripts to suppress progress output and only surface problems.

You can combine a level with a +repeat suffix (print repeated messages instead of collapsing them into "Last message repeated N times") or +level (tag each line with its log level):

# Show warnings and tag each line with the log level
ffmpeg -v warning+level -i input.mp4 output.mp4

-hide_banner — suppress build info

ffmpeg -hide_banner -i input.mp4 output.mp4

Skips the FFmpeg version, build config, and library versions that print on every run. Doesn't affect actual processing. Most developers alias ffmpeg to ffmpeg -hide_banner in their shell config:

# Add to ~/.bashrc or ~/.zshrc
alias ffmpeg='ffmpeg -hide_banner'

-stats / -nostats — progress reporting

ffmpeg -nostats -v error -i input.mp4 output.mp4

-stats is on by default (that's the frame= 1234 fps=60 ... line you see). Turn it off with -nostats when you're piping FFmpeg output or logging to a file. The stats line includes speed=2.5x, which tells you encoding is running at 2.5x realtime. Handy for estimating how long a job will take.
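Since the stats line reports speed as a multiple of realtime, remaining encode time is just the unencoded media duration divided by that multiple. A minimal sketch (the helper name is ours, not anything FFmpeg provides):

```python
def eta_seconds(total_duration, out_time, speed):
    """Estimate wall-clock seconds left in an encode.

    total_duration: length of the source in seconds
    out_time: seconds of output already encoded
    speed: the multiplier from FFmpeg's stats line (2.5 for speed=2.5x)
    """
    if speed <= 0:
        raise ValueError("speed must be positive")
    remaining_media = max(total_duration - out_time, 0.0)
    return remaining_media / speed

# A 10-minute source, 4 minutes already encoded, running at 2.5x realtime:
# (600 - 240) / 2.5 = 144 seconds of encoding left.
print(eta_seconds(600, 240, 2.5))  # 144.0
```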

-progress — machine-readable progress

ffmpeg -progress pipe:1 -i input.mp4 output.mp4

Outputs key=value pairs you can parse in scripts. Each update includes frame, fps, total_size, out_time, speed, and progress (which reads continue during encoding and end when finished). More reliable than parsing the stats line with regex.
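The key=value stream is trivial to parse: each update is a run of lines terminated by a progress= line. A sketch of a parser (function name is ours):

```python
def parse_progress(stream):
    """Parse FFmpeg -progress output into a list of update dicts.

    Each update is a block of key=value lines ending with a
    'progress=continue' or 'progress=end' line.
    """
    updates, current = [], {}
    for line in stream.splitlines():
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        current[key.strip()] = value.strip()
        if key.strip() == "progress":  # terminator: one update complete
            updates.append(current)
            current = {}
    return updates

sample = (
    "frame=100\nfps=50.0\nspeed=2.5x\nprogress=continue\n"
    "frame=200\nspeed=2.6x\nprogress=end\n"
)
for update in parse_progress(sample):
    print(update["progress"], update.get("frame"))
```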

Input options

These go directly before the -i flag and affect how FFmpeg reads the input.

-i — input file

ffmpeg -i video.mp4 -i audio.wav -i watermark.png output.mp4

You can specify multiple inputs. Each gets an index starting from 0. Use -map (covered below) to pick which streams from which inputs end up in the output. URLs work too:

ffmpeg -i https://example.com/video.mp4 -c copy local.mp4

-f — force input format

ffmpeg -f rawvideo -pixel_format rgb24 -video_size 1920x1080 -i raw.rgb output.mp4

FFmpeg usually detects the format automatically. Force it when dealing with raw streams, pipes, or when auto-detection gets it wrong. The formats guide covers all supported containers and when you'd need to specify one explicitly.

-ss (before -i) — seek to start time

ffmpeg -ss 01:30:00 -i movie.mkv -t 60 -c copy clip.mp4

When placed before -i, this seeks the input at the demuxer level. It's fast because FFmpeg jumps to the nearest keyframe instead of decoding every frame from the start. The tradeoff: your start time might be off by a fraction of a second (up to one GOP length, typically 2-10 seconds for most content).

Put -ss after -i for frame-accurate seeking at the cost of speed. The trim video guide has benchmarks comparing both approaches.

Accepted time formats:

-ss 90          # 90 seconds
-ss 01:30       # 1 minute 30 seconds
-ss 01:30:00    # 1 hour 30 minutes
-ss 01:30:00.500  # with milliseconds
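The formats above all reduce to seconds by treating each colon-separated field as a base-60 digit. A small converter, useful when a script needs to do arithmetic on user-supplied timestamps (the helper name is ours):

```python
def to_seconds(ts):
    """Convert FFmpeg time syntax (SS, MM:SS, HH:MM:SS[.mmm]) to seconds."""
    parts = [float(p) for p in str(ts).split(":")]
    if not 1 <= len(parts) <= 3:
        raise ValueError(f"bad timestamp: {ts}")
    seconds = 0.0
    for part in parts:
        # Each colon shifts the accumulated value up by one base-60 place.
        seconds = seconds * 60 + part
    return seconds

print(to_seconds("90"))            # 90.0
print(to_seconds("01:30"))         # 90.0
print(to_seconds("01:30:00.500"))  # 5400.5
```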

-t — limit duration

ffmpeg -i input.mp4 -t 30 output.mp4

Stops reading/writing after 30 seconds. Works as both input and output option. Accepts seconds (30) or timestamp format (00:00:30).

-to — stop at timestamp

ffmpeg -i input.mp4 -ss 00:01:00 -to 00:02:00 -c copy output.mp4

Unlike -t (duration), -to specifies an absolute time to stop. Watch out: when -ss is before -i, -to becomes relative to the seek point, which makes it behave like -t. This is the single most confusing quirk in FFmpeg.

-stream_loop — loop input

ffmpeg -stream_loop 3 -i intro.mp4 -c copy looped.mp4

Loops the input. The count is the number of extra passes, so -stream_loop 3 plays intro.mp4 four times. -stream_loop -1 loops forever (useful for streaming). Only works with inputs that support seeking.

-re — read input at native frame rate

ffmpeg -re -i input.mp4 -f flv rtmp://streaming-server/live/key

Without -re, FFmpeg reads the input as fast as possible. With it, FFmpeg reads at the file's native speed (1x realtime). You need this for live streaming. Without it, FFmpeg would dump the entire file to the streaming server in seconds.

Output options: video

-c:v / -vcodec — video codec

ffmpeg -i input.mov -c:v libx264 -c:a aac output.mp4

Selects the video encoder. These are the codecs you'll run into most often:

Codec    Flag         Use case
H.264    libx264      Maximum compatibility
H.265    libx265      50% smaller at same quality
AV1      libsvtav1    Best compression, slow encode
VP9      libvpx-vp9   WebM containers
Copy     copy         No re-encoding (instant)

Use copy when you just need to change the container or trim on keyframes. It's the difference between a 2-second remux and a 15-minute encode. The formats guide explains when container changes alone are enough, and the transcoding guide covers choosing between these codecs.

-b:v — video bitrate

ffmpeg -i input.mp4 -c:v libx264 -b:v 2M output.mp4

Sets a target bitrate. 2M = 2 megabits/sec. Suffixes: K (kilobits), M (megabits). On its own this gives average-bitrate (ABR) encoding; pair it with -maxrate and -bufsize for capped or constant bitrate. For most use cases, CRF (below) produces better results because it adapts to scene complexity.

-crf — constant rate factor

ffmpeg -i input.mp4 -c:v libx264 -crf 23 output.mp4

CRF is the simplest way to control quality. Lower numbers mean higher quality and bigger files. The scale depends on the codec:

  • H.264: 0-51, default 23. Visually lossless around 18.

  • H.265: 0-51, default 28. Visually lossless around 22.

  • AV1 (SVT-AV1): 0-63, default 35.

CRF 23 for H.264 is the sweet spot for most content. You rarely need to go lower unless you're archiving source footage. The compression guide has side-by-side file size comparisons across CRF values and explains when to use CRF vs two-pass bitrate targeting.

CRF values aren't comparable across codecs. CRF 23 in libx264 and CRF 28 in libx265 produce roughly similar visual quality. People set CRF 23 in libx265 expecting the same result as libx264 and end up with files much larger than they expected.

-preset — encoding speed vs compression

ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset slow output.mp4

Controls how much time the encoder spends optimizing compression. Options from fastest to slowest: ultrafast, superfast, veryfast, faster, fast, medium (default), slow, slower, veryslow.

Slower presets produce smaller files at the same CRF. The difference between medium and slow is typically 5-10% smaller output. Between medium and ultrafast, it can be 2-3x. For batch processing, medium is the right tradeoff. For a single important encode, slow is worth the wait. Don't bother with placebo. The name is accurate.

H.265 uses the same preset names. NVENC uses a different scale (p1 through p7). The GPU acceleration guide covers NVENC presets and benchmarks.

-tune — optimize for content type

ffmpeg -i input.mp4 -c:v libx264 -crf 23 -tune film output.mp4

Adjusts encoder settings for specific content types:

Tune          Best for
film          Live-action footage with grain
animation     Cartoons, anime, flat areas
grain         Preserve film grain explicitly
stillimage    Slideshow-style content
fastdecode    Low-power playback devices
zerolatency   Live streaming, real-time

zerolatency disables B-frames and reduces lookahead, which cuts latency but hurts compression. Only use it when latency actually matters (live streaming, video conferencing).

-profile:v and -level — compatibility constraints

ffmpeg -i input.mp4 -c:v libx264 -profile:v high -level 4.1 output.mp4

Profiles restrict which encoder features are used. baseline (no B-frames, no CABAC) for old devices. main for most streaming. high (default) for maximum compression. Level sets resolution and bitrate caps; 4.1, for example, supports up to 1080p@30fps or 720p@60fps.

You rarely need to set these unless targeting specific devices (smart TVs, old phones, hardware decoders with limited feature sets).

-pix_fmt — pixel format

ffmpeg -i input.mp4 -c:v libx264 -pix_fmt yuv420p output.mp4

Sets the pixel format (chroma subsampling and bit depth). yuv420p is what browsers and most players expect. ProRes and some camera formats use yuv422p or yuv444p, which look better but aren't widely supported for playback. If your video shows a black or green screen in some players, an unsupported pixel format is usually why.

-r — frame rate

ffmpeg -i input.mp4 -r 30 output.mp4

Sets the output frame rate. FFmpeg duplicates or drops frames to match. As an input option before -i, it forces the input frame rate (only useful for raw formats or image sequences).

-s — resolution

ffmpeg -i input.mp4 -s 1280x720 output.mp4

Shorthand for resizing. Equivalent to -vf scale=1280:720. If you need to preserve aspect ratio, use the scale filter directly:

ffmpeg -i input.mp4 -vf "scale=1280:-1" output.mp4

The -1 tells FFmpeg to calculate the height automatically. Use -2 instead of -1 to ensure the result is divisible by 2 (required by H.264/H.265).
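The arithmetic behind scale=1280:-2 is simple enough to do yourself when a script needs to know the output dimensions in advance: compute the aspect-preserving height, then round to the nearest even number. A sketch (this approximates what the scale filter computes; the function name is ours):

```python
def even_scaled_height(src_w, src_h, dst_w):
    """Aspect-preserving height at dst_w, rounded to the nearest even
    number -- roughly what scale=<dst_w>:-2 produces."""
    h = src_h * dst_w / src_w
    return round(h / 2) * 2

print(even_scaled_height(1920, 1080, 1280))  # 720
print(even_scaled_height(1920, 817, 1280))   # 544
```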

-aspect — display aspect ratio

ffmpeg -i input.mp4 -aspect 16:9 output.mp4

Sets the display aspect ratio metadata. Doesn't resize pixels, just tells the player how to display them. Use this when the file's SAR (sample aspect ratio) is wrong and the video looks stretched.

-vn — disable video

ffmpeg -i input.mp4 -vn -c:a copy audio_only.m4a

Drops all video streams. Use when extracting audio.

-g — keyframe interval (GOP size)

ffmpeg -i input.mp4 -c:v libx264 -g 48 output.mp4

Sets the maximum number of frames between keyframes. Default is 250 (about 10 seconds at 25fps). For streaming, use fps * 2 (e.g., -g 60 for 30fps) so the player can seek to any 2-second point. For HLS/DASH, set this to match your segment duration for clean segment boundaries.

Output options: audio

-c:a / -acodec — audio codec

ffmpeg -i input.mp4 -c:v copy -c:a aac -b:a 192k output.mp4

Common audio codecs: aac (MP4 default), libmp3lame (MP3), libopus (best quality per bit, WebM/OGG), copy (no re-encoding), flac (lossless).

-b:a — audio bitrate

ffmpeg -i input.mp4 -c:a aac -b:a 128k output.mp4

128k is fine for speech. 192k-256k for music. 320k for archival (though at that point, FLAC makes more sense).

-ar — sample rate

ffmpeg -i input.mp4 -ar 44100 output.mp4

Sets audio sample rate in Hz. 44100 is CD quality. 48000 is video standard. Rarely needs changing unless you're feeding audio into something with strict requirements.

-ac — channel count

ffmpeg -i input.mp4 -ac 1 output.mp4

1 for mono, 2 for stereo. Downmixing 5.1 surround to stereo: -ac 2. FFmpeg handles the downmix automatically using standard coefficients.

-an — disable audio

ffmpeg -i input.mp4 -an -c:v copy video_only.mp4

Drops all audio streams. Useful when creating silent loops or processing video-only content.

-af volume — adjust audio level

ffmpeg -i input.mp4 -af "volume=1.5" output.mp4

Multiplier: 1.0 is unchanged, 0.5 is half volume, 2.0 is double. You can also use dB: -af "volume=3dB". For broadcast-standard loudness normalization, use the loudnorm filter:

ffmpeg -i input.mp4 -af "loudnorm=I=-16:TP=-1.5:LRA=11" output.mp4
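The command above normalizes in a single pass. loudnorm is more accurate in two passes: run it once with print_format=json (writing to -f null -) to measure, then feed the measured values back into a second pass. A sketch of building that second-pass filter string from the first pass's JSON stats, assuming you've captured the JSON block from stderr:

```python
import json

def second_pass_loudnorm(first_pass_json, I=-16, TP=-1.5, LRA=11):
    """Build the second-pass loudnorm filter string from the JSON stats
    printed by a first pass with print_format=json."""
    m = json.loads(first_pass_json)
    return (
        f"loudnorm=I={I}:TP={TP}:LRA={LRA}"
        f":measured_I={m['input_i']}"
        f":measured_TP={m['input_tp']}"
        f":measured_LRA={m['input_lra']}"
        f":measured_thresh={m['input_thresh']}"
        f":offset={m['target_offset']}:linear=true"
    )

# Example stats as the first pass might report them (values illustrative):
stats = ('{"input_i": "-23.1", "input_tp": "-4.2", "input_lra": "6.0",'
         ' "input_thresh": "-33.5", "target_offset": "0.3"}')
print(second_pass_loudnorm(stats))
```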

Filter options

Filters are where FFmpeg gets interesting. This is the part of the FFmpeg arguments list that most people underuse.

-vf / -filter:v — video filter chain

ffmpeg -i input.mp4 -vf "scale=1280:720,fps=30" output.mp4

Apply one or more video filters as a comma-separated chain. Filters run left to right. Each filter takes the output of the previous one.

Common filters worth knowing:

# Resize
-vf "scale=1920:1080"

# Resize preserving aspect ratio
-vf "scale=1280:-2"

# Crop (width:height:x:y)
-vf "crop=640:480:100:50"

# Auto-detect and remove black bars
-vf "cropdetect" # (first pass, read the output, then use the crop values)

# Add padding (letterbox)
-vf "pad=1920:1080:(ow-iw)/2:(oh-ih)/2:black"

# Rotate 90 degrees clockwise
-vf "transpose=1"

# Overlay (watermark)
-vf "overlay=10:10"

# Deinterlace
-vf "yadif"

# Speed up 2x (video only, pair with atempo for audio)
-vf "setpts=0.5*PTS"

# Add text overlay
-vf "drawtext=text='Sample':fontsize=24:fontcolor=white:x=10:y=10"

The watermark guide covers overlay positioning in detail. The frame extraction guide shows how to combine filters with frame output.

-af / -filter:a — audio filter chain

ffmpeg -i input.mp4 -af "volume=1.5,aresample=44100" output.mp4

Same syntax as video filters, applied to audio:

# Adjust volume
-af "volume=2.0"

# Normalize loudness (EBU R128)
-af "loudnorm"

# Speed up audio 2x (match with video setpts)
-af "atempo=2.0"

# Fade in first 3 seconds, fade out last 3 (of a 30-second clip)
-af "afade=t=in:st=0:d=3,afade=t=out:st=27:d=3"

# Remove silence from beginning/end
-af "silenceremove=start_periods=1:start_silence=0.5:start_threshold=-50dB"

# High-pass filter (remove low rumble)
-af "highpass=f=200"

-filter_complex — complex filter graphs

ffmpeg -i video.mp4 -i watermark.png \
  -filter_complex "[0:v][1:v]overlay=W-w-10:H-h-10[out]" \
  -map "[out]" -map 0:a output.mp4

Use this instead of -vf when you have multiple inputs, multiple outputs, or need to split/merge streams. The syntax uses stream labels in brackets. More verbose, but it handles things simple filters can't: picture-in-picture, combining audio from different sources, or creating multiple output resolutions from one input.

A practical example, side-by-side comparison:

ffmpeg -i original.mp4 -i compressed.mp4 \
  -filter_complex "[0:v]scale=960:540[left];[1:v]scale=960:540[right];[left][right]hstack[out]" \
  -map "[out]" comparison.mp4
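Filter graph strings like this follow a mechanical pattern, so scripts that stack a variable number of inputs can generate them. A sketch that extends the two-input comparison to N inputs using hstack's inputs option (the helper name is ours):

```python
def hstack_filter(n, width=960, height=540):
    """Build a filter_complex string that scales n video inputs to the
    same size and stacks them side by side."""
    if n < 2:
        raise ValueError("need at least two inputs")
    # One scale step per input, each producing a labeled intermediate.
    scales = ";".join(f"[{i}:v]scale={width}:{height}[v{i}]" for i in range(n))
    labels = "".join(f"[v{i}]" for i in range(n))
    return f"{scales};{labels}hstack=inputs={n}[out]"

print(hstack_filter(2))
# [0:v]scale=960:540[v0];[1:v]scale=960:540[v1];[v0][v1]hstack=inputs=2[out]
```

Pass the result to -filter_complex and map "[out]" as in the example above.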

Format and container options

-f — force output format

ffmpeg -i input.mp4 -f mp3 pipe:1

Usually FFmpeg guesses the format from the filename extension. Force it when writing to pipes, using raw formats, or when the extension is ambiguous. For a full breakdown of container formats and when to use each, see the formats guide.

-movflags +faststart — web-optimized MP4

ffmpeg -i input.mp4 -c:v libx264 -crf 23 -movflags +faststart output.mp4

Moves the MP4 moov atom to the beginning of the file. Without this, browsers download the entire file before playback starts. Always include it for web video. There's no downside.

-movflags +frag_keyframe+empty_moov — fragmented MP4

ffmpeg -i input.mp4 -c copy -movflags +frag_keyframe+empty_moov output.mp4

Creates a fragmented MP4 suitable for DASH streaming or progressive download of live content. Each fragment is independently decodable. Use this when generating MP4 for adaptive bitrate streaming.

-hls_time / -hls_list_size — HLS streaming

ffmpeg -i input.mp4 -c:v libx264 -c:a aac \
  -hls_time 6 -hls_list_size 0 -hls_segment_filename "seg_%03d.ts" output.m3u8

Creates HLS segments for adaptive streaming. -hls_time 6 sets 6-second segments. -hls_list_size 0 keeps all segments in the playlist (set to a number for live streams with a sliding window). Pair with -g 180 (for 30fps) to ensure keyframes align with segment boundaries.
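The keyframe alignment is just fps times segment length, so scripts can derive the matching output arguments instead of hand-computing -g. A sketch (the -keyint_min and -sc_threshold additions pin the GOP length so scene cuts don't insert extra keyframes; the helper name is ours):

```python
def hls_args(fps, segment_seconds, playlist="output.m3u8"):
    """Output-side ffmpeg arguments for keyframe-aligned HLS segments."""
    gop = int(fps * segment_seconds)  # one keyframe per segment boundary
    return [
        "-c:v", "libx264", "-c:a", "aac",
        "-g", str(gop), "-keyint_min", str(gop),
        "-sc_threshold", "0",  # disable scene-cut keyframes
        "-hls_time", str(segment_seconds), "-hls_list_size", "0",
        playlist,
    ]

print(hls_args(30, 6))  # includes -g 180 for 6-second segments at 30fps
```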

-metadata — set file metadata

ffmpeg -i input.mp4 -metadata title="My Video" -metadata artist="Author" -c copy output.mp4

Sets container-level metadata. Use -metadata:s:v:0 for stream-specific metadata. To strip all metadata, use -map_metadata -1.

Stream mapping

-map — select streams manually

# Take video from first input, audio from second
ffmpeg -i video.mp4 -i audio.wav -map 0:v -map 1:a output.mp4

# Take all streams from input
ffmpeg -i input.mkv -map 0 output.mp4

# Exclude subtitle streams
ffmpeg -i input.mkv -map 0 -map -0:s output.mp4

# Select specific stream by index
ffmpeg -i input.mkv -map 0:a:1 output.mp4  # second audio stream

Without -map, FFmpeg auto-selects one stream per type (one video, one audio, one subtitle). With -map, you have full control. The syntax is input_index:stream_type:stream_index.

This is particularly useful when merging files. The merge videos guide covers concatenation and stream mapping for multi-file workflows.

-map_metadata — copy metadata

# Copy all metadata from input to output
ffmpeg -i input.mp4 -map_metadata 0 -c copy output.mp4

# Strip all metadata
ffmpeg -i input.mp4 -map_metadata -1 -c copy output.mp4

-map_metadata 0 copies metadata from the first input. -map_metadata -1 strips everything. Useful for privacy or when metadata causes player issues. See the metadata stripping guide for the full walkthrough.

-disposition — set stream flags

ffmpeg -i input.mkv -map 0 -disposition:a:0 default -disposition:a:1 0 -c copy output.mkv

Marks a stream as default, forced, or other flags. Useful when muxing multiple audio tracks and you want players to auto-select a specific one.

Hardware acceleration

-hwaccel — hardware decode

ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -c:v h264_nvenc output.mp4

Options: cuda (NVIDIA), vaapi (Linux/Intel/AMD), videotoolbox (macOS), qsv (Intel Quick Sync), d3d11va (Windows).

The -hwaccel_output_format cuda part keeps decoded frames in GPU memory, avoiding slow CPU-GPU transfers that can cut throughput in half. Always pair these two FFmpeg flags together. The GPU acceleration guide benchmarks NVENC vs CPU encoding across codecs and resolutions.

Hardware encoders

# NVIDIA
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
  -c:v h264_nvenc -preset p4 -cq 23 output.mp4

# macOS
ffmpeg -i input.mp4 -c:v h264_videotoolbox -b:v 5M output.mp4

# Intel Quick Sync
ffmpeg -hwaccel qsv -i input.mp4 -c:v h264_qsv -global_quality 23 output.mp4

Hardware encoders trade a small quality decrease for speed. NVENC on a modern GPU encodes 10-50x faster than libx264 on CPU. Good for real-time processing and batch jobs where encoding time matters more than squeezing out every last byte.

NVENC presets use p1-p7 (not the ultrafast-veryslow scale), and quality is controlled with -cq rather than -crf on some versions. Run ffmpeg -h encoder=h264_nvenc to see what your build supports.

Using these FFmpeg options via API

Every flag in this guide works with the RenderIO API. Send the command over HTTP and get results stored in the cloud. No FFmpeg installation, no server to maintain. Get an API key and you can run any of these commands in about 30 seconds.

curl -X POST https://api.renderio.dev/api/v1/run-ffmpeg-command \
  -H "Content-Type: application/json" \
  -H "X-API-KEY: ffsk_your_api_key_here" \
  -d '{
    "ffmpeg_command": "-i {{in_video}} -c:v libx264 -crf 23 -preset slow -movflags +faststart {{out_video}}",
    "input_files": { "in_video": "https://your-bucket.s3.amazonaws.com/video.mp4" },
    "output_files": { "out_video": "compressed.mp4" }
  }'

Replace the ffmpeg_command value with any command from this guide. Input files get downloaded from your URLs, FFmpeg runs in a sandboxed container, and output files get uploaded to cloud storage. You get back metadata like duration, codec, resolution, and file size with every result.

For batch processing (say, compressing 500 videos with the same CRF and preset), loop over the API instead of managing FFmpeg processes yourself. The commands list guide pairs dozens of FFmpeg commands with their exact API calls. Language-specific examples are in the Python SDK guide and the Node.js guide. Sign up here to try it.
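For scripted use, the same request can be built in Python. A sketch that mirrors the curl call above; the endpoint and header names come from that example, and the helper is just a convenience, not part of any official SDK:

```python
import json

API_URL = "https://api.renderio.dev/api/v1/run-ffmpeg-command"

def build_payload(command, input_files, output_files):
    """Assemble the JSON body for a run-ffmpeg-command request."""
    return {
        "ffmpeg_command": command,
        "input_files": input_files,
        "output_files": output_files,
    }

payload = build_payload(
    "-i {{in_video}} -c:v libx264 -crf 23 {{out_video}}",
    {"in_video": "https://your-bucket.s3.amazonaws.com/video.mp4"},
    {"out_video": "compressed.mp4"},
)
body = json.dumps(payload)
# Send with any HTTP client, e.g.:
# requests.post(API_URL, data=body, headers={
#     "Content-Type": "application/json", "X-API-KEY": "ffsk_your_api_key_here"})
print(body)
```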

Common FFmpeg options questions

What's the difference between -ss before and after -i?

Before -i, FFmpeg seeks at the demuxer level. Fast but potentially inaccurate by up to one keyframe interval. After -i, FFmpeg decodes every frame from the start until it reaches your timestamp. Slow but frame-accurate. For most trimming tasks, put -ss before -i with -c copy for speed. If you need exact frame cuts, put it after -i and re-encode. The trim video guide covers this with benchmarks.

How do I check what options my FFmpeg build supports?

# All options
ffmpeg -h full

# Specific encoder options
ffmpeg -h encoder=libx264

# List available encoders/decoders
ffmpeg -encoders
ffmpeg -decoders

# Check if a specific codec is available
ffmpeg -encoders 2>/dev/null | grep nvenc

What does -c copy actually do?

It skips re-encoding entirely. FFmpeg copies the compressed bitstream from input to output without decoding or re-encoding. This is instant (limited by disk I/O, not CPU) and lossless, with no generation loss. The tradeoff: you can't apply filters, change codecs, or do frame-accurate cuts. Use it when you only need to change the container format, strip/add streams, or cut on keyframe boundaries.

Why does FFmpeg ignore my options?

Usually option placement. FFmpeg flags before -i are input options. Flags after -i but before the output filename are output options. Put an output option in the input position and FFmpeg silently ignores it. Run with -v verbose to see what FFmpeg is actually doing with your command.

What's the best CRF value?

There's no universal answer. It depends on the codec and your quality requirements. For H.264, CRF 18-23 covers most use cases (18 for archival, 23 for distribution). For H.265, add 5-6 to get equivalent quality (so 23-28). The compression guide has visual comparisons and file size tables across CRF values.

How do I combine -crf with a maximum bitrate?

ffmpeg -i input.mp4 -c:v libx264 -crf 23 -maxrate 2M -bufsize 4M output.mp4

This tells the encoder to target CRF 23 quality but never exceed 2 Mbps. Set -bufsize to roughly 2x your maxrate. Useful for streaming platforms that cap bitrate.

Quick reference table

These are the most common FFmpeg flags and arguments in one table.

Option            Category       What it does
-y                Global         Overwrite output without asking
-n                Global         Exit if output file exists
-v error          Global         Only show errors
-hide_banner      Global         Suppress build info
-progress         Global         Machine-readable progress output
-i                Input          Specify input file
-f                Input/Output   Force format
-ss               Input/Output   Seek to time position
-t                Input/Output   Limit duration
-to               Output         Stop at timestamp
-re               Input          Read at native frame rate
-stream_loop      Input          Loop input N times
-c:v              Output         Set video codec
-c:a              Output         Set audio codec
-c copy           Output         Copy streams without re-encoding
-crf              Codec          Constant quality factor
-preset           Codec          Speed vs compression tradeoff
-tune             Codec          Content-type optimization
-profile:v        Codec          Compatibility profile
-pix_fmt          Codec          Pixel format (yuv420p for web)
-b:v              Output         Video bitrate
-b:a              Output         Audio bitrate
-r                Output         Frame rate
-g                Codec          Keyframe interval (GOP size)
-s                Output         Resolution
-vf               Filter         Video filter chain
-af               Filter         Audio filter chain
-filter_complex   Filter         Multi-input filter graph
-map              Stream         Manual stream selection
-map_metadata     Stream         Copy or strip metadata
-disposition      Stream         Set stream default/forced flags
-movflags         Format         MP4 container flags
-metadata         Format         Set file metadata
-hwaccel          Hardware       Hardware decode acceleration
-vn / -an         Output         Disable video/audio stream