The one command that solves 80% of compression
You have a video that's too big. Maybe it's a 2GB screen recording for a Slack message. Maybe you're storing thousands of user uploads and your S3 bill keeps climbing. Whatever the reason, you want to compress video with FFmpeg without making it look terrible. (New to FFmpeg? The command line tutorial covers installation and first commands. Can't install it? FFmpeg online tools run in the browser, though they struggle with large files.)
Here's the command:
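A minimal form, assuming an MP4 source named `input.mp4`:

```shell
# H.264 video at constant quality 23, AAC audio, web-ready metadata layout
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset medium \
  -c:a aac -b:a 128k -movflags +faststart output.mp4
```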
For a typical 1080p video, this drops file size by 50-80% with no visible quality loss. The -crf 23 tells FFmpeg to target a constant visual quality rather than a fixed bitrate. The -movflags +faststart moves the MP4 metadata to the front of the file so browsers and players can start playback before the full download finishes. Always include it for anything destined for the web.
But if you copy-paste that and move on, you're leaving compression on the table. The rest of this guide covers when to change those numbers, which codec to pick, how to hit a specific file size, and how to compress hundreds of videos without babysitting the process. If any of the flags look unfamiliar, the FFmpeg options reference breaks down every option by category.
How CRF works (and why it beats fixed bitrate)
CRF stands for Constant Rate Factor. Instead of allocating a fixed number of bits per second, CRF lets the encoder spend more bits on complex scenes (fast motion, lots of detail) and fewer bits on simple ones (static shots, talking heads). The result is consistent perceived quality across the whole video.
For H.264 (libx264), the CRF scale runs from 0 to 51:
0 is lossless. The file will be enormous.
18 is visually lossless. You won't spot artifacts, but the file is still large.
23 is the default. Good balance for most content.
28 is where you start noticing quality loss on close inspection, but it's fine for web delivery.
33+ gets ugly. Visible blocking, color banding, smeared details.
I keep coming back to CRF 23 as the default because it's the sweet spot where most people can't tell the difference from the original. If you're compressing for archival, drop to 18. If you're compressing for a chat app or email attachment, push to 28.
The alternative is fixed bitrate (-b:v 2M), which allocates exactly 2 megabits per second regardless of scene complexity. This gives you predictable file sizes but wastes bits on easy scenes and starves complex ones. Use bitrate targeting when you need exact file size control. Use CRF for everything else.
Constrained quality: the best of both
There's a hybrid approach worth knowing about. Constrained encoding lets you target CRF quality while capping the maximum bitrate:
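A sketch of that hybrid, assuming a 2 Mbps cap on an MP4 source:

```shell
# CRF quality target, but the bitrate never exceeds 2 Mbps
# (bufsize at roughly 2x maxrate controls how strictly the cap is enforced)
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -maxrate 2M -bufsize 4M \
  -c:a aac -b:a 128k output.mp4
```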
This tells the encoder: "aim for CRF 23 quality, but never exceed 2 Mbps." The -bufsize controls how strictly the rate limit is enforced; set it to roughly 2x your maxrate. This is useful when you're streaming or uploading to platforms with bitrate caps but still want CRF's intelligent bit distribution.
Compress video with FFmpeg and H.264: the practical guide
H.264 is still the right default for most compression tasks. It plays everywhere: browsers, phones, smart TVs, video players from 2010. If you need broad compatibility, this is it.
Basic compression
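The simplest version, assuming the source audio is already in a reasonable format worth keeping:

```shell
# Re-encode video at CRF 23, pass the audio stream through untouched
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -c:a copy output.mp4
```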
Better compression with a slower preset
Presets control how much time the encoder spends finding optimal compression. Slower presets squeeze out smaller files at the same quality, but encoding takes longer.
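The same encode with a slower preset looks like this:

```shell
# Same quality target, smaller file, longer encode
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset slow -c:a copy output.mp4
```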
The full preset ladder, from fastest to smallest output:
ultrafast > superfast > veryfast > faster > fast > medium > slow > slower > veryslow
In practice, medium is a good default. slow gives you roughly 5-10% smaller files for 2-3x the encoding time. Going past slower has diminishing returns that rarely justify the wait. Don't use placebo — the name is accurate.
Compression + downscale
Dropping resolution is the single most effective way to cut file size. A 4K video scaled to 1080p at CRF 23 can be 75% smaller than the original.
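A typical 4K-to-1080p downscale plus compression, assuming an MP4 source:

```shell
# Scale to 1920px wide; -2 computes an even height that keeps the aspect ratio
ffmpeg -i input.mp4 -vf "scale=1920:-2" -c:v libx264 -crf 23 -c:a copy output.mp4
```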
The -2 tells FFmpeg to calculate the height automatically, keeping the aspect ratio and ensuring the result is divisible by 2 (which H.264 requires). For the full breakdown of scale filter syntax, aspect ratio preservation, platform presets for TikTok and YouTube, and the scale+pad letterbox pattern, see the FFmpeg scale video guide. The FFmpeg cheat sheet also has a quick-reference section on resizing.
Compression + reduced frame rate
If your source is 60fps but the content doesn't need it (a screencast, a talking head, a slideshow), dropping to 30fps cuts the file roughly in half:
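A sketch of the 60fps-to-30fps drop, assuming an MP4 source:

```shell
# The fps filter drops frames to hit a steady 30fps before encoding
ffmpeg -i input.mp4 -vf "fps=30" -c:v libx264 -crf 23 -c:a copy output.mp4
```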
Don't do this with action footage or anything with fast motion. The stutter will be obvious.
Verify your compression with ffprobe
Before and after compressing, check what you've got:
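One way to pull the relevant fields, assuming the file is named `input.mp4`:

```shell
# Dump container-level info (size, duration, bitrate) and per-stream info (codec, resolution) as JSON
ffprobe -v quiet -print_format json -show_format -show_streams input.mp4
```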
This gives you file size, bitrate, resolution, and codec in a clean JSON dump. Comparing the output for your source and compressed file tells you exactly how much you saved and whether the codec changed as expected. Useful for scripting, too — you can parse the JSON and flag any output that didn't hit your compression target. The ffprobe tutorial goes deeper into field extraction, output formats, and building validation scripts around ffprobe output.
Two-pass encoding: hit an exact file size
Sometimes you need the output under a specific size. Maybe the upload limit is 25MB, or you're targeting a specific bitrate for streaming. CRF can't guarantee file size because it targets quality instead. Two-pass encoding solves this.
First, calculate the target bitrate. If you want a 2-minute video under 25MB:
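The arithmetic, using the article's formula and a hypothetical 128 kbps audio track:

```shell
# Target: 25 MB for a 120-second (2-minute) clip, with 128 kbps audio
target_mb=25
duration_s=120
audio_kbps=128

# (target_MB * 8000) / duration_seconds = total kbps budget
total_kbps=$(( target_mb * 8000 / duration_s ))   # 200000 / 120 = 1666 kbps
video_kbps=$(( total_kbps - audio_kbps ))          # 1666 - 128 = 1538 kbps
echo "video bitrate: ${video_kbps}k"
```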
Then run two passes:
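Assuming a video bitrate of about 1538 kbps (a 25 MB target for a 2-minute clip, minus 128 kbps audio), the two passes look like:

```shell
# Pass 1: analysis only -- no audio, output discarded, stats file written to disk
ffmpeg -y -i input.mp4 -c:v libx264 -b:v 1538k -pass 1 -an -f null /dev/null

# Pass 2: real encode, reading the pass-1 stats to distribute bits
ffmpeg -i input.mp4 -c:v libx264 -b:v 1538k -pass 2 -c:a aac -b:a 128k output.mp4
```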
Pass 1 scans the entire video and writes a stats file. Pass 2 reads those stats to distribute bits intelligently: more bits for complex scenes, fewer for simple ones. The result is better quality than single-pass encoding at the same file size.
On Windows, replace /dev/null with NUL.
This is also where an API can save you real time. Running two sequential FFmpeg commands locally means waiting for both to finish. With RenderIO's chained commands, you submit both passes as a single request and the API handles the sequencing on cloud infrastructure. The Node.js SDK and Python SDK both support chained commands natively.
H.265 compression: 40-50% smaller files
H.265 (HEVC) is the successor to H.264. At the same visual quality, H.265 produces files 40-50% smaller. A codec comparison by 32blog.com confirmed this: at equivalent VMAF scores, H.265 consistently needs 40-45% fewer bits. The catch? Slower encoding and less universal device support.
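A typical H.265 encode, assuming an MP4 source:

```shell
# libx265 at its default-equivalent quality; hvc1 tag for Apple compatibility
ffmpeg -i input.mp4 -c:v libx265 -crf 28 -preset medium -tag:v hvc1 \
  -c:a aac -b:a 128k output.mp4
```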
A few things to note:
The CRF scale for H.265 is different. CRF 28 in H.265 looks roughly equivalent to CRF 23 in H.264. The numbers shifted by about 4-6 points.
The -tag:v hvc1 flag is necessary for Apple device playback. Without it, macOS and iOS may refuse to open the file.
Encoding is about 2-3x slower than H.264 at the same preset. A file that takes 5 minutes with libx264 might take 12-15 minutes with libx265.
The transcoding guide has a full codec-by-codec breakdown of CRF ranges, preset behavior, and VMAF quality verification if you want to dig deeper.
When H.265 makes sense
Use H.265 when storage cost matters more than encoding time. If you're compressing an archive of thousands of videos, the 40-50% space savings add up fast. If you're compressing one file to send to someone on an older Android phone or Windows 7 machine, stick with H.264. Not sure which codec your target platform supports? The FFmpeg formats guide has a platform compatibility breakdown and a container-codec matrix.
AV1: the future of compression (if you can wait)
AV1 is the newest practical option. It's royalty-free (unlike H.265's messy patent situation), and it compresses even better: roughly 50% smaller than H.264 and 10-20% smaller than H.265 at the same quality, according to comparative benchmarks from the Alliance for Open Media. YouTube, Netflix, and all major browsers support it.
The problem is speed. AV1 encoding with the reference libaom encoder is painfully slow, 10-20x slower than H.264. For a 5-minute 1080p video, you might wait over an hour.
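A libaom encode, assuming an MP4 source, might look like:

```shell
# -b:v 0 switches libaom into pure constant-quality mode; -cpu-used trades quality for speed
ffmpeg -i input.mp4 -c:v libaom-av1 -crf 30 -b:v 0 -cpu-used 4 \
  -c:a aac -b:a 128k output.mp4
```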
The -cpu-used 4 flag trades some compression efficiency for speed (range is 0-8, higher is faster). The -b:v 0 enables pure constant quality mode for AV1.
SVT-AV1 (libsvtav1) is the practical encoder. Developed by Intel and Netflix, it runs roughly 6-10x faster than libaom while producing comparable quality:
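A typical SVT-AV1 encode, assuming an MP4 source:

```shell
# Preset 6 balances speed and compression for production use
ffmpeg -i input.mp4 -c:v libsvtav1 -crf 30 -preset 6 \
  -c:a aac -b:a 128k output.mp4
```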
SVT-AV1 presets range from 0-13. Preset 6 is a good balance for production work. Below 4 gets slow; above 10 sacrifices noticeable quality.
AV1 is worth it for content that gets viewed millions of times (streaming platforms, CDN delivery). For compressing your project files or one-off conversions, it's hard to justify the encoding time unless you have a GPU — NVENC on an RTX 4000+ card makes AV1 encoding hundreds of times faster than libaom. The FFmpeg CUDA and NVENC guide covers GPU-accelerated encoding for H.264, HEVC, and AV1. For a side-by-side of all three codecs covering speed, quality, and compatibility, the transcoding guide has the full comparison.
Compressing video for social media platforms
Each platform has its own bitrate and file size sweet spot. Push too much compression and the platform's re-encoding makes it worse. Here are practical commands tuned for each:
TikTok / Instagram Reels (9:16 vertical)
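One reasonable command, assuming the source is already vertical 9:16 footage:

```shell
# 1080px wide vertical video, slightly higher quality to survive TikTok's re-encode
ffmpeg -i input.mp4 -vf "scale=1080:-2" -c:v libx264 -crf 22 -preset medium \
  -c:a aac -b:a 128k -movflags +faststart tiktok.mp4
```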
CRF 22 (not 23) because TikTok re-encodes everything. Starting with slightly higher quality gives the platform's encoder more to work with. The batch processing guide for social media walks through automating this for multiple platforms in a single pipeline.
YouTube (16:9, high bitrate ceiling)
YouTube is generous with bitrate. Don't over-compress.
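A high-quality upload encode might look like this, assuming an MP4 source:

```shell
# Near-lossless quality with a slow preset -- give YouTube's encoder the best input possible
ffmpeg -i input.mp4 -c:v libx264 -crf 18 -preset slow \
  -c:a aac -b:a 192k -movflags +faststart youtube.mp4
```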
CRF 18 keeps the quality high, which matters because YouTube's re-encoding is aggressive. Upload the best quality you can.
Twitter/X (720p max recommended)
Twitter's player caps practical quality at 720p. Anything higher gets re-encoded to 720p anyway.
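A sketch, assuming a landscape source at 1080p or above:

```shell
# Cap height at 720; -2 keeps width even and the aspect ratio intact
ffmpeg -i input.mp4 -vf "scale=-2:720" -c:v libx264 -crf 23 -preset medium \
  -c:a aac -b:a 128k -movflags +faststart twitter.mp4
```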
Discord (25MB limit on free tier, 50MB with Nitro)
Discord's file size limits are strict. Two-pass encoding is your friend here:
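For example, for a hypothetical 60-second clip under the free-tier 25MB cap, (25 × 8000) / 60 ≈ 3333 kbps total, or roughly 3200 kbps for video after 128 kbps audio:

```shell
# Two-pass at a calculated bitrate to land just under the 25MB limit
ffmpeg -y -i input.mp4 -c:v libx264 -b:v 3200k -pass 1 -an -f null /dev/null
ffmpeg -i input.mp4 -c:v libx264 -b:v 3200k -pass 2 -c:a aac -b:a 128k discord.mp4
```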
Adjust -b:v based on your clip's duration. The formula: (target_MB × 8000) / duration_seconds = bitrate_kbps. Subtract audio bitrate from the total. If your source video is longer than you need, trim it first with -c copy (instant) and then compress the shorter clip — you'll save encoding time proportionally.
Batch compression with the RenderIO API
Everything above works great for one file. When you have 500 videos to compress, running FFmpeg locally means your machine is pegged at 100% CPU for hours. Add a second file and your encoding time doubles.
The RenderIO API runs FFmpeg commands on cloud infrastructure. Each command gets its own isolated container, so you can submit hundreds in parallel without throttling your own machine.
Here's a compression job via curl:
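A sketch of the shape of such a request. The endpoint URL and JSON field names here are hypothetical, so check RenderIO's API documentation for the real schema:

```shell
# Hypothetical endpoint and payload -- illustrative only
curl -X POST "https://api.renderio.example/v1/jobs" \
  -H "Authorization: Bearer $RENDERIO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "input": "https://example.com/input.mp4",
        "command": "-c:v libx264 -crf 23 -preset medium -movflags +faststart"
      }'
```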
For more curl examples with polling and error handling, and complete integration guides for Python and Node.js, see the dedicated SDK pages.
And in Python, batch-compressing a folder of videos:
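A minimal sketch using only the standard library. The endpoint URL and payload field names are hypothetical stand-ins, so check RenderIO's Python SDK docs for the real interface:

```python
import json
import os
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Hypothetical endpoint -- replace with the real one from RenderIO's docs
API_URL = "https://api.renderio.example/v1/jobs"
API_KEY = os.environ.get("RENDERIO_API_KEY", "")


def build_job(input_url: str) -> dict:
    """Build a job payload for a CRF 23 web compression (field names are illustrative)."""
    return {
        "input": input_url,
        "command": "-c:v libx264 -crf 23 -preset medium "
                   "-c:a aac -b:a 128k -movflags +faststart",
    }


def submit(input_url: str) -> dict:
    """POST one compression job and return the API's JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_job(input_url)).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Submit every MP4 in ./videos in parallel; each job runs in its own container
    urls = [f"https://example.com/uploads/{p.name}" for p in Path("videos").glob("*.mp4")]
    with ThreadPoolExecutor(max_workers=8) as pool:
        for job in pool.map(submit, urls):
            print(job)
```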
The jobs all run in parallel. No local CPU usage. If you're considering whether to run this locally or offload to an API, the hosted vs self-hosted comparison breaks down the real costs at different scales. Short version: under 100 videos a month, either approach works. Past that, managing your own encoding infrastructure gets expensive fast. Or skip infrastructure entirely by running FFmpeg in the cloud without a server.
Quick reference: CRF values by use case
| Use case | Codec | CRF | Preset | Expected reduction |
| --- | --- | --- | --- | --- |
| Archival | H.264 | 18 | slow | 40-60% |
| General web | H.264 | 23 | medium | 60-80% |
| Email/chat | H.264 | 28 | fast | 80-90% |
| Storage savings | H.265 | 28 | medium | 80-90% |
| Max compression | H.265 | 32 | slow | 90-95% |
| Streaming CDN | AV1 | 30 | preset 6 | 85-95% |
These are rough estimates. Actual compression depends on the source material. A screencast with static UI compresses much better than handheld footage of a concert.
Common mistakes
Using -b:v and -crf together. Pick one. CRF targets quality. Bitrate targets size. Using both confuses the encoder and you'll get weird results. The exception is constrained encoding (-crf 23 -maxrate 2M -bufsize 4M), which caps the bitrate while still targeting quality (see the constrained quality section above).
Encoding faster than necessary. If you're archiving video, use -preset slow or slower. The extra encoding time pays for itself in storage savings over years. Only use ultrafast or fast when you need the output immediately.
Not using -movflags +faststart for web delivery. This moves the MP4 metadata (moov atom) to the beginning of the file so browsers can start playback before the whole file downloads. Without it, the browser has to download the entire file first. You can add it to an existing file without re-encoding: ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4.
Re-compressing already compressed video. Every re-encode loses quality. If you receive an H.264 file and need a smaller H.264 file, you're doing generational compression and quality degrades. When possible, go back to the original source and compress from there. If you only need to change the container, remux instead of transcode. For converting between fundamentally different formats — like turning animated GIFs into MP4 video — CRF 18-20 preserves the most detail since GIF source material is already low-fidelity.
Ignoring audio. Audio is small compared to video, but -c:a copy keeps the original audio stream without re-encoding. Use it when you don't need to change the audio format. If the source has uncompressed PCM audio, switching to AAC (-c:a aac -b:a 128k) can save significant space.
Using the wrong CRF scale for H.265. CRF 23 in H.264 and CRF 23 in H.265 are not equivalent. H.265's default is CRF 28, which roughly matches H.264's CRF 23. Setting CRF 23 in H.265 gives you a much larger file than expected. See the FFmpeg commands list for ready-to-use commands with correct CRF values per codec.
What to use when
If you want the simplest possible command: CRF 23 with H.264. It works everywhere.
If you want smaller files and don't mind slower encoding: H.265 at CRF 28.
If you need an exact file size: two-pass encoding with a calculated bitrate.
If you're compressing for a specific platform: match the commands in the social media section.
If you're compressing at scale: use RenderIO's API to process in parallel without managing infrastructure.
If you've merged multiple clips together and the result is too large, compress the merged file as a separate step. Merge first with -c copy, then compress.
The best compression strategy depends on what you're optimizing for. Pick the approach that matches your actual constraint (quality, file size, encoding speed, or compatibility) and don't overthink the rest.
FAQ
What CRF value should I use to compress video with FFmpeg?
Start with CRF 23 for H.264. That's the default and it works well for most content. If you can see quality loss, drop to 20-22. If the file is still too big, push to 26-28. For H.265, shift those numbers up by about 5. CRF 28 is the equivalent starting point. The "right" value depends on your source material: screen recordings and animations compress better than shaky handheld footage.
Does FFmpeg compress video without losing quality?
Not with lossy codecs (which is what H.264, H.265, and AV1 are). Every lossy encode introduces some quality loss. At CRF 18 (H.264) or CRF 22 (H.265), the loss is imperceptible to most people, often called "visually lossless." True lossless compression is possible (-crf 0), but the file sizes aren't practical for delivery. For more on this trade-off, the transcoding guide explains generational loss and when it matters.
How do I compress a video to a specific size with ffmpeg?
Use two-pass encoding. Calculate the target bitrate: (target_size_MB × 8000) / duration_seconds = bitrate_kbps. Subtract your audio bitrate (usually 128k) from the total. Run pass 1 with -pass 1 -an -f null /dev/null, then pass 2 with -pass 2 and your audio settings. See the two-pass section above for the full command.
Is H.265 compression worth the slower encoding?
If you're compressing a large archive, yes. The 40-50% file size reduction adds up across thousands of files. If you're compressing a single video for a quick share, H.264 is faster and plays on more devices. If you're processing video as part of an application, offloading the encoding to an FFmpeg API service makes the speed difference irrelevant since you're not waiting for it locally.
Can I compress video with ffmpeg without re-encoding?
Not really. Compression requires re-encoding by definition. The encoder makes new decisions about how to represent the data. You can reduce file size without re-encoding by stripping metadata (-map_metadata -1 -c copy) or removing unnecessary streams, but the savings are marginal. If you want to keep the original codec and just change the container, that's remuxing, not compressing.
What's the difference between compression and transcoding?
Compression reduces file size, usually by adjusting quality settings within the same or different codec. Transcoding specifically means converting between codecs (like H.264 to H.265). All transcoding involves re-encoding, and most re-encoding involves some compression. The transcoding guide covers this distinction in depth.