Containers and codecs are different things
If you've ever searched "ffmpeg formats" trying to figure out why your video won't play somewhere, you've hit the core confusion. You have a .mp4 file, so you think the video is "in MP4 format." But MP4 is a container. Inside it, the video might be encoded with H.264, H.265, AV1, or a dozen other codecs. The audio might be AAC, MP3, or Opus.
A container is a box. It holds compressed video streams, audio streams, subtitles, metadata and chapter markers. The container defines how those streams are organized and interleaved, but it says nothing about how the pixels or audio samples were compressed.
A codec is the compression algorithm. H.264 compresses video by predicting pixel blocks from neighboring frames. AAC compresses audio by discarding frequencies humans can barely hear. The codec determines file size, quality, encoding speed and what hardware can decode it.
Why does this matter? Because you can have an H.264 video stream inside an MP4 container, an MKV container, or an FLV container. Same codec, different box. Knowing this distinction is the difference between a 2-second remux and a 15-minute transcode. The transcoding guide covers that tradeoff in detail, but the short version: if only the container needs to change, use -c copy and skip re-encoding entirely.
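Before deciding, inspect the file. A minimal sketch with ffprobe; the sample clip here is generated with FFmpeg's lavfi test sources so the block is self-contained, but any real file works the same way (the filename `sample.mp4` is just a placeholder):

```shell
# Generate a short H.264/AAC sample to inspect (requires a stock FFmpeg build)
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=30 \
       -f lavfi -i sine=frequency=440:duration=1 \
       -c:v libx264 -c:a aac -shortest sample.mp4

# One-liner: print each stream's codec and type (e.g. h264,video / aac,audio)
ffprobe -v error -show_entries stream=codec_name,codec_type -of csv=p=0 sample.mp4
```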
The ffprobe tutorial goes deeper on inspecting files, but a quick ffprobe check tells you everything you need to decide your next move. If you haven't installed FFmpeg yet, the command line tutorial covers setup on every OS.
How many FFmpeg formats exist
A default FFmpeg build (version 7.x or later) supports roughly 460 codecs and 370 container formats. The exact count depends on your build flags. Distro packages sometimes strip out patent-encumbered codecs. You can check your specific build:
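For example (output abridged; the exact list varies by build):

```shell
# Full list of container formats this build can read and write
ffmpeg -hide_banner -formats

# Or check for one format directly
ffmpeg -hide_banner -formats | grep -i matroska
```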
The output of ffmpeg -formats uses two flags to tell you what each format supports: D means FFmpeg can demux (read) the format, and E means it can mux (write) it.
So when you see DE matroska, FFmpeg can both read and write Matroska (MKV) files; a lone D or E means read-only or write-only support.
Most of those 370 FFmpeg formats are obscure. Brainvision EEG data, Sierra Online audio, TechSmith Screen Capture Codec containers. You'll use maybe 10 of them in your entire career. Here are the ones that actually matter.
Video container formats
MP4 (MPEG-4 Part 14)
The default choice for video on the web, and that's unlikely to change anytime soon.
MP4 supports H.264, H.265, AV1 and MPEG-4 for video. AAC, MP3, AC-3 and Opus for audio (Opus since the 2024 spec update, with growing browser support). Subtitles go in as mov_text. It handles chapter markers, metadata and multiple streams.
Browser support is universal. Every phone, tablet, smart TV and desktop browser made in the last 15 years plays MP4. Social media platforms accept it without conversion.
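A sketch of a web-ready H.264 export; the input here is a generated test pattern and `web.mp4` is a placeholder name, so substitute your own files:

```shell
# Encode for web delivery; +faststart moves the index (moov atom) to the front
ffmpeg -y -f lavfi -i testsrc=duration=1:size=640x360:rate=30 \
       -c:v libx264 -movflags +faststart web.mp4
```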
The -movflags +faststart flag moves the moov atom (the file's index) to the front so browsers can start playback before the full download. Always include it for web delivery. The compression guide covers moov atom placement and other MP4 optimization flags in detail.
Good for web delivery, social media, mobile apps. Not great if you need lossless codecs or multiple subtitle tracks (MKV handles that better).
MKV (Matroska)
The "put anything in it" container. MKV is open-source, supports virtually every codec FFmpeg can handle, allows unlimited audio and subtitle tracks, and handles chapter markers plus attachments like fonts for styled subtitles.
MKV is popular for archival and media servers. Plex, Jellyfin and Kodi handle it natively. The downside: browser support is spotty. Chrome plays it, Safari doesn't. Bad for web delivery, great for everything else.
WebM
Google's open container format, based on Matroska. WebM restricts codecs to VP8, VP9 or AV1 for video, and Vorbis or Opus for audio. That sounds limiting, but it's deliberate. WebM is built for the web, and those codecs are all royalty-free.
VP9 WebM is common on YouTube (they transcode everything into VP9 alongside H.264). AV1 WebM is the next generation, with better compression but slower encoding.
If you want royalty-free codecs for web delivery, WebM is the pick. Just watch out for older iOS versions: Safari added WebM VP9 support in iOS 16.4, but older iPhones won't play it.
MOV (QuickTime)
Apple's container format. Structurally almost identical to MP4 (they share the same ISO base media file format), but MOV supports ProRes, which MP4 doesn't. That makes MOV the format cameras and editing software spit out.
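A container-only swap is a quick remux; this sketch first creates a small H.264 MOV as a stand-in for an iPhone clip (`clip.mov` and `clip.mp4` are placeholder names):

```shell
# Create a sample H.264 MOV, then remux it to MP4 without re-encoding
ffmpeg -y -f lavfi -i testsrc=duration=1:size=640x360:rate=30 -c:v libx264 clip.mov
ffmpeg -y -i clip.mov -c copy clip.mp4
```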
A plain -c copy remux works when the MOV contains H.264/AAC (common from iPhones). It fails when the MOV contains ProRes, since MP4 doesn't support ProRes; in that case you need to transcode. Attempting a copy anyway produces this error:
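With a ProRes stream inside, the MP4 muxer refuses; the message typically looks like this (exact wording varies slightly by FFmpeg version):

```
Could not find tag for codec prores in stream #0, codec not currently supported in container
```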
That error means the container can't hold that codec. Change the container (use MKV) or change the codec (transcode to H.264).
MOV only makes sense if you're in Final Cut or receiving files directly from cameras. For delivery, use MP4.
AVI (Audio Video Interleave)
Microsoft's container from 1992. Still kicking around because legacy software generates it. AVI doesn't handle modern codecs, has no streaming support, and can't store subtitles properly. If you're getting AVI files, convert them.
MPEG-TS (Transport Stream)
Designed for broadcasting and streaming. MPEG-TS breaks video into small packets that can survive transmission errors, which makes it the backbone of HLS (HTTP Live Streaming) and broadcast TV.
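A sketch of HLS segmenting, which produces MPEG-TS segments plus a playlist; the source here is a generated test clip and the filenames are placeholders:

```shell
# Generate a sample, then segment it into HLS (.ts segments + .m3u8 playlist)
ffmpeg -y -f lavfi -i testsrc=duration=4:size=640x360:rate=30 -c:v libx264 src.mp4
ffmpeg -y -i src.mp4 -c copy -f hls -hls_time 2 -hls_list_size 0 playlist.m3u8
```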
The cheat sheet has more streaming commands if you need them. Stick with MP4 for file delivery; MPEG-TS is for live streaming and HLS.
FLV (Flash Video)
Flash died in 2020, but FLV didn't. RTMP streaming still uses it. OBS sends FLV over RTMP to Twitch and YouTube. If you're building a live streaming pipeline, you'll run into FLV.
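Recovering a recorded FLV is usually a remux; this sketch creates a small H.264 FLV as a stand-in for an RTMP capture (filenames are placeholders):

```shell
# Create a sample H.264 FLV, then remux it to MP4 without re-encoding
ffmpeg -y -f lavfi -i testsrc=duration=1:size=640x360:rate=30 -c:v libx264 stream.flv
ffmpeg -y -i stream.flv -c copy stream.mp4
```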
Remuxing FLV to MP4 usually works with -c copy because most RTMP streams use H.264/AAC, which MP4 supports natively.
Audio container formats
MP3
Not really a container in the traditional sense. An MP3 file is just a stream of MPEG-1 Audio Layer III frames with optional ID3 metadata tags. FFmpeg treats it as a format anyway.
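A VBR MP3 encode in sketch form; the input is a generated tone (substitute your own source) and libmp3lame must be present, which it is in stock builds:

```shell
# Encode to VBR MP3 at quality 2 (requires libmp3lame in your build)
ffmpeg -y -f lavfi -i sine=frequency=440:duration=2 -c:a libmp3lame -q:a 2 tone.mp3
```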
The -q:a 2 flag sets VBR quality (0 is best, 9 is worst). Quality 2 gives roughly 190 kbps on average, good enough for most uses. For more audio extraction methods, the video-to-audio conversion guide covers doing this at scale.
WAV
Uncompressed PCM audio in a RIFF container. Large files, zero quality loss. Use it for editing and processing, convert to something smaller for delivery.
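A sketch of writing CD-quality WAV; the generated tone stands in for your real input:

```shell
# Convert audio to 16-bit signed PCM WAV at 44.1 kHz
ffmpeg -y -f lavfi -i sine=frequency=440:duration=2 -c:a pcm_s16le -ar 44100 tone.wav
```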
pcm_s16le means 16-bit signed little-endian PCM, the standard CD-quality format. For higher resolution, use pcm_s24le (24-bit) or pcm_f32le (32-bit float).
FLAC
Lossless compressed audio. Cuts file sizes by 40-60% compared to WAV with zero quality loss. Open-source, well-supported. If you're archiving audio, this is the format.
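The WAV-to-FLAC step is a one-liner; sketched here with a generated WAV standing in for your source:

```shell
# Create a sample WAV, then compress it losslessly to FLAC
ffmpeg -y -f lavfi -i sine=frequency=440:duration=2 -c:a pcm_s16le tone.wav
ffmpeg -y -i tone.wav -c:a flac tone.flac
```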
OGG
Open container format. Usually paired with Vorbis (audio) or Theora (video). Mostly superseded by WebM/Opus for web audio, but still used in games (Unity and Unreal Engine support OGG natively) and open-source projects.
AAC (in M4A container)
AAC audio in an MP4 container with an .m4a extension. Better quality than MP3 at the same bitrate. Default audio format for Apple devices and iTunes.
FFmpeg video codecs that matter
Here's where developers spend most of their time making decisions. Each codec is a tradeoff between compression efficiency, encoding speed, hardware support and licensing.
H.264 (AVC)
The workhorse. If you don't know which codec to pick, pick H.264.
H.264 has universal hardware decoding support. It plays on every device made after 2005 and encodes fast. The compression isn't the best anymore (H.265 and AV1 beat it by 40-50%), but nothing else comes close on compatibility.
FFmpeg encoder: libx264
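A sketch of a CRF-based H.264 encode; the test pattern input and output name are placeholders for your own files:

```shell
# Quality-targeted H.264 encode: CRF controls quality, preset controls speed
ffmpeg -y -f lavfi -i testsrc=duration=1:size=640x360:rate=30 \
       -c:v libx264 -crf 23 -preset medium out_h264.mp4
```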
-crf 23 sets quality (lower = better, 18 is visually lossless, 28 is noticeably compressed). -preset controls encoding speed vs compression ratio. ultrafast encodes quickly but produces larger files. veryslow compresses better but takes much longer. medium is the default and usually the right call.
The compression guide covers CRF tuning in depth if you need to squeeze out every byte.
One thing H.264 requires: even pixel dimensions. If your source is 1921x1081, the encoder throws an error:
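The failure from libx264 typically looks like this (the address and surrounding lines vary by version):

```
[libx264 @ 0x...] width not divisible by 2 (1921x1081)
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0
```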
Fix it with -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" or use -2 in your scale filter.
H.265 (HEVC)
40-50% better compression than H.264 at the same visual quality. The catch: it's slower to encode (about 0.3-0.5x the speed of H.264), hardware support is less universal, and licensing is a mess with multiple patent pools each setting different terms.
FFmpeg encoder: libx265
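An HEVC encode in sketch form; requires libx265 in your build, and the filenames are placeholders:

```shell
# H.265 encode with the Apple-compatible hvc1 tag
ffmpeg -y -f lavfi -i testsrc=duration=1:size=640x360:rate=30 \
       -c:v libx265 -crf 28 -preset medium -tag:v hvc1 out_h265.mp4
```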
The -tag:v hvc1 flag matters. Without it, some Apple devices refuse to play the file, showing a black screen or "format not supported" error. HEVC has two valid tag formats (hev1 and hvc1), and Apple requires hvc1.
Hardware decoding works well on devices from 2017 onward. Older Android phones, smart TVs from before 2016, and Firefox (without a system codec) won't play it. If your audience has older devices, stick with H.264.
AV1
The newest of the three major codecs and the most efficient. AV1 matches or beats H.265 compression by another 20-30% and is royalty-free (H.265 has a messy patent pool). The tradeoff: encoding is painfully slow with software encoders.
FFmpeg encoders: libaom-av1 (reference, very slow), libsvtav1 (faster, production-ready)
libsvtav1 at preset 6 is roughly comparable to libx264 at preset medium in encoding speed, while producing files 30-40% smaller. Developed by Intel and Netflix, SVT-AV1 is what moved AV1 from "academically interesting" to production-ready.
Hardware AV1 encoding landed with NVIDIA's RTX 40-series (Ada Lovelace) and Intel Arc GPUs. The GPU acceleration guide covers NVENC AV1 encoding if you have the hardware. On the decoding side, Chrome, Firefox and Edge handle AV1 playback in software; Safari supports it only on hardware with an AV1 decoder (A17 Pro and M3 chips onward, from iOS 17 / macOS Sonoma).
VP9
Google's answer to H.265, developed before AV1 existed. VP9 compression is roughly equivalent to H.265 but royalty-free. YouTube uses VP9 heavily. When you watch a video on YouTube, you're probably watching VP9.
FFmpeg encoder: libvpx-vp9
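A VP9 encode sketch; `-b:v 0` puts libvpx-vp9 into constant-quality mode so CRF alone controls quality (input and output names are placeholders):

```shell
# VP9 constant-quality encode with row-based multithreading enabled
ffmpeg -y -f lavfi -i testsrc=duration=1:size=640x360:rate=30 \
       -c:v libvpx-vp9 -crf 32 -b:v 0 -row-mt 1 out_vp9.webm
```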
The -row-mt 1 flag enables row-based multithreading, which can double VP9 encoding speed. It's off by default for historical reasons.
VP9 is mostly relevant for WebM delivery. If you're targeting the web and want royalty-free codecs, VP9 sits between H.264's compatibility and AV1's efficiency.
ProRes
Apple's intermediate codec for editing. Not designed for delivery (files are huge), but designed for speed. ProRes decodes fast enough for real-time editing at 4K and preserves quality through multiple rounds of editing.
FFmpeg encoder: prores_ks
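A ProRes HQ encode in sketch form; FFmpeg converts the pixel format automatically, and the filenames are placeholders:

```shell
# ProRes HQ (profile 3) into a MOV container
ffmpeg -y -f lavfi -i testsrc=duration=1:size=640x360:rate=30 \
       -c:v prores_ks -profile:v 3 out_prores.mov
```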
ProRes profiles run from 0 (Proxy, smallest) to 5 (4444 XQ, largest, highest quality). Profile 3 (HQ) is the sweet spot for most editing workflows.
You'll encounter ProRes when receiving files from cameras or editors. Convert it to H.264 or H.265 for delivery using the transcoding guide.
Audio codecs
AAC
The default audio codec for MP4 video. Better than MP3 at the same bitrate, universally supported. FFmpeg's built-in AAC encoder is decent. For higher quality, there's libfdk_aac, but it requires a specific build flag due to licensing.
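A sketch using the built-in AAC encoder; the generated tone stands in for your real source:

```shell
# Built-in AAC encoder at a sensible music bitrate, into an .m4a container
ffmpeg -y -f lavfi -i sine=frequency=440:duration=2 -c:a aac -b:a 192k tone.m4a
```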
Opus
The best lossy audio codec right now. Opus beats AAC at every bitrate and handles everything from voice calls (at 6 kbps) to music (at 128+ kbps). Royalty-free.
The limitation is container support. Opus works in WebM and OGG natively. MP4 Opus support is technically possible (added to the spec in 2024) but playback support is still inconsistent across players.
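An Opus encode sketch targeting the Ogg-based .opus container, which is the safest home for Opus today; assumes libopus is in your build:

```shell
# Opus at 128 kbps, written as an Ogg Opus (.opus) file
ffmpeg -y -f lavfi -i sine=frequency=440:duration=2 -c:a libopus -b:a 128k tone.opus
```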
MP3 (LAME)
Worse than AAC and Opus at the same bitrate. Still ubiquitous because everything plays MP3. Use it when you need maximum compatibility for standalone audio files.
FLAC
Lossless. No quality loss, but files are 2-3x larger than lossy equivalents. Use for archival and editing. FFmpeg handles it natively.
Which FFmpeg format for which platform
Picking formats for social platforms shouldn't require trial and error. Here's what works:
Web (general): MP4 with H.264 video, AAC audio. The safe default. H.265 MP4 works on modern browsers if you want better compression.
YouTube: Accepts almost everything, but transcodes to VP9 and AV1 internally. Upload MP4 H.264 for the fastest processing. Upload ProRes or DNxHR if you want maximum quality passed to the transcoder.
TikTok/Instagram Reels: MP4, H.264, AAC. Vertical 9:16. Both platforms re-encode your upload, so high-quality source material matters more than your export codec.
Discord: 8MB file limit on free tier (25MB with Nitro). MP4 H.264 with aggressive CRF compression. The video compression guide has specific commands for hitting file size targets.
Email attachments: Keep it under 10MB. MP4, H.264, CRF 28+, 720p resolution.
Frame extraction: The container and codec affect which extraction methods work best. The frame extraction guide covers keyframe-accurate extraction across formats.
Checking FFmpeg format support in your build
Not every FFmpeg build includes every codec. Distro packages sometimes strip out patent-encumbered codecs or rarely-used formats.
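A quick sketch for checking whether a given encoder made it into your build:

```shell
# Search the encoder list; no match means the encoder isn't compiled in
ffmpeg -hide_banner -encoders | grep -i x265 || echo "libx265 not found in this build"
```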
If a codec is missing, you either need a different FFmpeg build (static builds from ffmpeg.org include more codecs) or you need to compile from source with the right --enable flags. Common missing encoders and their flags:
| Encoder | Build flag | Why it's missing |
| --- | --- | --- |
| libx265 | --enable-libx265 | Separate library dependency |
| libsvtav1 | --enable-libsvtav1 | Relatively new, not in all distros |
| libfdk_aac | --enable-libfdk-aac --enable-nonfree | Non-free license |
| h264_nvenc | --enable-nvenc | Requires NVIDIA GPU + headers |
Converting between FFmpeg formats with the API
All these commands work locally. But when you're processing hundreds or thousands of files, running FFmpeg on your own server creates scaling problems. Every encoding job pins a CPU core for minutes. A queue builds up. Your application slows down.
RenderIO runs FFmpeg commands in isolated cloud containers. Send the same command you'd run locally, get the output file back. Get an API key and try it.
The complete API guide covers authentication, webhooks and batch processing in detail. If you're building this into n8n or Zapier workflows, the n8n video processing guide and Zapier integration walk through the setup.
Quick reference: container and codec compatibility
Here's what goes inside what:
| Container | H.264 | H.265 | AV1 | VP9 | ProRes | AAC | Opus | MP3 | FLAC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MP4 | Yes | Yes | Yes | Yes | No | Yes | Partial | Yes | No |
| MKV | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| WebM | No | No | Yes | Yes | No | No | Yes | No | No |
| MOV | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | No |
| AVI | Yes | No | No | No | No | No | No | Yes | No |
| MPEG-TS | Yes | Yes | Yes | No | No | Yes | No | Yes | No |
| FLV | Yes | No | No | No | No | Yes | No | Yes | No |
"Partial" for Opus in MP4: spec-compliant since 2024, but not all players handle it yet. MKV wins on flexibility. MP4 wins on compatibility. WebM is locked to royalty-free codecs by design.
Picking the right FFmpeg format
Most developers overthink this. Here's the quick version.
Play everywhere? MP4 container, H.264 video, AAC audio. Done. Nothing comes close on compatibility.
Smaller files? MP4 with H.265 video, AAC audio. Make sure your target devices support HEVC. If they're from 2018 or later, they probably do.
Smallest files possible? MP4 with AV1 video, Opus audio. Encoding is slow and hardware support is still catching up, but the compression is hard to beat.
Editing later? MKV or MOV container, ProRes or FFV1 codec. Optimize for decode speed and quality, not file size.
Building a web player? Start with MP4 H.264 for compatibility, then add WebM VP9 or AV1 as progressive upgrades for browsers that support it.
Just changing the container? Use -c copy to remux without re-encoding. Takes seconds.
For more commands organized by task, the FFmpeg cheat sheet covers 50 common operations. Need to add watermarks or trim clips after converting? Those guides cover it.
If you'd rather not manage FFmpeg infrastructure yourself, sign up for RenderIO and run these commands in the cloud.
FAQ
What's the difference between a container and a codec?
A container (MP4, MKV, WebM) is the file format, the box. A codec (H.264, AAC, VP9) is the compression algorithm that encodes the actual video or audio data inside that box. You can put H.264 video inside an MP4, MKV, or MOV container. When you change only the container with -c copy, it takes seconds because the compressed data stays untouched.
How do I check what codec a video file uses?
Run ffprobe -v error -show_entries stream=codec_name,codec_type -of csv=p=0 input.mp4. This prints the codec name and type (video/audio) for each stream. The ffprobe tutorial covers more advanced inspection commands.
Can I convert MKV to MP4 without re-encoding?
Yes, if the codecs inside the MKV are MP4-compatible (H.264/H.265 video, AAC audio). Run ffmpeg -i input.mkv -c copy output.mp4. If the MKV contains codecs that MP4 doesn't support (like FLAC audio), you'll need to transcode those specific streams.
Which codec gives the smallest file size?
AV1 produces the smallest files at a given quality level, followed by H.265, then H.264. But smaller files come with tradeoffs: AV1 encodes slowly and has less hardware support than H.264. For most developers, H.264 is still the right default unless file size is the main concern.
Why won't my H.265 video play on some devices?
H.265 playback requires hardware decoder support. Devices from before 2017, some smart TVs, and Firefox without a system codec can't play it. Add -tag:v hvc1 when encoding for Apple devices. If compatibility matters more than file size, use H.264.
How do I add a codec that's missing from my FFmpeg build?
Check what's available with ffmpeg -encoders | grep <codec_name>. If it's missing, either download a static build from ffmpeg.org (which includes most codecs) or compile FFmpeg from source with the relevant --enable flag. See the build flags table above for common missing encoders.