FFmpeg Documentation


1. Introduction

FFmpeg is a very fast video and audio converter. It can also grab from a live audio/video source. The command line interface is designed to be intuitive, in the sense that ffmpeg tries to figure out all the parameters when possible. Usually you only have to give the target bitrate you want.

FFmpeg can also convert from any sample rate to any other, and resize video on the fly with a high quality polyphase filter.

2. Quick Start

2.1 Video and Audio grabbing

FFmpeg can use a video4linux compatible video source and any Open Sound System audio source:

  ffmpeg /tmp/out.mpg 

Note that you must activate the right video source and channel before launching ffmpeg. You can use any TV viewer, such as xawtv by Gerd Knorr, which I find very good. You must also set the audio recording levels correctly with a standard mixer.
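
As an illustration, you can also select the devices and set the grab parameters explicitly; the device paths below are only an assumption and depend on your system:

  ffmpeg -vd /dev/video0 -ad /dev/dsp -s 320x240 -r 25 -t 10 /tmp/out.mpg

This grabs 10 seconds of video at 320x240 and 25 frames per second, together with audio, and encodes the result to /tmp/out.mpg.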

2.2 Video and Audio file format conversion

* ffmpeg can use any supported file format and protocol as input:

Examples:

* You can input from YUV files:

  ffmpeg -i /tmp/test%d.Y /tmp/out.mpg 

It will use the files:

       /tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
       /tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...

The Y files use twice the resolution of the U and V files. They are raw files, without a header. They can be generated by all decent video decoders. You must specify the size of the image with the '-s' option if ffmpeg cannot guess it.
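
If ffmpeg cannot guess the size, you can give it yourself; for example (the size is only an assumption about your input):

  ffmpeg -s 352x288 -i /tmp/test%d.Y /tmp/out.mpg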

* You can input from a RAW YUV420P file:

  ffmpeg -i /tmp/test.yuv /tmp/out.avi

A raw YUV420P file contains raw planar YUV data: for each frame, the Y plane comes first, followed by the U and V planes, which have half the horizontal and vertical resolution.
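
Since a raw file has no header, you will usually have to give the frame size, and possibly the frame rate, on the command line; for example (the values are only assumptions about your input):

  ffmpeg -s 640x480 -r 25 -i /tmp/test.yuv /tmp/out.avi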

* You can output to a RAW YUV420P file:

  ffmpeg -i mydivx.avi hugefile.yuv

* You can set several input files and output files:

  ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg

This converts the audio file a.wav and the raw YUV video file a.yuv to the MPEG file a.mpg.

* You can also do audio and video conversions at the same time:

  ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2

Convert the sample rate of a.wav to 22050 Hz and encode it to MPEG audio.
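
In the same way you can change the number of channels and the audio bitrate at the same time (the values below are just an illustration):

  ffmpeg -i /tmp/a.wav -ar 22050 -ac 1 -ab 64 /tmp/a.mp2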

* You can encode to several formats at the same time and define a mapping from input stream to output streams:

  ffmpeg -i /tmp/a.wav -ab 64 /tmp/a.mp2 -ab 128 /tmp/b.mp2 -map 0:0 -map 0:0

Convert a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. '-map file:index' specifies which input stream is used for each output stream, in the order in which the output streams are defined.

* You can transcode decrypted VOBs

  ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800 -g 300 -bf 2 -acodec mp3 -ab 128 snatch.avi

This is a typical DVD ripping example: input from a VOB file, output to an AVI file with MPEG-4 video and MP3 audio. Note that in this command we use B frames, so the MPEG-4 stream is DivX5 compatible. The GOP size is 300, which means an INTRA frame every 10 seconds for 29.97 fps input video. Also, the audio stream is MP3 encoded, so you need LAME support, which is enabled with --enable-mp3lame when configuring. The mapping is particularly useful for DVD transcoding to get the desired audio language.
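
For example, if the VOB contains several audio tracks, you can select the one you want with '-map'; the stream numbers below are hypothetical and depend on the actual VOB:

  ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800 -g 300 -bf 2 -acodec mp3 -ab 128 -map 0:0 -map 0:2 snatch.avi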

NOTE: to see the supported input formats, use ffmpeg -formats.

3. Invocation

3.1 Syntax

The generic syntax is:

  ffmpeg [[options][-i input_file]]... {[options] output_file}...

If no input file is given, audio/video grabbing is done.

As a general rule, options are applied to the next specified file. For example, if you give the '-b 64' option, it sets the video bitrate of the next output file. The format option ('-f') may be needed for raw input files.

By default, ffmpeg tries to convert as losslessly as possible: it uses the same audio and video parameters for the outputs as the ones specified for the inputs.
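
For example, in the following command the '-b 64' option only applies to the first output file; the second output keeps the default video bitrate (the file names are purely illustrative):

  ffmpeg -i input.avi -b 64 small.avi big.avi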

3.2 Main options

`-L'
show license
`-h'
show help
`-formats'
show available formats, codecs, protocols, ...
`-f fmt'
force format
`-i filename'
input file name
`-y'
overwrite output files
`-t duration'
set the recording time in seconds. The hh:mm:ss[.xxx] syntax is also supported (see the example after this list).
`-title string'
set the title
`-author string'
set the author
`-copyright string'
set the copyright
`-comment string'
set the comment
`-b bitrate'
set video bitrate (in kbit/s)
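
As a small illustration of these options, the following records at most 30 seconds from the input and overwrites the output file if it already exists (the file names are only examples):

  ffmpeg -y -i input.avi -t 00:00:30 output.mpg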

3.3 Video Options

`-s size'
set frame size [160x128]
`-r fps'
set frame rate [25]
`-b bitrate'
set the video bitrate in kbit/s [200]
`-vn'
disable video recording [no]
`-bt tolerance'
set video bitrate tolerance (in kbit/s)
`-sameq'
use same video quality as source (implies VBR)
`-pass n'
select the pass number (1 or 2). It is useful for two-pass encoding: the statistics of the video are recorded in the first pass, and in the second pass the video is generated at the exact requested bitrate (see the example after this list).
`-passlogfile file'
select two pass log file name
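
A minimal two-pass sketch might look like this (the bitrate, log file name and file names are only assumptions):

  ffmpeg -i input.avi -b 1000 -pass 1 -passlogfile /tmp/twopass output.mpg
  ffmpeg -y -i input.avi -b 1000 -pass 2 -passlogfile /tmp/twopass output.mpg

The first pass records the statistics in the log file (its video output can be discarded); the second pass uses them to reach the requested bitrate.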

3.4 Audio Options

`-ab bitrate'
set the audio bitrate in kbit/s [64]
`-ar freq'
set the audio sampling frequency in Hz [44100]
`-ac channels'
set the number of audio channels [1]
`-an'
disable audio recording [no]

3.5 Advanced options

`-map file:stream'
set input stream mapping
`-g gop_size'
set the group of picture size
`-intra'
use only intra frames
`-qscale q'
use a fixed video quantiser scale (VBR); see the example after this list
`-qmin q'
min video quantiser scale (VBR)
`-qmax q'
max video quantiser scale (VBR)
`-qdiff q'
max difference between the quantiser scale (VBR)
`-qblur blur'
video quantiser scale blur (VBR)
`-qcomp compression'
video quantiser scale compression (VBR)
`-vd device'
set video device
`-vcodec codec'
force video codec
`-me method'
set motion estimation method
`-bf frames'
use 'frames' B frames (only MPEG-4)
`-hq'
activate high quality settings
`-4mv'
use four motion vectors per macroblock (only MPEG-4)
`-ad device'
set audio device
`-acodec codec'
force audio codec
`-deinterlace'
deinterlace pictures
`-benchmark'
add timings for benchmarking
`-hex'
dump each input packet
`-psnr'
calculate PSNR of compressed frames
`-vstats'
dump video coding statistics to file
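
For example, instead of a target bitrate you can ask for a fixed quantiser scale and set the GOP size explicitly (the values are only an illustration):

  ffmpeg -i input.avi -qscale 4 -g 250 output.mpg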

3.6 Protocols

The filename can be `-' to read from the standard input or to write to the standard output.
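
For example, assuming the data arriving on standard input is a stream ffmpeg can probe (such as an MPEG file), you could write:

  cat test.mpg | ffmpeg -i - /tmp/out.avi

If the data on the pipe is in a raw format, you will also have to force the input format with '-f'.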

ffmpeg also handles many protocols specified with the URL syntax.

Use 'ffmpeg -formats' to have a list of the supported protocols.

The protocol http: is currently used only to communicate with ffserver (see the ffserver documentation). When ffmpeg becomes a video player, it will also be used for streaming :-)

4. Tips

5. Supported File Formats and Codecs

You can use the -formats option to have an exhaustive list.

5.1 File Formats

FFmpeg supports the following file formats through the libavformat library:
Supported File Format         Encoding  Decoding  Comments
MPEG audio                    X         X
MPEG1 systems                 X         X         muxed audio and video
MPEG2 PS                      X         X         also known as VOB file
MPEG2 TS                                X         also known as DVB Transport Stream
ASF                           X         X
AVI                           X         X
WAV                           X         X
Macromedia Flash              X         X         Only embedded audio is decoded
Real Audio and Video          X         X
Raw AC3                       X         X
Raw MJPEG                     X         X
Raw MPEG video                X         X
Raw PCM8/16 bits, mulaw/Alaw  X         X
SUN AU format                 X         X
Quicktime                               X
MPEG4                                   X         MPEG4 is a variant of Quicktime
Raw MPEG4 video               X         X
DV                                      X         Only the video track is decoded.

X means that the encoding (resp. decoding) is supported.

5.2 Image Formats

FFmpeg can read and write images for each frame of a video sequence. The following image formats are supported:
Supported Image Format  Encoding  Decoding  Comments
PGM, PPM                X         X
PGMYUV                  X         X         PGM with U and V components in 420
JPEG                    X         X         Progressive JPEG is not supported
.Y.U.V                  X         X         One raw file per component
Animated GIF            X                   Only uncompressed GIFs are generated

X means that the encoding (resp. decoding) is supported.

5.3 Video Codecs

Supported Codec  Encoding  Decoding  Comments
MPEG1 video      X         X
MPEG2 video                X
MPEG4            X         X         Also known as DIVX4/5
MSMPEG4 V1       X         X
MSMPEG4 V2       X         X
MSMPEG4 V3       X         X         Also known as DIVX3
WMV7             X         X
H263(+)          X         X         Also known as Real Video 1.0
MJPEG            X         X
DV                         X
Huff YUV         X         X

X means that the encoding (resp. decoding) is supported.

See http://www.mplayerhq.hu/~michael/codec-features.html for a precise comparison of the FFmpeg MPEG4 codec with the other solutions.

5.4 Audio Codecs

Supported Codec       Encoding  Decoding  Comments
MPEG audio layer 2    IX        IX
MPEG audio layer 1/3  IX        IX        MP3 encoding is supported through the external library LAME
AC3                   IX        X         liba52 is used internally for decoding.
Vorbis                X                   Encoding is supported through the external library libvorbis.
WMA V1/V2                       X

X means that the encoding (resp. decoding) is supported.

I means that an integer-only version is also available (this ensures the highest performance on systems without hardware floating point support).

6. Platform Specific information

6.1 Linux

ffmpeg should be compiled with at least GCC 2.95.3. GCC 3.2 is now the preferred compiler for ffmpeg. All future optimizations will depend on features only found in GCC 3.2.

6.2 BSD

6.3 Windows

6.4 MacOS X

6.5 BeOS

The configure script should guess the configuration itself. Networking support is currently not finished. errno issues fixed by Andrew Bachmann.

Old stuff:

François Revol - revol at free dot fr - April 2002

The configure script should guess the configuration itself, however I still haven't tested building on the net_server version of BeOS.

ffserver is broken (needs poll() implementation).

There are still issues with errno codes, which are negative on BeOS and which ffmpeg negates when returning them. This ends up turning errors into valid results and then crashes. (To be fixed)

7. Developers Guide

7.1 API

7.2 Integrating libavcodec or libavformat in your program

You can integrate all the source code of the libraries to link them statically and thus avoid any version problem. All you need to do is provide a 'config.mak' and a 'config.h' in the parent directory. See the defines generated by ./configure to understand what is needed.

You can use libavcodec or libavformat in your commercial program, but any patch you make must be published. The best way to proceed is to send your patches to the ffmpeg mailing list.

7.3 Coding Rules

ffmpeg is programmed in ANSI C language. GCC extensions are tolerated. Indent size is 4. The TAB character should not be used.

The presentation is the one specified by 'indent -i4 -kr'.

The main priority in ffmpeg is simplicity and small code size (= fewer bugs).

Comments: for functions visible from other modules, use the JavaDoc format (see examples in `libav/utils.c') so that documentation can be generated automatically.
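
As an illustration, a comment in this style could look like the following; the function and its parameters are purely hypothetical and are not part of the real API:

  /**
   * Convert a picture from one format to another (hypothetical example).
   * @param dst destination picture
   * @param src source picture
   * @return 0 if OK, a negative value on error
   */
  int example_convert_picture(ExamplePicture *dst, const ExamplePicture *src);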

7.4 Submitting patches

When you submit your patch, try to send a unified diff (diff '-u' option). I cannot read other diffs :-)

Run the regression tests before submitting a patch so that you can verify that there are no big problems.

Unless your patch is really big and adds an important feature, by submitting it to me you implicitly agree to put it under my copyright. I prefer to do this to avoid potential problems if the licensing of ffmpeg changes.

Patches should be posted as base64-encoded attachments (or any other encoding which ensures that the patch won't be trashed during transmission) to the ffmpeg-devel mailing list; see http://lists.sourceforge.net/lists/listinfo/ffmpeg-devel

7.5 Regression tests

Before submitting a patch (or committing with CVS), you should at least test that you did not break anything.

The regression tests build a synthetic video stream and a synthetic audio stream. These are then encoded and decoded with all codecs and formats. The CRC (or MD5) of each generated file is recorded in a result file, and a 'diff' is then run against the reference results.

Run 'make test' to test all the codecs.

Run 'make libavtest' to test all the formats.

[Of course, some patches may change the regression tests results. In this case, the regression tests reference results shall be modified accordingly].


This document was generated on 19 November 2002 using texi2html 1.56k.