| Commit message | Author | Age |
Usually a HW decoder is expected when the user specifies a HW
acceleration method via the -hwaccel option. However, the current
implementation does not take the HW acceleration method into account, so
a SW decoder may be selected. For example:
$ ffmpeg -hwaccel vaapi -i av1.mp4 -f null -
$ ffmpeg -hwaccel nvdec -i av1.mp4 -f null -
$ ffmpeg -hwaccel vdpau -i av1.mp4 -f null -
[...]
Stream #0:0 -> #0:0 (av1 (libdav1d) -> wrapped_avframe (native))
libdav1d is selected in this case even though vaapi, nvdec or vdpau is
specified.
After applying this patch, the native av1 decoder (with vaapi, nvdec or
vdpau support) is selected for decoding (libdav1d is still used for
probing the format):
$ ffmpeg -hwaccel vaapi -i av1.mp4 -f null -
$ ffmpeg -hwaccel nvdec -i av1.mp4 -f null -
$ ffmpeg -hwaccel vdpau -i av1.mp4 -f null -
[...]
Stream #0:0 -> #0:0 (av1 (native) -> wrapped_avframe (native))
Tested-by: Mario Roy <marioeroy@gmail.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
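The selection logic can be illustrated with a small self-contained
sketch. The decoder table and function below are hypothetical stand-ins,
not FFmpeg's actual API: prefer a decoder that supports the requested
hwaccel method, and fall back to the default decoder when no method is
requested or none matches.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for the registered decoder list: each decoder
 * advertises the hwaccel methods it supports. */
typedef struct {
    const char *name;
    const char *hw_methods[4]; /* NULL-terminated */
} Decoder;

static const Decoder av1_decoders[] = {
    { "libdav1d", { NULL } },                            /* SW only */
    { "av1",      { "vaapi", "nvdec", "vdpau", NULL } }, /* native + hwaccels */
};

/* Pick the first decoder supporting the requested method; with no match
 * or no request, keep the default (first registered) decoder. */
const char *select_av1_decoder(const char *hwaccel)
{
    size_t n = sizeof(av1_decoders) / sizeof(av1_decoders[0]);

    if (hwaccel)
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; av1_decoders[i].hw_methods[j]; j++)
                if (!strcmp(av1_decoders[i].hw_methods[j], hwaccel))
                    return av1_decoders[i].name;

    return av1_decoders[0].name;
}
```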
After applying this patch, the desired HW acceleration method is known
before the decoder is selected, so the next commit can take it into
account when selecting the decoder for an input stream.
There should be no functional changes in this patch.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Broken in 9c2b800203a5a8f3d83f3b8f28e8c50d28186b39.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Fix indentation after the previous commit. Also use an early return to
save one extra indentation level.
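As an illustration of the refactoring pattern (a generic example, not
the actual ffmpeg code): hoisting the guard condition into an early
return keeps the main body one indentation level shallower.

```c
#include <stddef.h>

/* Generic early-return example: instead of wrapping the whole body in
 * "if (v) { ... }", bail out first and keep the main logic unindented. */
int count_positive(const int *v, size_t n)
{
    if (!v)
        return 0;   /* early return: nothing to scan */

    int count = 0;
    for (size_t i = 0; i < n; i++)
        if (v[i] > 0)
            count++;
    return count;
}
```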
Fix indentation after the previous commit. Also use an early return to
save one extra indentation level.
Fix indentation after the previous commit. Also use an early return to
save one extra indentation level.
That should only be done from inside the decoder. Log to NULL instead,
as is the current convention in ffmpeg.
Since the option it relates to is deprecated, it is highly unlikely to
become useful.
It is now entirely redundant with audio filters, and is in fact
implemented by setting up a 'pan' filter instance.
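For reference, a channel remap that used to be expressed with
-map_channel can be written as a pan filtergraph. The fragment below
(an illustrative example, passed to ffmpeg via -af) swaps the two
channels of a stereo stream:

```
pan=stereo|c0=c1|c1=c0
```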
That is the only place where it is used. Also make it static.
The codec type will be set by avcodec_alloc_context3(), there is no
reason to set it manually.
It is entirely redundant with -flags +psnr.
The streamcopy initialization code briefly needs an AVCodecContext to
apply AVOptions to. Allocate a temporary codec context, do not use the
encoding one.
It serves no purpose, codec parameters can be written directly to
AVStream.codecpar with the same effect.
It has been deprecated in favor of the volume filter since 2012.
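The replacement is the volume audio filter. An illustrative fragment
(passed via -af) that halves the amplitude:

```
volume=0.5
```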
choose_pixel_fmt() only uses strict_std_compliance, so pass just that
value. This makes it clearer which fields are accessed.
No encoders can possibly be opened at this point. And even if some were,
they would be closed in ffmpeg_cleanup().
The same information is available from AVStream.codecpar. This will make
it possible to stop allocating an encoder unless encoding is actually
performed.
Mistakenly reintroduced in 4740fea7ddf.
Fixes uninitialized reads in the sub-lrc-remux test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Using tail calls in functions returning void is forbidden (C99/C11
6.8.6.4: "A return statement with an expression shall not appear in a
function whose return type is void."). GCC emits a warning because of
this when -pedantic is used: "ISO C forbids ‘return’ with expression, in
function returning void".
Reviewed-by: Hendrik Leppkes <h.leppkes@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
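A minimal self-contained illustration of the rule (generic code, not
the patched ffmpeg function): when both caller and callee return void,
the call and the return must be separate statements.

```c
static int calls;

static void do_work(void)
{
    calls++;
}

/* Non-conforming (C99/C11 6.8.6.4) variant:
 *     void run(void) { return do_work(); }
 * Conforming equivalent: a plain call followed by a plain return. */
void run(void)
{
    do_work();
    return;
}
```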
It is similar to AVThreadMessageQueue, but supports multiple streams,
each with its own EOF state.
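The idea can be sketched with a toy structure (hypothetical names, not
the committed API): EOF is tracked per stream, and the queue as a whole
only finishes once every stream has signalled EOF.

```c
#include <stdbool.h>

#define MAX_STREAMS 8

/* Toy multi-stream queue: only the per-stream EOF bookkeeping is shown;
 * the actual message storage is omitted. */
typedef struct {
    bool eof[MAX_STREAMS];
    int  nb_streams;
} StreamQueue;

void sq_init(StreamQueue *q, int nb_streams)
{
    q->nb_streams = nb_streams;
    for (int i = 0; i < nb_streams; i++)
        q->eof[i] = false;
}

void sq_send_eof(StreamQueue *q, int stream)
{
    q->eof[stream] = true;
}

/* The queue is finished only when every stream has reached EOF. */
bool sq_finished(const StreamQueue *q)
{
    for (int i = 0; i < q->nb_streams; i++)
        if (!q->eof[i])
            return false;
    return true;
}
```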
ffmpeg will be switched to a fully threaded architecture, starting with
muxers.
It retrieves the muxer's internal timestamp with under-defined
semantics. Continuing to use this value would also require
synchronization once the muxer is moved to a separate thread.
Replace the value with last_mux_dts.
Return an error instead, as is already done in other places in this
function.
Do not call exit_program(), as that would conflict with moving this code
into a separate thread.
Do not call exit_program(), as that would conflict with moving this code
into a separate thread.
Do not call exit_program(), as that would conflict with moving this code
into a separate thread.
Since the muxer will operate in a separate thread in the future, the
muxer context should not be accessed from the outside.
It is unused otherwise.
Rename the field to vsync_frame_number to better reflect its current
purpose.
This field means different things when the video is encoded (number of
frames emitted to the encoding sync queue/encoder by the video sync
code) or copied (number of packets sent to the muxer sync queue).
Print the value of packets_written instead, which means the same thing
in both cases. It is also more accurate, since packets may be dropped by
the sync queue or bitstream filters.
The same issues apply to it as to -shortest.
This changes the results of the following tests:
- matroska-flac-extradata-update
The test reencodes two input FLAC streams into three output FLAC
streams. The last output stream is limited to 8 frames. The current
code results in the first two output streams having 12 frames, after
this commit all three streams have 8 frames and are the same length.
This new result is better, since it is predictable.
- mkv-1242
The test streamcopies one video and one audio stream, video is limited
to 11 frames. The new result shortens the audio stream so that it is
not longer than the video.
The -shortest option (which finishes the output file at the time the
shortest stream ends) is currently implemented by faking the -t option
when an output stream ends. This approach is fragile, since it depends
on the frames/packets being processed in a specific order. E.g. there
are currently some situations in which the output file length will
depend unpredictably on unrelated factors like encoder delay. More
importantly, the present work aiming at splitting various ffmpeg
components into different threads will make this approach completely
unworkable, since the frames/packets will arrive in effectively random
order.
This commit introduces a "sync queue", which is essentially a collection
of FIFOs, one per stream. Frames/packets are submitted to these FIFOs
and are then released for further processing (encoding or muxing) when
it is ensured that the frame in question will not cause its stream to
get ahead of the other streams (the logic is similar to libavformat's
interleaving queue).
These sync queues are then used for encoding and/or muxing when the
-shortest option is specified.
A new option – -shortest_buf_duration – controls the maximum number of
queued packets, to avoid runaway memory usage.
This commit changes the results of the following tests:
- copy-shortest[12]: the last audio frame is now gone. This is
correct, since it actually outlasts the last video frame.
- shortest-sub: the video packets following the last subtitle packet are
now gone. This is also correct.
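A heavily simplified, hypothetical sketch of the sync-queue idea: one
timestamp FIFO per stream, and the earliest head frame is released only
when every stream has at least one queued frame, so the released frame
can never let its stream get ahead of a stream we have not heard from
yet. EOF handling and the buffering limit are omitted.

```c
/* Hypothetical sketch, not the actual implementation. */
#define SQ_STREAMS 2
#define SQ_DEPTH   16

typedef struct {
    long ts[SQ_STREAMS][SQ_DEPTH];
    int  head[SQ_STREAMS], tail[SQ_STREAMS];
} SyncQueue;

/* Submit a frame's timestamp to its stream's FIFO (no wraparound: toy
 * capacity only). */
void sq_send(SyncQueue *q, int st, long ts)
{
    q->ts[st][q->tail[st]++] = ts;
}

/* Release the globally earliest head frame, returning its stream index
 * and storing its timestamp, or return -1 if we must wait: an empty
 * stream might still produce an even earlier frame. */
int sq_receive(SyncQueue *q, long *ts)
{
    int best = -1;
    for (int st = 0; st < SQ_STREAMS; st++) {
        if (q->head[st] == q->tail[st])
            return -1;
        if (best < 0 || q->ts[st][q->head[st]] < q->ts[best][q->head[best]])
            best = st;
    }
    *ts = q->ts[best][q->head[best]++];
    return best;
}
```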
This allows avoiding the constant allocation and freeing of objects such
as AVFrame or AVPacket.
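The pattern is a simple free-list pool; the sketch below (hypothetical
types, not FFmpeg's actual object pool API) shows the recycling idea.

```c
#include <stdlib.h>

/* Toy free-list pool: released items are chained for reuse instead of
 * being freed and reallocated for every frame/packet. */
typedef struct PoolItem {
    struct PoolItem *next;
    /* payload (e.g. an AVFrame-sized object) would live here */
} PoolItem;

typedef struct {
    PoolItem *free_list;
    int       allocs;   /* real heap allocations, for illustration */
} Pool;

PoolItem *pool_get(Pool *p)
{
    if (p->free_list) {              /* fast path: reuse a released item */
        PoolItem *it = p->free_list;
        p->free_list = it->next;
        return it;
    }
    p->allocs++;                     /* slow path: really allocate */
    return calloc(1, sizeof(PoolItem));
}

void pool_put(Pool *p, PoolItem *it)
{
    it->next     = p->free_list;     /* push back onto the free list */
    p->free_list = it;
}
```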
The following commits will add a new buffering stage after bitstream
filters, which should not be taken into account when choosing the next
output.
OutputStream.last_mux_dts is also used by the muxing code to make up
missing DTS values - that field is now moved to the muxer-private
MuxStream object.
This will be needed in following commits that will add new buffering
stages after encoding and bitstream filtering.