| Commit message | Author | Age |
Track the wallclock time at which each input packet is demuxed and
propagate it through decoding and encoding.
When the live mux option is used, drop all packets demuxed before the
muxer is opened. This is intended to avoid latency when opening the
muxer takes a long time.
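The dropping logic can be sketched as follows (an illustrative Python sketch, not the actual C code; the `wallclock` field and function name are invented for the example):

```python
def drop_stale_packets(packets, mux_opened_at):
    """Keep only packets whose demux wallclock time is at or after the
    moment the muxer was opened; earlier packets would only add latency.

    Each packet is assumed to carry the wallclock time at which it was
    demuxed, propagated through decoding and encoding.
    """
    return [pkt for pkt in packets if pkt["wallclock"] >= mux_opened_at]

# one packet demuxed before the muxer opened (t=2.0), one after
pkts = [{"wallclock": 1.0, "data": b"early"},
        {"wallclock": 3.0, "data": b"late"}]
kept = drop_stale_packets(pkts, mux_opened_at=2.0)
```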
Its use is local to input_thread().
-stream_loop is currently handled by destroying the demuxer thread,
seeking, then recreating it. This is very messy and conflicts with
the future goal of moving each major ffmpeg component into its own
thread.
Handle -stream_loop directly in the demuxer thread. Looping requires the
demuxer to know the duration of the file, which takes into account the
duration of the last decoded audio frame (if any). Use a thread message
queue to communicate this information from the main thread to the
demuxer thread.
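The looping itself amounts to rewinding on EOF and offsetting timestamps by the accumulated duration of previous iterations. A minimal Python sketch of that idea (names are invented; determining the duration, including the last decoded audio frame, is elided here):

```python
def demux_looped(read_packet, seek_to_start, file_duration, loops):
    """Yield packets from the file `loops` times, rewinding on EOF and
    shifting timestamps by the duration of the iterations already played."""
    offset = 0
    for _ in range(loops):
        while True:
            pkt = read_packet()
            if pkt is None:  # EOF: rewind and shift subsequent timestamps
                break
            yield {"pts": pkt["pts"] + offset, "data": pkt["data"]}
        seek_to_start()
        offset += file_duration

# a two-packet "file" looped twice: pts 0, 1 become 0, 1, 2, 3
packets = [{"pts": 0, "data": b"x"}, {"pts": 1, "data": b"y"}]
state = {"i": 0}

def read_packet():
    if state["i"] >= len(packets):
        return None
    pkt = packets[state["i"]]
    state["i"] += 1
    return pkt

def seek_to_start():
    state["i"] = 0

out = list(demux_looped(read_packet, seek_to_start, file_duration=2, loops=2))
```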
Reduces the diff in the following commit.
Also rename it to use the ifile_* namespace.
It will contain more demuxing-specific code in the future.
Use it to simplify some code and fix two off-by-one errors.
Similar to what was previously done for OutputFile.
It has not had any effect whatsoever for over 10 years.
It is not actually used for anything.
It is unnecessary, as it is always exactly equivalent to !!ost->enc_ctx.
There are currently three possible modes for an output stream:
1) The stream is produced by encoding output from some filtergraph. This
is true when ost->enc_ctx != NULL, or equivalently when
ost->encoding_needed != 0.
2) The stream is produced by copying some input stream's packets. This
is true when ost->enc_ctx == NULL && ost->source_index >= 0.
3) The stream is produced by attaching some file directly. This is true
when ost->enc_ctx == NULL && ost->source_index < 0.
OutputStream.stream_copy is currently used to identify case 2), and
sometimes, confusingly (or even incorrectly), to identify case 1).
Remove it, replacing its usage with checks of the enc_ctx/source_index
values.
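A Python paraphrase of these conditions (illustrative only; the real code checks the C fields directly):

```python
def stream_mode(enc_ctx, source_index):
    """Return which of the three modes an output stream is in, using the
    enc_ctx/source_index conditions listed above."""
    if enc_ctx is not None:
        return "encode"      # case 1: encoding filtergraph output
    if source_index >= 0:
        return "streamcopy"  # case 2: copying an input stream's packets
    return "attachment"      # case 3: directly attached file
```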
It is now entirely redundant with audio filters, and is in fact
implemented by setting up a 'pan' filter instance.
That is the only place where it is used. Also make it static.
It is entirely redundant with -flags +psnr.
It serves no purpose; codec parameters can be written directly to
AVStream.codecpar with the same effect.
ffmpeg will be switched to a fully threaded architecture, starting with
muxers.
Do not call exit_program(), as that would conflict with moving this code
into a separate thread.
Since the muxer will operate in a separate thread in the future, the
muxer context should not be accessed from the outside.
It is unused otherwise.
Rename the field to vsync_frame_number to better reflect its current
purpose.
The -shortest option (which finishes the output file at the time the
shortest stream ends) is currently implemented by faking the -t option
when an output stream ends. This approach is fragile, since it depends
on the frames/packets being processed in a specific order. E.g. there
are currently some situations in which the output file length will
depend unpredictably on unrelated factors like encoder delay. More
importantly, the present work aiming at splitting various ffmpeg
components into different threads will make this approach completely
unworkable, since the frames/packets will arrive in effectively random
order.
This commit introduces a "sync queue", which is essentially a collection
of FIFOs, one per stream. Frames/packets are submitted to these FIFOs
and are then released for further processing (encoding or muxing) when
it is ensured that the frame in question will not cause its stream to
get ahead of the other streams (the logic is similar to libavformat's
interleaving queue).
These sync queues are then used for encoding and/or muxing when the
-shortest option is specified.
A new option, -shortest_buf_duration, controls the maximum number of
queued packets, to avoid runaway memory usage.
This commit changes the results of the following tests:
- copy-shortest[12]: the last audio frame is now gone. This is
correct, since it actually outlasts the last video frame.
- shortest-sub: the video packets following the last subtitle packet are
now gone. This is also correct.
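The release logic of such a sync queue can be sketched in a few lines (a simplified Python sketch under the assumptions that frames are represented by their timestamps and that -shortest termination on EOF is elided; class and method names are invented):

```python
from collections import deque

class SyncQueue:
    """One FIFO per stream; a queued frame is released only once no
    stream could still produce an earlier timestamp, similar to
    libavformat's interleaving queue."""
    def __init__(self, nb_streams):
        self.fifos = [deque() for _ in range(nb_streams)]

    def send(self, stream, ts):
        self.fifos[stream].append(ts)

    def receive(self):
        # A head frame is safe to release only while every stream has
        # at least one queued frame: no earlier timestamp can appear.
        out = []
        while all(self.fifos):
            ts, i = min((f[0], i) for i, f in enumerate(self.fifos))
            out.append((i, self.fifos[i].popleft()))
        return out

q = SyncQueue(2)
q.send(0, 0)
q.send(1, 1)
q.send(0, 2)
released = q.receive()  # stream 0's ts=2 stays queued: stream 1 might
                        # still produce something earlier
```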
The following commits will add a new buffering stage after bitstream
filters, which should not be taken into account for choosing next
output.
OutputStream.last_mux_dts is also used by the muxing code to make up
missing DTS values; that field is now moved to the muxer-private
MuxStream object.
This will be needed in following commits that will add new buffering
stages after encoding and bitstream filtering.
It is private to the muxer; there is no reason to access it from outside.
It is currently called from two places:
- output_packet() in ffmpeg.c, which submits the newly available output
packet to the muxer
- from of_check_init() in ffmpeg_mux.c after the header has been
written, to flush the muxing queue
Some packets will thus be processed by this function twice, so it
requires an extra parameter to indicate the place it is called from and
avoid modifying some state twice.
This is fragile and hard to follow, so split this function into two.
Also rename of_write_packet() to of_submit_packet() to better reflect
its new purpose.
The muxing queue currently lives in OutputStream, which is a very large
struct storing the state for both encoding and muxing. The muxing queue
is only used by the code in ffmpeg_mux, so it makes sense to restrict it
to that file.
This is the first step towards reducing the scope of OutputStream.
Avoid accessing the muxer context directly, as this will become
forbidden in future commits.
Figure out earlier whether the output stream/file should be bitexact and
store this information in a flag in OutputFile/OutputStream.
Stop accessing the muxer in set_encoder_id(), which will become
forbidden in future commits.
Allows making the variable local to ffmpeg_mux.
Move the file size checking code to ffmpeg_mux. Use the recently
introduced of_filesize(), making this code consistent with the size
shown by print_report().
The option is parsed as INT64 (signed) and compared to the output of
avio_tell(), which is also int64_t.
Stop accessing muxer internals from outside of ffmpeg_mux.
Move header_written into it, which is not (and should not be) used by
any code outside of ffmpeg_mux.
In the future this context will contain more muxer-private state that
should not be visible to other code.
This is a per-file input option that adjusts an input's timestamps
with reference to another input, so that emitted packet timestamps
account for the difference between the start times of the two inputs.
The typical use case is syncing two or more live inputs, such as
capture devices. Both the target and reference input source timestamps
should be based on the same clock source.
If either input lacks starting timestamps, then no sync adjustment is made.
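The adjustment itself reduces to a start-time offset (a hedged Python sketch; the function names and the sign convention are illustrative, not taken from the actual implementation):

```python
def sync_offset(target_start, ref_start):
    """Offset added to the target input's timestamps so that both
    inputs share a common start time. No adjustment is made when
    either input lacks a starting timestamp."""
    if target_start is None or ref_start is None:
        return 0
    return target_start - ref_start

def adjust(ts, target_start, ref_start):
    return ts + sync_offset(target_start, ref_start)

# target started 2s after the reference on the shared clock, so its
# packet timestamps are shifted 2s later
shifted = adjust(5.0, target_start=12.0, ref_start=10.0)
```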
Frame counters can overflow relatively easily: at 60 fps, INT_MAX frames
amount to slightly more than one year of content. Make sure we always
use 64-bit values for them.
A live stream can easily run for more than a year, and the framedup
logic breaks on overflow.
Signed-off-by: Marton Balint <cus@passwd.hu>
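The arithmetic behind the "slightly more than one year" claim:

```python
INT_MAX = 2**31 - 1       # limit of a 32-bit signed frame counter
seconds = INT_MAX / 60    # wallclock seconds of 60 fps content
days = seconds / 86400    # about 414 days, just over one year
```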
-fps_mode sets the video sync method per output stream, overriding
-vsync for matching streams.
-vsync is deprecated.
Its use for muxing is not documented; in practice it is incremented for
each packet successfully passed to the muxer's write_packet(). Since
there is a lot of indirection between ffmpeg receiving a packet from the
encoder and it actually being written (e.g. bitstream filters, the
interleaving queue), using nb_frames here is incorrect.
Add a new counter for packets received from encoder instead.
Allows accessing it without going through the muxer context. This will
be useful in the following commits, where the muxer context will be
hidden.
This is a first step towards making muxers more independent from the
rest of the code.
Use it to simplify check_init_output_file(). Will allow further
simplifications in the following commits.
This field is currently used by two checks
- skipping packets before the first keyframe
- skipping packets before the start time
to test whether any packets have been output already. But since
frame_number is incremented after the bitstream filters are applied
(which may involve delay), this use is incorrect. The keyframe check
works around this by adding an extra flag; the start-time check does
not.
Simplify both checks by replacing the seen_kf flag with a flag tracking
whether any packets have been output by do_streamcopy().
This is cleaner and allows fine-tuning which stream the option is applied to.
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
A keyframe could be buffered in the bsf and not be output until more packets
had been fed to it.
Signed-off-by: James Almer <jamrial@gmail.com>
Bitstream filters inserted between the input and output were never drained,
resulting in packets being lost if the bsf had any buffered.
Signed-off-by: James Almer <jamrial@gmail.com>
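The effect of a missing drain pass can be shown with a toy one-packet-buffering filter (a Python model of the failure mode, not the actual fix; the real code flushes each bsf through libavcodec's bitstream-filter API):

```python
class BufferingBSF:
    """Toy bitstream filter that holds back the most recent packet,
    imitating a bsf with internal buffering."""
    def __init__(self):
        self.buf = []

    def send(self, pkt):
        self.buf.append(pkt)

    def receive(self):
        # a packet is emitted only once a newer one has arrived
        return self.buf.pop(0) if len(self.buf) > 1 else None

    def drain(self):
        out, self.buf = self.buf, []
        return out

def filter_stream(packets, drain=True):
    bsf, out = BufferingBSF(), []
    for p in packets:
        bsf.send(p)
        q = bsf.receive()
        if q is not None:
            out.append(q)
    if drain:
        out.extend(bsf.drain())  # flush whatever the bsf still holds
    return out
```

Without the drain step, `filter_stream([1, 2, 3], drain=False)` returns only `[1, 2]`: the buffered last packet is lost, which is the bug being fixed.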