| Commit message | Author | Age |
Bitstream filtering is done as a part of muxing, so this is the more
proper place for this.
The code in question is muxing-specific and so belongs there. This will
allow making some objects private to the muxer in future commits.
This function is common to both transcoding and streamcopy, so it
properly belongs in the muxing code.
init_output_stream_encode()
The code is subtitle-encoding-specific, so this is a more appropriate
place for it.
The current name is confusing.
Stop setting OutputStream.sync_opts for subtitle encoding, as it is now
unused.
It is not used for anything.
Reindent after previous commit, apply some style fixes.
in_picture->pts cannot be AV_NOPTS_VALUE, as it is set to ost->sync_opts
a few lines above. ost->sync_opts is never AV_NOPTS_VALUE.
It has been deprecated in favor of the aresample filter for almost 10
years.
Another thing this option can do is drop audio timestamps and have them
generated by the encoding code or the muxer, but
- for encoding, this can already be done with the setpts filter
- for muxing this should almost never be done as timestamp generation by
the muxer is deprecated, but people who really want to do this can use
the setts bitstream filter
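The replacements named above could be exercised roughly as follows. These command lines are illustrative sketches: the input/output file names are placeholders, the audio variant of setpts is asetpts, and exact option syntax should be double-checked against the ffmpeg filter and bitstream-filter documentation.

```shell
# Timestamp-based audio stretching/squeezing, roughly what -async used to do:
ffmpeg -i in.mkv -af aresample=async=1 out.mkv

# Regenerating audio timestamps before the encoder with the (a)setpts filter:
ffmpeg -i in.mkv -af asetpts=N/SR/TB out.mkv

# Rewriting timestamps at the muxing stage with the setts bitstream filter:
ffmpeg -i in.mkv -c copy -bsf:a setts=ts=TS-STARTPTS out.mkv
```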
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
update_video_stats() currently uses OutputStream.data_size to print the
total size of the encoded stream so far and the average bitrate.
However, that field is updated in the muxer thread, right before the
packet is sent to the muxer. Not only is this racy, but the numbers may
not match even if muxing was in the main thread due to bitstream
filters, filesize limiting, etc.
Introduce a new counter, data_size_enc, for total size of the packets
received from the encoder and use that in update_video_stats(). Rename
data_size to data_size_mux to indicate its semantics more clearly.
No synchronization is needed for data_size_mux, because it is only read
in the main thread in print_final_stats(), which runs after the muxer
threads are terminated.
It is either equal to OutputStream.enc_ctx->codec, or NULL when enc_ctx
is NULL. Replace the use of enc with enc_ctx->codec, or the equivalent
enc_ctx->codec_* fields where more convenient.
It races with the demuxing thread. Instead, send the information along
with the demuxed packets.
Ideally, the code should stop using the stream-internal parsing
completely, but that requires considerably more effort.
Fixes races, e.g. in:
- fate-h264-brokensps-2580
- fate-h264-extradata-reload
- fate-iv8-demux
- fate-m4v-cfr
- fate-m4v
Use it instead of AVStream.codecpar in the main thread. While
AVStream.codecpar is documented to be updated only when the stream is
added or during avformat_find_stream_info(), it is actually updated
during demuxing. Accessing it from a different thread therefore
constitutes a race.
Ideally, some mechanism should eventually be provided for signalling
parameter updates to the user. Then the demuxing thread could pick up
the changes and propagate them to the decoder.
Discontinuity detection/correction is left in the main thread, as it is
entangled with InputStream.next_dts and related variables, which may be
set by decoding code.
Fixes races e.g. in fate-ffmpeg-streamloop after
aae9de0cb2887e6e0bbfda6ffdf85ab77d3390f0.
This will allow moving normal offset handling to the demuxer thread,
since discontinuities currently have to be processed in the main thread,
as the code uses some decoder-produced values.
ts_discontinuity_process()
InputFile.ts_offset can change during transcoding, due to discontinuity
correction. This should not affect the streamcopy starting timestamp.
Cf. bf2590aed3e64d44a5e2430fdbe89f91f5e55bfe
Currently this code is located in the discontinuity handling block,
where it does not belong.
Its use is local to input_thread().
-stream_loop is currently handled by destroying the demuxer thread,
seeking, then recreating it anew. This is very messy and conflicts with
the future goal of moving each major ffmpeg component into its own
thread.
Handle -stream_loop directly in the demuxer thread. Looping requires the
demuxer to know the duration of the file, which takes into account the
duration of the last decoded audio frame (if any). Use a thread message
queue to communicate this information from the main thread to the
demuxer thread.
Reduces the diff in the following commit.
Also rename it to use the ifile_* namespace.
This avoids a potential race with the demuxer adding new streams. It is
also more efficient, since we no longer do inter-thread transfers of
packets that will just be discarded.
This is a more appropriate place for this.
This undocumented feature runtime-enables dumping input packets. I can
think of no reasonable real-world use case that cannot also be
accomplished in a different way. Keeping this functionality would
interfere with the following commit moving it to the input thread (then
setting the variable would require locking or atomics, which would be
unnecessarily complicated for a feature that probably nobody uses).
It will contain more demuxing-specific code in the future.
This will be required by the following architecture changes.
It is not actually used for anything.
It is unnecessary, as it is always exactly equivalent to !!ost->enc_ctx.
There are currently three possible modes for an output stream:
1) The stream is produced by encoding output from some filtergraph. This
is true when ost->enc_ctx != NULL, or equivalently when
ost->encoding_needed != 0.
2) The stream is produced by copying some input stream's packets. This
is true when ost->enc_ctx == NULL && ost->source_index >= 0.
3) The stream is produced by attaching some file directly. This is true
when ost->enc_ctx == NULL && ost->source_index < 0.
OutputStream.stream_copy is currently used to identify case 2), and
sometimes, confusingly (or even incorrectly), to identify case 1).
Remove it, replacing its usage with checks on the enc_ctx/source_index
values.
The same information is available from AVStream.codecpar. This will make
it possible to stop allocating a decoder unless decoding is actually
performed.
That should only be done from inside the decoder. Log to NULL instead,
as is the current convention in ffmpeg.
It is now entirely redundant with audio filters, and is in fact
implemented by setting up a 'pan' filter instance.
That is the only place where it is used. Also make it static.
The streamcopy initialization code briefly needs an AVCodecContext to
apply AVOptions to. Allocate a temporary codec context, do not use the
encoding one.