* commit '24b5cff01bbac4e08acfd6d19c499e880988f520':
lavc: handle hw_frames_ctx where necessary
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
avcodec_copy_context() didn't handle hw_frames_ctx references correctly,
which could cause crashes.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
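A minimal sketch of the handling this implies (the helper name is
illustrative; av_buffer_ref()/av_buffer_unref() are the real libavutil
calls):

    #include <libavcodec/avcodec.h>
    #include <libavutil/buffer.h>

    /* Illustrative: hw_frames_ctx is an AVBufferRef and must be
     * re-referenced, not copied by value, when duplicating a context. */
    static int copy_hw_frames_ctx(AVCodecContext *dst, const AVCodecContext *src)
    {
        av_buffer_unref(&dst->hw_frames_ctx);
        if (src->hw_frames_ctx) {
            dst->hw_frames_ctx = av_buffer_ref(src->hw_frames_ctx);
            if (!dst->hw_frames_ctx)
                return AVERROR(ENOMEM);
        }
        return 0;
    }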
* commit '4024b566d664a4b161d677554be52f32e7ad4236':
golomb: Give svq3_get_se_golomb()/svq3_get_ue_golomb() better names
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit 'e47b8bbf0b54599d44b9330eb4d68cdde4f6d298':
avcodec: Bump micro version after changing public JPEG 2000 defines
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit 'ad61da054bd8c74a5d5b38d80846228fc6147108':
jpeg2000: Fix profile define values
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
Signed-off-by: Diego Biurrun <diego@biurrun.de>
* commit '2ef6dab0a79a9852a92ed80b07f9e32a37530d9e':
lavc: document that avcodec_close() should not be used
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
We cannot deprecate it until the new parser API is in place, because of
the way libavformat works. But the majority of users can already simply
replace it with avcodec_free_context(), which will simplify the
transition once it is finally deprecated.
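For callers that own their context, the replacement is straightforward;
a sketch, assuming the context came from avcodec_alloc_context3():

    #include <libavcodec/avcodec.h>

    static void close_and_free(AVCodecContext **pavctx)
    {
        /* Old pattern: avcodec_close(*pavctx); av_freep(pavctx); */
        avcodec_free_context(pavctx); /* closes if open, frees, NULLs the pointer */
    }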
* commit '04fc8e24a091ed1d77d7a3c0cbcfe60baec19a9f':
lavc: deprecate avcodec_get_context_defaults3()
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
This function is supposed to "reset" a codec context to a clean state so
that it can be opened again. The only reason it exists is to allow using
AVStream.codec as a decoding context (after it was already
opened/used/closed by avformat_find_stream_info()). Since that behaviour
is now deprecated, there is no reason for this function to exist
anymore.
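Without it, "resetting" a context amounts to freeing it and allocating a
fresh one; a sketch (the helper name is illustrative):

    #include <libavcodec/avcodec.h>

    static int reset_context(AVCodecContext **pavctx, const AVCodec *codec)
    {
        avcodec_free_context(pavctx);
        *pavctx = avcodec_alloc_context3(codec);
        return *pavctx ? 0 : AVERROR(ENOMEM);
    }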
* commit '5f30ac27795f9f98043e8582ccaad8813104adc4':
lavc: deprecate avcodec_copy_context()
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
Since AVCodecContext contains a lot of complex state, copying a codec
context is not a well-defined operation. The purpose for which it is
typically used (which is well-defined) is copying the stream parameters
from one codec context to another. That is now possible through the
AVCodecParameters API. Therefore, there is no reason for
avcodec_copy_context() to exist.
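A sketch of the replacement for that well-defined use case, going through
AVCodecParameters (the helper name is illustrative; the
avcodec_parameters_* calls are the public API):

    #include <libavcodec/avcodec.h>

    static int copy_stream_params(AVCodecContext *dst, const AVCodecContext *src)
    {
        AVCodecParameters *par = avcodec_parameters_alloc();
        int ret;

        if (!par)
            return AVERROR(ENOMEM);
        ret = avcodec_parameters_from_context(par, src);
        if (ret >= 0)
            ret = avcodec_parameters_to_context(dst, par);
        avcodec_parameters_free(&par);
        return ret;
    }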
* commit '74b1bf632f125a795e66e5fd0a060b9c7c55b7a3':
mp3: Make the extrasize explicit
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
Initialize the bit buffer with the correct size (the number of bits that
will be read) instead of relying on the bitstream reader to overread and
still produce the correct values.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Signed-off-by: Diego Biurrun <diego@biurrun.de>
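A sketch of the pattern (function and variable names are illustrative;
init_get_bits() takes the buffer size in bits):

    #include "get_bits.h" /* libavcodec-internal bitstream reader */

    static int parse_header(const uint8_t *buf, int header_bits)
    {
        GetBitContext gb;
        int ret = init_get_bits(&gb, buf, header_bits); /* bits, not bytes */
        if (ret < 0)
            return ret;
        /* ... read at most header_bits from gb ... */
        return 0;
    }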
* commit '52567e8198669a1e7493c75771613f87a90466c3':
get_bits: Drop some TRACE-level debug code
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
It will not be provided by the new bit reader anyway.
Based on a patch by Agatha Hu <ahu@nvidia.com>
For reasons we are not privy to, nvidia decided that the nvenc encoder
should apply aspect ratio compensation to 'DVD like' content, assuming that
the content is not BT.601 compliant, but needs to be BT.601 compliant. In
this context, that means that they make the following, questionable,
assumptions:
1) If the input dimensions are 720x480 or 720x576, assume the content has
an active area of 704x480 or 704x576.
2) Assume that whatever the input sample aspect ratio is, it does not account
for the difference between 'physical' and 'active' dimensions.
From these assumptions, they then conclude that they can 'help', by adjusting
the sample aspect ratio by a factor of 45/44. And indeed, if you wanted to
display only the 704 wide active area with the same aspect ratio as the full
720 wide image - this would be the correct adjustment factor, but what if you
don't? And more importantly, what if you're used to lavc not making this kind
of adjustment at encode time - because none of the other encoders do this!
And, what if you had already accounted for BT.601 and your input had the
correct attributes? Well, it's going to apply the compensation anyway!
So, if you take some content, and feed it through nvenc repeatedly, it
will keep scaling the aspect ratio every time, stretching your video out
more and more and more.
So, clearly, regardless of whether you want to apply BT.601 aspect ratio
adjustments or not, this is not the way to do it. With any other lavc
encoder, you would do it as part of defining your input parameters or do
the adjustment at playback time, and there's no reason why nvenc should
be any different.
This change adds some logic to undo the compensation that nvenc would
otherwise do.
nvidia engineers have told us that they will work to make this
compensation mechanism optional in a future release of the nvenc
SDK. At that point, we can adapt accordingly.
Signed-off-by: Philip Langdale <philipl@overt.org>
Reviewed-by: Timo Rothenpieler <timo@rothenpieler.org>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
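A sketch of the idea, not the exact nvenc.c code: pre-scale the display
aspect ratio by 44/45 for 720x480/576 input so that nvenc's implicit
45/44 adjustment cancels out.

    #include <limits.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/rational.h>

    static void undo_nvenc_sar_compensation(const AVCodecContext *avctx,
                                            int *dar_num, int *dar_den)
    {
        AVRational sar = avctx->sample_aspect_ratio;
        int64_t num, den;

        if (!sar.num || !sar.den)
            sar = (AVRational){ 1, 1 }; /* treat unset SAR as square */
        num = (int64_t)avctx->width  * sar.num;
        den = (int64_t)avctx->height * sar.den;
        if (avctx->width == 720 &&
            (avctx->height == 480 || avctx->height == 576)) {
            num *= 44; /* cancels nvenc's *45/44 */
            den *= 45;
        }
        av_reduce(dar_num, dar_den, num, den, INT_MAX);
    }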
The code needs only a few definitions from cuda.h, so define them
directly when CUDA is not enabled. CUDA is still required for accepting
HW frames as input.
Based on the code by Timo Rothenpieler <timo@rothenpieler.org>.
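A sketch of what such stand-in definitions can look like (the exact set
of symbols nvenc needs is an assumption; the typedefs mirror cuda.h):

    #if CONFIG_CUDA
    #include <cuda.h>
    #else
    typedef void *CUcontext; /* opaque struct pointer in the real header */
    typedef int   CUdevice;
    #if defined(_WIN64) || defined(__LP64__)
    typedef unsigned long long CUdeviceptr;
    #else
    typedef unsigned int CUdeviceptr;
    #endif
    typedef int CUresult; /* enum in the real header */
    #endif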
hwcontext_cuda.h includes cuda.h, so this will allow building nvenc
without depending on cuda.h.
Bump the API version requirement to 6.
Based on a patch by Agatha Hu <ahu@nvidia.com>.
Based on a patch by Agatha Hu <ahu@nvidia.com>.
For some unknown reason, enabling these causes proper CBR padding, and
since there are no known downsides, just always enable them in CBR mode.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Based on a patch by Philip Langdale <philipl@overt.org>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Paul B Mahol <onemda@gmail.com>
by missing parameter sets.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The encode function is supposed to just return 0 on success.
This stems from a mixup with the return value of decode functions.
Reviewed-by: Jan Gerber <j@v2v.cc>
Signed-off-by: Martin Storsjö <martin@martin.st>
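A sketch of the encode2-style callback contract this refers to (the
codec name is illustrative):

    #include <libavcodec/avcodec.h>

    static int foo_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
                                const AVFrame *frame, int *got_packet)
    {
        /* ... fill pkt ... */
        *got_packet = 1;
        return 0; /* 0 on success, not a byte count as in decode callbacks */
    }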
Tested-by: Jan Gerber <j@v2v.cc>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
in a file
* commit 'd68fb1475856cf93199e2bc4eee3063902c35df7':
mjpegdec: Properly fail on malloc failure
Merged-by: Clément Bœsch <u@pkh.me>
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* commit '0d95d88fbd1aeadafb8b0b1bfb880bf21b33132c':
lavc: revert the Makefile part of 330177b
Merged-by: Clément Bœsch <u@pkh.me>
There is no real advantage to listing some codecs or subsystems
separately simply because they are somehow "hw-accelerated"; on the
contrary, it makes them harder to find than in a plain alphabetically
ordered list.
This commit fixes a broken build when compiling libavcodec with the LLVM
compiler. These assembly files use a non-standard format that is only
supported by GCC. It would be nice to use a common, standard format.
With this patch, both GCC and LLVM can build them and generate the same
objects.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes Ticket5343
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Increase the buffer dequeue timeout when the codec needs to be drained,
as it can happen that no input buffer is available when we receive a
null packet for the first time (meaning we are unable to signal end of
stream and mark the codec as draining).
Fixes potential loss of the last frames after sending a null packet.
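A sketch of the shape of the fix (the struct, field, and timeout values
are illustrative, not the exact ones in mediacodecdec.c):

    /* Wait longer for an input buffer once a null packet arrives and the
     * codec is not yet draining, so end of stream can be signalled. */
    static ssize_t dequeue_input(MediaCodecDecContext *s, int is_eof_packet)
    {
        int64_t timeout_us = (is_eof_packet && !s->draining)
                             ? 1000000 /* up to 1s to be able to signal EOS */
                             : 10000;
        return ff_AMediaCodec_dequeueInputBuffer(s->codec, timeout_us);
    }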
Their only purpose is to carry the end-of-stream flag.
10-bit decoding support is now available in the native decoder.
Signed-off-by: Paul B Mahol <onemda@gmail.com>