path: root/libavcodec/nvenc.c
author     Philip Langdale <philipl@overt.org>     2015-01-28 09:05:53 -0800
committer  Michael Niedermayer <michaelni@gmx.at>  2015-02-05 02:32:33 +0100
commit     d20df2601f029fd96bb61c0954de299be87c3d0d (patch)
tree       32c40ea5aa5a3100f4becb97903f469e68a5b8f5 /libavcodec/nvenc.c
parent     692b22626ec9a9585f667c124a186b1a9796e432 (diff)
avcodec/nvenc: De-compensate aspect ratio compensation of DVD-like content.
For reasons we are not privy to, nvidia decided that the nvenc encoder should apply aspect ratio compensation to 'DVD like' content, assuming that the content is not bt.601 compliant, but needs to be bt.601 compliant. In this context, that means that they make the following, questionable, assumptions:

1) If the input dimensions are 720x480 or 720x576, assume the content has an active area of 704x480 or 704x576.

2) Assume that whatever the input sample aspect ratio is, it does not account for the difference between 'physical' and 'active' dimensions.

From these assumptions, they then conclude that they can 'help' by adjusting the sample aspect ratio by a factor of 45/44. And indeed, if you wanted to display only the 704-wide active area with the same aspect ratio as the full 720-wide image, this would be the correct adjustment factor. But what if you don't? And more importantly, what if you're used to ffmpeg not making this kind of adjustment at encode time - because none of the other encoders do this!

And what if you had already accounted for bt.601 and your input had the correct attributes? Well, it's going to apply the compensation anyway! So, if you take some content and feed it through nvenc repeatedly, it will keep scaling the aspect ratio every time, stretching your video out more and more and more.

So, clearly, regardless of whether you want to apply bt.601 aspect ratio adjustments or not, this is not the way to do it. With any other ffmpeg encoder, you would do it as part of defining your input parameters, or do the adjustment at playback time, and there's no reason why nvenc should be any different.

This change adds some logic to undo the compensation that nvenc would otherwise do. nvidia engineers have told us that they will work to make this compensation mechanism optional in a future release of the nvenc SDK. At that point, we can adapt accordingly.

Signed-off-by: Philip Langdale <philipl@overt.org>
Reviewed-by: Timo Rothenpieler <timo@rothenpieler.org>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
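The arithmetic is easy to check in isolation. Below is a minimal standalone sketch (not part of the patch) that applies the same 44/45 pre-scaling via av_reduce() from libavutil; the 16:9 display aspect ratio used as input is purely an illustrative value, not something taken from the commit.

/* Standalone illustration of the de-compensation arithmetic: nvenc stretches
 * the aspect ratio of 720x480/720x576 content by 45/44, so the patch
 * pre-multiplies the display aspect ratio by 44/45 to cancel that out.
 * Example values only; this is not FFmpeg code. */
#include <stdio.h>
#include <libavutil/rational.h>

int main(void)
{
    int dar_w = 16, dar_h = 9;   /* e.g. 16:9 DVD content at 720x480 */
    int dw, dh;

    /* Same call shape as the patch: scale the DAR by 44/45 and let
     * av_reduce() keep the resulting fraction within a sane range. */
    av_reduce(&dw, &dh, dar_w * 44LL, dar_h * 45LL, 1024 * 1024);

    printf("de-compensated DAR: %d:%d\n", dw, dh);   /* prints 704:405 */
    return 0;
}

When nvenc then applies its internal 45/44 adjustment, 704:405 scales back to exactly 16:9, which is the whole point of the change.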
Diffstat (limited to 'libavcodec/nvenc.c')
-rw-r--r--  libavcodec/nvenc.c | 12
1 file changed, 12 insertions(+), 0 deletions(-)
diff --git a/libavcodec/nvenc.c b/libavcodec/nvenc.c
index 2cfc06a6b9..22956b9c0b 100644
--- a/libavcodec/nvenc.c
+++ b/libavcodec/nvenc.c
@@ -587,6 +587,18 @@ static av_cold int nvenc_encode_init(AVCodecContext *avctx)
         ctx->init_encode_params.darWidth = avctx->width;
     }
 
+    // De-compensate for hardware, dubiously, trying to compensate for
+    // playback at 704 pixel width.
+    if (avctx->width == 720 &&
+        (avctx->height == 480 || avctx->height == 576)) {
+        av_reduce(&dw, &dh,
+                  ctx->init_encode_params.darWidth * 44,
+                  ctx->init_encode_params.darHeight * 45,
+                  1024 * 1024);
+        ctx->init_encode_params.darHeight = dh;
+        ctx->init_encode_params.darWidth = dw;
+    }
+
     ctx->init_encode_params.frameRateNum = avctx->time_base.den;
     ctx->init_encode_params.frameRateDen = avctx->time_base.num * avctx->ticks_per_frame;