author     Philip Langdale <philipl@overt.org>   2022-09-07 21:03:15 -0700
committer  Philip Langdale <philipl@overt.org>   2022-09-17 15:11:13 -0700
commit     ed83a3a5bddf4c209157dc9f041eda0721b4c3e0 (patch)
tree       8cb223116a5c4697ebeeb56c8b53d6c5c78b5ccb /libavutil/pixdesc.h
parent     7c60badbedf26a6f3c96014697f96befcbbbace3 (diff)
lavu/pixdesc: favour formats where depth and subsampling exactly match
Since introducing the various packed formats used by VAAPI (and p012), we've noticed that there's actually a gap in how av_find_best_pix_fmt_of_2 works. It doesn't actually assign any value to having the same bit depth as the source format, when comparing against formats with a higher bit depth. This usually doesn't matter, because av_get_padded_bits_per_pixel() will account for it. However, as many of these formats use padding internally, we find that av_get_padded_bits_per_pixel() actually returns the same value for the 10 bit, 12 bit, 16 bit flavours, etc. In these tied situations, we end up just picking the first of the two provided formats, even if the second one should be preferred because it matches the actual bit depth.

This bug already existed if you tried to compare yuv420p10 against p016 and p010, for example, but it simply hadn't come up before so we never noticed. But now, we actually got a situation in the VAAPI VP9 decoder where it offers both p010 and p012 because Profile 3 could be either depth, and it ends up picking p012 for 10 bit content due to the ordering of the testing.

In addition, in the process of testing the fix, I realised we have the same gap when it comes to chroma subsampling - we do not favour a format that has exactly the same subsampling vs one with less subsampling when all else is equal.

To fix this, I'm introducing a small score penalty if the bit depth or subsampling doesn't exactly match the source format. This will break the tie in favour of the format with the exact match, but not offset any of the other scoring penalties we already have.

I have added a set of tests around these formats which will fail without this fix.
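As a minimal illustrative sketch (not part of this patch), the following standalone C program exercises the tie described above: it asks av_find_best_pix_fmt_of_2() to choose between p012 and p010 for a 10-bit 4:2:0 source, the exact situation hit by the VAAPI VP9 decoder. With the exact-match penalty applied, the call is expected to prefer the format whose depth matches the source rather than whichever candidate is listed first.

/*
 * Illustrative sketch only: demonstrates the tie-break between two candidate
 * formats whose padded bits per pixel are identical for a 10-bit source.
 */
#include <stdio.h>
#include <libavutil/pixdesc.h>
#include <libavutil/pixfmt.h>

int main(void)
{
    int loss = 0;
    /* Candidates as offered e.g. by a VAAPI VP9 decoder; source is 10-bit 4:2:0. */
    enum AVPixelFormat best =
        av_find_best_pix_fmt_of_2(AV_PIX_FMT_P012LE, AV_PIX_FMT_P010LE,
                                  AV_PIX_FMT_YUV420P10LE, 0, &loss);

    /* With the exact-depth tie-break, this should report p010le even though
     * p012le was passed first. */
    printf("best: %s, loss flags: 0x%x\n", av_get_pix_fmt_name(best), loss);
    return 0;
}

The same reasoning applies to the chroma-subsampling case: when all other penalties are equal, the candidate whose subsampling matches the source exactly should now win the tie.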
Diffstat (limited to 'libavutil/pixdesc.h')
-rw-r--r--  libavutil/pixdesc.h | 15
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/libavutil/pixdesc.h b/libavutil/pixdesc.h
index f8a195ffcd..48d9300bfe 100644
--- a/libavutil/pixdesc.h
+++ b/libavutil/pixdesc.h
@@ -357,12 +357,15 @@ void av_write_image_line(const uint16_t *src, uint8_t *data[4],
*/
enum AVPixelFormat av_pix_fmt_swap_endianness(enum AVPixelFormat pix_fmt);
-#define FF_LOSS_RESOLUTION 0x0001 /**< loss due to resolution change */
-#define FF_LOSS_DEPTH 0x0002 /**< loss due to color depth change */
-#define FF_LOSS_COLORSPACE 0x0004 /**< loss due to color space conversion */
-#define FF_LOSS_ALPHA 0x0008 /**< loss of alpha bits */
-#define FF_LOSS_COLORQUANT 0x0010 /**< loss due to color quantization */
-#define FF_LOSS_CHROMA 0x0020 /**< loss of chroma (e.g. RGB to gray conversion) */
+#define FF_LOSS_RESOLUTION 0x0001 /**< loss due to resolution change */
+#define FF_LOSS_DEPTH 0x0002 /**< loss due to color depth change */
+#define FF_LOSS_COLORSPACE 0x0004 /**< loss due to color space conversion */
+#define FF_LOSS_ALPHA 0x0008 /**< loss of alpha bits */
+#define FF_LOSS_COLORQUANT 0x0010 /**< loss due to color quantization */
+#define FF_LOSS_CHROMA 0x0020 /**< loss of chroma (e.g. RGB to gray conversion) */
+#define FF_LOSS_EXCESS_RESOLUTION 0x0040 /**< loss due to unneeded extra resolution */
+#define FF_LOSS_EXCESS_DEPTH 0x0080 /**< loss due to unneeded extra color depth */
+
/**
* Compute what kind of losses will occur when converting from one specific