author    Martin Storsjö <martin@martin.st>  2022-04-20 11:21:28 +0300
committer Martin Storsjö <martin@martin.st>  2022-04-22 10:49:46 +0300
commit    70db14376c206f0fdcbff11c17875f95885c73d9 (patch)
tree      23fa3490b6567882bfcc5fed34fbdb0477b1a6ed /libswscale
parent    d1a44f261aad2da5d2d4172b3782ee1b6402c3dc (diff)
swscale: aarch64: Optimize the final summation in the hscale routine
Before:                        Cortex A53      A72      A73  Graviton 2  Graviton 3
hscale_8_to_15_width8_neon:        8273.0   4602.5   4289.5      2429.7      1629.1
hscale_8_to_15_width16_neon:      12405.7   6803.0   6359.0      3549.0      2378.4
hscale_8_to_15_width32_neon:      21258.7  11491.7  11469.2      5797.2      3919.6
hscale_8_to_15_width40_neon:      25652.0  14173.7  12488.2      6893.5      4810.4
After:
hscale_8_to_15_width8_neon:        7633.0   3981.5   3350.2      1980.7      1261.1
hscale_8_to_15_width16_neon:      11666.7   5951.0   5512.0      3080.7      2131.4
hscale_8_to_15_width32_neon:      20900.7  10733.2   9481.7      5275.2      3862.1
hscale_8_to_15_width40_neon:      24826.0  13536.2  11502.0      6397.2      4731.9

Overall, this gives an 8-29% speedup for the smaller filter sizes, and around 1-8% for the larger filter sizes.

Inspired by a patch by Jonathan Swinney <jswinney@amazon.com>.

Signed-off-by: Martin Storsjö <martin@martin.st>
Diffstat (limited to 'libswscale')
-rw-r--r--  libswscale/aarch64/hscale.S  14
1 file changed, 3 insertions, 11 deletions
diff --git a/libswscale/aarch64/hscale.S b/libswscale/aarch64/hscale.S
index af55ffe2b7..da34f1cb8d 100644
--- a/libswscale/aarch64/hscale.S
+++ b/libswscale/aarch64/hscale.S
@@ -61,17 +61,9 @@ function ff_hscale_8_to_15_neon, export=1
smlal v3.4S, v18.4H, v19.4H // v3 accumulates srcp[filterPos[3] + {0..3}] * filter[{0..3}]
smlal2 v3.4S, v18.8H, v19.8H // v3 accumulates srcp[filterPos[3] + {4..7}] * filter[{4..7}]
b.gt 2b // inner loop if filterSize not consumed completely
- addp v0.4S, v0.4S, v0.4S // part0 horizontal pair adding
- addp v1.4S, v1.4S, v1.4S // part1 horizontal pair adding
- addp v2.4S, v2.4S, v2.4S // part2 horizontal pair adding
- addp v3.4S, v3.4S, v3.4S // part3 horizontal pair adding
- addp v0.4S, v0.4S, v0.4S // part0 horizontal pair adding
- addp v1.4S, v1.4S, v1.4S // part1 horizontal pair adding
- addp v2.4S, v2.4S, v2.4S // part2 horizontal pair adding
- addp v3.4S, v3.4S, v3.4S // part3 horizontal pair adding
- zip1 v0.4S, v0.4S, v1.4S // part01 = zip values from part0 and part1
- zip1 v2.4S, v2.4S, v3.4S // part23 = zip values from part2 and part3
- mov v0.d[1], v2.d[0] // part0123 = zip values from part01 and part23
+ addp v0.4S, v0.4S, v1.4S // part01 horizontal pair adding
+ addp v2.4S, v2.4S, v3.4S // part23 horizontal pair adding
+ addp v0.4S, v0.4S, v2.4S // part0123 horizontal pair adding
subs w2, w2, #4 // dstW -= 4
sqshrn v0.4H, v0.4S, #7 // shift and clip the 2x16-bit final values
st1 {v0.4H}, [x1], #8 // write to destination part0123