path: root/libavfilter
Commit message (Author, Date)
* avfilter/af_ladspa: add latency compensation (Paul B Mahol, 2020-06-21)
* avfilter/af_ladspa: check another directory for plugins (Paul B Mahol, 2020-06-21)
* avfilter: add D2TS, TS2D, TS2T as a common macro in internal.h (Limin Wang, 2020-06-19)
    Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
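    For reference, timestamp-conversion macros of this kind typically look like the sketch below (illustrative only; the exact definitions live in libavfilter/internal.h and may differ):

        #include <stdint.h>              /* int64_t */
        #include <math.h>                /* NAN, isnan() */
        #include "libavutil/avutil.h"    /* AV_NOPTS_VALUE */
        #include "libavutil/rational.h"  /* av_q2d() */

        /* double -> timestamp: NaN maps to the "no PTS" sentinel */
        #define D2TS(d)      (isnan(d) ? (int64_t)AV_NOPTS_VALUE : (int64_t)(d))
        /* timestamp -> double: the sentinel maps back to NaN */
        #define TS2D(ts)     ((ts) == AV_NOPTS_VALUE ? NAN : (double)(ts))
        /* timestamp -> seconds, scaled by the link time base */
        #define TS2T(ts, tb) ((ts) == AV_NOPTS_VALUE ? NAN : (double)(ts) * av_q2d(tb))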
* avfilter/vf_overlay: add yuv420p10 and yuv422p10 10bit format support (Limin Wang, 2020-06-19)
    Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
* avfilter/vf_overlay: support for 8bit and 10bit overlay with macro-based function (Limin Wang, 2020-06-19)
    Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
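    "Macro-based function" here refers to the common FFmpeg idiom of stamping out one function body per bit depth; a minimal sketch of that idiom follows (the per-pixel math is a placeholder, not the overlay filter's actual blending code):

        #include <stdint.h>

        /* One body, two instantiations: an 8-bit and a 10-bit variant of the
         * same per-line routine, differing only in the sample type. */
        #define DEFINE_BLEND_LINE(depth, type)                              \
        static void blend_line_##depth(type *dst, const type *src, int w)   \
        {                                                                   \
            for (int i = 0; i < w; i++)                                     \
                dst[i] = (type)((dst[i] + src[i] + 1) >> 1);                \
        }

        DEFINE_BLEND_LINE(8,  uint8_t)
        DEFINE_BLEND_LINE(10, uint16_t)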
* dnn_backend_native: check operand index (Guo Yejun, 2020-06-17)
    Fixes the issue reported in https://trac.ffmpeg.org/ticket/8716.
* dnn_backend_native.c: refine code for fail case (Guo Yejun, 2020-06-17)
* avfilter/vf_showinfo: display H.26[45] user data unregistered SEI message (Limin Wang, 2020-06-15)
    Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
* avfilter/vf_vaguedenoiser: fix small typo in option explanation (Paul B Mahol, 2020-06-13)
* avfilter/af_rubberband: adjust nb_samples after every command (Paul B Mahol, 2020-06-13)
* dnn_backend_native_layer_mathunary: add tan support (Ting Fu, 2020-06-11)
    It can be tested with a model generated by the Python script below:

        import tensorflow as tf
        import numpy as np
        import imageio

        in_img = imageio.imread('input.jpeg')
        in_img = in_img.astype(np.float32) / 255.0
        in_data = in_img[np.newaxis, :]
        x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
        x1 = tf.multiply(x, 0.78)
        x2 = tf.tan(x1)
        y = tf.identity(x2, name='dnn_out')
        sess = tf.Session()
        sess.run(tf.global_variables_initializer())
        graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
        tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
        print("image_process.pb generated, please use "
              "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
        output = sess.run(y, feed_dict={x: in_data})
        imageio.imsave("out.jpg", np.squeeze(output))

    Signed-off-by: Ting Fu <ting.fu@intel.com>
    Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* dnn_backend_native_layer_mathunary: add cos support (Ting Fu, 2020-06-11)
    It can be tested with a model generated by the Python script below:

        import tensorflow as tf
        import numpy as np
        import imageio

        in_img = imageio.imread('input.jpeg')
        in_img = in_img.astype(np.float32) / 255.0
        in_data = in_img[np.newaxis, :]
        x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
        x1 = tf.multiply(x, 1.5)
        x2 = tf.cos(x1)
        y = tf.identity(x2, name='dnn_out')
        sess = tf.Session()
        sess.run(tf.global_variables_initializer())
        graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
        tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
        print("image_process.pb generated, please use "
              "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
        output = sess.run(y, feed_dict={x: in_data})
        imageio.imsave("out.jpg", np.squeeze(output))

    Signed-off-by: Ting Fu <ting.fu@intel.com>
    Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* dnn_backend_native_layer_mathunary: add sin support (Ting Fu, 2020-06-11)
    It can be tested with the model file generated by the Python script below:

        import tensorflow as tf
        import numpy as np
        import imageio

        in_img = imageio.imread('input.jpeg')
        in_img = in_img.astype(np.float32) / 255.0
        in_data = in_img[np.newaxis, :]
        x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
        x1 = tf.multiply(x, 3.14)
        x2 = tf.sin(x1)
        y = tf.identity(x2, name='dnn_out')
        sess = tf.Session()
        sess.run(tf.global_variables_initializer())
        graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
        tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
        print("image_process.pb generated, please use "
              "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
        output = sess.run(y, feed_dict={x: in_data})
        imageio.imsave("out.jpg", np.squeeze(output))

    Signed-off-by: Ting Fu <ting.fu@intel.com>
    Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* vf_spp: switch to child_class_iterate() (Anton Khirnov, 2020-06-10)
* vf_scale: switch to child_class_iterate() (Anton Khirnov, 2020-06-10)
* framesync: switch to child_class_iterate() (Anton Khirnov, 2020-06-10)
* avfilter: switch to child_class_iterate() (Anton Khirnov, 2020-06-10)
* af_resample: switch to child_class_iterate() (Anton Khirnov, 2020-06-10)
* af_aresample: switch to child_class_iterate() (Anton Khirnov, 2020-06-10)
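    These commits move from the deprecated child_class_next() callback to child_class_iterate(). A sketch of the new callback for the common single-child case, as in a scale-style filter that exposes the libswscale options class (based on the usual pattern, not a verbatim copy of any of these commits):

        #include <stdint.h>
        #include "libavutil/log.h"       /* AVClass */
        #include "libswscale/swscale.h"  /* sws_get_class() */

        /* Return the child class on the first call and NULL afterwards;
         * *iter is opaque iteration state owned by the caller. */
        static const AVClass *child_class_iterate(void **iter)
        {
            const AVClass *c = *iter ? NULL : sws_get_class();
            *iter = (void *)(uintptr_t)c;
            return c;
        }

    The filter then points its AVClass at this function via the .child_class_iterate field instead of .child_class_next.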
* Remove unnecessary use of avcodec_close(). (Anton Khirnov, 2020-06-10)
    Replace it with avcodec_free_context() or drop it completely as appropriate.
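    A generic sketch of the replacement pattern (not the exact diff from this commit):

        #include "libavcodec/avcodec.h"

        static void close_decoder(AVCodecContext **pctx)
        {
            /* Previously: avcodec_close(*pctx); followed by av_freep(pctx).
             * avcodec_free_context() closes the codec if it is open and frees
             * the context in one call, leaving *pctx set to NULL. */
            avcodec_free_context(pctx);
        }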
* Bump minor versions after branching 4.3 (Michael Niedermayer, 2020-06-08)
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* Bump minor versions to separate 4.3 from master (Michael Niedermayer, 2020-06-08)
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* avfilter/vf_vaguedenoiser: add new type of threshold (Paul B Mahol, 2020-06-07)
* avfilter/vf_vaguedenoiser: remove excessive code from soft thresholding (Paul B Mahol, 2020-06-07)
* avfilter/avf_showspectrum: properly handle EOF case (Paul B Mahol, 2020-06-06)
* avfilter/asrc_anoisesrc: switch to activate (Paul B Mahol, 2020-06-06)
    This allows setting the EOF timestamp.
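    A sketch of why the activate API matters here: unlike request_frame(), it lets a source attach an explicit timestamp when it signals EOF downstream. Context and field names below are assumptions for illustration, not the filter's actual code:

        #include "libavfilter/avfilter.h"
        #include "libavfilter/filters.h"  /* ff_outlink_*, FFERROR_NOT_READY */

        static int activate(AVFilterContext *ctx)
        {
            AVFilterLink *outlink = ctx->outputs[0];
            ANoiseSrcContext *s = ctx->priv;   /* name assumed for illustration */

            if (!ff_outlink_frame_wanted(outlink))
                return FFERROR_NOT_READY;

            if (s->duration > 0 && s->pts >= s->duration) {
                /* EOF with an explicit timestamp, which request_frame()
                 * could not convey. */
                ff_outlink_set_status(outlink, AVERROR_EOF, s->pts);
                return 0;
            }

            /* ... otherwise generate and push the next block of noise ... */
            return 0;
        }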
* dnn/native: fix typo for definition of DOT_INTERMEDIATE (Wu Zhiwen, 2020-06-03)
    Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
    Reviewed-by: Guo Yejun <yejun.guo@intel.com>
* avfilter/vf_lut3d: Fix mixed declaration and code (Andreas Rheinhardt, 2020-06-01)
    Reviewed-by: Paul B Mahol <onemda@gmail.com>
    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
* avfilter/vf_lut3d: prelut support for 3d cinespace luts (Mark Reid, 2020-05-31)
    Reviewed-by: Paul B Mahol <onemda@gmail.com>
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* avfilter/af_aiir: simplify polynomial evaluation (Paul B Mahol, 2020-05-30)
* avfilter/af_aiir: use correct size when allocating in zp2tf (Paul B Mahol, 2020-05-30)
* avfilter: add dblur video filter (Paul B Mahol, 2020-05-30)
* lavfi/aiir: Refine the pad/vpad related operation (Jun Zhao, 2020-05-30)
    Move the pad/vpad related operations into a more natural coding style.
    Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
* lavfi/afir: fix vpad.name leak (Jun Zhao, 2020-05-30)
    Fix the vpad.name leak in the error path, and perform the vpad-related setup only when showing the IR frequency response is enabled.
    Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
* Revert "avfilter/af_aiir: move response drawing as last step"Paul B Mahol2020-05-30
| | | | This reverts commit ca7095a9072fab4cdb41af12da9d94752e082e34.
* avfilter/af_aiir: improve response calculation with zp coefficients (Paul B Mahol, 2020-05-30)
* avfilter/af_aiir: add S-plane support (Paul B Mahol, 2020-05-30)
* avfilter/af_aiir: make it clear that the transfer function is a digital one (Paul B Mahol, 2020-05-30)
* avfilter/af_biquads: implement 1st order allpass (Paul B Mahol, 2020-05-30)
* lavfi/vulkan: use av_get_random_seed instead of rand (Lynne, 2020-05-29)
    We need at least a few bits of entropy to determine the start index of each queue, so that filters can run in parallel as much as possible. rand() is not thread safe and disrupts any external API user's use of rand(), so replace it with av_get_random_seed(). While it has more overhead than rand(), it is only run once per filter at init.
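    A minimal sketch of the pattern described above (the helper and its parameter are illustrative, not the vulkan filter code):

        #include "libavutil/random_seed.h"

        /* Pick a pseudo-random starting queue index without touching rand(),
         * so library users' rand() state is left alone and init stays
         * thread safe. */
        static int pick_start_queue_index(int nb_queues)
        {
            return nb_queues > 0 ? (int)(av_get_random_seed() % nb_queues) : 0;
        }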
* dnn_backend_native_layer_mathunary: add abs support (Ting Fu, 2020-05-28)
    More math unary operations will be added here.
    It can be tested with the model file generated by the Python script below:

        import tensorflow as tf
        import numpy as np
        import imageio

        in_img = imageio.imread('input.jpeg')
        in_img = in_img.astype(np.float32) / 255.0
        in_data = in_img[np.newaxis, :]
        x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
        x1 = tf.subtract(x, 0.5)
        x2 = tf.abs(x1)
        y = tf.identity(x2, name='dnn_out')
        sess = tf.Session()
        sess.run(tf.global_variables_initializer())
        graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
        tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
        print("image_process.pb generated, please use "
              "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
        output = sess.run(y, feed_dict={x: in_data})
        imageio.imsave("out.jpg", np.squeeze(output))

    Signed-off-by: Ting Fu <ting.fu@intel.com>
    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* avfilter/vf_blend: add support for float formats (Paul B Mahol, 2020-05-26)
* lavfi/vulkan: fix queue counts and set indices (Lynne, 2020-05-26)
* lavfi/vulkan: use dedicated allocation for buffers when necessary (Lynne, 2020-05-26)
* lavfi/vulkan: use all enabled queues in the queue family (Lynne, 2020-05-23)
    This should significantly improve performance with certain filterchains.
* lavfi/vulkan: fix 2 minor memory leaks (Lynne, 2020-05-23)
* lavfi: add untile filter. (Nicolas George, 2020-05-23)
* lavfi/framesync: use av_gcd_q(). (Nicolas George, 2020-05-23)
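    A sketch of the kind of use this enables in framesync: deriving a common output time base as the rational GCD of the input time bases. The av_gcd_q(a, b, max_den, def) signature is assumed from libavutil/rational.h, and the fallback values here are illustrative:

        #include "libavutil/avutil.h"    /* AV_TIME_BASE, AV_TIME_BASE_Q */
        #include "libavutil/rational.h"  /* av_gcd_q() */

        /* Combine two stream time bases into one that can represent both,
         * refusing denominators above AV_TIME_BASE and falling back to the
         * microsecond time base if no suitable divisor is found. */
        static AVRational common_time_base(AVRational tb1, AVRational tb2)
        {
            return av_gcd_q(tb1, tb2, AV_TIME_BASE, AV_TIME_BASE_Q);
        }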
* lavfi/tests/formats: reindent. (Nicolas George, 2020-05-23)
* lavfi/formats: remove dead code. (Nicolas George, 2020-05-23)
    Move the contents of all_channel_layouts.inc directly into libavfilter/tests/formats.c.