path: root/libavfilter
* x86/vf_blend: fix warnings about trailing empty parameters (James Almer, 2020-07-12)
  Finishes fixing ticket #8771
  Signed-off-by: James Almer <jamrial@gmail.com>
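  A minimal sketch of this warning class (hypothetical macro, not the actual
  vf_blend code): a macro invocation with a trailing empty argument is valid
  C99/C11 but draws pedantic warnings from some compilers, which commits like
  this silence by no longer passing empty parameters.

      #define DECL_FN(name, suffix) void blend_##name##suffix(void);

      DECL_FN(multiply, _sse2)  /* expands to blend_multiply_sse2(): fine */
      DECL_FN(multiply, )       /* trailing empty parameter; e.g. gcc with
                                   -Wpedantic warns that empty macro
                                   arguments are undefined in ISO C90 */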
* lavfi/setpts: fix setpts/asetpts option dump error (Jun Zhao, 2020-07-12)
  Fix the option dump for "ffmpeg -h filter=setpts" and "ffmpeg -h
  filter=asetpts": both used to dump the expr option with the "FVA" flags.
  Reviewed-by: Paul B Mahol <onemda@gmail.com>
  Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
* libavfilter/glslang: Remove unused header (Ben Clayton, 2020-07-11)
  The <glslang/Include/revision.h> include was not used, and revision.h
  has been removed from glslang master.
  See: https://github.com/KhronosGroup/glslang/pull/2277
* avfilter/vf_chromanr: move thres calculation to filter_frame() (Paul B Mahol, 2020-07-10)
* avfilter/vf_showinfo: add dump_s12m_timecode() helper function (Limin Wang, 2020-07-08)
  Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
* avfilter/vf_showinfo: check sd->size before reference the sd->data (Limin Wang, 2020-07-08)
  Otherwise a null pointer dereference occurs if size < sizeof(uint32_t);
  also, in case tc[0] > 3, the code now reports an error directly.
  Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
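  A minimal sketch of the guard (assumed shape, not the exact vf_showinfo
  code): validate the side-data size before reading from sd->data.

      static void dump_s12m_timecode(AVFilterContext *ctx,
                                     const AVFrameSideData *sd)
      {
          const uint32_t *tc;

          if (sd->size < sizeof(uint32_t)) {
              av_log(ctx, AV_LOG_ERROR, "invalid S12M timecode side data\n");
              return;
          }
          tc = (const uint32_t *)sd->data;
          if (tc[0] > 3) {  /* at most 3 timecodes can be carried */
              av_log(ctx, AV_LOG_ERROR, "invalid number of timecodes %u\n",
                     (unsigned)tc[0]);
              return;
          }
          /* ... dump tc[1]..tc[tc[0]] ... */
      }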
* avfilter: add chromanr video filter (Paul B Mahol, 2020-07-08)
* avfilter/vf_edgedetect: properly implement double_threshold() (Valery Kot, 2020-07-06)
  An important part of this algorithm is the double threshold step: pixels
  above the "high" threshold are kept, pixels below the "low" threshold are
  dropped, and pixels in between (weak edges) are kept if they neighbor
  "high" pixels. The weak-edge check uses a neighboring context and must not
  be applied on the plane's border. The condition was incorrect and has been
  fixed in this commit.
  Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
  Reviewed-by: Andriy Gelman <andriy.gelman@gmail.com>
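  A minimal sketch of Canny-style double thresholding with the border guard
  described above (illustrative shape only, not the actual vf_edgedetect
  code; a real implementation may also consider diagonal neighbors):

      static void double_threshold(uint8_t *dst, const uint8_t *src,
                                   int w, int h, int stride,
                                   int low, int high)
      {
          for (int y = 0; y < h; y++) {
              for (int x = 0; x < w; x++) {
                  int v = src[y * stride + x];

                  if (v > high) {                    /* strong edge: keep */
                      dst[y * stride + x] = v;
                  } else if (v > low &&
                             x > 0 && x < w - 1 &&   /* weak edges need a  */
                             y > 0 && y < h - 1 &&   /* full neighborhood, */
                                                     /* so skip the border */
                             (src[(y - 1) * stride + x] > high ||
                              src[(y + 1) * stride + x] > high ||
                              src[y * stride + x - 1] > high ||
                              src[y * stride + x + 1] > high)) {
                      dst[y * stride + x] = v;  /* weak edge beside a strong one */
                  } else {
                      dst[y * stride + x] = 0;  /* drop */
                  }
              }
          }
      }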
* dnn_backend_native: Add overflow check for length calculation. (Reimar Döffinger, 2020-07-06)
  We should not silently allocate an incorrectly sized buffer.
  Fixes trac issue #8718.
  Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
  Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
  Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
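  A minimal sketch of the overflow-check pattern (assumed shape and a
  hypothetical helper, not the exact patch): accumulate the length in a
  wider type and reject it as soon as it no longer fits the returned type.

      #include <stdint.h>

      /* hypothetical operand with 4 dimensions, as in the native DNN backend */
      typedef struct DnnOperand { int32_t dims[4]; } DnnOperand;

      static int32_t calculate_operand_data_length(const DnnOperand *oprd)
      {
          uint64_t len = sizeof(float);

          for (int i = 0; i < 4; i++) {
              len *= oprd->dims[i];
              if (len > INT32_MAX)  /* would not fit the returned length */
                  return 0;         /* caller treats 0 as an error */
          }
          return (int32_t)len;
      }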
* dnn_backend_native_layer_mathunary: add atanh support (Ting Fu, 2020-07-06)
  It can be tested with the model generated with the below python script:

      import tensorflow as tf
      import numpy as np
      import imageio

      in_img = imageio.imread('input.jpeg')
      in_img = in_img.astype(np.float32)/255.0
      in_data = in_img[np.newaxis, :]

      x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')

      # please uncomment the part you want to test
      x_sinh_1 = tf.sinh(x)
      x_out = tf.divide(x_sinh_1, 1.176)   # sinh(1.0)

      x_cosh_1 = tf.cosh(x)
      x_out = tf.divide(x_cosh_1, 1.55)    # cosh(1.0)

      x_tanh_1 = tf.tanh(x)
      x_out = tf.divide(x_tanh_1, 0.77)    # tanh(1.0)

      x_asinh_1 = tf.asinh(x)
      x_out = tf.divide(x_asinh_1, 0.89)   # asinh(1.0/1.1)

      x_acosh_1 = tf.add(x, 1.1)
      x_acosh_2 = tf.acosh(x_acosh_1)      # accepts (1, inf)
      x_out = tf.divide(x_acosh_2, 1.4)    # acosh(2.1)

      x_atanh_1 = tf.divide(x, 1.1)
      x_atanh_2 = tf.atanh(x_atanh_1)      # accepts (-1, 1)
      x_out = tf.divide(x_atanh_2, 1.55)   # atanh(1.0/1.1)

      # please only preserve the x_out you want to test
      y = tf.identity(x_out, name='dnn_out')

      sess = tf.Session()
      sess.run(tf.global_variables_initializer())
      graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
      tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
      print("image_process.pb generated, please use \
      path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

      output = sess.run(y, feed_dict={x: in_data})
      imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
* dnn_backend_native_layer_mathunary: add acosh support (Ting Fu, 2020-07-06)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
* dnn_backend_native_layer_mathunary: add asinh support (Ting Fu, 2020-07-06)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
* dnn_backend_native_layer_mathunary: add tanh support (Ting Fu, 2020-07-06)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
* dnn_backend_native_layer_mathunary: add cosh support (Ting Fu, 2020-07-06)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
* dnn_backend_native_layer_mathunary: add sinh support (Ting Fu, 2020-07-06)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
* FATE: fix colorbalance fate test failed on x86_32 (Limin Wang, 2020-07-02)
  Floating-point precision causes rgb*max to produce different values on
  x86_32 and x86_64. The fate test now passes on both x86_32 and x86_64 by
  using lrintf to get the nearest integral value for rgb * max before av_clip.
  Reviewed-by: Paul B Mahol <onemda@gmail.com>
  Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
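  A minimal sketch of the rounding fix (assumed shape, not the exact
  vf_colorbalance code; av_clip() is FFmpeg's clipping helper):

      #include <math.h>

      static inline int scale_component(float rgb, int max)
      {
          /* before: return av_clip(rgb * max, 0, max);
           * truncation let tiny float differences between x86_32 and
           * x86_64 pick different integers near the boundary */
          return av_clip(lrintf(rgb * max), 0, max);
      }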
* vf_dnn_processing.c: add dnn backend openvino (Guo, Yejun, 2020-07-02)
  We can try it with the srcnn model from the sr filter.
  1) get the srcnn.pb model file, see filter sr
  2) convert srcnn.pb into an openvino model with the command:
         python mo_tf.py --input_model srcnn.pb --data_type=FP32 --input_shape [1,960,1440,1] --keep_shape_ops
     See the script at https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer
     We'll see srcnn.xml and srcnn.bin in the current path; copy them to the
     directory where ffmpeg is. I have also uploaded the model files at
     https://github.com/guoyejun/dnn_processing/tree/master/models
  3) run with the openvino backend:
         ffmpeg -i input.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.jpg
     (The input.jpg resolution is 720*480)

  Also copying the logs from my skylake machine (4 cpus) with the openvino
  backend and the tensorflow backend, just for your information.

      $ time ./ffmpeg -i 480p.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y srcnn.tf.mp4
      …
      frame= 343 fps=2.1 q=31.0 Lsize= 2172kB time=00:00:11.76 bitrate=1511.9kbits/s speed=0.0706x
      video:1973kB audio:187kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.517637%
      [aac @ 0x2f5db80] Qavg: 454.353
      real  2m46.781s
      user  9m48.590s
      sys   0m55.290s

      $ time ./ffmpeg -i 480p.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.mp4
      …
      frame= 343 fps=4.0 q=31.0 Lsize= 2172kB time=00:00:11.76 bitrate=1511.9kbits/s speed=0.137x
      video:1973kB audio:187kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.517640%
      [aac @ 0x31a9040] Qavg: 454.353
      real  1m25.882s
      user  5m27.004s
      sys   0m0.640s

  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
  Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
* dnn: add openvino as one of dnn backend (Guo, Yejun, 2020-07-02)
  OpenVINO is a Deep Learning Deployment Toolkit at
  https://github.com/openvinotoolkit/openvino; it supports CPU, GPU and
  heterogeneous plugins to accelerate deep learning inferencing.

  Please refer to
  https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md
  to build openvino (the C library is built at the same time). Please add
  the option -DENABLE_MKL_DNN=ON for cmake to enable the CPU path. The
  header files and libraries are installed to
  /usr/local/deployment_tools/inference_engine/ with default options on my
  system.

  To build FFmpeg with openvino, taking my system as an example, run:

      $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/deployment_tools/inference_engine/lib/intel64/:/usr/local/deployment_tools/inference_engine/external/tbb/lib/
      $ ../ffmpeg/configure --enable-libopenvino --extra-cflags=-I/usr/local/deployment_tools/inference_engine/include/ --extra-ldflags=-L/usr/local/deployment_tools/inference_engine/lib/intel64
      $ make

  Here are the features provided by the OpenVINO inference engine:
  - support for more DNN model formats
    It supports TensorFlow, Caffe, ONNX, MXNet and Kaldi by converting them
    into the OpenVINO format with a python script, and a torch model can
    first be converted into ONNX and then into the OpenVINO format. See the
    script at
    https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer/mo.py,
    which also does some optimization at the model level.
  - optimization at the inference stage
    It optimizes for x86 CPUs with SSE, AVX etc. It also optimizes based on
    OpenCL for Intel GPUs (only Intel GPUs are supported, because the Intel
    OpenCL extension is used for optimization).
  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
  Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
* avfilter/vf_colorbalance: remove wrong addition (Paul B Mahol, 2020-06-29)
* avfilter/vf_showinfo: add a \n for end of ERROR and WARNING log (Limin Wang, 2020-06-28)
  Note that for the info level, one extra \n will be printed after the log.
  Reviewed-by: Paul B Mahol <onemda@gmail.com>
  Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
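  For context, av_log() does not terminate lines itself, so a message that
  ends a line must carry its own \n (hypothetical message text):

      av_log(ctx, AV_LOG_ERROR, "invalid side data size\n");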
* avfilter/zoompan: add in_time variable (exwm, 2020-06-25)
  Currently, the zoompan filter exposes a 'time' variable (missing from the
  docs) for use in the 'zoom', 'x', and 'y' expressions. This variable is
  perhaps better named 'out_time', as it represents the timestamp in seconds
  of each output frame produced by zoompan. This patch adds the aliases
  'out_time' and 'ot' for 'time'.

  This patch also adds an 'in_time' (alias 'it') variable that provides
  access to the timestamp in seconds of each input frame to the zoompan
  filter. This helps in designing zoompan filters that depend on the input
  video timestamps; for example, it makes it easy to zoom in instantly for
  only some portion of a video.

  Both the 'out_time' and 'in_time' variables have been added to the
  documentation for zoompan.

  Example usage of 'in_time' in the zoompan filter, zooming in 2x for the
  first second of the input video and 1x for the rest:

      zoompan=z='if(between(in_time,0,1),2,1):d=1'

  V2: Fix zoompan filter documentation stating that the time variable would
      be NAN if the input timestamp is unknown.
  V3: Add 'it' alias for 'in_time'. Add 'out_time' and 'ot' aliases for
      'time'. Minor corrections to zoompan docs.

  Signed-off-by: exwm <thighsman@protonmail.com>
* dnn_backend_native_layer_mathunary: add atan support (Ting Fu, 2020-06-25)
  It can be tested with the model generated with the below python script:

      import tensorflow as tf
      import numpy as np
      import imageio

      in_img = imageio.imread('input.jpeg')
      in_img = in_img.astype(np.float32)/255.0
      in_data = in_img[np.newaxis, :]

      x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
      x1 = tf.atan(x)
      x2 = tf.divide(x1, 3.1416/4)   # pi/4
      y = tf.identity(x2, name='dnn_out')

      sess = tf.Session()
      sess.run(tf.global_variables_initializer())
      graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
      tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
      print("image_process.pb generated, please use \
      path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

      output = sess.run(y, feed_dict={x: in_data})
      imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* dnn_backend_native_layer_mathunary: add acos support (Ting Fu, 2020-06-25)
  It can be tested with the model generated with the below python script:

      import tensorflow as tf
      import numpy as np
      import imageio

      in_img = imageio.imread('input.jpeg')
      in_img = in_img.astype(np.float32)/255.0
      in_data = in_img[np.newaxis, :]

      x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
      x1 = tf.acos(x)
      x2 = tf.divide(x1, 3.1416/2)   # pi/2
      y = tf.identity(x2, name='dnn_out')

      sess = tf.Session()
      sess.run(tf.global_variables_initializer())
      graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
      tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
      print("image_process.pb generated, please use \
      path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

      output = sess.run(y, feed_dict={x: in_data})
      imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* dnn_backend_native_layer_mathunary: add asin support (Ting Fu, 2020-06-25)
  It can be tested with the model generated with the below python script:

      import tensorflow as tf
      import numpy as np
      import imageio

      in_img = imageio.imread('input.jpeg')
      in_img = in_img.astype(np.float32)/255.0
      in_data = in_img[np.newaxis, :]

      x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
      x1 = tf.asin(x)
      x2 = tf.divide(x1, 3.1416/2)   # pi/2
      y = tf.identity(x2, name='dnn_out')

      sess = tf.Session()
      sess.run(tf.global_variables_initializer())
      graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
      tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
      print("image_process.pb generated, please use \
      path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

      output = sess.run(y, feed_dict={x: in_data})
      imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* avfilter/vf_v360: do not ignore return value of allocate_plane() (Paul B Mahol, 2020-06-23)
* avfilter/vf_v360: add orthographic projection support (Paul B Mahol, 2020-06-23)
* avfilters/vf_v360: add equisolid projection support (Paul B Mahol, 2020-06-22)
* avfilter/vf_showpalette: Don't pretend disp_palette can fail (Andreas Rheinhardt, 2020-06-22)
  It can't fail, yet it returns an int and other code checks whether it
  failed; yet if it did fail, an AVFrame would leak. One could of course add
  an av_frame_free for this (that compilers could optimize away), yet it is
  easier to simply stop pretending that disp_palette could fail.
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
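  A minimal sketch of the refactor (assumed signatures, not the actual
  vf_showpalette code): a helper that cannot fail is changed to return
  void, so callers drop the dead error path that would have leaked a frame.

      /* before:
       *     int ret = disp_palette(out, in, size);
       *     if (ret < 0)
       *         return ret;    // "out" would leak here
       */

      static void disp_palette(AVFrame *out, const AVFrame *in, int size)
      {
          /* ... draw the palette onto out; nothing here can fail ... */
      }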
* avfilter/af_ladspa: check return value of getenv() (Paul B Mahol, 2020-06-21)
* avfilter/af_ladspa: add latency compensation (Paul B Mahol, 2020-06-21)
* avfilter/af_ladspa: check another directory for plugins (Paul B Mahol, 2020-06-21)
* avfilter: add D2TS, TS2D, TS2T as a common macro in internal.h (Limin Wang, 2020-06-19)
  Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
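  For reference, these helpers commonly look like the following (assumed
  definitions based on the pattern used across libavfilter before the move;
  the exact internal.h content may differ). They convert between doubles and
  AV_NOPTS_VALUE-aware timestamps, and from a timestamp to seconds given a
  timebase:

      #include <math.h>  /* isnan(), NAN */

      #define D2TS(d)      (isnan(d) ? AV_NOPTS_VALUE : (int64_t)(d))
      #define TS2D(ts)     ((ts) == AV_NOPTS_VALUE ? NAN : (double)(ts))
      #define TS2T(ts, tb) ((ts) == AV_NOPTS_VALUE ? NAN : (double)(ts) * av_q2d(tb))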
* avfilter/vf_overlay: add yuv420p10 and yuv422p10 10bit format support (Limin Wang, 2020-06-19)
  Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
* avfilter/vf_overlay: support for 8bit and 10bit overlay with macro-based function (Limin Wang, 2020-06-19)
  Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
* dnn_backend_native: check operand index (Guo Yejun, 2020-06-17)
  It fixed the issue in https://trac.ffmpeg.org/ticket/8716
* dnn_backend_native.c: refine code for fail case (Guo Yejun, 2020-06-17)
* avfilter/vf_showinfo: display H.26[45] user data unregistered sei message (Limin Wang, 2020-06-15)
  Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
* avfilter/vf_vaguedenoiser: fix small typo in option explanation (Paul B Mahol, 2020-06-13)
* avfilter/af_rubberband: adjust nb_samples after every command (Paul B Mahol, 2020-06-13)
* dnn_backend_native_layer_mathunary: add tan support (Ting Fu, 2020-06-11)
  It can be tested with the model generated with the below python script:

      import tensorflow as tf
      import numpy as np
      import imageio

      in_img = imageio.imread('input.jpeg')
      in_img = in_img.astype(np.float32)/255.0
      in_data = in_img[np.newaxis, :]

      x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
      x1 = tf.multiply(x, 0.78)
      x2 = tf.tan(x1)
      y = tf.identity(x2, name='dnn_out')

      sess = tf.Session()
      sess.run(tf.global_variables_initializer())
      graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
      tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
      print("image_process.pb generated, please use \
      path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

      output = sess.run(y, feed_dict={x: in_data})
      imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* dnn_backend_native_layer_mathunary: add cos support (Ting Fu, 2020-06-11)
  It can be tested with the model generated with the below python script:

      import tensorflow as tf
      import numpy as np
      import imageio

      in_img = imageio.imread('input.jpeg')
      in_img = in_img.astype(np.float32)/255.0
      in_data = in_img[np.newaxis, :]

      x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
      x1 = tf.multiply(x, 1.5)
      x2 = tf.cos(x1)
      y = tf.identity(x2, name='dnn_out')

      sess = tf.Session()
      sess.run(tf.global_variables_initializer())
      graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
      tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
      print("image_process.pb generated, please use \
      path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

      output = sess.run(y, feed_dict={x: in_data})
      imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* dnn_backend_native_layer_mathunary: add sin support (Ting Fu, 2020-06-11)
  It can be tested with the model file generated with the below python script:

      import tensorflow as tf
      import numpy as np
      import imageio

      in_img = imageio.imread('input.jpeg')
      in_img = in_img.astype(np.float32)/255.0
      in_data = in_img[np.newaxis, :]

      x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
      x1 = tf.multiply(x, 3.14)
      x2 = tf.sin(x1)
      y = tf.identity(x2, name='dnn_out')

      sess = tf.Session()
      sess.run(tf.global_variables_initializer())
      graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
      tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
      print("image_process.pb generated, please use \
      path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

      output = sess.run(y, feed_dict={x: in_data})
      imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* vf_spp: switch to child_class_iterate() (Anton Khirnov, 2020-06-10)
* vf_scale: switch to child_class_iterate() (Anton Khirnov, 2020-06-10)
* framesync: switch to child_class_iterate() (Anton Khirnov, 2020-06-10)
* avfilter: switch to child_class_iterate() (Anton Khirnov, 2020-06-10)
* af_resample: switch to child_class_iterate() (Anton Khirnov, 2020-06-10)
* af_aresample: switch to child_class_iterate() (Anton Khirnov, 2020-06-10)
* Remove unnecessary use of avcodec_close(). (Anton Khirnov, 2020-06-10)
  Replace it with avcodec_free_context() or drop it completely as
  appropriate.
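  A minimal sketch of the replacement pattern (illustrative caller, not a
  specific file from the commit):

      AVCodecContext *avctx = avcodec_alloc_context3(codec);
      /* ... open and use the context ... */

      /* before: avcodec_close(avctx) followed by manual freeing */
      avcodec_free_context(&avctx);  /* closes the codec, frees the context
                                        and everything attached to it, and
                                        sets avctx to NULL */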
* Bump minor versions after branching 4.3 (Michael Niedermayer, 2020-06-08)
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>