path: root/tools
* tools/target_dem_fuzzer: Set format independent of c (Michael Niedermayer, 2020-10-16)
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* Add support for building fuzzer tools for an individual demuxer (Michael Niedermayer, 2020-10-12)
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* dnn/native: add native support for dense (Mingyu Yin, 2020-09-29)
  Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
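  No test script accompanies this commit. A minimal sketch in the style of the other dnn test scripts in this log, assuming the native dense layer maps to a TensorFlow fully connected layer (tf.layers.dense) and using arbitrary constants, could be:

  import tensorflow as tf
  import numpy as np
  import imageio

  # Sketch only: build a tiny graph containing a dense layer applied to the
  # channel dimension and export it for tools/python/convert.py.
  in_img = imageio.imread('input.jpg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  x1 = tf.layers.dense(x, 3, activation=tf.nn.relu)
  y = tf.identity(x1, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))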
* tools/target_dec_fuzzer: Adjust VQA threshold (Michael Niedermayer, 2020-09-19)
  Fixes: Timeout (169sec -> 9sec)
  Fixes: 23745/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_VQA_fuzzer-5638172179693568
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* tools:target_dem_fuzzer: Split into a fuzzer fuzzing at the protocol level and one fuzzing a fixed demuxer input (Michael Niedermayer, 2020-09-13)
  This should improve coverage and the efficiency of seed files.
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* tools/target_dec_fuzzer: Adjust threshold for WMV3IMAGE (Michael Niedermayer, 2020-09-07)
  Fixes: Timeout (1131sec -> 1sec)
  Fixes: 24727/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WMV3IMAGE_fuzzer-5754167793287168
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* dnn_backend_native_layer_mathbinary: add floormod support (Mingyu Yin, 2020-08-24)
  Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
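  No test script accompanies this commit. As a sketch only, modelled on the other dnn test scripts in this log (the scale and modulus constants are arbitrary assumptions), a model exercising floormod could be generated with:

  import tensorflow as tf
  import numpy as np
  import imageio

  # Sketch only: scale the input and take an elementwise floormod so the
  # native backend result can be compared against the TensorFlow backend.
  in_img = imageio.imread('input.jpg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  x1 = tf.multiply(x, 1.6)
  x2 = tf.math.floormod(x1, 0.7)
  y = tf.identity(x2, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))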
* tools/target_dec_fuzzer: Adjust threshold for DST (Michael Niedermayer, 2020-08-18)
  Fixes: Timeout (too long -> 3sec)
  Fixes: 24239/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DST_fuzzer-5189061015502848
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Reviewed-by: Peter Ross <pross@xvid.org>
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* dnn_backend_native_layer_mathunary: add round support (Mingyu Yin, 2020-08-12)
  Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
  Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
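  No test script accompanies this commit either. A hypothetical model exercising round, in the same style as the neighbouring commits (constants assumed), could be generated with:

  import tensorflow as tf
  import numpy as np
  import imageio

  # Sketch only: scale to the 0..255 range, round elementwise, scale back.
  in_img = imageio.imread('input.jpg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  x1 = tf.math.round(x*255)/255
  y = tf.identity(x1, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))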
* tools/target_dec_fuzzer: Adjust threshold for AGM (Michael Niedermayer, 2020-08-11)
  Fixes: Timeout (142sec -> 2sec)
  Fixes: 24426/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_AGM_fuzzer-5639724379930624
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* dnn/native: add native support for avg_pool (Ting Fu, 2020-08-10)
  Pooling strides in the channel dimension are not supported yet.
  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
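  The commit does not ship a test script. A hypothetical model-generation sketch in the same style as the other dnn commits (kernel size, stride and padding are assumptions) could be:

  import tensorflow as tf
  import numpy as np
  import imageio

  # Sketch only: a single 2x2 average pooling layer with stride 2 and no
  # stride in the channel dimension, matching the commit's stated limitation.
  in_img = imageio.imread('input.jpg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  x1 = tf.nn.avg_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
  y = tf.identity(x1, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))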
* dnn_backend_native_layer_mathunary: add floor support (Mingyu Yin, 2020-08-07)
  It can be tested with a model generated with the Python script below:

  import tensorflow as tf
  import os
  import numpy as np
  import imageio
  from tensorflow.python.framework import graph_util

  name = 'floor'
  pb_file_path = os.getcwd()
  if not os.path.exists(pb_file_path+'/{}_savemodel/'.format(name)):
      os.mkdir(pb_file_path+'/{}_savemodel/'.format(name))

  with tf.Session(graph=tf.Graph()) as sess:
      in_img = imageio.imread('detection.jpg')
      in_img = in_img.astype(np.float32)
      in_data = in_img[np.newaxis, :]
      input_x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
      y_ = tf.math.floor(input_x*255)/255
      y = tf.identity(y_, name='dnn_out')
      sess.run(tf.global_variables_initializer())
      constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
      with tf.gfile.FastGFile(pb_file_path+'/{}_savemodel/model.pb'.format(name), mode='wb') as f:
          f.write(constant_graph.SerializeToString())
      print("model.pb generated, please in ffmpeg path use\n \n python tools/python/convert.py {}_savemodel/model.pb --outdir={}_savemodel/ \n \nto generate model.model\n".format(name, name))
      output = sess.run(y, feed_dict={input_x: in_data})
      imageio.imsave("out.jpg", np.squeeze(output))
      print("To verify, please ffmpeg path use\n \n ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow -f framemd5 {}_savemodel/tensorflow_out.md5\n or\n ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow {}_savemodel/out_tensorflow.jpg\n \nto generate output result of tensorflow model\n".format(name, name, name, name))
      print("To verify, please ffmpeg path use\n \n ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native -f framemd5 {}_savemodel/native_out.md5\n or \n ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native {}_savemodel/out_native.jpg\n \nto generate output result of native model\n".format(name, name, name, name))

  Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
* dnn_backend_native_layer_mathunary: add ceil support (Mingyu Yin, 2020-08-04)
  It can be tested with a model generated with the Python script below:

  import tensorflow as tf
  import os
  import numpy as np
  import imageio
  from tensorflow.python.framework import graph_util

  name = 'ceil'
  pb_file_path = os.getcwd()
  if not os.path.exists(pb_file_path+'/{}_savemodel/'.format(name)):
      os.mkdir(pb_file_path+'/{}_savemodel/'.format(name))

  with tf.Session(graph=tf.Graph()) as sess:
      in_img = imageio.imread('detection.jpg')
      in_img = in_img.astype(np.float32)
      in_data = in_img[np.newaxis, :]
      input_x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
      y = tf.math.ceil(input_x, name='dnn_out')
      sess.run(tf.global_variables_initializer())
      constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
      with tf.gfile.FastGFile(pb_file_path+'/{}_savemodel/model.pb'.format(name), mode='wb') as f:
          f.write(constant_graph.SerializeToString())
      print("model.pb generated, please in ffmpeg path use\n \n python tools/python/convert.py ceil_savemodel/model.pb --outdir=ceil_savemodel/ \n \n to generate model.model\n")
      output = sess.run(y, feed_dict={input_x: in_data})
      imageio.imsave("out.jpg", np.squeeze(output))
      print("To verify, please ffmpeg path use\n \n ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model=ceil_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow -f framemd5 ceil_savemodel/tensorflow_out.md5\n \n to generate output result of tensorflow model\n")
      print("To verify, please ffmpeg path use\n \n ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model=ceil_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native -f framemd5 ceil_savemodel/native_out.md5\n \n to generate output result of native model\n")

  Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
  Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
* dnn_backend_native_layer_mathunary: add atanh support (Ting Fu, 2020-07-06)
  It can be tested with a model generated with the Python script below
  (uncomment only the part you want to test, and keep only the corresponding x_out):

  import tensorflow as tf
  import numpy as np
  import imageio

  in_img = imageio.imread('input.jpeg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]
  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')

  x_sinh_1 = tf.sinh(x)
  x_out = tf.divide(x_sinh_1, 1.176)  # sinh(1.0)

  x_cosh_1 = tf.cosh(x)
  x_out = tf.divide(x_cosh_1, 1.55)  # cosh(1.0)

  x_tanh_1 = tf.tanh(x)
  x_out = tf.divide(x_tanh_1, 0.77)  # tanh(1.0)

  x_asinh_1 = tf.asinh(x)
  x_out = tf.divide(x_asinh_1, 0.89)  # asinh(1.0/1.1)

  x_acosh_1 = tf.add(x, 1.1)
  x_acosh_2 = tf.acosh(x_acosh_1)  # accepts (1, inf)
  x_out = tf.divide(x_acosh_2, 1.4)  # acosh(2.1)

  x_atanh_1 = tf.divide(x, 1.1)
  x_atanh_2 = tf.atanh(x_atanh_1)  # accepts (-1, 1)
  x_out = tf.divide(x_atanh_2, 1.55)  # atanh(1.0/1.1)

  y = tf.identity(x_out, name='dnn_out')  # please only preserve the x_out you want to test

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
* dnn_backend_native_layer_mathunary: add acosh support (Ting Fu, 2020-07-06)
  Signed-off-by: Ting Fu <ting.fu@intel.com>

* dnn_backend_native_layer_mathunary: add asinh support (Ting Fu, 2020-07-06)
  Signed-off-by: Ting Fu <ting.fu@intel.com>

* dnn_backend_native_layer_mathunary: add tanh support (Ting Fu, 2020-07-06)
  Signed-off-by: Ting Fu <ting.fu@intel.com>

* dnn_backend_native_layer_mathunary: add cosh support (Ting Fu, 2020-07-06)
  Signed-off-by: Ting Fu <ting.fu@intel.com>

* dnn_backend_native_layer_mathunary: add sinh support (Ting Fu, 2020-07-06)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
* dnn_backend_native_layer_mathunary: add atan support (Ting Fu, 2020-06-25)
  It can be tested with a model generated with the Python script below:

  import tensorflow as tf
  import numpy as np
  import imageio

  in_img = imageio.imread('input.jpeg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  x1 = tf.atan(x)
  x2 = tf.divide(x1, 3.1416/4)  # pi/4
  y = tf.identity(x2, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* dnn_backend_native_layer_mathunary: add acos support (Ting Fu, 2020-06-25)
  It can be tested with a model generated with the Python script below:

  import tensorflow as tf
  import numpy as np
  import imageio

  in_img = imageio.imread('input.jpeg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  x1 = tf.acos(x)
  x2 = tf.divide(x1, 3.1416/2)  # pi/2
  y = tf.identity(x2, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* dnn_backend_native_layer_mathunary: add asin support (Ting Fu, 2020-06-25)
  It can be tested with a model generated with the Python script below:

  import tensorflow as tf
  import numpy as np
  import imageio

  in_img = imageio.imread('input.jpeg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  x1 = tf.asin(x)
  x2 = tf.divide(x1, 3.1416/2)  # pi/2
  y = tf.identity(x2, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* tools/target_dec_fuzzer: Adjust threshold for lagarith (Michael Niedermayer, 2020-06-11)
  Fixes: Timeout (3min 49sec -> 3sec)
  Fixes: 22020/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_LAGARITH_fuzzer-5708544679870464
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* tools/target_dem_fuzzer: Use file extensions listed in input formats (Michael Niedermayer, 2020-06-11)
  This should make it easier for the fuzzer to fuzz formats that are detected only by file extension, and thus increase coverage.
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* dnn_backend_native_layer_mathunary: add tan support (Ting Fu, 2020-06-11)
  It can be tested with a model generated with the Python script below:

  import tensorflow as tf
  import numpy as np
  import imageio

  in_img = imageio.imread('input.jpeg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  x1 = tf.multiply(x, 0.78)
  x2 = tf.tan(x1)
  y = tf.identity(x2, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* dnn_backend_native_layer_mathunary: add cos support (Ting Fu, 2020-06-11)
  It can be tested with a model generated with the Python script below:

  import tensorflow as tf
  import numpy as np
  import imageio

  in_img = imageio.imread('input.jpeg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  x1 = tf.multiply(x, 1.5)
  x2 = tf.cos(x1)
  y = tf.identity(x2, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* dnn_backend_native_layer_mathunary: add sin support (Ting Fu, 2020-06-11)
  It can be tested with a model file generated with the Python script below:

  import tensorflow as tf
  import numpy as np
  import imageio

  in_img = imageio.imread('input.jpeg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  x1 = tf.multiply(x, 3.14)
  x2 = tf.sin(x1)
  y = tf.identity(x2, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo Yejun <yejun.guo@intel.com>
* tools/target_dec_fuzzer: enable mjpeg for tiff or tdsc (Michael Niedermayer, 2020-06-08)
  This is needed for fuzzing tiff/tdsc and should increase coverage.
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* tools/target_dem_fuzzer: Implement AVSEEK_SIZE (Michael Niedermayer, 2020-06-08)
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* dnn_backend_native_layer_mathunary: add abs support (Ting Fu, 2020-05-28)
  More math unary operations will be added here.
  It can be tested with a model file generated with the Python script below:

  import tensorflow as tf
  import numpy as np
  import imageio

  in_img = imageio.imread('input.jpeg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  x1 = tf.subtract(x, 0.5)
  x2 = tf.abs(x1)
  y = tf.identity(x2, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* tools/target_dec_fuzzer: Adjust max_pixels for AV_CODEC_ID_HAP (Michael Niedermayer, 2020-05-27)
  Fixes: Timeout (170sec -> 6sec)
  Fixes: 20956/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_HAP_fuzzer-5713643025203200
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* tools/target_dec_fuzzer: Reduce maxpixels for HEVC (Michael Niedermayer, 2020-05-27)
  High resolutions with only small blocks appear to be rather slow with the fuzzer + sanitizers.
  A solution which makes this run faster is welcome.
  Fixes: Timeout (did not wait -> 17sec)
  Fixes: 21006/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_HEVC_fuzzer-6002552539971584
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* tools/target_dec_fuzzer: Do not test AV_CODEC_FLAG2_FAST with AV_CODEC_ID_H264 (Michael Niedermayer, 2020-05-27)
  This combination skips allocating large padding, which can lead to reads outside the array.
  Fixes: 20978/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_H264_fuzzer-5746381832847360
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* fate: add tests for h264 and vp9 video enc parameters export (Anton Khirnov, 2020-05-25)

* lavc: rename bsf.h to bsf_internal.h (Anton Khirnov, 2020-05-22)
  This will allow adding a public header named bsf.h.
* tools/target_dec_fuzzer: Adjust threshold for PNG and APNG (Michael Niedermayer, 2020-05-10)
  Fixes: Timeout (84sec -> 2sec)
  Fixes: 21127/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_APNG_fuzzer-5098412367413248
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* dnn/native: add native support for minimum (Guo, Yejun, 2020-05-08)
  It can be tested with a model file generated with the Python script below:

  import tensorflow as tf
  import numpy as np
  import imageio

  in_img = imageio.imread('input.jpg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  x1 = tf.minimum(0.7, x)
  x2 = tf.maximum(x1, 0.4)
  y = tf.identity(x2, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* tools: fix const specifier for AVInputFormat (Josh de Kock, 2020-04-30)
  Signed-off-by: Josh de Kock <josh@itanimul.li>
* dnn/native: add native support for divide (Guo, Yejun, 2020-04-22)
  It can be tested with a model file generated with the Python script below:

  import tensorflow as tf
  import numpy as np
  import imageio

  in_img = imageio.imread('input.jpg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  z1 = 2 / x
  z2 = 1 / z1
  z3 = z2 / 0.25 + 0.3
  z4 = z3 - x * 1.5 - 0.3
  y = tf.identity(z4, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* dnn/native: add native support for 'mul' (Guo, Yejun, 2020-04-22)
  It can be tested with a model file generated with the Python script below:

  import tensorflow as tf
  import numpy as np
  import imageio

  in_img = imageio.imread('input.jpg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  z1 = 0.5 + 0.3 * x
  z2 = z1 * 4
  z3 = z2 - x - 2.0
  y = tf.identity(z3, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* dnn/native: add native support for 'add' (Guo, Yejun, 2020-04-22)
  It can be tested with a model file generated with the Python script below:

  import tensorflow as tf
  import numpy as np
  import imageio

  in_img = imageio.imread('input.jpg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  z1 = 0.039 + x
  z2 = x + 0.042
  z3 = z1 + z2
  z4 = z3 - 0.381
  z5 = z4 - x
  y = tf.math.maximum(z5, 0.0, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))

  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* tools: stop using deprecated av_codec_next() (Josh de Kock, 2020-04-20)
  Signed-off-by: Josh de Kock <josh@itanimul.li>

* tools/target_dec_fuzzer: Adjust threshold for zerocodec (Michael Niedermayer, 2020-04-12)
  Fixes: Timeout (147sec -> 1sec)
  Fixes: 20764/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ZEROCODEC_fuzzer-5068274603917312
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* tools/target_dec_fuzzer: Adjust threshold for screenpresso (Michael Niedermayer, 2020-04-07)
  Fixes: Timeout (332sec -> 21sec)
  Fixes: 20280/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SCREENPRESSO_fuzzer-6238663432470528
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* dnn_backend_native_layer_mathbinary: add sub support (Guo, Yejun, 2020-04-07)
  More math binary operations will be added here.
  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
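  As a sketch only, a model exercising subtract in the style of the 'add', 'mul' and 'divide' commits above (the constants are arbitrary assumptions) could be generated with:

  import tensorflow as tf
  import numpy as np
  import imageio

  # Sketch only: a few elementwise subtractions mixing tensor and scalar operands.
  in_img = imageio.imread('input.jpg')
  in_img = in_img.astype(np.float32)/255.0
  in_data = in_img[np.newaxis, :]

  x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
  z1 = tf.subtract(x, 0.3)
  z2 = tf.subtract(1.0, z1)
  z3 = z2 - x
  y = tf.identity(z3, name='dnn_out')

  sess = tf.Session()
  sess.run(tf.global_variables_initializer())
  graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
  tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
  output = sess.run(y, feed_dict={x: in_data})
  imageio.imsave("out.jpg", np.squeeze(output))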
* tools/target_dec_fuzzer: limit per frame samples for APE (Michael Niedermayer, 2020-01-30)
  APE in its highest compression mode is really slow, so even one frame of millions of samples takes a long time.
  Fixes: Timeout (too long -> 3sec)
  Fixes: 19937/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_APE_fuzzer-5751668818051072
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* tools/target_dec_fuzzer: Add threshold for ALS (Michael Niedermayer, 2020-01-30)
  Fixes: Timeout (253sec -> 16sec)
  Fixes: 18668/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ALS_fuzzer-6227155369590784
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* tools/target_dec_fuzzer: Add threshold for IFF_ILBM (Michael Niedermayer, 2020-01-29)
  Fixes: Timeout (32sec -> 1sec)
  Fixes: 20138/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_IFF_ILBM_fuzzer-5634665251864576
  Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
  Reviewed-by: Peter Ross <pross@xvid.org>
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* tools/target_dec_fuzzer: Sort threshold list alphabetically (Michael Niedermayer, 2020-01-29)
  This also removes the comments, as they are hard to maintain together with sorted lists.
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

* tools/target_dec_fuzzer: Use codec_tags list (Michael Niedermayer, 2020-01-22)
  This should make it much quicker for the fuzzer to test real, relevant codec_tags.
  Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>