path: root/libavfilter/vf_dnn_processing.c
Commit message (Author, Date)
* libavfilter: Remove DNNReturnType from DNN Module (Shubhanshu Saxena, 2022-03-12)

This patch removes all occurrences of DNNReturnType from the DNN module. This commit replaces DNN_SUCCESS by 0 (essentially the same), so the functions that used DNNReturnType now return 0 in case of success and negative values otherwise.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
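A minimal sketch of the resulting convention (the function here is illustrative, not actual FFmpeg code):

    #include "libavutil/error.h"

    /* Before: return DNN_SUCCESS or DNN_ERROR (DNNReturnType).
     * After: return 0 on success, a negative AVERROR code on failure. */
    static int example_dnn_call(void *model)
    {
        if (!model)
            return AVERROR(EINVAL);
        /* ... perform the operation ... */
        return 0;
    }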
* libavfilter: Prepare to handle specific error codes in DNN Filters (Shubhanshu Saxena, 2022-03-12)

This commit prepares the filter side to handle specific error codes from the DNN backends instead of the current DNN_ERROR.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* avfilter: Reindentation after query_formats changes (Andreas Rheinhardt, 2021-10-05)

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* avfilter/vf_dnn_processing: Use formats list instead of query function (Andreas Rheinhardt, 2021-10-05)

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* avfilter: Replace query_formats callback with union of list and callback (Andreas Rheinhardt, 2021-10-05)

If one looks at the many query_formats callbacks in existence, one will immediately recognize that there is one type of default callback for video and a slightly different default callback for audio: it is "return ff_set_common_formats_from_list(ctx, pix_fmts);" for video with a filter-specific pix_fmts list. For audio, it is the same with a filter-specific sample_fmts list together with ff_set_common_all_samplerates() and ff_set_common_all_channel_counts().

This commit allows removing the boilerplate query_formats callbacks by replacing said callback with a union consisting of the old callback and pointers for pixel and sample format arrays. For the not uncommon case in which these lists only contain a single entry (besides the sentinel), enum AVPixelFormat and enum AVSampleFormat fields are also added to the union to store them directly in the AVFilter, thereby avoiding a relocation.

The state of said union will be contained in a new, dedicated AVFilter field (the nb_inputs and nb_outputs fields have been shrunk to uint8_t in order to create a hole for this new field; this is no problem, as the maximum of all the nb_inputs is four; for nb_outputs it is only two). The state's default value coincides with the earlier default of query_formats being unset, namely that the filter accepts all formats (and also sample rates and channel counts/layouts for audio) provided that these properties coincide for all inputs and outputs.

By using different union members for audio and video filters, the type-unsafety of using the same functions for audio and video lists will furthermore be more confined to formats.c than before.

When the new fields are used, they will also avoid allocations: currently something nearly equivalent to ff_default_query_formats() is called after every successful call to a query_formats callback; yet in the common case that the newly allocated AVFilterFormats are not used at all (namely if there are no free links), these newly allocated AVFilterFormats are freed again without ever being used. Filters no longer using the callback will not exhibit this any more.

Reviewed-by: Paul B Mahol <onemda@gmail.com>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
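As a hedged illustration of the new declaration style (assuming the FILTER_PIXFMTS_ARRAY convenience macro that accompanies this mechanism; the filter name and format list are made up):

    static const enum AVPixelFormat example_pix_fmts[] = {
        AV_PIX_FMT_RGB24, AV_PIX_FMT_BGR24,
        AV_PIX_FMT_NONE,
    };

    const AVFilter ff_vf_example = {
        .name = "example",
        /* ... pads, init, activate ... */
        /* replaces a boilerplate .query_formats callback */
        FILTER_PIXFMTS_ARRAY(example_pix_fmts),
    };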
* libavfilter: Remove synchronous functions from DNN filters (Shubhanshu Saxena, 2021-08-28)

This commit removes the unused sync mode specific code from the DNN filters since the sync and async modes are now unified from the filters' perspective.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* libavfilter: Unify Execution Modes in DNN Filters (Shubhanshu Saxena, 2021-08-28)

This commit unifies the async and sync modes from the DNN filters' perspective. As of this commit, the Native backend only supports the synchronous execution mode.

Now the user can switch between async and sync mode by using the 'async' option in the backend_configs. The value can be 1 for async and 0 for sync mode of execution. An example invocation is shown below this list of affected filters:
1. vf_dnn_classify
2. vf_dnn_detect
3. vf_dnn_processing
4. vf_sr
5. vf_derain

This commit also updates the filters vf_dnn_detect and vf_dnn_classify to send only the input frame, passing NULL as the output frame to the DNN backends instead of the input frame.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
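An illustrative invocation selecting the execution mode (the model file and tensor names are placeholders):

    ./ffmpeg -i input.mp4 -vf dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=y:backend_configs=async=1 -y output.mp4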
* avfilter/avfilter: Add numbers of (in|out)pads directly to AVFilter (Andreas Rheinhardt, 2021-08-20)

Up until now, an AVFilter's lists of input and output AVFilterPads were terminated by a sentinel and the only way to get the length of these lists was by using avfilter_pad_count(). This has two drawbacks: first, sizeof(AVFilterPad) is not negligible (i.e. 64B on 64bit systems); second, getting the size involves a function call instead of just reading the data.

This commit therefore changes this. The sentinels are removed and new private fields nb_inputs and nb_outputs are added to AVFilter that contain the number of elements of the respective AVFilterPad array.

Given that AVFilter.(in|out)puts are the only arrays of zero-terminated AVFilterPads an API user has access to (AVFilterContext.(in|out)put_pads are not zero-terminated and they already have a size field), the argument to avfilter_pad_count() is always one of these lists, so it just has to find the filter the list belongs to and read said number. This is slower than before, but a replacement function that just reads the internal numbers that users are expected to switch to will be added soon; and furthermore, avfilter_pad_count() is probably never called in hot loops anyway.

This saves about 49KiB from the binary; notice that these sentinels are not in .bss despite being zeroed: they are in .data.rel.ro due to the non-sentinels.

Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
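A hedged sketch of a pad list without the sentinel (assuming the FILTER_INPUTS helper macro introduced around this time; the filter is made up):

    static const AVFilterPad example_inputs[] = {
        {
            .name = "default",
            .type = AVMEDIA_TYPE_VIDEO,
        },
        /* no { NULL } sentinel entry any more */
    };

    const AVFilter ff_vf_example = {
        .name = "example",
        /* stores the array and its element count in the AVFilter */
        FILTER_INPUTS(example_inputs),
        /* ... */
    };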
* avfilter/formats: Factor common function combinations out (Andreas Rheinhardt, 2021-08-13)

Several combinations of functions happen quite often in query_formats callbacks; e.g. ff_set_common_formats(ctx, ff_make_format_list(sample_fmts)) is very common. This commit therefore adds functions that are equivalent to commonly used function combinations in order to reduce code duplication.

Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
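For instance, a typical video query_formats body collapses from two calls into one (a sketch; pix_fmts is the filter-specific list):

    static int query_formats(AVFilterContext *ctx)
    {
        /* before: return ff_set_common_formats(ctx, ff_make_format_list(pix_fmts)); */
        return ff_set_common_formats_from_list(ctx, pix_fmts);
    }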
* lavfi/vf_dnn_processing.c: fix CID 1460603 (Guo, Yejun, 2021-05-18)

CID 1460603 (#1 of 1): Improper use of negative value (NEGATIVE_RETURNS)
* avfilter: Constify all AVFilters (Andreas Rheinhardt, 2021-04-27)

This is possible now that the next-API is gone.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: James Almer <jamrial@gmail.com>
* dnn: add function type for model (Guo, Yejun, 2021-02-18)

So the backend knows whether the model is used for frame processing, detection, classification, etc. Each function type has different behavior in the backend when handling the input/output data of the model.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
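The function type is passed when loading the model; a sketch of the idea (enumerator names follow later trees and may differ from this exact commit):

    typedef enum {
        DFT_NONE,
        DFT_PROCESS_FRAME,    /* e.g. vf_dnn_processing: transform frame data */
        DFT_ANALYTICS_DETECT, /* e.g. a detect filter: emit bounding boxes */
    } DNNFunctionType;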
* dnn: extract common functions used by different filters (Guo, Yejun, 2021-02-18)

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* libavfilter/dnn: use avpriv_report_missing_feature for unsupported features (Guo, Yejun, 2021-01-22)

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* libavfilter/dnn: add batch mode for async execution (Guo, Yejun, 2021-01-15)

The default batch_size is 1.

Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
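An illustrative invocation (the model file and tensor names are placeholders; batch_size is assumed to be a backend option passed via backend_configs):

    ./ffmpeg -i input.mp4 -vf dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=y:backend_configs=batch_size=4 -y output.mp4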
* dnn: fix issue when pthread is not supported (Guo, Yejun, 2020-12-31)

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* vf_dnn_processing.c: add async support (Guo, Yejun, 2020-12-29)

Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* dnn_interface: change from 'void *userdata' to 'AVFilterContext *filter_ctx' (Guo, Yejun, 2020-12-29)

'void *' is too flexible; since we can derive the needed info from an AVFilterContext*, we just unify the interface with this data structure.

Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
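Sketched as a before/after on one of the callback signatures (illustrative; the real declarations live in dnn_interface.h):

    /* before */
    int (*pre_proc)(AVFrame *frame_in, DNNData *model_input, void *user_data);
    /* after */
    int (*pre_proc)(AVFrame *frame_in, DNNData *model_input, AVFilterContext *filter_ctx);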
* vf_dnn_processing.c: replace filter_frame with activate func (Guo, Yejun, 2020-12-29)

With this change, dnn_processing can use the DNN async interface later.

Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* dnn: add NV12 pixel format support (Ting Fu, 2020-12-22)

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* dnn: add a new interface DNNModel.get_output (Guo, Yejun, 2020-09-21)

For some cases (for example, super resolution), the DNN model changes the frame size, which impacts the filter behavior, so the filter needs to know the output frame size at the very beginning. Currently, the filter reuses DNNModule.execute_model to query the output frame size; this is not clear from an interface perspective, so add a new explicit interface DNNModel.get_output for such queries.
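A hedged sketch of the new query entry (parameter names are illustrative; see dnn_interface.h for the actual declaration):

    /* Ask the model which output size a given input size produces,
     * without the filter running a full inference itself. */
    DNNReturnType (*get_output)(void *model, const char *input_name,
                                int input_width, int input_height,
                                const char *output_name,
                                int *output_width, int *output_height);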
* dnn: put DNNModel.set_input and DNNModule.execute_model together (Guo, Yejun, 2020-09-21)

Suppose we have a detect and a classify filter in the future: the detect filter generates some bounding boxes (BBox) as AVFrame sidedata, and the classify filter executes a DNN model for each BBox. For each BBox, we need to crop the AVFrame, copy data to the DNN model input and do the model execution. So we have to save the in_frame at DNNModel.set_input and use it at DNNModule.execute_model; such saving is not feasible when we support async execute_model.

This patch makes the in_frame an execute_model parameter, so all the information is put together within the same function for each inference. It also makes it easy to support BBox async inference.
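A sketch of the reshaped entry point (illustrative; the actual signature lives in dnn_interface.h):

    /* in_frame travels with the call instead of being stashed at
     * set_input time, so each inference is self-contained. */
    DNNReturnType (*execute_model)(const DNNModel *model,
                                   const char *input_name, AVFrame *in_frame,
                                   const char **output_names, uint32_t nb_output,
                                   AVFrame *out_frame);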
* dnn: change dnn interface to replace DNNData* with AVFrame* (Guo, Yejun, 2020-09-21)

Currently, every filter needs to provide code to transfer data from AVFrame* to the model input (DNNData*), and also from the model output (DNNData*) to AVFrame*. Actually, such transfers can be implemented within the DNN module, so the filter can focus on its own business logic.

The DNN module also exports the function pointers pre_proc and post_proc in struct DNNModel, just in case a filter has its own special logic to transfer data between AVFrame* and DNNData*. The default implementation within the DNN module is used if the filter does not set pre/post_proc.
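A hedged sketch of the exported hooks (member names follow the commit message; the exact signatures are illustrative):

    typedef struct DNNModel {
        /* ... */
        /* Called from inside the DNN module; if left NULL, the module's
         * default AVFrame<->DNNData conversion is used instead. */
        int (*pre_proc)(AVFrame *frame_in, DNNData *model_input, void *user_data);
        int (*post_proc)(DNNData *model_output, AVFrame *frame_out, void *user_data);
    } DNNModel;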
* dnn: add userdata for load model parameter (Guo, Yejun, 2020-09-21)

The userdata will be used for the interaction between AVFrame and DNNData.
* dnn: move output name from DNNModel.set_input_output to DNNModule.execute_model (Guo, Yejun, 2020-08-25)

Currently, the output is set both at DNNModel.set_input_output and DNNModule.execute_model. It makes sense that the output name is provided at model inference time so that all the output info is set at a single place; and so DNNModel.set_input_output is renamed to DNNModel.set_input.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* dnn: add backend options when loading the model (Guo, Yejun, 2020-08-12)

Different backends might need different options for better performance, so add the parameter to the dnn interface, as a preparation.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* vf_dnn_processing.c: add dnn backend openvino (Guo, Yejun, 2020-07-02)

We can try with the srcnn model from the sr filter.

1) get the srcnn.pb model file, see filter sr
2) convert srcnn.pb into an openvino model with the command:
   python mo_tf.py --input_model srcnn.pb --data_type=FP32 --input_shape [1,960,1440,1] --keep_shape_ops
   See the script at https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer
   We'll see srcnn.xml and srcnn.bin in the current path; copy them to the directory where ffmpeg is.
   I have also uploaded the model files at https://github.com/guoyejun/dnn_processing/tree/master/models
3) run with the openvino backend:
   ffmpeg -i input.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.jpg
   (The input.jpg resolution is 720*480.)

Also copying the logs from my skylake machine (4 cpus) with the openvino backend and the tensorflow backend, just for your information.

$ time ./ffmpeg -i 480p.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y srcnn.tf.mp4
…
frame= 343 fps=2.1 q=31.0 Lsize= 2172kB time=00:00:11.76 bitrate=1511.9kbits/s speed=0.0706x
video:1973kB audio:187kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.517637%
[aac @ 0x2f5db80] Qavg: 454.353
real    2m46.781s
user    9m48.590s
sys     0m55.290s

$ time ./ffmpeg -i 480p.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.mp4
…
frame= 343 fps=4.0 q=31.0 Lsize= 2172kB time=00:00:11.76 bitrate=1511.9kbits/s speed=0.137x
video:1973kB audio:187kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.517640%
[aac @ 0x31a9040] Qavg: 454.353
real    1m25.882s
user    5m27.004s
sys     0m0.640s

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
* avfilter/vf_dnn_processing.c: fix typo for the linesize of dnn data (Guo, Yejun, 2020-04-07)

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* lavfi/vf_dnn_processing: Fix compile warning of mixed declarations and code (Linjie Fu, 2020-03-19)

Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
* avfilter/vf_dnn_processing.c: add frame size change support for planar yuv format (Guo, Yejun, 2020-03-12)

The Y channel is handled by dnn, and also resized by dnn. The UV channels are resized with swscale.

The command to use espcn.pb (see vf_sr) looks like:
./ffmpeg -i 480p.jpg -vf format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:input=x:output=y -y tmp.espcn.jpg

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Reviewed-by: Pedro Arthur <bygrandao@gmail.com>
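A sketch of the chroma handling (assuming one GRAY8 scaler per chroma plane; simplified relative to the real code, and variable names are made up):

    /* Resize the U plane with swscale while the Y plane comes from the DNN
     * output; a GRAY8 context treats the plane as a one-channel image.
     * A second call with data + 2 handles the V plane. */
    struct SwsContext *sws_uv = sws_getContext(in_uv_w, in_uv_h, AV_PIX_FMT_GRAY8,
                                               out_uv_w, out_uv_h, AV_PIX_FMT_GRAY8,
                                               SWS_BICUBIC, NULL, NULL, NULL);
    sws_scale(sws_uv, (const uint8_t * const *)(in->data + 1), in->linesize + 1,
              0, in_uv_h, out->data + 1, out->linesize + 1);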
* avfilter/vf_dnn_processing.c: add planar yuv format support (Guo, Yejun, 2020-03-12)

Only the Y channel is handled by dnn; the UV channels are copied without changes.

The command to use srcnn.pb (see vf_sr) looks like:
./ffmpeg -i 480p.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y srcnn.jpg

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Reviewed-by: Pedro Arthur <bygrandao@gmail.com>
* avfilter/vf_dnn_processing.c: use swscale for uint8<->float32 convert (Guo, Yejun, 2020-03-12)

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Reviewed-by: Pedro Arthur <bygrandao@gmail.com>
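A sketch of the conversion trick (assuming the GRAYF32 pixel format; buffer and variable names are illustrative):

    /* "Scale" a GRAY8 plane to a GRAYF32 plane of the same dimensions:
     * swscale then performs the uint8 -> float32 conversion (the reverse
     * direction works the same way with the formats swapped). */
    struct SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_GRAY8,
                                            width, height, AV_PIX_FMT_GRAYF32,
                                            0, NULL, NULL, NULL);
    uint8_t *dst_data[4]     = { (uint8_t *)float_buf, NULL, NULL, NULL };
    int      dst_linesize[4] = { width * (int)sizeof(float), 0, 0, 0 };
    sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
              0, height, dst_data, dst_linesize);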
* lavfi/dnn_processing: refine code to use function av_image_copy_plane for data copy (Guo, Yejun, 2020-01-14)

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
* vf_dnn_processing: add support for more formats gray8 and grayf32 (Guo, Yejun, 2020-01-07)

The following is a python script to halve the value of the gray image. It demos how to set up and execute a dnn model with python+tensorflow. It also generates the .pb file which will be used by ffmpeg.

import tensorflow as tf
import numpy as np
from skimage import color
from skimage import io

in_img = io.imread('input.jpg')
in_img = color.rgb2gray(in_img)
io.imsave('ori_gray.jpg', np.squeeze(in_img))
in_data = np.expand_dims(in_img, axis=0)
in_data = np.expand_dims(in_data, axis=3)
filter_data = np.array([0.5]).reshape(1,1,1,1).astype(np.float32)
filter = tf.Variable(filter_data)
x = tf.placeholder(tf.float32, shape=[1, None, None, 1], name='dnn_in')
y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'halve_gray_float.pb', as_text=False)
print("halve_gray_float.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate halve_gray_float.model\n")
output = sess.run(y, feed_dict={x: in_data})
output = output * 255.0
output = output.astype(np.uint8)
io.imsave("out.jpg", np.squeeze(output))

To do the same thing with ffmpeg:
- generate halve_gray_float.pb with the above script
- generate halve_gray_float.model with tools/python/convert.py
- try with the following commands:
  ./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.model:input=dnn_in:output=dnn_out:dnn_backend=native out.native.png
  ./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow out.tf.png

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
* vf_dnn_processing: remove parameter 'fmt' (Guo, Yejun, 2020-01-07)

Do not request the AVFrame's format in vf_dnn_processing with 'fmt', but add another filter for the format instead.

command examples:
./ffmpeg -i input.jpg -vf format=bgr24,dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:dnn_backend=native -y out.native.png
./ffmpeg -i input.jpg -vf format=rgb24,dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:dnn_backend=native -y out.native.png

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
* avfilter/vf_dnn_processing: refine code for better naming (Guo, Yejun, 2019-12-13)

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
* avfilter/vf_dnn_processing: correct duplicate statement (leozhang, 2019-11-08)

Signed-off-by: leozhang <leozhang@qiyi.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* avfilter/vf_dnn_processing: fix fate-source (Guo, Yejun, 2019-11-08)

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* avfilter/vf_dnn_processing: add a generic filter for image processing with dnn networks (Guo, Yejun, 2019-11-07)

This filter accepts all the dnn networks which do image processing. Currently, frames with formats rgb24 and bgr24 are supported. Other formats such as gray and YUV will be supported next. The dnn network can accept data in float32 or uint8 format. And the dnn network can change the frame size.

The following is a python script to halve the value of the first channel of the pixel. It demos how to set up and execute a dnn model with python+tensorflow. It also generates the .pb file which will be used by ffmpeg.

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('in.bmp')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
filter_data = np.array([0.5, 0, 0, 0, 1., 0, 0, 0, 1.]).reshape(1,1,3,3).astype(np.float32)
filter = tf.Variable(filter_data)
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
output = sess.run(y, feed_dict={x: in_data})
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'halve_first_channel.pb', as_text=False)
output = output * 255.0
output = output.astype(np.uint8)
imageio.imsave("out.bmp", np.squeeze(output))

To do the same thing with ffmpeg:
- generate halve_first_channel.pb with the above script
- generate halve_first_channel.model with tools/python/convert.py
- try with the following commands:
  ./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=native -y out.native.png
  ./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.pb:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=tensorflow -y out.tf.png

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>