path: root/libavfilter/vf_dnn_detect.c
* Switch uses of av_fopen_utf8 to avpriv_fopen_utf8 (Martin Storsjö, 2022-05-23)

  The former has been deprecated.

  Signed-off-by: Martin Storsjö <martin@martin.st>
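  A minimal sketch of the swap as it would look in this filter's label-file loading; the
  header location (libavutil/internal.h) and the read_labels() helper are assumptions for
  illustration, not code from the commit:

      #include <stdio.h>
      #include <errno.h>
      #include "libavutil/error.h"
      #include "libavutil/internal.h"   /* assumed home of avpriv_fopen_utf8() */

      static int read_labels(const char *path)
      {
          FILE *f = avpriv_fopen_utf8(path, "rb");   /* was: av_fopen_utf8(path, "rb") */
          if (!f)
              return AVERROR(errno);
          /* ... parse one label per line ... */
          fclose(f);
          return 0;
      }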
* libavfilter: Remove DNNReturnType from DNN Module (Shubhanshu Saxena, 2022-03-12)

  This patch removes all occurrences of DNNReturnType from the DNN module.
  It replaces DNN_SUCCESS by 0 (essentially the same), so the functions that
  used to return DNNReturnType now return 0 in case of success and negative
  values otherwise.

  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
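  A hedged sketch of the resulting calling convention in a filter such as vf_dnn_detect;
  the context names and the ff_dnn_execute_model() signature are illustrative and should be
  checked against the tree being built:

      /* Error handling after this commit: check for a negative AVERROR value
       * instead of comparing against DNN_SUCCESS. */
      DnnDetectContext *ctx = filter_ctx->priv;
      int ret = ff_dnn_execute_model(&ctx->dnnctx, in_frame, NULL);
      if (ret < 0) {                      /* was: if (ret != DNN_SUCCESS) */
          av_log(filter_ctx, AV_LOG_ERROR, "failed to execute model\n");
          return ret;                     /* propagate the negative AVERROR code */
      }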
* avfilter: Reindentation after query_formats changes (Andreas Rheinhardt, 2021-10-05)

  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* avfilter/vf_dnn_detect: Use formats list instead of query function (Andreas Rheinhardt, 2021-10-05)

  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* avfilter: Replace query_formats callback with union of list and callback (Andreas Rheinhardt, 2021-10-05)

  If one looks at the many query_formats callbacks in existence, one will
  immediately recognize that there is one type of default callback for video
  and a slightly different default callback for audio: for video it is
  "return ff_set_common_formats_from_list(ctx, pix_fmts);" with a
  filter-specific pix_fmts list; for audio it is the same with a
  filter-specific sample_fmts list, together with
  ff_set_common_all_samplerates() and ff_set_common_all_channel_counts().

  This commit makes it possible to remove these boilerplate query_formats
  callbacks by replacing the callback with a union consisting of the old
  callback and pointers to pixel and sample format arrays. For the not
  uncommon case in which these lists contain only a single entry (besides the
  sentinel), enum AVPixelFormat and enum AVSampleFormat fields are also added
  to the union to store the value directly in the AVFilter, thereby avoiding a
  relocation.

  The state of said union is kept in a new, dedicated AVFilter field (the
  nb_inputs and nb_outputs fields have been shrunk to uint8_t in order to
  create a hole for this new field; this is no problem, as the maximum of all
  the nb_inputs is four, and for nb_outputs it is only two). The state's
  default value coincides with the earlier default of query_formats being
  unset, namely that the filter accepts all formats (and, for audio, all
  sample rates and channel counts/layouts) provided that these properties
  coincide for all inputs and outputs.

  By using different union members for audio and video filters, the
  type-unsafety of using the same functions for audio and video lists is
  furthermore confined to formats.c more than before.

  When the new fields are used, they also avoid allocations: currently
  something nearly equivalent to ff_default_query_formats() is called after
  every successful call to a query_formats callback; yet in the common case
  that the newly allocated AVFilterFormats are not used at all (namely if
  there are no free links), they are freed again without ever being used.
  Filters no longer using the callback do not exhibit this any more.

  Reviewed-by: Paul B Mahol <onemda@gmail.com>
  Reviewed-by: Nicolas George <george@nsup.org>
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
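  A minimal sketch of the resulting declaration style as a filter like vf_dnn_detect would
  use it; the helper macro names (FILTER_PIXFMTS_ARRAY, FILTER_QUERY_FUNC) and the abridged
  pixel format list are assumptions to verify against libavfilter/internal.h of the tree in
  question:

      static const enum AVPixelFormat dnn_detect_pix_fmts[] = {
          AV_PIX_FMT_RGB24, AV_PIX_FMT_BGR24,
          AV_PIX_FMT_GRAY8, AV_PIX_FMT_GRAYF32,
          AV_PIX_FMT_YUV420P,
          AV_PIX_FMT_NONE
      };

      const AVFilter ff_vf_dnn_detect = {
          .name = "dnn_detect",
          /* ... pads, priv_size, priv_class, init, uninit ... */
          FILTER_PIXFMTS_ARRAY(dnn_detect_pix_fmts), /* fills the formats union */
          /* a filter that still needs custom logic would instead write:
           * FILTER_QUERY_FUNC(query_formats)                              */
      };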
* libavfilter: Remove synchronous functions from DNN filters (Shubhanshu Saxena, 2021-08-28)

  This commit removes the now unused sync-mode-specific code from the DNN
  filters, since the sync and async modes are unified from the filters'
  perspective.

  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* libavfilter: Unify Execution Modes in DNN Filters (Shubhanshu Saxena, 2021-08-28)

  This commit unifies the async and sync modes from the DNN filters'
  perspective. As of this commit, the Native backend only supports the
  synchronous execution mode.

  The user can now switch between async and sync mode by using the 'async'
  option in the backend_configs: 1 selects async and 0 selects sync execution.

  This commit affects the following filters (a usage example follows this entry):
  1. vf_dnn_classify
  2. vf_dnn_detect
  3. vf_dnn_processing
  4. vf_sr
  5. vf_derain

  This commit also updates the filters vf_dnn_detect and vf_dnn_classify to
  send only the input frame and pass NULL as the output frame, instead of the
  input frame, to the DNN backends.

  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
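  For illustration, a hedged example of forcing synchronous execution; it reuses the model,
  input, output and labels names from the OpenVINO dnn_detect example further down this log,
  and the exact quoting of backend_configs is shell-dependent:

      ./ffmpeg -i cici.jpg -vf dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:\
      input=data:output=detection_out:confidence=0.6:labels=face-detection-adas-0001.label:\
      backend_configs='async=0',showinfo -f null -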
* avfilter/avfilter: Add numbers of (in|out)pads directly to AVFilter (Andreas Rheinhardt, 2021-08-20)

  Up until now, an AVFilter's lists of input and output AVFilterPads were
  terminated by a sentinel and the only way to get the length of these lists
  was by using avfilter_pad_count(). This has two drawbacks: first,
  sizeof(AVFilterPad) is not negligible (i.e. 64B on 64bit systems); second,
  getting the size involves a function call instead of just reading the data.

  This commit therefore changes this. The sentinels are removed and new
  private fields nb_inputs and nb_outputs are added to AVFilter that contain
  the number of elements of the respective AVFilterPad array.

  Given that AVFilter.(in|out)puts are the only arrays of zero-terminated
  AVFilterPads an API user has access to (AVFilterContext.(in|out)put_pads are
  not zero-terminated and they already have a size field), the argument to
  avfilter_pad_count() is always one of these lists, so it just has to find
  the filter the list belongs to and read said number. This is slower than
  before, but a replacement function that simply reads the internal numbers,
  which users are expected to switch to, will be added soon; furthermore,
  avfilter_pad_count() is probably never called in hot loops anyway.

  This saves about 49KiB from the binary; notice that these sentinels are not
  in .bss despite being zeroed: they are in .data.rel.ro due to the
  non-sentinels.

  Reviewed-by: Nicolas George <george@nsup.org>
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
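  A hedged sketch of what a pad declaration looks like once the sentinel is gone; the
  FILTER_INPUTS helper macro and the trimmed-down pad fields are assumptions (names as in
  the libavfilter internal headers of later trees), not text from this commit:

      static const AVFilterPad dnn_detect_inputs[] = {
          {
              .name = "default",
              .type = AVMEDIA_TYPE_VIDEO,
              /* .filter_frame, .config_props, ... ; note: no { NULL } sentinel entry */
          },
      };

      const AVFilter ff_vf_dnn_detect = {
          .name = "dnn_detect",
          FILTER_INPUTS(dnn_detect_inputs),   /* sets .inputs and .nb_inputs together */
          /* ... */
      };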
* avfilter/formats: Factor common function combinations out (Andreas Rheinhardt, 2021-08-13)

  Several combinations of functions occur quite often in query_formats
  callbacks; e.g. ff_set_common_formats(ctx, ff_make_format_list(sample_fmts))
  is very common. This commit therefore adds functions that are equivalent to
  commonly used function combinations in order to reduce code duplication.

  Reviewed-by: Nicolas George <george@nsup.org>
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
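  A before/after sketch of a typical video query_formats callback using the helper this
  commit adds; the pixel format list is illustrative:

      static const enum AVPixelFormat pix_fmts[] = {
          AV_PIX_FMT_YUV420P, AV_PIX_FMT_RGB24, AV_PIX_FMT_NONE
      };

      static int query_formats(AVFilterContext *ctx)
      {
          /* before: build an AVFilterFormats list, then apply it
           *   return ff_set_common_formats(ctx, ff_make_format_list(pix_fmts)); */

          /* after: the factored-out helper added by this commit */
          return ff_set_common_formats_from_list(ctx, pix_fmts);
      }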
* dnn/vf_dnn_detect.c: add tensorflow output parse support (Ting Fu, 2021-05-11)

  The testing model is an official TensorFlow model from the GitHub repo;
  please refer to
  https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
  to download the detection model you need. For example, local testing was
  carried out with 'ssd_mobilenet_v2_coco_2018_03_29.tar.gz' and one image of
  a dog from
  https://github.com/tensorflow/models/blob/master/research/object_detection/test_images/image1.jpg

  The testing command is:
  ./ffmpeg -i image1.jpg -vf dnn_detect=dnn_backend=tensorflow:input=image_tensor:output=\
  "num_detections&detection_scores&detection_classes&detection_boxes":model=ssd_mobilenet_v2_coco.pb,\
  showinfo -f null -

  We will see a result similar to the one below:
  [Parsed_showinfo_1 @ 0x33e65f0] side data - detection bounding boxes:
  [Parsed_showinfo_1 @ 0x33e65f0] source: ssd_mobilenet_v2_coco.pb
  [Parsed_showinfo_1 @ 0x33e65f0] index: 0, region: (382, 60) -> (1005, 593), label: 18, confidence: 9834/10000.
  [Parsed_showinfo_1 @ 0x33e65f0] index: 1, region: (12, 8) -> (328, 549), label: 18, confidence: 8555/10000.
  [Parsed_showinfo_1 @ 0x33e65f0] index: 2, region: (293, 7) -> (682, 458), label: 1, confidence: 8033/10000.
  [Parsed_showinfo_1 @ 0x33e65f0] index: 3, region: (342, 0) -> (690, 325), label: 1, confidence: 5878/10000.

  There are two boxes of dog with scores 98.34% & 85.55% and two boxes of
  person with scores 80.33% & 58.78%.

  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* lavfi/dnn_backend_tensorflow: support detect model (Ting Fu, 2021-05-11)

  Signed-off-by: Ting Fu <ting.fu@intel.com>
* avfilter: Constify all AVFilters (Andreas Rheinhardt, 2021-04-27)

  This is possible now that the next-API is gone.

  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
  Signed-off-by: James Almer <jamrial@gmail.com>
* lavfi: add filter dnn_detect for object detection (Guo, Yejun, 2021-04-17)

  Below are the example steps to do object detection:

  1. Download and install l_openvino_toolkit_p_2021.1.110.tgz from
     https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html
     or get the source code (tag 2021.1), then build and install.

  2. Export LD_LIBRARY_PATH with the OpenVINO settings, for example:
     .../deployment_tools/inference_engine/lib/intel64/:.../deployment_tools/inference_engine/external/tbb/lib/

  3. Rebuild ffmpeg from source code with the configure options:
     --enable-libopenvino
     --extra-cflags='-I.../deployment_tools/inference_engine/include/'
     --extra-ldflags='-L.../deployment_tools/inference_engine/lib/intel64'

  4. Download the model files and a test image:
     wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/face-detection-adas-0001.bin
     wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/face-detection-adas-0001.xml
     wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/face-detection-adas-0001.label
     wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/images/cici.jpg

  5. Run ffmpeg with:
     ./ffmpeg -i cici.jpg -vf dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:input=data:output=detection_out:confidence=0.6:labels=face-detection-adas-0001.label,showinfo -f null -

  We will see the detection result as below:
  [Parsed_showinfo_1 @ 0x560c21ecbe40] side data - detection bounding boxes:
  [Parsed_showinfo_1 @ 0x560c21ecbe40] source: face-detection-adas-0001.xml
  [Parsed_showinfo_1 @ 0x560c21ecbe40] index: 0, region: (1005, 813) -> (1086, 905), label: face, confidence: 10000/10000.
  [Parsed_showinfo_1 @ 0x560c21ecbe40] index: 1, region: (888, 839) -> (967, 926), label: face, confidence: 6917/10000.

  There are two faces detected, with confidence 100% and 69.17%.

  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>