path: root/libavfilter/dnn/dnn_backend_tf.c
* libavfilter: Remove DNNReturnType from DNN Module (Shubhanshu Saxena, 2022-03-12)
  This patch removes all occurrences of DNNReturnType from the DNN module.
  DNN_SUCCESS is replaced by 0 (essentially the same), so functions that
  previously returned DNNReturnType now return 0 on success and a negative
  value otherwise.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
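  A minimal sketch of the new convention (hypothetical helper, not code
  taken from the patch):

      #include <errno.h>
      #include "libavutil/error.h"

      /* Previously: DNNReturnType func(...) returning DNN_SUCCESS/DNN_ERROR.
       * Now: plain int, 0 on success, negative AVERROR code on failure. */
      static int check_exec_params(const void *model, const void *exec_params)
      {
          if (!model || !exec_params)
              return AVERROR(EINVAL);
          return 0;
      }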
* lavfi/dnn_backend_tf: Return Specific Error Codes (Shubhanshu Saxena, 2022-03-12)
  Switch to returning specific error codes or DNN_GENERIC_ERROR when an
  error is encountered. For TensorFlow C API errors, currently
  DNN_GENERIC_ERROR is returned.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
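  For a TensorFlow C API failure, the mapping described above looks
  roughly like this (fragment-style sketch; status and ctx are assumed
  to be in scope):

      if (TF_GetCode(status) != TF_OK) {
          av_log(ctx, AV_LOG_ERROR, "%s\n", TF_Message(status));
          return DNN_GENERIC_ERROR;
      }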
* Replace all occurrences of av_mallocz_array() by av_calloc() (Andreas Rheinhardt, 2021-09-20)
  They do the same.
  Reviewed-by: Paul B Mahol <onemda@gmail.com>
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
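  The mechanical replacement looks like this (illustrative lines, not
  taken from the patch):

      /* before */
      ptr = av_mallocz_array(nb_elems, sizeof(*ptr));
      /* after: same semantics (zeroed, overflow-checked allocation) */
      ptr = av_calloc(nb_elems, sizeof(*ptr));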
* lavfi/dnn: Rename InferenceItem to LastLevelTaskItem (Shubhanshu Saxena, 2021-08-28)
  This patch renames the InferenceItem to LastLevelTaskItem in the three
  backends to avoid confusion among the meanings of these structs.
  The following are the renames done in this patch:
  1. extract_inference_from_task -> extract_lltask_from_task
  2. InferenceItem -> LastLevelTaskItem
  3. inference_queue -> lltask_queue
  4. inference -> lltask
  5. inference_count -> lltask_count
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* libavfilter: Remove Async Flag from DNN Filter Side (Shubhanshu Saxena, 2021-08-28)
  Remove the async flag from the filter's perspective after the
  unification of async and sync modes in the DNN backend.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* libavfilter: Unify Execution Modes in DNN Filters (Shubhanshu Saxena, 2021-08-28)
  This commit unifies the async and sync modes from the DNN filters'
  perspective. As of this commit, the Native backend only supports the
  synchronous execution mode. The user can now switch between async and
  sync modes with the 'async' option in backend_configs: 1 selects async
  and 0 selects sync execution.
  This commit affects the following filters:
  1. vf_dnn_classify
  2. vf_dnn_detect
  3. vf_dnn_processing
  4. vf_sr
  5. vf_derain
  It also updates the filters vf_dnn_detect and vf_dnn_classify to send
  only the input frame, passing NULL instead of the input frame as the
  output frame to the DNN backends.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
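  For example, async mode can be requested per filter invocation along
  these lines (illustrative command; the model file and the tensor names
  x/y are placeholders):

      ffmpeg -i in.mp4 -vf "dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y:backend_configs=async=1" out.mp4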
* lavfi/dnn: DNNAsyncExecModule Execution Failure Handling (Shubhanshu Saxena, 2021-08-10)
  This commit adds handling for the case where the asynchronous execution
  of a request fails, by checking the thread's exit status when joining it
  before starting another execution. On failure, it also performs the
  cleanup.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* lavfi/dnn_backend_tf: Error Handling for tf_create_inference_request (Shubhanshu Saxena, 2021-08-10)
  This commit adds a check for the case when the newly created
  TFInferRequest is NULL.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* lavfi/dnn: Extract Common Parts from get_output functions (Shubhanshu Saxena, 2021-08-10)
  Frame allocation and filling the TaskItem with execution parameters are
  common to the three backends. This commit moves this logic to
  dnn_backend_common.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* lavfi/dnn_backend_tf: Add TF_Status to TFRequestItem (Shubhanshu Saxena, 2021-08-10)
  Since requests run in parallel, a shared execution status becomes
  inconsistent. A mutex is avoided because it would allow only a single
  TF_Session to run at a time, so a TF_Status is added to each
  TFRequestItem instead.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
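  Sketched, the idea is one status object per in-flight request (field
  names follow the commit description; treat this as a sketch, not the
  exact struct):

      #include <tensorflow/c/c_api.h>

      typedef struct TFInferRequest TFInferRequest;  /* backend-defined */
      typedef struct InferenceItem  InferenceItem;   /* backend-defined */

      typedef struct TFRequestItem {
          TFInferRequest *infer_request;
          InferenceItem  *inference;
          /* One status per request, created with TF_NewStatus() and freed
           * with TF_DeleteStatus(), so parallel TF_SessionRun() calls do
           * not share error state. */
          TF_Status *status;
      } TFRequestItem;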
* lavfi/dnn_backend_tf: Error Handling for execute_model_tf (Shubhanshu Saxena, 2021-08-10)
  This patch adds error handling for cases where execute_model_tf fails:
  it clears the memory used by the TFRequestItem and finally pushes the
  item back to the request queue.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* lavfi/dnn: Async Support for TensorFlow Backend (Shubhanshu Saxena, 2021-08-10)
  This commit enables async execution in the TensorFlow backend and adds
  a function to flush extra frames. The async execution mechanism runs
  the TFInferRequests on a separate thread, which is joined before the
  next execution of the same TFRequestItem and while freeing the model.
  The following is a comparison of this mechanism with the existing sync
  mechanism on the TensorFlow C API 2.5 CPU variant, measured on the
  super resolution filter with the SRCNN model:
  Async mode: 4m32.846s
  Sync mode:  5m17.582s
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
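  A rough sketch of the join-before-reuse pattern (simplified and
  hypothetical; the real backend wraps this bookkeeping in
  DNNAsyncExecModule):

      #include <pthread.h>

      typedef struct AsyncModule {
          pthread_t thread;
          int       thread_started;
      } AsyncModule;

      /* Join the previous run before reusing the request's thread slot;
       * fn is the routine that calls TF_SessionRun() and the completion
       * callback. Hypothetical helper, not the backend's actual API. */
      static int start_async(AsyncModule *m, void *(*fn)(void *), void *arg)
      {
          if (m->thread_started)
              pthread_join(m->thread, NULL);
          m->thread_started = 1;
          return pthread_create(&m->thread, NULL, fn, arg);
      }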
* lavfi/dnn_backend_tf: TFInferRequest Execution and Documentation (Shubhanshu Saxena, 2021-08-10)
  This commit adds a function for execution of TFInferRequest and
  documentation for functions related to TFInferRequest.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* avfilter/internal: Don't include libavcodec/(avcodec|internal).h (Andreas Rheinhardt, 2021-08-05)
  The reasons for including them don't exist any longer: ff_tlog() has
  been moved to libavutil/internal.h and FF_QSCALE_TYPE_* has been moved
  to qp_table.h.
  Reviewed-by: Nicolas George <george@nsup.org>
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* avcodec/avcodec: Don't include cpu.h (Andreas Rheinhardt, 2021-07-22)
  It is not used here at all; instead, add it where it is used without
  including it or any of the arch-specific CPU headers.
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* lavfi/dnn_backend_tf: Error Handling (Shubhanshu Saxena, 2021-07-11)
  This commit adds handling for cases where an error may occur, clearing
  the allocated memory resources.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* lavfi/dnn_backend_tf: Separate function for Completion Callback (Shubhanshu Saxena, 2021-07-11)
  This commit rearranges the existing code to create a separate function
  for the completion callback in execute_model_tf.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* lavfi/dnn_backend_tf: Separate function for filling RequestItem (Shubhanshu Saxena, 2021-07-11)
  This commit rearranges the existing code to create a separate function
  for filling the request with execution data.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* lavfi/dnn_backend_tf: Request-based Execution (Shubhanshu Saxena, 2021-07-11)
  This commit uses TFRequestItem and the existing sync execution
  mechanism to use request-based execution. It will help in adding async
  functionality to the TensorFlow backend later.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* lavfi/dnn_backend_tf: Add TFInferRequest and TFRequestItem (Shubhanshu Saxena, 2021-07-11)
  This commit introduces a typedef TFInferRequest to store execution
  parameters for a single call to the TensorFlow C API. This typedef is
  used in the TFRequestItem.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
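  Roughly, the typedef groups the per-call TF_SessionRun() arguments (a
  sketch of the struct as described; the exact layout may differ):

      #include <tensorflow/c/c_api.h>

      typedef struct TFInferRequest {
          TF_Output  *tf_outputs;      /* operations to fetch */
          TF_Tensor **output_tensors;  /* filled by TF_SessionRun() */
          TF_Output  *tf_input;        /* operation to feed */
          TF_Tensor  *input_tensor;    /* data fed to the model */
      } TFInferRequest;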
* lavfi/dnn_backend_tf: TaskItem Based Inference (Shubhanshu Saxena, 2021-07-11)
  This commit uses the common TaskItem and InferenceItem typedefs for
  execution in the TensorFlow backend.
  Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
* lavfi/dnn: refine code to separate processing and detection in backends (Guo, Yejun, 2021-05-24)
* avfilter/dnn/dnn_backend_tf: fix cross library usage (Limin Wang, 2021-05-11)
  Duplicate the ff_hex_to_data() function from avformat and rename it to
  hex_to_data() as a static function.
  Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
  Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
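  A static helper of this shape (a sketch consistent with the commit
  description, not the copied function itself):

      #include <ctype.h>
      #include <stdint.h>

      /* Parse a hex digit string into bytes; returns the byte count.
       * If data is NULL, only the length is computed. */
      static int hex_to_data(uint8_t *data, const char *p)
      {
          int len = 0;
          while (isxdigit((unsigned char)p[0]) && isxdigit((unsigned char)p[1])) {
              int hi = isdigit((unsigned char)p[0]) ? p[0] - '0'
                                                    : toupper((unsigned char)p[0]) - 'A' + 10;
              int lo = isdigit((unsigned char)p[1]) ? p[1] - '0'
                                                    : toupper((unsigned char)p[1]) - 'A' + 10;
              if (data)
                  data[len] = (hi << 4) | lo;
              len++;
              p += 2;
          }
          return len;
      }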
* lavfi/dnn_backend_tensorflow: support detect model (Ting Fu, 2021-05-11)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
* lavfi/dnn_backend_tensorflow: add multiple outputs support (Ting Fu, 2021-05-11)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
* dnn: add DCO_RGB color order to enum DNNColorOrder (Ting Fu, 2021-05-11)
  Add the DCO_RGB color order to DNNColorOrder, since the TensorFlow
  model needs this kind of color order as input.
  Signed-off-by: Ting Fu <ting.fu@intel.com>
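  The enum then looks roughly like this (a sketch of dnn_interface.h
  after the change):

      typedef enum DNNColorOrder {
          DCO_NONE,
          DCO_BGR,
          DCO_RGB,   /* added here: TensorFlow models expect RGB input */
      } DNNColorOrder;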
* lavfi/dnn: refine dnn interface to add DNNExecBaseParams (Guo, Yejun, 2021-05-06)
  Different function types of model require different parameters: for
  example, object detection detects lots of objects (cat/dog/...) in the
  frame, while classification needs to know which object (cat or dog) it
  is going to classify. The current interface would need a new function
  with more parameters to support each new requirement. With this change,
  we can just add a new struct (for example DNNExecClassifyParams) based
  on DNNExecBaseParams, and so we can continue to use the current
  execute_model interface with only the params changed.
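  Sketched, the pattern is a base struct embedded as the first member of
  function-type-specific extensions (names follow the commit message;
  the actual header may differ in detail):

      typedef struct DNNExecBaseParams {
          const char  *input_name;
          const char **output_names;
          uint32_t     nb_output;
          AVFrame     *in_frame;
          AVFrame     *out_frame;
      } DNNExecBaseParams;

      typedef struct DNNExecClassifyParams {
          DNNExecBaseParams base;  /* execute_model keeps taking the base */
          const char *target;      /* e.g. which detected label to classify */
      } DNNExecClassifyParams;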
* avfilter/dnn/dnn_backend_tf: simplify the code with ff_hex_to_data (Limin Wang, 2021-04-29)
  Please use tools/python/tf_sess_config.py to get the sess_config after
  this change. Note that the byte order of the session config is in
  normal order. Bump the MICRO version for the config change.
  Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
* lavfi/dnn: refine code for frame pre/post processing (Guo, Yejun, 2021-04-08)
* lavfi/dnn_backend_tensorflow.c: fix mem leak in execute_model_tf (Ting Fu, 2021-03-25)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
* lavfi/dnn_backend_tensorflow.c: fix mem leak in load_native_model (Ting Fu, 2021-03-25)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
* lavfi/dnn_backend_tensorflow.c: fix mem leak in load_tf_model (Ting Fu, 2021-03-25)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
* dnn: add color conversion for analytic case (Guo, Yejun, 2021-02-18)
  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* dnn: add function type for model (Guo, Yejun, 2021-02-18)
  So the backend knows whether the model is used for frame processing,
  detection, classification, etc. Each function type has different
  behavior in the backend when handling the input/output data of the
  model.
  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
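  Sketched (values follow the usages named in the commit message; shown
  as an illustration of the idea, not the exact enum):

      typedef enum DNNFunctionType {
          DFT_NONE,
          DFT_PROCESS_FRAME,      /* frame in, processed frame out (e.g. sr) */
          DFT_ANALYTICS_DETECT,   /* frame in, detection side data out */
          DFT_ANALYTICS_CLASSIFY, /* bounding box in, classification out */
      } DNNFunctionType;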
* dnn: remove type cast which is not necessary (Guo, Yejun, 2021-01-28)
* libavfilter/dnn: add prefix ff_ for internal functions (Guo, Yejun, 2021-01-22)
  From proc_from_frame_to_dnn to ff_proc_from_frame_to_dnn, and from
  proc_from_dnn_to_frame to ff_proc_from_dnn_to_frame.
  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* libavfilter/dnn: use avpriv_report_missing_feature for unsupported features (Guo, Yejun, 2021-01-22)
  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* dnn_interface: change from 'void *userdata' to 'AVFilterContext *filter_ctx' (Guo, Yejun, 2020-12-29)
  'void *' is too flexible; since we can derive the needed info from
  AVFilterContext*, we just unify the interface with this data structure.
  Signed-off-by: Xie, Lin <lin.xie@intel.com>
  Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* dnn_backend_tf.c: add option sess_config for tf backend (Guo, Yejun, 2020-10-19)
  The TensorFlow C library accepts a config for session options to set
  different parameters for the inference. This patch exports this
  interface. The config is a serialized tensorflow.ConfigProto proto, so
  we need two steps to use it:
  1. Generate the serialized proto with python (see the script example
     below). The output looks like 0xab...cd, where 0xcd is the least
     significant byte and 0xab is the most significant byte.
  2. Pass the python script output into ffmpeg with
     dnn_processing=options=sess_config=0xab...cd
  The following script is an example that specifies one GPU. If the
  system contains 3 GPU cards, the visible_device_list could be '0',
  '1', '2', '0,1', etc. '0' does not mean physical GPU card 0; we need
  to try and see. We can also add more options here to generate other
  serialized protos.
  Script example to generate a serialized proto which specifies one GPU
  (adjusted for Python 3, where iterating bytes already yields integers,
  so the original ord() call is not needed):

      import tensorflow as tf
      gpu_options = tf.GPUOptions(visible_device_list='0')
      config = tf.ConfigProto(gpu_options=gpu_options)
      s = config.SerializeToString()
      # reverse the bytes and print least-significant byte last
      b = ''.join("%02x" % byte for byte in s[::-1])
      print('0x%s' % b)
* libavfilter/dnn/dnn_backend{openvino, tf}: check memory alloc non-NULL (Chris Miceli, 2020-10-14)
  These previously would not check that the return value was non-NULL,
  meaning the code was susceptible to a SIGSEGV. This checks those
  values.
* dnn: add a new interface DNNModel.get_output (Guo, Yejun, 2020-09-21)
  For some cases (for example, super resolution), the DNN model changes
  the frame size, which impacts the filter behavior, so the filter needs
  to know the output frame size at the very beginning. Currently, the
  filter reuses DNNModule.execute_model to query the output frame size;
  this is not clear from an interface perspective, so add a new explicit
  interface DNNModel.get_output for such queries.
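  The added query interface is along these lines (signature sketched
  from the description; the exact prototype may differ):

      /* Query the output frame size for a given input size, without
       * running a full inference. */
      DNNReturnType (*get_output)(void *model, const char *input_name,
                                  int input_width, int input_height,
                                  const char *output_name,
                                  int *output_width, int *output_height);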
* dnn: put DNNModel.set_input and DNNModule.execute_model together (Guo, Yejun, 2020-09-21)
  Suppose we have a detect and classify filter in the future: the detect
  filter generates some bounding boxes (BBox) as AVFrame side data, and
  the classify filter executes a DNN model for each BBox. For each BBox,
  we need to crop the AVFrame, copy its data to the DNN model input and
  run the model. So we would have to save the in_frame at
  DNNModel.set_input and use it at DNNModule.execute_model; such saving
  is not feasible when we support async execute_model. This patch passes
  the in_frame as an execute_model parameter, so all the information is
  put together within the same function for each inference. It also
  makes it easy to support BBox async inference.
* dnn: change dnn interface to replace DNNData* with AVFrame* (Guo, Yejun, 2020-09-21)
  Currently, every filter needs to provide code to transfer data from
  AVFrame* to the model input (DNNData*), and also from the model output
  (DNNData*) to AVFrame*. Such transfers can instead be implemented
  within the DNN module, so the filter can focus on its own business
  logic. The DNN module also exports the function pointers pre_proc and
  post_proc in struct DNNModel, in case a filter has special logic to
  transfer data between AVFrame* and DNNData*. The default implementation
  within the DNN module is used if the filter does not set pre/post_proc.
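  The hook points look roughly like this (a sketch of the DNNModel
  additions; exact signatures may differ):

      typedef struct DNNModel {
          /* ... other members ... */
          /* Optional filter-provided hooks; when NULL, the DNN module's
           * default AVFrame <-> DNNData conversion is used. */
          int (*pre_proc)(AVFrame *frame_in, DNNData *model_input,
                          AVFilterContext *filter_ctx);
          int (*post_proc)(AVFrame *frame_out, DNNData *model_output,
                           AVFilterContext *filter_ctx);
      } DNNModel;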
* dnn: add userdata for load model parameter (Guo, Yejun, 2020-09-21)
  The userdata will be used for the interaction between AVFrame and
  DNNData.
* dnn/tensorflow: add log error message (Ting Fu, 2020-08-31)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
* dnn: move output name from DNNModel.set_input_output to DNNModule.execute_model (Guo, Yejun, 2020-08-25)
  Currently, the output is set both at DNNModel.set_input_output and
  DNNModule.execute_model. It makes sense that the output name is
  provided at model inference time, so all the output info is set at a
  single place. DNNModel.set_input_output is therefore renamed to
  DNNModel.set_input.
  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
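  Sketched, the split moves the output names to inference time
  (signatures simplified and illustrative, not copied from the header):

      /* before: outputs fixed when binding the input */
      DNNReturnType (*set_input_output)(void *model, DNNData *input,
                                        const char *input_name,
                                        const char **output_names,
                                        uint32_t nb_output);

      /* after: outputs chosen per execute_model call */
      DNNReturnType (*set_input)(void *model, DNNData *input,
                                 const char *input_name);
      DNNReturnType (*execute_model)(const DNNModel *model, DNNData *outputs,
                                     const char **output_names,
                                     uint32_t nb_output);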
* dnn/native: rename struct ConvolutionalNetwork to NativeModel (Ting Fu, 2020-08-21)
  Signed-off-by: Ting Fu <ting.fu@intel.com>
  Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
* dnn_backend_tf.c: fix build issue for tensorflow backend (Guo, Yejun, 2020-08-14)
  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* dnn: add backend options when loading the model (Guo, Yejun, 2020-08-12)
  Different backends might need different options for better performance,
  so add the parameter to the DNN interface as a preparation.
  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* lavf, lavfi: Remove uses of sizeof(char). (Carl Eugen Hoyos, 2020-04-04)
  The C standard requires sizeof(char) == 1.
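  That is (illustrative):

      char *buf = av_malloc(len + 1);
      /* rather than: char *buf = av_malloc((len + 1) * sizeof(char)); */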