In this challenge, we added a small post-processing algorithm that uses point matching and an affine matrix to address the error-accumulation problem in optical flow. However, when I checked the STIRHoloscan benchmark page, I found the following statement: "We expect the entire model to be exported into an onnx or tensorrt file." We did prepare a deep learning model for point flow estimation, but is it possible to add or modify the code to include other algorithms for post-processing?

Created by Mingang Jang mkjang0531
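For context, the correction described in the question amounts to fitting an affine transform to matched point pairs and using it to re-anchor drifting flow estimates. A minimal least-squares sketch of the fitting step is below; it is an illustration only, not the poster's actual implementation, and `estimate_affine` is a hypothetical name:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.

    src, dst: (N, 2) arrays of matched point coordinates, N >= 3
    (non-collinear). Returns a (2, 3) matrix A such that
    dst ~= [src | 1] @ A.T (homogeneous coordinates).
    """
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    src_h = np.hstack([src, np.ones((src.shape[0], 1))])  # (N, 3)
    # Solve src_h @ A.T = dst in the least-squares sense.
    A_t, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return A_t.T  # (2, 3)
```

In practice one would fit this on robust matches (e.g., after outlier rejection) and use it to correct accumulated drift between the tracker's predictions and the re-detected landmarks.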
We have updated the submission instructions: "For methods participating in the latency component, please put everything into one docker container. Provide instructions for where to find and how to run flow2d (/3d, if 3D) for accuracy evaluation in your submission email. There will be two separate runs by organizers. One with pointtracker_holoscan.py on a clip to determine efficiency, and a separate one using flow2d (/3d) over the whole test dataset to get your accuracy score." Let us know if this causes any unforeseen issues.
PyTorch is allowed! We recommend TensorRT or ONNX since they are fast, but if you already meet the threshold, that's great. Make sure to embed the inference calls in a Holoscan operator so they can be timed. As for containers, if you can put the code for both (accuracy + efficiency) in the same container, please do so, as long as you provide instructions for how to generate the output using flow2d.
Hi, I have a similar question about the submission method for the efficiency challenge. Does the model need to be exported to ONNX or TensorRT, or is it acceptable to submit it in PyTorch? If the method already meets the performance threshold, is it possible to call the track function (e.g., flow2d) from the tracker class directly instead of using the InferenceOp? Additionally, where will the accuracy for the efficiency test be evaluated? Do we need to include flow2d in the container for the efficiency submission?
Yes, including pre- or post-processing that is not provided as TensorRT or ONNX is fine. We will update the quoted statement. If you are doing so, add a post-processing (or pre-processing) operator after (or before) the inference operator in the Holoscan graph. When you submit, make sure to note this, since we will also time the additional pre- or post-processing. Thank you!
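Inside such a post-processing operator, the core correction step is simply applying the fitted affine matrix to the raw tracked points before emitting them downstream. A minimal sketch of that step (all names hypothetical, framework wrapper omitted):

```python
import numpy as np

def apply_affine_correction(points, affine):
    """Apply a (2, 3) affine correction to (N, 2) tracked points.

    points: raw (N, 2) point locations from the flow/tracking stage.
    affine: (2, 3) matrix, e.g. fitted from matched landmark pairs.
    Returns corrected (N, 2) point locations.
    """
    pts = np.asarray(points, dtype=np.float64)
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous
    return pts_h @ np.asarray(affine, dtype=np.float64).T
```

In a Holoscan pipeline this logic would live in the `compute()` method of the post-processing operator that follows the inference operator, so the organizers' timing run captures it.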
