The model ran successfully and produced the output files, but then the following error appears after `> Model execution complete`:

```bash
> Model execution complete
ERROR: message: DockerRun runner failed to run MLCube.
description: Error occurred while generating mount points for docker run command (task=evaluate). See context for details and check your MLCube configuration file.
context: {'error': 'Invalid task: task=evaluate, param=predictions, type=unknown. Type is unknown and unable to identify it (/home/ubuntu/.medperf/api_medperf_org/tmp36776/1288776615).'}
? Metrics MLCube failed.
WARNING: Failed to premanently cleanup some files. Consider deleting '/home/ubuntu/.medperf/.trash/36776' manually to avoid unnecessary storage.
```

Can I submit this Docker file, or do I need to do something to resolve this error first? If so, please help me with this. @vchung @ujjwalbaid

Created by SAHAJ MISTRY i_sahajmistry
I see. The aim of the MLCubes is to have pre-defined input/output names. We should have noted in our documentation that certain parameter names defined in the template shouldn't be altered; my apologies. In `mlcube.yaml`, I see you have changed the input data parameter's name from `data_path` to `data`, and the output folder parameter's name from `output_path` to `results`. These naming conventions are not compatible with MedPerf. Also, please don't use absolute paths in `mlcube.yaml`, since this file will be used to run your MLCube on other machines. Don't worry about the values of `data_path` and `output_path` inside `mlcube.yaml`, since these will be set dynamically by MedPerf. However, `ckpt_path`'s value should be a path relative to the `workspace` folder, since it will be used as-is by MedPerf.

In summary, please follow the provided guide carefully, and re-check whether you have made other "extra" changes not mentioned in the guide. For quick guidance, the section you provided should look like this:

```yaml
tasks:
  infer:
    # Computes predictions on input data
    parameters:
      inputs: {
        data_path: ,
        ckpt_path: additional_files/gli-folds=0-epoch=42-dice=89.80.ckpt,
        # Feel free to include other files required for inference.
        # These files MUST go inside the additional_files path.
        # e.g. model weights
        # weights: additional_files/weights.pt,
      }
      outputs: {output_path: {type: directory, default: }}
```

**Note**: You may need to modify your `mlcube.py`'s `infer` command signature to accommodate the renamed `data_path` and `output_path` parameters. Let us know if this works for you!
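For illustration, here is a minimal sketch of what a matching `infer` entrypoint could look like. This assumes an argparse-based CLI; your actual `mlcube.py` may use a different argument-parsing library, and `build_parser`, `infer`, and `main` are hypothetical names, not part of any MedPerf API:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """CLI for the MLCube infer task (hypothetical sketch)."""
    # MLCube passes task parameters as --name=value flags, so these
    # names must match the parameter names declared in mlcube.yaml.
    parser = argparse.ArgumentParser(description="infer task")
    parser.add_argument("--data_path", required=True)    # reverted from "data"
    parser.add_argument("--output_path", required=True)  # reverted from "results"
    parser.add_argument("--ckpt_path", required=True)
    return parser


def infer(data_path: str, output_path: str, ckpt_path: str) -> None:
    # Hypothetical placeholder for the real inference code.
    print(f"predicting {data_path} -> {output_path} using {ckpt_path}")


def main(argv=None) -> None:
    args = build_parser().parse_args(argv)
    infer(args.data_path, args.output_path, args.ckpt_path)
```

The key point is only that the flag names mirror the `inputs`/`outputs` keys in `mlcube.yaml`; the body of `infer` is whatever your model already does.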
- Following is the content of `mlcube.yaml`'s task section:

  ```yaml
  tasks:
    infer:
      # Computes predictions on input data
      parameters:
        inputs: {
          data: /home/ubuntu/Documents/MICCAI_2023/mlcube/sample_data,
          ckpt_path: /home/ubuntu/Documents/MICCAI_2023/mlcube/nnunet_afr/mlcube/workspace/additional_files/gli-folds=0-epoch=42-dice=89.80.ckpt,
          # Feel free to include other files required for inference.
          # These files MUST go inside the additional_files path.
          # e.g. model weights
          # weights: additional_files/weights.pt,
        }
        outputs: {results: {type: directory, default: /home/ubuntu/Documents/MICCAI_2023/mlcube/results}}
  ```

- The log file is uploaded. I have again made sure that the results are getting saved.
@i_sahajmistry Hmm... This may mean that your model MLCube is not creating the output predictions folder, but you told me it is creating the output files. Let's do the following:

1. Please show me your `mlcube.yaml`'s `tasks` section.
2. Let's run `medperf test run` with debug loglevel:
   a. Run the same command but like this: `medperf --loglevel debug test run ...`.
   b. After the command finishes, go to this file: `/home/ubuntu/.medperf/api_medperf_org/logs/medperf.log`. The file is very big. Please upload it to the following Synapse project folder: https://www.synapse.org/#!Synapse:syn52322738
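As background on why the error reads `type=unknown`: when a parameter's type isn't declared in the MLCube config, the runner has to infer it from what actually exists on disk. If the predictions folder was never created, neither the file check nor the directory check succeeds. A rough sketch of that inference logic (my own illustration of the idea, not MedPerf's actual code):

```python
from pathlib import Path


def infer_param_type(path: str) -> str:
    """Guess an MLCube parameter's type from what exists on disk (illustrative)."""
    p = Path(path)
    if p.is_dir():
        return "directory"
    if p.is_file():
        return "file"
    # Nothing exists at this path: the previous task never wrote its
    # outputs here, so the runner cannot tell file from directory.
    return "unknown"
```

Under this reading, `param=predictions, type=unknown` points at the predictions path simply not existing when the evaluate task starts, which matches the suspicion that the model MLCube isn't creating its output folder.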
I am using the following command to run the test:

```bash
medperf test run \
   --demo_dataset_url synapse:syn52276402 \
   --demo_dataset_hash "16526543134396b0c8fd0f0428be7c96f2142a66" \
   -p ./test_mlcubes/prep_segmentation \
   -m ./nnunet_ped/mlcube \
   -e ./test_mlcubes/eval_segmentation \
   --offline --no-cache
```

The directory structure looks as follows (printed using `tree -L 2`):

```bash
.
├── nnunet_ped
│   ├── mlcube
│   └── project
├── readme.txt
├── test_mlcubes
│   ├── eval_inpainting
│   ├── eval_segmentation
│   ├── eval_synthesis
│   ├── prep_inpainting
│   ├── prep_segmentation
│   └── prep_synthesis
└── test_mlcubes.tar.gz
```

_**Note**_: I tried running the test again with a freshly extracted `test_mlcubes`, but the error is the same:

```bash
.
.
.
{Some Logs}
> Model execution complete
ERROR: message: DockerRun runner failed to run MLCube.
description: Error occurred while generating mount points for docker run command (task=evaluate). See context for details and check your MLCube configuration file.
context: {'error': 'Invalid task: task=evaluate, param=predictions, type=unknown. Type is unknown and unable to identify it (/home/ubuntu/.medperf/api_medperf_org/tmp39586/2840740613).'}
? Metrics MLCube failed.
WARNING: Failed to premanently cleanup some files. Consider deleting '/home/ubuntu/.medperf/.trash/39586' manually to avoid unnecessary storage.
```
@i_sahajmistry After `Model execution complete`, an evaluation script runs to make sure your outputs are structured as expected. Based on your errors, it hasn't been executed. It seems that you are not passing the correct evaluation MLCube to the `medperf test run` command (parameter `-e`). Did you modify the MLCubes found in `test_mlcubes`, or change the `-e` parameter?
@i_sahajmistry, thank you for sharing your error message. Based on the message, my initial thought is that you can submit, but let's confirm with @hasank just in case.

[Error] Error after Model Execution Complete