For the MLCube Submission Phase, you will need to:

* Install the [MedPerf CLI from source](https://docs.medperf.org/getting_started/installation/)
* Follow the [Model MLCube guide](https://docs.medperf.org/mlcubes/mlcube_models/) to containerize your model
* Note that your model inside the MLCube is responsible for iterating through all cases in the dataset (see the sketch after this post)
* Test your MLCube's compatibility (see the [Submission Tutorial](https://www.synapse.org/#!Synapse:syn51156910/wiki/622674) under "Test your MLCube's Compatibility")

If you are experiencing any issues related to setting up MedPerf or creating and/or testing your MLCubes, let us know below!

CC: @hasank @aristizabal95
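As a rough illustration of the "iterate through all cases" requirement, here is a minimal sketch of an inference loop. The folder layout, output naming, and the `run_model()` helper are placeholders, not part of the MedPerf template; your own entrypoint will also receive additional arguments (e.g. a parameters file and model weights) as described in the Model MLCube guide.

```python
# Hypothetical sketch only: an "infer" task that loops over every case found
# under --data_path and writes one prediction per case to --output_path.
# run_model(), the folder layout, and the output naming are placeholders.
import os
import argparse


def run_model(case_dir: str):
    """Placeholder for your own model: load the case's images, return a prediction."""
    ...


def infer(data_path: str, output_path: str) -> None:
    os.makedirs(output_path, exist_ok=True)
    for case_id in sorted(os.listdir(data_path)):  # one sub-folder per case assumed
        case_dir = os.path.join(data_path, case_id)
        if not os.path.isdir(case_dir):
            continue
        prediction = run_model(case_dir)
        out_file = os.path.join(output_path, f"{case_id}.nii.gz")  # naming per challenge spec
        # ... save `prediction` to out_file with your preferred I/O library ...
        print(f"processed {case_id} -> {out_file}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--data_path", required=True)
    parser.add_argument("--output_path", required=True)
    args = parser.parse_args()
    infer(args.data_path, args.output_path)
```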

Created by Verena Chung (@vchung)
@vchung , I've just replied to the latest question email, but it looks like the mail didn't make it to your mailbox. Rachit Saluja got it, though - can you get the yaml file from her? Or could you give me another email address, like a Gmail one? I'm very sorry to trouble you.
@ShadowTwin41, Apologies - I misread your intended question. You can add the flag to `gpu_args`, for example:

```yaml
docker:
  image: docker.synapse.org/.../...
  ...
  gpu_args: --gpus=all --shm-size=2g
```
@vchung, I'm not sure that is the problem. I'm trying to add the `--shm-size 8G` argument to the mlcube.yaml so I can test it locally. Is that possible somehow?
@ShadowTwin41 , That's great news! We will gladly accept your corrected config tarball.
Dear all, I just realized that the problem with my submission may be the shared memory. When I run the Docker image with the argument `--shm-size 8G`, it works perfectly. Is there any way I can add this argument to the MLCube? Best regards, André Ferreira
@Baraka , Apologies for that - can you try again?
Hello, I am still having difficulties submitting. Kindly grant me permission to submit.
@Baraka , Yes, the organizers have confirmed that that will be okay!
Hello @vchung and @hasank, I hope you are well amidst the chaos of the BraTS Challenge. As can be seen in the images below, we have just passed the MedPerf test and are currently in the process of uploading the Docker image. However, the Docker image weighs around 20 GB, which takes a long time to upload. I was wondering whether submitting a few hours past the deadline would still be valid?

Proof of tests passing:
${imageLink?synapseId=syn52348439&align=None&scale=100&responsive=true&altText=medperf pass}

Current upload status:
${imageLink?synapseId=syn52348445&align=None&scale=100&responsive=true&altText=docker push status}

Thank you,
Team Kilimanjaro
@ShadowTwin41 , In case it was missed by email, @hasank suggests:

> This error happens if MedPerf fails to clean up some files, if you run MedPerf with no cleanup, or if the MedPerf process exited unnaturally (e.g., by killing the terminal). Otherwise, this must be a bug in MedPerf, so please let us know so we can tackle it later.
>
> For now, please manually do the following:
>
> 1. Delete the folder `~/.medperf`
> 2. Delete the `mlcube-meta.yaml` files in your MLCube directory as well as in all the MLCubes inside the `test_mlcubes` folder. (A shorthand for this could be: `cd test_mlcubes`, then run `rm **/mlcube-meta.yaml`. Also, run `rm /mlcube-meta.yaml`.)
>
> Let us know if this resolves the problem.
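If you prefer a single script over the shell one-liners above, here is a rough Python equivalent of the same cleanup. It assumes you run it from the directory that contains your MLCube and the `test_mlcubes` folder; adjust the paths if your layout differs.

```python
# Hedged sketch of the manual cleanup described above: removes ~/.medperf
# and any stray mlcube-meta.yaml files below the current directory.
import shutil
from pathlib import Path

# 1. Delete the ~/.medperf folder (ignored if it does not exist).
shutil.rmtree(Path.home() / ".medperf", ignore_errors=True)

# 2. Delete mlcube-meta.yaml in this MLCube directory and in every
#    MLCube under test_mlcubes/ (run from the directory containing both).
for meta in Path(".").rglob("mlcube-meta.yaml"):
    print(f"removing {meta}")
    meta.unlink()
```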
Hello @hasank, @aristizabal95, @vchung, I'm getting the error

```
Invalid resource input: mock_url. A Resource must be a url or in the following format: ':'. Run `medperf mlcube submit --help` for more details.
```

after running the `medperf test run` command. My version of MedPerf is 0.1.0.
Thanks for the rapid reply. I figured out what was wrong: inside my test function there is a `parse_args()` call, and removing it fixed the issue.
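For anyone who runs into the same error: a bare `parse_args()` reads `sys.argv`, so test or library code that calls it will try to parse whatever arguments the surrounding tool was launched with, which is likely why removing it helped. A simplified, hypothetical illustration of the pitfall and one way around it:

```python
# Hypothetical illustration of the parse_args() pitfall (names made up).
import argparse


def test_model():
    # Problematic: parse_args() with no argument reads sys.argv, so it picks up
    # the arguments of whatever process imported/ran this code and fails on
    # flags it does not recognize.
    # parser = argparse.ArgumentParser()
    # parser.add_argument("--config")
    # args = parser.parse_args()          # <-- remove this, or ...

    # Safer: parse an explicit (possibly empty) list instead of sys.argv,
    # or drop argument parsing from test/library code entirely.
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", default="parameters.yaml")
    args = parser.parse_args([])
    return args
```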
@Baraka , Thank you for sharing your mlcube config files - this will help! @hasank is our MLCube expert, so I will defer your question over to him.
Hi @vchung ! I have installed MedPerf and followed the Model MLCube guide. My Docker image is created successfully, but when I call

```
mlcube run --task=infer
```

I get the following error:

```
mlcube.py: error: unrecognized arguments: infer --data_path=/mlcube_io0 --parameters_file=/mlcube_io1/parameters.yaml --weights=/mlcube_io2/model_final.pt --output_path=/mlcube_io3
2023-08-24 14:49:03 Odysseus mlcube.shell[59838] ERROR Shell.run command='docker run --volume /home/odcus/Data/BraTS_Africa_data/ASNR-MICCAI-BraTS2023-SSA-Challenge-ValidationData:/mlcube_io0 --volume /home/odcus/Software/swinUnetr_final_mlcube/swinUnetr_final/mlcube/workspace:/mlcube_io1 --volume /home/odcus/Software/swinUnetr_final_mlcube/swinUnetr_final/mlcube/workspace/additional_files:/mlcube_io2 --volume /home/odcus/Software/swinUnetr_final_mlcube/swinUnetr_final/mlcube/workspace/predictions:/mlcube_io3 docker/image:latest infer --data_path=/mlcube_io0 --parameters_file=/mlcube_io1/parameters.yaml --weights=/mlcube_io2/model_final.pt --output_path=/mlcube_io3' status=512 exit_status=exited exit_code=2 on_error=raise
2023-08-24 14:49:03 Odysseus mlcube.__main__[59838] ERROR run failed to run MLCube with error code 2.
Traceback (most recent call last):
  File "/home/odcus/miniconda3/envs/swin_medperf/lib/python3.9/site-packages/mlcube_docker/docker_run.py", line 333, in run
    Shell.run(
  File "/home/odcus/miniconda3/envs/swin_medperf/lib/python3.9/site-packages/mlcube/shell.py", line 99, in run
    raise ExecutionError(
mlcube.errors.ExecutionError: Failed to execute shell command.
```

I wonder if you know how to resolve the issue above. The following zip file contains my setup: [mlcube project folder](https://www.synapse.org/#!Synapse:syn52341603.1)

Thank you!
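A general note for anyone debugging the same `unrecognized arguments: infer` message: MLCube invokes the container entrypoint with the task name (`infer`) followed by the task parameters, so the entrypoint's argument parser has to accept that task name. Below is a minimal, hedged sketch of an entrypoint that would accept the command shown in the log; it is not the MedPerf template, just an argparse illustration with the parameter names taken from the error message.

```python
# Hedged sketch of an entrypoint accepting the invocation seen in the log:
#   ... infer --data_path=... --parameters_file=... --weights=... --output_path=...
# This is an argparse illustration, not the official MedPerf template.
import argparse


def infer(data_path, parameters_file, weights, output_path):
    ...  # your inference code: iterate over the cases and write predictions


def main():
    parser = argparse.ArgumentParser("mlcube")
    subparsers = parser.add_subparsers(dest="task", required=True)

    infer_parser = subparsers.add_parser("infer")  # the task name MLCube passes first
    infer_parser.add_argument("--data_path", required=True)
    infer_parser.add_argument("--parameters_file", required=True)
    infer_parser.add_argument("--weights", required=True)
    infer_parser.add_argument("--output_path", required=True)

    args = parser.parse_args()
    if args.task == "infer":
        infer(args.data_path, args.parameters_file, args.weights, args.output_path)


if __name__ == "__main__":
    main()
```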
Great, thought as much! Thank you for confirming!
@amodar7 , The logs you have attached are not related to your MLCube, so your submission is fine! Those are standard warnings from the submission system's workflow.
### MLCube Submission Log

Hi there,

We submitted the Docker image and MLCube tarball last night. The dashboard records that the submission was successful and that the Docker image was found. However, when reviewing the provided log.txt file, there were a few warnings I am concerned about, although I do not know how the submission is checked and these could be standard for all submissions. We have checked our Cube on the MedPerf example provided in the tutorial and did not receive any errors. I would like to confirm that there are no errors with the final submission that need to be attended to before tomorrow's deadline. Below I provide relevant details and extracts from the log.txt.

Our Synapse project ID is syn51705606.

Please could you review these and let me know if there is anything problematic? I am particularly concerned about warning point 1. Thank you in advance!

#### Log File summary: 19 jobs issued

**Warnings Received:**

1. Source filepath may be incompatible
Workflow checker warning:

```
MainThread WARNING cwltool: Workflow checker warning:
brats2023-main/wf-mlcubes/mlcube-validation.cwl:59:9: Source 'filepath' of type ["null", "File"] may be incompatible
brats2023-main/wf-mlcubes/mlcube-validation.cwl:69:9:   with sink 'input_file' of type "File"
MainThread INFO toil: Running Toil version 4.1.0-5ad5e77d98e1456b4f70f5b00e688a43cdce2ebe.
```
2. This same warning is produced 4 times regarding disk space, but then stops:
This happens only once:

```
Thread-51 WARNING toil.statsAndLogging: Got message from job at time 08-24-2023 04:10:28: Job used more disk than requested. Consider modifying the user script to avoid the chance of failure due to incorrectly requested resources. Job files/for-job/kind-CWLWorkflow/instance-b7aaphwg/cleanup/file-2qjzv3ih/stream used 755.06% (7.6 GB [8114925568B] used, 1.0 GB [1074741824B] requested) at the end of its run.
MainThread INFO toil.leader: Job ended: 'file:///var/lib/docker/volumes/workflow_orchestrator_shared/_data/66394a42-736a-41bb-a2fd-9560f4c7a928/brats2023-main/shared/extract_config.cwl' python3 kind-file_var_lib_docker_volumes_workflow_orchestrator_shared__data_66394a42-736a-41bb-a2fd-9560f4c7a928_brats2023-main_shared_extract_config.cwl/instance-rslgqgf7
```
This is what the rest of the disk space warnings look like:

```
Job used more disk than requested. Consider modifying the user script to avoid the chance of failure due to incorrectly requested resources. Job files/for-job/kind-CWLWorkflow/instance-b7aaphwg/cleanup/file-vnb9xppm/stream used 377.25% (3.8 GB [4054515712B] used, 1.0 GB [1074741824B] requested) at the end of its run.
```
The log file seems to indicate that the rest of the submission check ran fine, with each job producing logs similar to:
Standard job issue and job end logs:

```
MainThread INFO toil.leader: Issued job 'file:///var/lib/docker/volumes/workflow_orchestrator_shared/_data/66394a42-736a-41bb-a2fd-9560f4c7a928/brats2023-main/shared/extract_config.cwl' python3 kind-file_var_lib_docker_volumes_workflow_orchestrator_shared__data_66394a42-736a-41bb-a2fd-9560f4c7a928_brats2023-main_shared_extract_config.cwl/instance-rslgqgf7 with job batch system ID: 18 and cores: 1, disk: 1.0 G, and memory: 100.0 M
INFO:toil.worker:Redirecting logging to /var/lib/docker/volumes/workflow_orchestrator_shared/_data/66394a42-736a-41bb-a2fd-9560f4c7a928/node-309063b1-ccff-4a68-9bb1-eea05f8b48f1-1cc402dd0e11d5ae18db04a6de87223d/tmpgszzn_an/worker_log.txt
MainThread INFO toil.leader: Job ended: 'file:///var/lib/docker/volumes/workflow_orchestrator_shared/_data/66394a42-736a-41bb-a2fd-9560f4c7a928/brats2023-main/shared/extract_config.cwl' python3 kind-file_var_lib_docker_volumes_workflow_orchestrator_shared__data_66394a42-736a-41bb-a2fd-9560f4c7a928_brats2023-main_shared_extract_config.cwl/instance-rslgqgf7
```
and the final exit lines indicating a successful run:

```
MainThread INFO toil.leader: Finished toil run successfully.
MainThread INFO toil.common: Successfully deleted the job store: FileJobStore(/var/lib/docker/volumes/workflow_orchestrator_shared/_data/66394a42-736a-41bb-a2fd-9560f4c7a928/tmpjv58san8)
STDOUT: 2023-08-24T04:12:22.202030800Z {}
```
