Dear organizers, I am trying to submit prediction files for the missing modality synthesis task. After uploading a zip file, I try to submit it to the challenge; however, the list of evaluation queues has no Global Synthesis task. Could you add it, please?

Created by Dasha Trofimova dtrofimo
Hello! Yes, you are correct, there is no need for the subfolder. Thank you! I have another question regarding your repo. In there you say: "**post-processed algorithm pads the images back to original dimension (256x256x256)**." Is this correct? Also, as far as I understand from your code, the predicted modality is saved with intensities in the range 0 to 1. Am I understanding it correctly? Thank you for your time!
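For readers following along, the two post-processing conventions discussed above (intensities rescaled to [0, 1], volume padded back to 256x256x256) can be sketched roughly as below. This is a minimal illustration assuming NumPy, not the repo's actual code; the function name and centering choice are mine:

```python
import numpy as np

def postprocess(volume: np.ndarray, target_shape=(256, 256, 256)) -> np.ndarray:
    """Rescale intensities to [0, 1], then zero-pad the volume
    back to the target shape, centering the original content."""
    # Min-max normalization to the [0, 1] range.
    vmin, vmax = volume.min(), volume.max()
    if vmax > vmin:
        volume = (volume - vmin) / (vmax - vmin)

    # Symmetric zero-padding up to the target shape.
    pads = []
    for size, target in zip(volume.shape, target_shape):
        total = max(target - size, 0)
        pads.append((total // 2, total - total // 2))
    return np.pad(volume, pads, mode="constant", constant_values=0)
```

For example, a typical BraTS volume of shape 240x240x155 would come out as a 256x256x256 array with values in [0, 1].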
Hi @euler_q, could you specify how I can check whether my path and format are correct when using your repository? In addition, you mentioned in the other thread that the following output option is correct:
```
Option 2:
|output_path
| |BraTS-GLI-00000-000
| | |BraTS-GLI-00000-000-t2w.nii.gz
```
However, here you suggest removing the subfolder. Is there a consensus on that?
Hi @ShadowTwin41, can you try getting rid of the subfolder and submitting again? Moreover, MedPerf can help you check the path and format before you submit your MLCube. For more details, see [my repo](https://github.com/WinstonHuTiger/BraSyn_tutorial).
Dear all, is there any news regarding this issue? I am having the same problem:
```
Traceback (most recent call last):
  File "/mlcube_project/mlcube.py", line 29, in <module>
    app()
  File "/mlcube_project/mlcube.py", line 19, in evaluate
    calculate_metrics(labels, predictions, parameters, output_path)
  File "/mlcube_project/metrics.py", line 99, in calculate_metrics
    check_for_synthesis(labels, predictions, parameters)
  File "/mlcube_project/metrics.py", line 65, in check_for_synthesis
    raise ValueError("Predictions don't match submission criteria")
ValueError: Predictions don't match submission criteria
```
My files are saved like this:
```
Saved to /mlcube_io1/BraTS-GLI-00015-000/BraTS-GLI-00015-000-t1c.nii.gz
```
Thank you!
Hi @dtrofimo, sorry about this. We are contacting the evaluation team to fix it. If your model can do synthesis with the same naming format as the training set, we should be able to change the file names for the segmentation model we use for evaluation. So to me, it is almost there. Please wait for our response. Thanks.
Hi @branhongweili, a question about the MLCube: are we responsible for changing the naming convention of the output files to be compatible with the FeTS model? I have an issue when running the MLCube compatibility test (task 7, missing MRI): my model inference completes, but in the calculate_metrics() phase I get a "ValueError: Predictions don't match submission criteria" error from the Metrics MLCube, although the format of the output folder is identical to the validation folder and the file names follow `{ID}-{timepoint}-{missing modality}.nii.gz`. Would appreciate your help. Dasha

Here is the output I have:
```
100%|██████████| 5/5 [00:06<00:00, 1.34s/it]
> Model execution complete
2024-08-01 15:44:09 e230-pc002 mlcube.__main__[54960] INFO Running task = evaluate
Traceback (most recent call last):
  File "/mlcube_project/mlcube.py", line 29, in <module>
    app()
  File "/mlcube_project/mlcube.py", line 19, in evaluate
    calculate_metrics(labels, predictions, parameters, output_path)
  File "/mlcube_project/metrics.py", line 99, in calculate_metrics
    check_for_synthesis(labels, predictions, parameters)
  File "/mlcube_project/metrics.py", line 65, in check_for_synthesis
    raise ValueError("Predictions don't match submission criteria")
ValueError: Predictions don't match submission criteria
```
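For anyone hitting the same "Predictions don't match submission criteria" error, a quick local sanity check of the file naming is easy to sketch. This is an illustrative helper, not the challenge's actual validator; the regex assumes the BraTS-GLI case-ID format and the 2023 modality suffixes (t1c, t1n, t2w, t2f), and `check_predictions` is a name I made up:

```python
import re
from pathlib import Path

# Expected pattern, e.g. BraTS-GLI-00015-000-t1c.nii.gz
# (case ID, timepoint, then the synthesized modality).
PATTERN = re.compile(r"^BraTS-GLI-\d{5}-\d{3}-(t1c|t1n|t2w|t2f)\.nii\.gz$")

def check_predictions(folder: str) -> list:
    """Return the names of prediction files that do not match
    the expected naming convention (searched non-recursively,
    i.e. assuming a flat folder without per-case subfolders)."""
    return [p.name for p in Path(folder).glob("*.nii.gz")
            if not PATTERN.match(p.name)]
```

An empty return list means every `.nii.gz` file in the folder matches the pattern; whether the evaluator also requires a flat layout (no subfolder) is discussed earlier in this thread.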
Hi @dtrofimo, yes, we will return the test set results (i.e., the values of the evaluation metrics) to you if you would like to include them in future publications.
Hi @branhongweili, thanks for the hint. A quick question: since it is hard to judge without a validation phase for task 7 (Global Synthesis), would it be possible to make some adjustments to the paper after the testing phase, based on the performance of the final submission?
Hi @dtrofimo, thank you! Also, please use the latest version of the script to drop modalities (we now sort the folder before dropping modalities) to ensure the result is consistent each time.
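The fix described above (sorting the folder before dropping modalities) matters because directory listing order is not guaranteed, so an unsorted run can assign different dropped modalities to the same cases. A minimal sketch of the idea, with an illustrative function name and drop rule rather than the actual script:

```python
import os
import random

MODALITIES = ["t1c", "t1n", "t2w", "t2f"]

def drop_modalities(data_root: str, seed: int = 42) -> dict:
    """Assign one modality to drop per case, reproducibly.

    Sorting the case folders before drawing ensures the same case
    always gets the same dropped modality, regardless of the order
    in which the filesystem lists the folders."""
    rng = random.Random(seed)
    cases = sorted(os.listdir(data_root))  # sort first: order-independent
    return {case: rng.choice(MODALITIES) for case in cases}
```

Running this twice on the same folder with the same seed yields an identical case-to-modality mapping.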
@dtrofimo, thank you for your interest. We did not initially plan for a validation submission queue for the Missing MRI challenge. However, due to increasing interest, we will do our best to set one up before the 7/31 submission deadline. Stay tuned!

Validation files for Global Synthesis Task