In order to facilitate improvements to participants' models while guarding against potential overfitting, we have increased the number of valid submissions allowed and changed how scores are reported. Teams will now have a quota of five valid submissions in the validation phase for each sub-challenge (previously it was two). However, instead of receiving their exact scores, participants will receive an estimate of their score, computed as the mean score across 10 bootstrap samples drawn from the CheckMate-026 validation dataset.

Please see the section labeled "Assessment" in the ["Challenge"](https://www.synapse.org/#!Synapse:syn18404605/wiki/607226) sub-wiki for more details about the score estimates returned to participants. That section also includes score distributions from 5,000 random models, which can give a sense of where a model's performance lies.

Kind regards,
Mike
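For intuition, here is a minimal sketch of how a bootstrap-mean score estimate of this kind can be computed. The function name, metric, and toy data below are illustrative assumptions, not the Challenge's actual scoring code.

```
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_score_estimate(y_true, y_pred, score_fn, n_boot=10):
    """Estimate a score as the mean over bootstrap resamples.

    Illustrative sketch only; the Challenge's actual scoring
    harness, metric, and resampling details may differ.
    """
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample indices with replacement
        scores.append(score_fn(y_true[idx], y_pred[idx]))
    return float(np.mean(scores))

# Example usage with a toy metric (Pearson correlation):
y_true = rng.normal(size=50)
y_pred = y_true + rng.normal(scale=0.5, size=50)
print(bootstrap_score_estimate(
    y_true, y_pred, lambda a, b: np.corrcoef(a, b)[0, 1]))
```

Because the estimate averages over resamples rather than revealing the exact validation score, it gives useful feedback while making it harder to overfit to the validation set.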

Created by Mike Mason (@Michael.Mason)
@adeoluokiki -- Your error indicates that `dream_subchallenge1.py` is not found when running the Docker container. I would advise going back to your Dockerfile to ensure that 1) the script is copied into the image; and 2) the command running the script calls it with the correct path. For example, if `dream_subchallenge1.py` is copied to `/usr/local/bin/`, then the command should be:

```
python /usr/local/bin/dream_subchallenge1.py
```

Also, as your question is unrelated to the topic of this thread, please create a new thread in the future. This will help keep the Discussion Forums organized for yourself and others. Thank you!
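For reference, a minimal Dockerfile consistent with the above advice might look like the sketch below; the base image and Python version are illustrative assumptions, so adapt the paths and image to your own setup.

```
# Illustrative sketch -- base image and paths are assumptions
FROM python:3.8-slim

# 1) Copy the script into the image
COPY dream_subchallenge1.py /usr/local/bin/dream_subchallenge1.py

# 2) Call the script with the path it was copied to
CMD ["python", "/usr/local/bin/dream_subchallenge1.py"]
```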
Dear @kaufmajm, Thanks for reaching out regarding the baseline model performances we made available. Our motivation for providing these examples was to facilitate model debugging, not feature selection. As you note, these models reflect the performances seen in the Carbone *et al.* paper. If a participant's write-up makes a compelling case for investigating a particular additional feature, it may be included in the post-Challenge phase analysis on a case-by-case basis.

Kind regards,
Mike
My Docker image runs successfully on my local machine, but when I submitted to the challenge, I received an invalid report with: "python: can't open file 'dream_subchallenge1.py': [Errno 2] No such file or directory". Kindly help.
Thanks @vchung. Issue resolved.
Dear @Michael.Mason, When building the models, I don't think we were made aware that the performance of the baseline models was going to be made available. Since the scores were published on the site after a few days, this seems to penalize participants who submitted their models early. It looks like TMB scores highly, but PD-L1 and the TIDE model were not discriminatory, consistent with the conclusions of the CM026 NEJM paper (Carbone et al.). My question is this: I built a 'baseline model' that simplistically combined TMB, PD-L1, and the Szabo inflammation score, and this model was non-discriminatory. However, the submission model added a hypothesis-driven variable that seemed to improve the prediction. Knowing now what the baseline models look like, I would add TMB + additional variables. Is there a way to address this and see whether TMB + additional variables improves the model compared to TMB alone? Or, if not, could this presumably be addressed after March 9 during the consortium analysis and write-up phase? Best, Jacob Kaufman
Hi @adeoluokiki, To best assist you, can you share what steps/commands you took leading up to the push? For example, did you log in to the Synapse Docker registry beforehand with `docker login docker.synapse.org`? Let us know! Verena
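For reference, the full push sequence typically looks something like the sketch below; `syn12345678` and `my-model` are placeholders for your own Synapse project ID and repository name, not actual values from this thread.

```
# Illustrative sequence -- project ID and repo name are placeholders
docker login docker.synapse.org
docker build -t docker.synapse.org/syn12345678/my-model:v1 .
docker push docker.synapse.org/syn12345678/my-model:v1
```

An "access denied" error at the push step usually means the login was skipped or the image tag does not point at a project you have write access to.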
I tried to push a Docker image to the Synapse repository, but my access was denied. Kindly help.
@Eimanahmed and @andreabc -- Apologies for the late response. Invalid submissions will _not_ count towards the submission limit. In other words, you are allowed up to **5 total valid submissions** for this phase.
Following up on this.
Dear @Michael.Mason, Will the five submissions be counted based on the number of valid submissions only? That is, if we submit an invalid model, will we lose one attempt? Thank you, Eiman Ahmed
