The [slides](syn11559759) and [recording](https://drive.google.com/open?id=1DKcPyUOG0aKcEiZBBq2yRMQGydcy6EZl) from this morning's webinar are now available, and the complete rankings have been posted on the challenge website.
Congratulations to the challenge winners (Balint Armin Pataki, Jennifer Schaff, team Vision) and runners-up (team Vision, Max Wang, Yuanfang Guan and Marlena Duda)!
Solveig Sieberts: I've looked at how the gold standard tables are referenced in the scoring code and at how the dyskinesia scores in that table relate to the dyskinesia scores indicated by the raw data we were provided. Neither looks out of the ordinary.
There were some peculiarities in how dyskinesia scores were obtained compared with tremor and bradykinesia (dyskinesia was scored for the limb *opposite* the side performing the task), but I'm unsure how that would affect scores.
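For concreteness, here is a minimal sketch of that kind of cross-check, including the contralateral-limb mapping. The file names and column names below are assumptions for illustration, not the actual scoring code or tables:

```python
import pandas as pd

# Hypothetical file and column names, assumed for illustration only.
gold = pd.read_csv("gold_standard_sub2.csv")   # columns: record_id, limb, dyskinesia_score
raw = pd.read_csv("raw_clinical_scores.csv")   # columns: record_id, limb, dyskinesia_score

# Dyskinesia was scored on the limb opposite the one performing the task,
# so flip the limb label on the raw side before joining.
opposite = {"left": "right", "right": "left"}
raw = raw.assign(limb=raw["limb"].map(opposite))

merged = gold.merge(raw, on=["record_id", "limb"], suffixes=("_gold", "_raw"))
mismatches = merged[merged["dyskinesia_score_gold"] != merged["dyskinesia_score_raw"]]
print(f"{len(mismatches)} of {len(merged)} records disagree")
```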
Setting that probably negligible difference aside, the scoring process was the same across all three subchallenges. I believe these results are consistent with our dry-run results, but we'll double-check.

Yuanfang Guan: I mean that, over all participants, sub2.2 is not overall shifted towards positive. But yes, it is very different from my CV, 0.3 different.
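A quick way to sanity-check the "overall shift" claim is to compare each team's score with the clinical baseline. The sketch below assumes a hypothetical leaderboard file, assumed column names, and a placeholder baseline value; the one-sample Wilcoxon test is one reasonable choice, not the challenge's official method:

```python
import pandas as pd
from scipy.stats import wilcoxon

# Hypothetical leaderboard with one row per team; file and column names are assumptions.
lb = pd.read_csv("leaderboard_sub2_2.csv")   # columns: team, score
clinical_baseline = 0.55                     # placeholder for the clinical model's score

# Positive differences indicate a team improved over the clinical model.
diffs = lb["score"] - clinical_baseline
stat, p = wilcoxon(diffs)
print(f"mean shift over clinical: {diffs.mean():+.3f} (Wilcoxon p = {p:.3g})")
```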
Solveig Sieberts: The winning models are statistically better than the Demographic model for all outcomes in Subchallenge 2 (or do you mean for your model in particular?). But we can double-check that the correct outcome was used. @phil Can you look into this?
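One common way to test whether a model is statistically better than a baseline on the same test set is a paired bootstrap over test samples. This is a sketch under assumptions: I don't know the challenge's actual scoring statistic, so AUROC is used as a stand-in metric, and the arrays of per-sample predictions are hypothetical inputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def paired_bootstrap_win_rate(y, pred_a, pred_b, n_boot=10_000, seed=0):
    """Fraction of bootstrap resamples of the test set where model A
    scores higher than model B (AUROC as a stand-in metric)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    wins = total = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)        # resample test samples with replacement
        if len(np.unique(y[idx])) < 2:     # resample lost one class; AUROC undefined
            continue
        total += 1
        wins += roc_auc_score(y[idx], pred_a[idx]) > roc_auc_score(y[idx], pred_b[idx])
    return wins / total

# y, winning_pred, demographic_pred would be per-sample test-set numpy arrays:
# print(paired_bootstrap_win_rate(y, winning_pred, demographic_pred))
```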
We have not determined when we'll release the labels. That will depend on how the community phase analyses shape up.
Yuanfang Guan: Solly, I am just reading through the results. Sub2.1 and sub2.3 are close to my CV, but sub2.2 is completely off.
Then I looked at the distribution: (1) there is no significant overall shift towards positive compared to the clinical model, which means that, on average, participants provided random features over the clinical ones; (2) sub2.1 and sub2.3 performances are correlated across teams, but sub2.2's are not. From these observations, do you think it is possible that the gold standard used was wrong, e.g., that another subchallenge's gold standard was used, or that it was curated incorrectly?
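The second observation can be checked directly from the leaderboard: if the sub2.2 gold standard were swapped or mis-curated, per-team sub2.2 scores should decorrelate from sub2.1 and sub2.3. A minimal sketch, assuming a hypothetical combined per-team score table with assumed column names:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-team scores for the three subchallenges; names are assumptions.
lb = pd.read_csv("leaderboard_sub2.csv")   # columns: team, sub2_1, sub2_2, sub2_3

# If the sub2.2 labels were correct, all three pairs should correlate similarly.
for a, b in [("sub2_1", "sub2_3"), ("sub2_1", "sub2_2"), ("sub2_3", "sub2_2")]:
    rho, p = spearmanr(lb[a], lb[b])
    print(f"{a} vs {b}: rho = {rho:+.2f} (p = {p:.3g})")
```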
Let us say nothing is found to be wrong. At what time point will the sub2 gold standard be released, so that I can check on our side?