Dear Moderators, According to the wiki: "A subset of the validation data is randomly sampled and set aside for these leaderboard rounds. Each individual round is made of a sample with replacement from this set-aside subset. Leaderboard rounds give participants an idea of their score and allow them to adjust their models for improved accuracy. Final submissions are then scored on the remainder of the samples in an effort to avoid overfitting." So, the final round subset will not include any patient that was previously predicted in the other 2 rounds, correct? Or will it contain all/some of the patients from the other 2 rounds plus a few more that we haven't predicted yet? Thanks for your attention. Best Regards

Created by Ruben Rodrigues rrodrigues
Dear Ruben, There should be roughly a 25% / 75% split between the leaderboard rounds and the final validation set for questions 1 and 2, and there should not be any overlap between those two groups. Question 3 has a different structure for its rounds: the internal validation rounds use 15%, 30%, and 100% of the data, so there is overlap across all three rounds. Kind Regards, Mike
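To make the two splitting schemes concrete, here is a minimal sketch in Python. The patient IDs, pool sizes, and the nested-prefix construction used for question 3's overlapping rounds are illustrative assumptions, not the organizers' actual sampling code.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

# Hypothetical patient IDs standing in for the validation cohort.
patients = np.array([f"patient_{i:03d}" for i in range(200)])

# Questions 1 and 2: roughly 25% of the validation data is set aside for the
# leaderboard rounds; the remaining ~75% is held out for final scoring,
# with no overlap between the two groups.
shuffled = rng.permutation(patients)
n_leaderboard = int(0.25 * len(shuffled))
leaderboard_pool = shuffled[:n_leaderboard]
final_pool = shuffled[n_leaderboard:]

# Each leaderboard round is a sample with replacement from the set-aside
# pool, so the same patient can reappear across rounds 1 and 2.
round_1 = rng.choice(leaderboard_pool, size=len(leaderboard_pool), replace=True)
round_2 = rng.choice(leaderboard_pool, size=len(leaderboard_pool), replace=True)

# Question 3 (assumed nested construction): internal validation rounds of
# 15%, 30%, and 100% of the data, which overlap by construction.
q3_shuffled = rng.permutation(patients)
q3_round_1 = q3_shuffled[: int(0.15 * len(q3_shuffled))]
q3_round_2 = q3_shuffled[: int(0.30 * len(q3_shuffled))]
q3_round_3 = q3_shuffled  # 100% of the data

assert set(leaderboard_pool).isdisjoint(final_pool)            # Q1/Q2: no overlap
assert set(q3_round_1) <= set(q3_round_2) <= set(q3_round_3)   # Q3: overlapping rounds
```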
