IMPORTANT additional rules for Round 2

Hi All,
Thanks for participating in the webinar. There were some outstanding items that we (organizers) needed to make decisions about. Here they are.
(1) If a team would like multiple submissions to be considered for the final evaluation, it can submit up to 3. To do so, create 3 separate teams and submit the predictions from those 3 separate teams. We will still use the last submission of each team for the final evaluation.
(2) For Round 2, we will allow participants to directly use the organizer-provided Avocado predictions and the average-predictor predictions for the validation and test sets in any way they like. We felt this was a fair and efficient way to avoid having participants simply recreate the Avocado predictions. You are also, of course, free to retrain models using the Avocado code with any modifications.
(3) You are not allowed to use models or embeddings pretrained on data outside the challenge since this can provide an unfair advantage.
(4) We will get back to you within a week or so about whether we will use min-max normalized tracks for scoring, once we have figured out how such normalization affects the rankings. If our analysis is inconclusive, i.e., we don't see any visible benefit from the normalization, we will not use it and will stick to the exact same procedure as Round 1. If we see a clear benefit, we will switch to normalizing the observed and predicted tracks before computing the 9 performance metrics and the rankings (a sketch of such normalization follows below).
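For concreteness, here is a minimal sketch of what min-max normalizing a track before scoring could look like. This is an illustration only, not the organizers' scoring code; the use of NumPy, the function name, and MSE as the example metric are all assumptions.

```python
import numpy as np

def min_max_normalize(track: np.ndarray) -> np.ndarray:
    """Rescale a signal track to the [0, 1] range.

    Constant tracks (max == min) are mapped to all zeros so we
    never divide by zero.
    """
    lo, hi = float(track.min()), float(track.max())
    if hi == lo:
        return np.zeros_like(track, dtype=float)
    return (track - lo) / (hi - lo)

# Toy example: normalize both tracks, then compute one metric (MSE here).
observed = np.array([2.0, 8.0, 5.0, 11.0])
predicted = np.array([1.0, 9.0, 4.0, 10.0])

obs_n = min_max_normalize(observed)
pred_n = min_max_normalize(predicted)
print(float(np.mean((obs_n - pred_n) ** 2)))
```

One design detail any real implementation has to settle is how to handle constant tracks; the sketch maps them to all zeros, but the actual scoring procedure may choose differently.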
Thanks,
Organizers
Created by Anshul Kundaje (akundaje)

We'll send instructions on Monday.

Quick question: when will the Round 2 leaderboard be opened, and how do we submit the data for continuous ranking? Do we create a new folder with a specific name, or do we overwrite the previous data every time?
Thank you!