Inquiry About Test Dataset

I'm interested in this challenge, but at this point I'm too late to submit my results. May I ask when the dataset will be released? If I want to use these datasets, do I have to wait until the challenge is over and the joint report is published?

Created by Letian Gao (@Gaullego)
@Gaullego, for clarification: the publicly available split that we use for validation can be used at any time for any publication (see https://ieee-dataport.org/open-access/stir-surgical-tattoos-infrared). In our response, we are speaking about an additional test split that we have withheld and are using to score the challenge/shared-publication submissions.
Hi Valay,

We think this could be helpful. In terms of post-deadline submissions for the final challenge publication, I can confidently say that you should be fine anytime before November; for later dates, we will have to deliberate. In terms of publishing/updating evaluations on the test set, we will have to discuss, but it is on our minds now. We will get back to you after the challenge day in mid-October with more specific details. Let me know if there are other questions in the meantime, though.

Thanks,
Adam
Hi Adam,

We recently started looking into point tracking in surgical videos, so it might take some time before we submit our first run. If you plan to allow post-deadline submissions, until when do you think submissions can be added to the final report? Also, would it be possible to allow evaluation of methods on this platform until the joint report (and the test set) is released, so that we can keep experimenting on the dataset? Please let me know.

Thanks and regards,
Valay
Firstly: we have extended the deadline to September 16th; even if you don't have your best method yet, it could be useful to submit something. :)

To answer your question: we do not currently plan to release the test portion of the dataset before the joint report is released. Your question brings up a useful point, though. In prior years, some challenges have addressed this by allowing post-deadline submissions that are evaluated after the challenge day and included in the final report. Would something like this be helpful? We will talk and find the best way to let methods be evaluated after the release.

Thanks,
Adam
