Dear RA2 DREAM Challenge Participants,

We would like to announce that the submission deadline for the final scoring round has been extended to June 30, 2020 at 5pm Pacific Time. This means that the required method write-ups will be due on July 14, 2020 (please use the [method writeup template](https://www.synapse.org/#!Synapse:syn20545111/wiki/603038) as a guide) and the final results will be announced on or before August 30, 2020.

We greatly appreciate the feedback that we received from the community. Opinions differed on whether to keep or extend the deadline, but it is clear that many of your RA2 DREAM Challenge colleagues are in unique situations around the world. We always try to make the challenges as fair to everyone as possible. Several teams with extenuating circumstances would have been allowed to submit after the previous deadline, but we felt it would be most fair to extend this opportunity to everyone. The entire world has been affected by the SARS-CoV-2 virus, and most recently, the United States is facing some harsh realities of the imbalance in how we treat our own citizens. Thus, it is a very unique year, and we felt it was only fair to let you know of the deadline extension as soon as we made this decision, not the day before or the day of the deadline.

Please note that this is the final deadline, and we will not accept any submissions after June 30, 2020 for any reason.

We greatly appreciate your continued support of the challenge and we wish you luck with your final submission. As always, if you have questions/concerns, please post your comments to this thread.

Kind Regards,
The RA2 DREAM Challenge Organizers

Created by James Costello (@james.costello)
@allawayr Thank you, we appreciate it
Hi @stadlerm, We'll bring this request to the organizing committee - at the end of the day, this data is UAB's to share, so it is their call. Thanks! Robert
@allawayr Do you think it will be possible to publish the test images and the corresponding scores? At least the ones used during the leaderboard phase? This would be helpful to be able to perform some additional evaluation, aside from the challenge RMSE score, for our own write-ups and publications (after the embargo)
Hi @patbaa, Thanks for the question, and apologies for my delay. We discussed this internally and will send out an email to each team with their final round scores. We're not prepared to release overall ranks/results yet because, as you noted, they need to be validated and we need to assess ties. Best, Robert
Dear @allawayr @james.costello , Now that the final deadline is over, I would like to ask whether it is possible to share the initial raw leaderboard scores and positions on the test set with the competitors? I understand that these results are not the final ones: they must be filtered, close submissions must either be declared ties or evaluated to decide which one is significantly better, and you also want to verify the methods/code. For me, seeing the leaderboard and getting quick feedback on our work (did the score improve? how does it compare to the others?) is one of the most fun parts of these competitions, and I think I am not alone in this. For the RA2 challenge, we have not had that feedback since mid-May, and it would be nice not to have to wait until early September. Thank you, Balint
Hi @yanmingtan, Looking at your CWL logs (in the zip file), it seems your container runs successfully on the leaderboard data but not on the final data, which makes sense, as the fast lane also uses only the leaderboard data. Here's one possible cause of this error: how are you constructing the prediction.csv file in your container? The final dataset covers a different set of patients than the leaderboard round, so your code needs to read in the template that we provide at runtime. Here's an example of how you can do it in R (it would be similar in Python, of course): https://github.com/allaway/ra2-docker-demo/blob/master/model.R
```
library(readr)  # provides read_csv()
template <- read_csv('/test/template.csv')
```
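For reference, a minimal Python sketch of the same idea: read the runtime template so predictions cover exactly the patients in the current evaluation set, fill in the score columns, and write the prediction file. This is an illustration only; `Patient_ID` and the constant placeholder predictions are assumptions, and the real template's column names and mounted paths may differ.

```python
import pandas as pd

def fill_template(template: pd.DataFrame, value: float = 0.0) -> pd.DataFrame:
    """Return a copy of the template with every non-ID column filled.

    'Patient_ID' is a hypothetical ID column name; replace `value` with
    real model predictions in an actual submission container.
    """
    out = template.copy()
    score_cols = [c for c in out.columns if c != "Patient_ID"]
    out[score_cols] = value
    return out

# Inside the container, something along these lines would run at scoring time:
# template = pd.read_csv("/test/template.csv")
# fill_template(template).to_csv("/output/predictions.csv", index=False)
```

Reading the template at runtime (rather than hard-coding patient IDs from the leaderboard round) is what keeps the container working when the organizers swap in a different patient set for the final round.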
@james.costello @allawayr @raphael_quek Hi, Sorry for the last-minute question about the submission, but we are having problems submitting our Docker containers to the final challenge queue, even though they work in the fast lane. Our latest submission ID is 9705605. Could you please kindly assist? PS: The log (shown below) doesn't report any error, so we were quite lost.
```
INFO:  Could not find any nv binaries on this host!
2020-06-30 09:56:12.033987: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /.singularity.d/libs
2020-06-30 09:56:12.035135: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: UNKNOWN ERROR (303)
2020-06-30 09:56:12.035188: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: c0110
2020-06-30 09:56:12.035205: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: c0110
2020-06-30 09:56:12.035265: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program
2020-06-30 09:56:12.035319: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 418.87.1
2020-06-30 09:56:12.035616: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-06-30 09:56:12.045364: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2399765000 Hz
2020-06-30 09:56:12.045826: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2aac10000b60 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-30 09:56:12.045859: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Using TensorFlow backend.
/RA2_script.py:464: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  pips['name'] = ['pip_1']
/usr/local/lib/python3.8/dist-packages/pandas/core/indexing.py:966: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  self.obj[item] = s
```

Final Submission Deadline June 30, 2020, 5pm PT