Closure and algorithms getting old

Considering that the Challenge ended almost 1.5 years ago, it feels strange to compare algorithms that are likely already out of date. We all know how difficult it is to get a perfect gold standard, but every algorithm is treated equally even with an imperfect gold standard (i.e., the imperfection is unlikely to favour certain algorithms). So here are my suggestions: the organizers (i) set a reasonable date in the near future as closure, regardless of completion of the gold standard; and (ii) give the challengers a certain period to improve and re-submit their algorithms if they so wish.

Yudi

Created by Yudi Pawitan (yudpaw)
Thank you for your comments. First, we want to apologize for how long the evaluation has taken. We have continued to work on the benchmark and are close to making announcements about the results.

Over the past year we've worked with multiple labs to try to secure long-read sequencing that could be used to verify novel fusion calls made by participants. However, every attempt has failed. Because of this, we have had to refine our evaluation methods and how they handle the possibility that novel calls are either genuine events or false positives.

You are correct that the longer this work continues, the older the methods become. A challenge is a snapshot of a moment in time, but the benchmark used can live beyond the challenge and be applied to new methods. For this reason we have included a "community challenge" in the evaluation: we are continuing to allow submissions to be added to the benchmarking. And while they may not be part of the "official challenge", any new methods that we receive will be included in the paper that we are working on.

Kyle