Hi all
There was a relevant paper presented at the European Congress of Radiology 2017. They trained deep neural nets on a much smaller dataset than ours and obtained AUCs of 0.82. Granted, they could see the images, and their evaluation was not blinded. Still, it is a good reference point. We are doing a little better in our Challenge, with a best AUROC of 0.86.
Here is an article reporting on this:
http://www.auntminnieeurope.com/index.aspx?sec=rca&sub=ecr_2017&pag=dis&ItemID=614155
Created by Gustavo A Stolovitzky (@gustavo)
Please remind people that they can also use external private/public mammography datasets to train the models.
Actually, we still don't know whether the results you are referring to were obtained by training on additional (non-blinded) external public datasets.
I would also argue that the leaderboard results can be over-optimistic, given that participants can tune their parameters across the 3 * 3 submissions; a rough sketch of the effect is below.
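To make the multiple-submission point concrete, here is a quick, hedged simulation in Python. The sample size, number of trials, and score model (a simple Gaussian score with true AUROC around 0.82) are my own illustrative assumptions, not the Challenge's actual scoring setup.

```python
# Toy simulation (illustrative assumptions, not the Challenge's actual scoring):
# nine submissions with identical true quality, each scored on the same finite
# leaderboard set, so the observed AUROCs differ only by evaluation noise.
# Reporting the best of the nine overstates the single-model AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_cases, n_submissions, n_trials = 500, 9, 200

single, best_of_n = [], []
for _ in range(n_trials):
    y = rng.integers(0, 2, size=n_cases)
    aucs = [
        roc_auc_score(y, y * 1.3 + rng.normal(0, 1, n_cases))  # true AUROC ~0.82
        for _ in range(n_submissions)
    ]
    single.append(aucs[0])       # one submission, evaluated once
    best_of_n.append(max(aucs))  # best of nine on the same leaderboard set

print(f"single-submission AUROC: {np.mean(single):.3f}")
print(f"best-of-{n_submissions} AUROC: {np.mean(best_of_n):.3f}")
```

The gap between the two printed numbers is the selection-bias optimism; it shrinks as the leaderboard set grows, which is why the final blinded evaluation matters.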
Thank you. Here is the PubMed link for the abstract of the actual paper.
https://www.ncbi.nlm.nih.gov/pubmed/28212138
Thanks. It seems they can accurately detect only some specific kinds of lesions, pre-tagged by physicians, rather than any kind of cancer.