Dear organizers,
Now that the final results are announced, I wanted to read the winners' solutions.
From my account, only the writeup of the first-place winner for Subchallenge 1 is accessible.
When I try to access the writeup for any other solution (including the co-winner for
Subchallenge 2), I get this message:
You are not authorized to access the page requested. You can request access from
the owner.
Is it by design that only one submission is publicly accessible, or is there perhaps a problem
with my account, or am I looking in the wrong location?
This is how I try to access the writeups:
- click on Wiki for the submission
- under 4 - Leaderboards, click on 4.1.4 - Validation Round
- click on writeUp column links for the desired submissions
- similar for Subchallenge 2
Thanks for clarifying.
Created by Ljubomir Buturovic (ljubomir_buturovic)

Thank you @sam417. I didn't see the "Files" menu. Got it now.

Michelle, go to 4.3 - Final Results, click "here" under Complete results for SC1 or SC2, and look at the column "archivedWriteUp", as Yaroslav stated. In the "Files" menu you can find the actual code and sometimes the trained models.
Thanks,
Serghei

Greetings. Could @sam417 or someone else point me to how to access the winners' code? Thank you!

Thanks Yaroslav and Li for your insights. They were very helpful.

@sam417
NO. I used histogram equalization only for my entry to the competition, but not here. I trained a model on DDSM and then fine-tuned it on INbreast. I didn't use the DDSM model on the DREAM pilot.

Li, to be clear: you trained on DDSM pre-processed by histogram equalization, created the model, and then ran this model on the INbreast set without fine-tuning? And did you run your independent DDSM model on the small DREAM pilot set?
Serghei
Thanks, looking forward to taking a look, @thefaculty!

@ynikulin
I used histogram equalization in my entry; it brings the standard deviations of DDSM and DREAM closer together. However, I'm not sure whether that directly improves the AUC.
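In code, the preprocessing idea is roughly this (a minimal sketch with scikit-image, assuming 2-D grayscale mammogram arrays; not my actual pipeline):

```python
# Sketch: histogram equalization as a cross-dataset intensity normalization.
# Assumes `img` is a 2-D grayscale mammogram (e.g. 12/16-bit pixel data).
import numpy as np
from skimage import exposure

def equalize(img: np.ndarray) -> np.ndarray:
    """Flatten the intensity histogram so that scanned films (DDSM)
    and digital images (DREAM) land on a comparable scale."""
    img = img.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # rescale to [0, 1]
    return exposure.equalize_hist(img)  # float64 output in [0, 1]
```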
The good news is that, with end-to-end training, you can easily transfer a model to another dataset with even a small amount of data. I have evidence to support this claim. Hopefully I will find enough time to finish my manuscript by the end of August, and you'll see it on arXiv!
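The transfer step itself is just ordinary fine-tuning at a low learning rate; schematically, in PyTorch (illustrative names, assuming a single-logit malignancy head; not my actual training script):

```python
# Sketch: fine-tune a source-domain (e.g. DDSM-trained) classifier on a
# small target-domain set (e.g. INbreast). Assumes binary labels and a
# model that outputs one logit per image.
import torch
import torch.nn as nn

def finetune(model: nn.Module, loader, epochs: int = 5, lr: float = 1e-5):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    model.train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # small LR keeps learned features
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in loader:  # (images, 0/1 labels)
            x, y = x.to(device), y.to(device).float()
            opt.zero_grad()
            loss = loss_fn(model(x).squeeze(1), y)
            loss.backward()
            opt.step()
    return model
```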
Li

Thanks @thefaculty and @sam417,
An AUC of 59% when trained on DDSM and transferred to DREAM is a bit lower than what I would expect; I usually got around 65%, though it varied. In my opinion, error = real_error + transfer_error, where the transfer error comes from the different input distributions. I believe about 90% of the AUC should be reproducible provided that the transfer error is zero. An obvious way to achieve that is to fine-tune on a sufficient amount of data from the target distribution. For now it is unclear (to me) whether a proper normalization, possibly a complex one like histogram fitting, could reduce the transfer error to zero. Any insights or thoughts on this topic are appreciated!
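By histogram fitting I mean something along these lines (a sketch using scikit-image's match_histograms; how to pick the reference image is exactly the open question):

```python
# Sketch: remap source-domain intensities (e.g. a DDSM scan) so their
# histogram matches a target-domain reference (e.g. a DREAM mammogram).
# The choice of reference image is an assumption, not a settled recipe.
import numpy as np
from skimage.exposure import match_histograms

def fit_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Return a copy of `source` whose intensity histogram matches `reference`."""
    return match_histograms(source, reference)
```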
Best,
Yaroslav
Hi Yaroslav,
Thanks. These were digital mammograms from a single manufacturer, Hologic, "for presentation", and they looked similar to the DREAM mammograms. Actually, I first ran the DREAM pilot set of 500; it produced output for 272 images with an AUC of 0.86, which is very similar to your final results. The results on my set are probably related to the set itself, but there could be many different reasons. In any case, if I find another, bigger set, I will update you. As for using only the DDSM model, it showed an AUC of 0.59. What AUC did you get for DREAM training/testing using only DDSM?
Thanks,
Serghei

@ynikulin
In my own research, I was able to train a whole-image classifier on DDSM and easily transfer it to classify images from INbreast, with an AUC of about 0.85. Even after the first epoch, the AUC is already around 0.75. As you know, DDSM contains scanned films while INbreast contains fully digital images, so their intensity profiles are quite different. My approach is similar in spirit to your winning strategy, but I wasn't able to implement it during the competitive phase.
I'm preparing a manuscript describing my findings and will upload it to arXiv soon. Stay tuned!
Li

Hello @sam417,
Thanks for your feedback, I appreciate it! Could you please tell us a bit more about your dataset? Specifically, is it originally digital mammography or scanned film? In the latter case, it would be really interesting to know what results the net trained only on DDSM would give. Also, could you post an example image somewhere? Does it visually look like DREAM data or DDSM data?
Waiting for details,
Thanks again,
Yaroslav
Hi,
Thanks. There is access to the files and write-ups. I would like to thank the top team of Yaroslav Nikulin for sharing their code and best models. I ran their test script with the 3500, 4750, 3750, and 5000 models on our malignant/non-malignant lesion set annotated by radiologists. It contains CC and MLO views of 76 cancer and 252 non-cancer patients. I got the following results:
AUC = 0.76 (0.69:0.82), odds ratio (per SD) = 2.66 (2:3.6). The results are encouraging, although less powerful.
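For reference, I computed the metrics along these lines (a scikit-learn sketch; the bootstrap interval and the per-SD logistic-regression odds ratio are my assumptions about the details, and the names are illustrative):

```python
# Sketch: AUC with a bootstrap percentile interval, plus an odds ratio
# per one standard deviation of the model score (via logistic regression).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def evaluate(scores: np.ndarray, labels: np.ndarray, n_boot: int = 2000, seed: int = 0):
    auc = roc_auc_score(labels, scores)
    rng = np.random.default_rng(seed)
    boot = []
    n = len(labels)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        if labels[idx].min() == labels[idx].max():
            continue  # AUC needs both classes present
        boot.append(roc_auc_score(labels[idx], scores[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    z = (scores - scores.mean()) / scores.std()  # score in SD units
    lr = LogisticRegression().fit(z.reshape(-1, 1), labels)
    odds_ratio = float(np.exp(lr.coef_[0, 0]))  # odds ratio per 1 SD
    return auc, (lo, hi), odds_ratio
```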
However, when I tried to run the Dezso Ribli and DeepHealth code to compare with the above model, I did not find their final trained models. Does that mean these groups chose not to disclose their models, or am I looking in the wrong place?
Thanks,
Serghei
Thank you Yaroslav, that works. It didn't occur to me to click on that column.
Even if a writeup is not public, you can access an archived copy (I guess from the moment of first submission); the links are in the same table, in the column "archivedWriteUp".

Same here. Can we read the writeups? Or is it up to the owner to share them with the public? Thanks!