Hi,
Thank you for organizing this challenge. It sounds interesting, but I have one question/doubt.
In the challenge description, it says that this is a domain adaptation task "from simulation to surgery", and you have provided labels for the real data as well (though the number of patients is smaller than the 11 simulations). My question is: wouldn't having labels for both the real and simulated data make this challenge easier? Many participants could simply combine both datasets (source and target) and train a standard landmark detector. Or is this problem more challenging than I assume? Are there any baseline or inter-observer results for this study?
I was also thinking that a pure unsupervised domain adaptation task, adapting from the simulation domain to the real domain, would make for a more interesting challenge.
Best,
Sulaiman Vesal (@svesal)

Dear @svesal,
thank you very much for your interest in the challenge, and many apologies for our late reply. We hope it's not too late to convince you that this is indeed a very challenging task to solve, for several reasons:
- Data from surgical training and from real intraoperative scenes are significantly different. We are not sure whether a simple dataset fusion strategy would actually lead to good results (see the sketch after this list). Keep in mind that the test set only consists of real intraoperative scenes, so some kind of image translation of the simulated data might be helpful!
- Endoscopic data from surgeries are much more heterogeneous and less standardized than other medical image datasets (artefacts, lighting conditions, viewing angles, blood, occlusion, ...), which makes the point detection itself very challenging!
- We are finalizing a submission to a journal that uses these datasets; we plan to put it on arXiv as soon as possible for your reference.
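Just to make the terminology concrete (this is not the challenge baseline or our method): below is a minimal PyTorch-style sketch of the naive source+target fusion idea you describe, with a hook where a sim-to-real image translation step could be plugged in. All dataset classes, sizes, image shapes, landmark counts, and the tiny model are placeholders, not the actual AdaptOR data or pipeline.

```python
# Hypothetical sketch of naive source+target fusion for heatmap-based landmark
# detection. Everything here (shapes, counts, model) is a placeholder.
import torch
from torch import nn
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class FrameDataset(Dataset):
    """Stand-in for simulated or real intraoperative frames (random tensors here)."""
    def __init__(self, n, translate=None):
        self.n, self.translate = n, translate
    def __len__(self):
        return self.n
    def __getitem__(self, i):
        img = torch.rand(3, 128, 128)       # placeholder endoscopic frame
        heatmaps = torch.rand(4, 128, 128)  # placeholder: one heatmap per landmark
        if self.translate is not None:      # optional sim-to-real translation hook
            img = self.translate(img)
        return img, heatmaps

sim = FrameDataset(100)   # labelled simulated training data (placeholder size)
real = FrameDataset(20)   # labelled real data, fewer patients (placeholder size)
loader = DataLoader(ConcatDataset([sim, real]), batch_size=8, shuffle=True)

# Tiny fully-convolutional detector; in practice a U-Net-style model would be used.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 4, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One epoch of joint training on the fused dataset.
for img, hm in loader:
    opt.zero_grad()
    loss = loss_fn(model(img), hm)
    loss.backward()
    opt.step()
```

The open question is whether such direct fusion transfers to the purely intraoperative test set, or whether a translation/adaptation step (passed in via the `translate` hook above) is needed to bridge the domain gap.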
We are happy to answer more questions.
Kind regards,
The AdaptOR team