A key message in the webinar today was that the main goal of the challenge is to produce an accurate model with potential clinical applications. With that in mind, it was highlighted that we should share our thoughts on the current training data and how it might negatively affect this goal.
We recognize that some of these limitations are part of the "challenge", but we figured we would share our thoughts and then let the challenge organizers decide what could reasonably be altered about the training data, and what will remain to keep the challenge exactly that: a challenge.
One issue that has already been pointed out is that we do not get any information about the location of the joints, which has usually been provided in comparable work. Two prominent examples from the literature would be:
- Deep Learning Approach for Evaluating Knee MR Images: Achieving High Diagnostic Performance for Cartilage Lesion Detection (https://www.ncbi.nlm.nih.gov/pubmed/30063195)
- Automatic Detection of Knee Joints and Quantification of Knee Osteoarthritis Severity using Convolutional Neural Networks (https://arxiv.org/abs/1703.09856)
While these models focus on knees, they share two important attributes: all images are in the same orientation, and the assessed joints all lie in roughly the same region of the image. While this is not necessarily achievable for the joints of the hand (due to disfigurement and dislocations - e.g. sample UAB640-LH comes to mind, or UAB640), we think it would greatly improve the potential of the data, in particular for the assessment of specific joints.
To solve this issue, it would be invaluable to have the different joints highlighted in the image, with some sort of label and a bounding box (as already mentioned in other topics), at the very least in the training data (the full set available on submission), so that we could train a classifier to recognize the labelled joints in the image before detecting and scoring them. We can always do segmentation to find joints, but without labels we will not be able to learn which exact joint we have found. We could try to create such a classifier from the training data after manually labelling it ourselves.
However, this is unlikely to be successful, considering that there are only 50 images each. It would also likely require a rheumatologist to assess where each joint "starts" and "ends", and we recognize that not everyone participating in the challenge has access to one.
Again, we don't know how feasible this is (probably not at all), but we just wanted to raise the point anyway.
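To make the idea a bit more concrete, below is a minimal sketch of what joint-level annotations would enable, using PyTorch/torchvision. Everything in it - the annotation format (one bounding box plus an integer label per joint), the number of joint classes, and the image size - is an assumption on our part and not something provided with the challenge data.

```python
# Minimal sketch of a joint detector, assuming hypothetical per-joint
# bounding boxes (x1, y1, x2, y2) and integer joint labels were available.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1 + 15  # assumed: background + 15 labelled joints per hand

def build_joint_detector(num_classes: int = NUM_CLASSES):
    # Start from a COCO-pretrained detector and swap the classification head
    # so it predicts the labelled joints instead of the COCO categories.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_joint_detector()
model.train()

# One dummy training example: a radiograph replicated to three channels and a
# single annotated joint (purely illustrative values).
images = [torch.rand(3, 512, 512)]
targets = [{
    "boxes": torch.tensor([[100.0, 120.0, 160.0, 180.0]]),
    "labels": torch.tensor([3]),
}]
loss_dict = model(images, targets)  # dict of detection losses to backpropagate
```

With only 50 patients per set this would of course need heavy augmentation and transfer learning, which is part of why organizer-provided joint annotations would be so valuable.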
We recognize that this is maybe too much of a change in terms of the challenge, so we also raise some other issues here that might be more feasible to address:
- As already mentioned, a fixed rotation of the images, so that there is less variation in how the hands are displayed (at the very least, have all images point upward instead of having some images with the fingertips at the bottom); a small sketch of such a preprocessing step follows after this list
- Some images are images within images (UAB109, UAB111, UAB039), and it would be helpful if those could be rectified to only show the actual radiograph
- To extend on the previous point, some images have considerable white artifacts around the actual radiograph (UAB245 - especially in LH, where the artifact cuts into a finger - UAB692, UAB422, UAB503)
- As already mentioned, the hand images for UAB545 contain both hands in both the LH and RH images
- Really bad contrast: UAB469
- UAB195-LH has an artifact where an additional fingertip appears in the image, disconnected from the actual hand
- UAB344-RH has some strange white noise in the form of random white circles sprinkled throughout the image
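To illustrate the orientation point from the first bullet: if the images cannot be re-exported, even a simple lookup-based preprocessing step on our side would do, provided someone identifies which radiographs are upside down. A minimal sketch follows; the image IDs and the file extension are placeholders, not actual affected samples.

```python
# Minimal sketch: rotate known upside-down radiographs into a canonical,
# fingertips-up orientation. The set of flipped IDs is hypothetical and would
# have to be compiled by visual inspection (or shared by the organizers).
from pathlib import Path
from PIL import Image

FLIPPED_IDS = {"UAB000-LH", "UAB000-RH"}  # placeholder IDs, not real samples

def normalise_orientation(image_dir: str, out_dir: str, pattern: str = "*.jpg") -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(image_dir).glob(pattern)):
        img = Image.open(path)
        if path.stem in FLIPPED_IDS:
            img = img.rotate(180)  # a 180-degree rotation keeps the image dimensions intact
        img.save(out / path.name)
```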
We think that if all images could at least be brought into one canonical orientation, that would already help a lot. Maybe some of the noisier images could be removed or cleaned up. Especially since 50*4 images is already a rather small starting point, the quality of these individual images is key.
Thank you for the webinar. We are looking forward to competing in this competition and to collaborating with the other teams, and we are very excited to see what sort of solutions everyone will come up with.
Hi @LouBridges, I think the request is to do this outlining in all of the training data, not just a sample. Is that feasible on your end?

@dongmeisun is correcting as many of the artifacts as possible. I think we can fairly easily outline where each joint is located and post that.

@allawayr @jakechen @agaldran Totally agreed - but from the perspective of high prediction performance, and with a view to clinical utility, the more irregularities we remove the better.
I also don't think that orientation would be a big issue to fix for clinical application. It is noteworthy that models in the literature often reach values short of clinical utility, despite very well-controlled datasets.
I think it's definitely fine to keep some of the issues intact - you will never find a flawless dataset.
I'm also not asking for anything to be removed/changed (except perhaps the one specific outlier with both hands), just pointing out the potential issues.
If some of them are alleviated, then fine; otherwise we will work around them.

Thanks for the detailed thoughts and comments @stadlerm and @agaldran! I agree in general that we should use data that is representative of the type of data a clinically-deployed algorithm might encounter.
I do think, however, that it might be good to fix these cases, as they are something that could reasonably be fixed before applying an algorithm in a clinical setting:
-> *Some images are images within images (UAB109, UAB111, UAB039), and it would be helpful if those could be rectified to only show the actual radiograph*
-> *As already mentioned, the hand images for UAB545 contain both hands in both the LH and RH images*
(we are already in the process of fixing this)
The other ones you mention seem to me like they are probably a reasonable type of issue to expect an algorithm to encounter.
What do @james.costello, @LouBridges, @jakechen and @dongmeisun think?
Regarding your initial question about segmentation, it's my hypothesis that one _might_ be able to train a model that is not dependent on explicit segmentation of the joints, but I have no idea how well it would work in practice. I'd be interested to hear what the other challenge organizers think about this.

Hi,
Thanks for posting such a detailed list of problems with the data! Just my two pennies on this topic. I mostly agree that there are some outliers in the data, like the two-hands radiograph, that would make it very hard to learn to score joints properly. But in my opinion, for each restriction we ask the organizers to impose on the data, we are creating a limitation on the applicability of the resulting model in the future. For instance, if we require all images to have a perfectly balanced contrast, or all hands/feet to be enclosed in a tight bounding box, then our models will "only" be able to work properly on that kind of radiograph in the future. Even more concerning is the case when one trains a deep neural network to solve the task, and predictions are spit out by the model with no explainability at all. If you happen to run such a model on a large-scale database with some outliers in it, you will never know whether those outliers were happily scored in a meaningless manner.
As I said, I am in favor of removing some heavy outliers, but not really all of them, so that we don't end up with a perfectly clean but unrealistic dataset. For example, if we want our algorithms to work on images pointing downwards after they have been trained, maybe we should try to design some kind of clever data augmentation technique that generates examples (and properly transformed labels) of images pointing up and downwards, instead of asking the organizers to remove all these "strange" images, thereby creating the requirement for future images to all be correctly aligned? Same goes for low-contrast images, imho.
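To make the augmentation idea a bit more concrete, here is a rough sketch of the kind of pipeline I have in mind, using the albumentations library. The specific transforms and probabilities, as well as the assumption that per-joint bounding boxes exist, are mine and not part of the challenge data.

```python
# Sketch of label-preserving augmentation: flips and rotations simulate
# differently oriented radiographs, brightness/contrast jitter simulates
# low-contrast images, and any (hypothetical) joint bounding boxes are
# transformed consistently with the image.
import albumentations as A

augment = A.Compose(
    [
        A.VerticalFlip(p=0.5),    # hands pointing down instead of up
        A.HorizontalFlip(p=0.5),  # mirror, mimicking the opposite hand
        A.Rotate(limit=20, p=0.5),
        A.RandomBrightnessContrast(brightness_limit=0.3, contrast_limit=0.3, p=0.5),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["joint_labels"]),
)

# Usage (image is a NumPy array, boxes are [x_min, y_min, x_max, y_max]):
# out = augment(image=image, bboxes=joint_boxes, joint_labels=joint_labels)
# out["image"], out["bboxes"], out["joint_labels"] then go into training.
```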
Thank you for the webinar, by the way - it was really interesting. And thank you also for organizing this challenge.
Regards,
Adrian