There have been questions raised about image segmentation and the use of outside data. For the challenge, we will not be supplying images with bounding boxes around the joints necessary for scoring erosion or joint space narrowing. The Challenge requires that teams identify the joints for scoring purposes as part of the overall submitted method. YOLO is an object detection algorithm that has been successfully applied to identify joints in radiographs. A nice introduction to YOLO, along with primary literature references, can be found [here](https://towardsdatascience.com/an-introduction-to-implementing-the-yolo-algorithm-for-multi-object-detection-in-images-99cf240539). You do not have to use YOLO; it is just being provided as an example. Please also refer to the data dictionary and example images in the [Data section](#!Synapse:syn20545111/wiki/597243) for details on the joints themselves.
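For concreteness, below is a minimal, hypothetical sketch of fine-tuning an off-the-shelf object detector to localize joints. It uses torchvision's Faster R-CNN purely as a stand-in for a YOLO-style detector; the class count, box coordinates, and single synthetic training sample are placeholder assumptions, not challenge data or a required approach.

```python
# Minimal sketch of fine-tuning an off-the-shelf detector to localize joints.
# torchvision's Faster R-CNN stands in for a YOLO-style detector; the class
# count, box coordinates, and synthetic sample are placeholders, not challenge data.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1 + 16  # background + an assumed number of scored joint types

# Start from COCO-pretrained weights and swap in a joint-classification head.
# (Older torchvision versions use pretrained=True instead of weights="DEFAULT".)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# One synthetic "radiograph" with one hand-annotated joint box, standing in
# for a real loader over team-annotated training images.
image = torch.rand(3, 512, 512)
target = {"boxes": torch.tensor([[100.0, 120.0, 160.0, 180.0]]),
          "labels": torch.tensor([1])}

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
loss_dict = model([image], [target])  # detection losses, one per head
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At inference time the fine-tuned detector returns per-joint boxes and scores.
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])[0]  # keys: boxes, labels, scores
```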
Another question has been raised about using outside datasets for transfer learning. This is allowed, but any images used beyond those provided through the challenge must be publicly accessible. A method cannot be built using data accessible only to that team.
Hi Jim,
It's not clear. The [challenge statement](https://www.synapse.org/#!Synapse:syn20545111/wiki/597242) is:
1. Single-shot CombinedScore(joint space narrowing, joint erosion)
2. Joint space narrowing
3. Joint erosion
As training data, we are given images with narrowing and erosion scores indexed by joint label.
We are on our own to figure out how to locate the bounding polygons of the labelled joints, so that we can extract the joint images and train a neural net to classify narrowing and another one to classify erosion. This is because you have not provided those bounding polygons.
What's not clear is this: what exactly do you expect the Dockerized program to learn? There are at least three forms of training, and you have provided data for two of the three:
* Labelled joint bounding polygon
* Given a bounding polygon for a joint, joint space narrowing
* Given a bounding polygon for a joint, joint erosion
You have given us enough information for the scoring engine to ask a Docker image to train for narrowing and erosion. You have not given us enough information to train a neural net to produce a joint bounding polygon. I see three neural nets, separately trained, in this application, as sketched below. To train the bounding-polygon net, we need to hand-label all the training data.
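To make the three-model structure concrete, here is a rough sketch; the class and method names are illustrative only, not code from either side of the challenge.

```python
# Rough sketch of the three separately trained pieces; names are illustrative only.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class JointBox:
    label: str  # e.g. a hypothetical joint identifier such as "LH_mcp_2"
    x0: float
    y0: float
    x1: float
    y1: float

class JointLocalizer:
    """Net 1: needs hand-labelled bounding polygons/boxes, which are NOT supplied."""
    def detect(self, radiograph) -> List[JointBox]: ...

class NarrowingScorer:
    """Net 2: trainable from the supplied per-joint narrowing scores."""
    def score(self, joint_crop) -> float: ...

class ErosionScorer:
    """Net 3: trainable from the supplied per-joint erosion scores."""
    def score(self, joint_crop) -> float: ...

def score_radiograph(radiograph, localizer: JointLocalizer,
                     narrowing: NarrowingScorer, erosion: ErosionScorer
                     ) -> Dict[str, Tuple[float, float]]:
    """Localize each joint, crop it, and score narrowing and erosion per joint."""
    scores = {}
    for box in localizer.detect(radiograph):
        crop = radiograph[..., int(box.y0):int(box.y1), int(box.x0):int(box.x1)]
        scores[box.label] = (narrowing.score(crop), erosion.score(crop))
    return scores
```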
May solvers assume:
1. Hand-annotating bounding polygons for every joint of every training image is a necessary, but non-deliverable, work item for each team
2. You will not expect us to deliver the ability to train the bounding polygon localization neural net inside the Docker image
Hi Lars,
Thanks for the question. I was not very clear in my statement. The goal is to have a self-contained package where a new set of images from a rheumatologist can be taken as input and the SvH scores, along with joint erosion and joint space narrowing scores, are produced as output. That is the ultimate goal, but as you can see we split apart the overall SvH score (subchallenge 1), joint space narrowing (subchallenge 2), and erosion (subchallenge 3). For subchallenge 1, image segmentation is not required, and it is reasonable to assume teams can build a predictor without segmentation. For subchallenges 2 and 3 it will be required.
To be clear about the segmentation part: you can pre-train a segmentation algorithm and include it in the submitted container. We simply require that the method take in the 378 images on the Cheaha systems, segment those images, learn, and score. The method should obviously not be hard-coded to know which image is which; rather, you can use the training data and any PUBLIC outside dataset to train your segmentation algorithm. Does this make sense? If not, I'm happy to try and address any other confusion.
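As a rough illustration only (not an official spec), the flow inside the container might look something like the sketch below; all paths, file patterns, and helper functions are placeholders, not the challenge's actual I/O contract.

```python
# Rough illustration of a container entry point: pre-trained segmentation
# weights shipped inside the image, radiographs read from a mounted input
# directory, joints segmented, per-joint scores written out. Everything
# below is a placeholder assumption, not an official interface.
import csv
import pathlib

INPUT_DIR = pathlib.Path("/test")                     # assumed mount for the test images
TRAIN_DIR = pathlib.Path("/train")                    # assumed mount for training data
OUTPUT_CSV = pathlib.Path("/output/predictions.csv")  # assumed output location
BUNDLED_WEIGHTS = pathlib.Path("/model/segmenter.pt") # pre-trained, shipped in the image

def load_segmenter(weights_path):
    """Placeholder: load the pre-trained joint segmentation/detection model."""
    raise NotImplementedError

def train_scorers(train_dir):
    """Placeholder: fit narrowing/erosion scorers from the supplied per-joint scores."""
    raise NotImplementedError

def segment_joints(segmenter, image_path):
    """Placeholder: return {joint_label: cropped joint image}."""
    raise NotImplementedError

def predict_scores(joint_crops, scorers):
    """Placeholder: return {joint_label: (narrowing_score, erosion_score)}."""
    raise NotImplementedError

def main():
    segmenter = load_segmenter(BUNDLED_WEIGHTS)
    scorers = train_scorers(TRAIN_DIR)
    OUTPUT_CSV.parent.mkdir(parents=True, exist_ok=True)
    with OUTPUT_CSV.open("w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["image_id", "joint", "narrowing", "erosion"])
        for image_path in sorted(INPUT_DIR.glob("*.jpg")):
            crops = segment_joints(segmenter, image_path)
            for joint, (jsn, ero) in predict_scores(crops, scorers).items():
                writer.writerow([image_path.stem, joint, jsn, ero])

if __name__ == "__main__":
    main()
```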
Cheers,
Jim
The challenge requires solvers to submit a Docker image that is capable of self-training for joint localization (and then damage classification with supervised learning, for which ground-truth data is supplied for the training cases). The data supplied for training inside the Docker container could be different from the data supplied for our training prior to submitting the Docker image. If that is the case, then solvers must necessarily develop an unsupervised-learning or non-neural-network method for joint localization.
Unsupervised learning is at the leading edge of current AI research. By making this the key ask, the sponsors are changing the challenge from applying currently available machine learning methods to rheumatoid arthritis into the more general problem of unsupervised learning.
Sponsors are asking us to do work similar to
* [Unsupervised Segmentation of 3D Medical Images Based on Clustering and Deep Representation Learning](https://arxiv.org/abs/1804.03830),
rather than doing work similar to
* [Ossification area localization in pediatric hand radiographs using deep neural networks for object detection](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0207496)
Do I understand the ask correctly? Do sponsors know the difference between supervised and unsupervised learning?