1. On this page https://www.synapse.org/#!Synapse:syn4224222/wiki/401749 it says: "Output: Two scores (SL, SR), each between 0 and 1, indicating the likelihood that the subject was tissue-diagnosed with cancer within one year from the given screening exam, in the left (L) and right (R) breast respectively."
I remember that 1 or 2 months ago it was mentioned on the forum that the evaluation is per patient. So will some type of post-processing be done on the two scores on the organizers' side to derive the evaluation score?
2. When I read through the dozens of pages in the challenge description, I found a page saying that in Sub-challenge 1 only one of the metadata files is available, but I cannot remember which one is which. I tried to find that page again, but no matter how hard I try, I cannot find it anymore. I hope I am not imagining it. Can one of the organizers confirm?
It would be good to provide a single PDF file containing all the information from all pages so we can search it. Thanks a ton.
Created by Yuanfang Guan (yuanfang.guan)

Sorry, I still have to ask since it's not clear to me.
For each subchallenge, do we make
1) 1 prediction per laterality per subject, or
2) 1 prediction per laterality per subject per exam?
An example: We have a subject with 2 exams, each exam with 3 images per side -- how many predictions would that require?
It may be helpful if you could link to a sample submission file for each challenge.
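For concreteness, here is what I imagine a per-breast submission could look like: one row per subject per laterality, with a confidence in [0, 1]. The column names (subjectId, laterality, confidence) are only my guess, not an official format, which is why a linked sample file would help.

```python
import csv

# Hypothetical per-breast predictions: one row per subject per laterality.
# Column names are only a guess at the expected submission format.
predictions = [
    ("0001", "L", 0.12),
    ("0001", "R", 0.87),
    ("0002", "L", 0.05),
    ("0002", "R", 0.03),
]

with open("predictions.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["subjectId", "laterality", "confidence"])
    writer.writerows(predictions)
```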
Thanks a lot. I guess I found it confusing because, if it is the same training data, what prevents me from using the other columns (e.g., age, sex) in the metadata?
I thought different TSVs, and different image headers (which contain all the data not allowed in SC1), would be fed into SC1.
Thanks
I think the labels should be kept in the image file, even if it is redundant, because it is so confusing to map from the patient label to the image label.

Sorry for the ambiguous response. Your initial question was regarding the scoring, so I was reasoning in that context.
The [Challenge Dictionary](https://www.synapse.org/#!Synapse:syn7214004) has three columns for the exams metadata file: training, SC1 scoring, SC2 scoring. There are no "SC1 training" or "SC2 training" because the training set is the same for both sub-challenges (see also [Training, leaderboard and test sets](https://www.synapse.org/#!Synapse:syn4224222/wiki/401743)). The cancer label then is available **for training** in both sub-challenges.
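For example, one way to propagate the per-breast training labels onto individual images is to join the crosswalk and exams metadata files on subjectId and examIndex. The snippet below is only a sketch based on the column names quoted in this thread (laterality, cancerL, cancerR); adjust it if the released files differ.

```python
import pandas as pd

# Sketch: map per-breast labels (cancerL/cancerR in the exams metadata)
# onto the individual images listed in the crosswalk file.
# Column names follow the wiki pages quoted in this thread; the "L"/"R"
# encoding of laterality is an assumption.
crosswalk = pd.read_csv("images_crosswalk.tsv", sep="\t")
exams = pd.read_csv("exams_metadata.tsv", sep="\t")

merged = crosswalk.merge(exams, on=["subjectId", "examIndex"], how="left")
merged["cancer"] = merged.apply(
    lambda row: row["cancerL"] if row["laterality"] == "L" else row["cancerR"],
    axis=1,
)
merged[["filename", "cancer"]].to_csv("image_labels.tsv", sep="\t", index=False)
```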
Please let us know if there are any other elements that you find confusing.

I think there is some inconsistency:
On page https://www.synapse.org/#!Synapse:syn4224222/wiki/401759 it says:
>images_crosswalk.tsv has the following 7 columns:
>subjectId examIndex imageIndex view laterality filename
On page https://www.synapse.org/#!Synapse:syn4224222/wiki/401750 it says:
>Available in both Sub-challenge 1 and 2, the images crosswalk file links the images uniquely d.....
>(but not meta file)
On page https://www.synapse.org/#!Synapse:syn7214004, it says:
>2016-09-26: Removed the cancer column from the images crosswalk file. Use cancerL and cancerR from the exams metadata instead.
In your reply, you said:
>It shows that the exams metadata table is not available in SC1
Then, all of the above lead to the following conclusion: we will only have images_crosswalk.tsv in SC1, and images_crosswalk.tsv does not have the cancer status. So how can we figure out which cases are cancer and which are not during training?
Thanks a bunch.
The difference between Sub-challenge 1 and 2 remains that clinical and longitudinal information is available in SC2, while only the digital mammography images are provided for a given exam in SC1.
See [Challenge Questions](https://www.synapse.org/#!Synapse:syn4224222/wiki/401749) for additional information.

> We changed our thinking because a participant who predicts that a subject will get cancer could be right for the wrong reason (that is, one could predict that the subject will get cancer because of the left breast, when it was actually the right breast).
But that was the whole point of Sub-challenge 2. Otherwise, what is the difference from Sub-challenge 1?
Thanks a ton.

Hi Yuanfang,
> I remember that 1 or 2 months ago it was mentioned on the forum that the evaluation is per patient. So will some type of post-processing be done on the two scores on the organizers' side to derive the evaluation score?
The scoring will be performed at the breast level (not at the subject level), so we don't perform any particular post-processing regarding this aspect. We changed our thinking because a participant who predicts that a subject will get cancer could be right for the wrong reason (that is, one could predict that the subject will get cancer because of the left breast, when it was actually the right breast).
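To illustrate the granularity, each (subject, breast) pair is treated as a separate case at evaluation time. The sketch below assumes the metric is ROC AUC over all breasts purely for illustration; the official metric and file formats are the ones described on the wiki.

```python
from sklearn.metrics import roc_auc_score

# Illustration of breast-level evaluation: every (subject, breast) pair is
# one case. ROC AUC is assumed here only for the example; see the wiki for
# the official scoring metric.
y_true = [0, 1, 0, 0, 1, 0]                      # gold labels, one per breast
y_score = [0.12, 0.87, 0.05, 0.03, 0.40, 0.22]   # submitted confidences
print(roc_auc_score(y_true, y_score))
```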
> I found a page saying that in Sub-challenge 1 only one of the metadata files is available
The reference for which information is available in which sub-challenge is the [Challenge Dictionary](https://www.synapse.org/#!Synapse:syn7214004). It shows that the exams metadata table is not available in SC1. The images crosswalk file is available in both sub-challenges. Please let us know if you find any contradictory information on the Wiki.
Thanks!