I'm digging in now on RA2 for the new decade. I just noticed that the image dimensions in the training set are not standardized and run the gamut from 306x255 to 3765x2251, with no significantly more information in the larger images than in the smaller ones. Images can also be either single-channel or 3-channel. For example, UAB649-RF is 306x255x3 with a background color of 0,0,0, and UAB366-LH is 3765x2251x1 with a background color of 3:
${imageLink?synapseId=syn21537418&align=None&scale=100&responsive=true&altText=Non%2Dstandardized image formats}
I will normalize these as part of my process. However, I don't think this kind of normalization is the hard part of the challenge. It might clarify the challenge if the image set were normalized to, say, 256x256x1 with a background color of 0, like this:
${imageLink?synapseId=syn21479253&align=None&scale=100&responsive=true&altText=}
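For anyone wanting to do the same normalization themselves, here is a minimal sketch of the idea, assuming Pillow and NumPy; the function name and the background threshold are my own choices, not anything from the challenge organizers:

```python
import numpy as np
from PIL import Image

def normalize_film(source, size=(256, 256), bg_threshold=4):
    """Collapse a radiograph to a single channel, resize it, and zero the background.

    source can be a file path or file-like object. bg_threshold clamps
    near-black pixels (e.g. the background value of 3 in UAB366-LH) to 0.
    """
    img = Image.open(source).convert("L")      # force single channel regardless of source format
    img = img.resize(size, Image.LANCZOS)      # downsample; the larger films carry little extra detail
    arr = np.asarray(img, dtype=np.uint8).copy()
    arr[arr <= bg_threshold] = 0               # standardize the background color to 0
    return arr
```

Note that LANCZOS resampling ignores aspect ratio here; padding to square before resizing would avoid distortion, at the cost of a bit more code.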
Given that a single case always involves 4 films (LH, RH, LF, RF), once they are standardized it also makes sense to stack the 4 films, like this:
${imageLink?synapseId=syn21479254&align=None&scale=100&responsive=true&altText=Image stack LH RH LF RF}
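The stacking step is trivial once the films share a common shape; a sketch with NumPy, where the LH/RH/LF/RF ordering simply follows the figure above and the function name is my own:

```python
import numpy as np

def stack_case(lh, rh, lf, rf):
    """Stack the four normalized films of one case into a (4, H, W) array.

    Channel order is LH, RH, LF, RF. All inputs must already be
    normalized to the same single-channel size (e.g. 256x256).
    """
    films = [np.asarray(f) for f in (lh, rh, lf, rf)]
    if any(f.shape != films[0].shape for f in films):
        raise ValueError("films must be normalized to a common size before stacking")
    return np.stack(films, axis=0)
```

The resulting (4, 256, 256) array then feeds directly into a channels-first model input.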