Hello, I know you tried your best, and thank you for that, but I have a hard time seeing the relevance of some cases that should have received a corrected review by a physician or an expert in the field, e.g. BraTS 1616, BraTS 1636, BraTS 1147, BraTS 810, BraTS 1104, and probably others. These cases seem to have been produced by your merge of the different deep-learning algorithms described in your paper, but it seems the review process has not been done. I also suspect some cases in the validation set, since you took the 2020 validation set and roughly doubled it with your new methods. I guess it will be the same for the test set; if so, the final ranking could reflect something other than good generalizability of the model... Thanks for clarifying. Best regards,

Created by Alxaline
Hi Alex, Apologies for not addressing all the cases you mentioned. We will be looking at the complete cohort for any such cases. In the interest of time, I would suggest excluding these cases, and be assured that there are no such cases in the validation and testing cohorts. Thank you for your consideration.
Hello @ujjwalbaid I think in some cases you didn't scroll through the slices completely when checking. Here are two examples from the cases I mentioned above that you didn't address: BraTS 1616 (top picture, coordinates 112, 77, 18) and BraTS 810 (bottom picture, coordinates 105, 108, 18). As you can see, the artifact here occurred at the bottom of the brain due to bad skull-stripping, but the labels were not erased. Additionally, many cases have small scattered components in places in the brain that sometimes don't make sense, probably due to the use of deep-learning algorithm fusion... ${imageLink?synapseId=syn26064335&align=None&scale=100&responsive=true&altText=} ${imageLink?synapseId=syn26064336&align=None&scale=100&responsive=true&altText=} Best regards,
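For anyone wanting to screen their local copy of the data for the "small scattered components" mentioned above, a minimal sketch along these lines may help. This is not an official check from the organizers; the function name and the 10-voxel threshold are my own assumptions, and you would load the real NIfTI label maps (e.g. with nibabel) instead of the toy array used here.

```python
# Hedged sketch: flag tiny connected components in a BraTS-style label map.
# Very small isolated blobs may indicate fusion artifacts rather than lesions.
# The threshold (min_voxels=10) is an arbitrary illustrative choice.
import numpy as np
from scipy import ndimage

def small_components(seg: np.ndarray, min_voxels: int = 10):
    """Return (label_value, component_size) pairs smaller than min_voxels."""
    flagged = []
    for lab in np.unique(seg):
        if lab == 0:  # skip background
            continue
        comps, _ = ndimage.label(seg == lab)       # 3D connected components
        sizes = np.bincount(comps.ravel())[1:]     # drop the background bin
        for size in sizes:
            if size < min_voxels:
                flagged.append((int(lab), int(size)))
    return flagged

# Toy volume: one plausible tumor blob plus a stray 2-voxel speck.
seg = np.zeros((32, 32, 32), dtype=np.uint8)
seg[10:20, 10:20, 10:20] = 1   # large, plausible lesion region
seg[2, 2, 2:4] = 1             # tiny scattered speck (2 voxels)
print(small_components(seg))   # -> [(1, 2)]
```

On a real case you would run this per subject and manually inspect any flagged coordinates in a viewer before deciding whether the component is an annotation artifact.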
Hi @Alxaline, We are looking into these cases. We understand there is some issue with 1636 and 1104, and they can be excluded for now. Every single case was annotated and approved by the neuroradiologists. We have once again verified the validation dataset and can assure you that it is free of any such case. This will be duly checked for the testing cohort as well. Thank you.

Noisy labels, relevance?