Can we conveniently assume all .dcm have a depth of 12 bits?

This appears to be true for all .dcm in the pilot set, which means the max pixel value is around 4095. But I'm not sure whether this holds for the training and test sets. Some normalization methods depend on global statistics; if the assumption fails for some images, the normalization could be completely wrong for them and the predictions would be badly off. Would the organizers be able to confirm this?
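For anyone who wants to check this on their own copy of the data, here is a rough sketch using pydicom (the directory name is just a placeholder, and it assumes pydicom can decode the pixel data):

```python
import glob

import pydicom

# "pilot_images/" is a placeholder; point it at wherever the pilot .dcm files live.
for path in sorted(glob.glob("pilot_images/*.dcm")):
    ds = pydicom.dcmread(path)
    bits_stored = int(ds.BitsStored)
    observed_max = int(ds.pixel_array.max())
    expected_max = (1 << bits_stored) - 1  # 4095 when BitsStored == 12
    if bits_stored != 12 or observed_max > expected_max:
        print(f"{path}: BitsStored={bits_stored}, max pixel value={observed_max}")
```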

Created by Li Shen
Thank you both!
The [ImageMagick identify](https://www.imagemagick.org/script/identify.php) command provides detailed information about the images. The example below shows an image whose bit depth is set to 16 bits while only 12 bits are actually used to encode the pixel values.

```
$ identify -verbose 121388.dcm
...
  Colorspace: Gray
  Depth: 12/16-bit
  Channel depth:
    gray: 16-bit
...
```

As suggested above, please use the information from the image instead of hardcoding parameters for all the images. Thanks!
Hi Li,

> Can we conveniently assume all .dcm have a depth of 12 bits?

No. Please use the bit depth specified in the DICOM header.
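In code, that looks something like the sketch below (assuming pydicom; 121388.dcm is the file from the identify example above):

```python
import pydicom

# 121388.dcm is the file used in the identify example above.
ds = pydicom.dcmread("121388.dcm")

# BitsAllocated is the storage size per pixel (16 here); BitsStored is the
# number of bits actually used for pixel values (12 here), i.e. the same
# "Depth: 12/16-bit" that identify reports.
print("BitsAllocated:", ds.BitsAllocated)
print("BitsStored:", ds.BitsStored)
print("max representable value:", (1 << int(ds.BitsStored)) - 1)
```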
Not sure if this is true for the test data, but it is definitely not true in practice overall (16-bit is sometimes used). If the goal is a generalizable model, this assumption may be less than ideal. (Personally, I'm converting everything to float and normalizing, which should handle both.)
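For what it's worth, a minimal sketch of that float-and-normalize approach, assuming pydicom and NumPy (the helper name is mine):

```python
import numpy as np
import pydicom

def load_normalized(path):
    """Read a .dcm file and return float32 pixels scaled to [0, 1] via the header bit depth."""
    ds = pydicom.dcmread(path)
    pixels = ds.pixel_array.astype(np.float32)
    # Divide by the largest value representable in BitsStored bits, so that
    # 12-bit and 16-bit images both land on the same [0, 1] scale.
    return pixels / float((1 << int(ds.BitsStored)) - 1)
```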
