Hi fellow researchers,
I wanted to share a quick segmentation baseline using the Auto3DSeg package from MONAI.
For example, for the BraTS23 GLI dataset task, you can use the following code snippet to train 5 folds with SegResNet.
```lang-python
from monai.apps.auto3dseg import AutoRunner

def main():
    input_config = {
        "modality": "MRI",
        "dataroot": "/data/brats23",
        "datalist": "brats23_gli_folds.json",
        "sigmoid": True,
        "class_names": [
            {"name": "wt", "index": [1, 2, 3]},
            {"name": "tc", "index": [1, 3]},
            {"name": "et", "index": [3]},
        ],
    }
    runner = AutoRunner(input=input_config, algos="segresnet", work_dir="./brats23_workdir")
    runner.run()

if __name__ == "__main__":
    main()
```
I got the following results running it (each fold is a single model):

| | AVG | WT | TC | ET |
|---|---|---|---|---|
| fold0 | 0.9078 | 0.933 | 0.914 | 0.877 |
| fold1 | 0.9214 | 0.940 | 0.923 | 0.902 |
| fold2 | 0.9160 | 0.936 | 0.921 | 0.891 |
| fold3 | 0.9155 | 0.933 | 0.923 | 0.890 |
| fold4 | 0.9151 | 0.938 | 0.924 | 0.884 |
## How to run
- Install MONAI. I recommend using the latest dev branch:
```
pip install git+https://github.com/Project-monai/monai.git@dev
```
instead of `pip install monai`. You can also clone it from GitHub manually.
- copy the [brats23_gli_folds.json](https://www.dropbox.com/s/b4dxpn81gwii0ut/brats23_gli_folds.json?dl=0) file into the directory with the code snippet. It contains the list of files with fold assignments (for the BraTS23 GLI task). I created it with a random 5-fold 80/20 assignment. It's attached here for convenience, so that you can try the same folds (or even run your own algorithms and compare results).
- place the BraTS23 GLI data in the "/data/brats23/GLI" folder, so that it has 2 subfolders: /data/brats23/GLI/TrainingData and /data/brats23/GLI/ValidationData (or just update the path to your data via the "dataroot" key in the code snippet above).
- (Optional) I recommend using [Nvidia Pytorch docker](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags) , instead of a native Pytorch installation. This will run faster on Nvidia GPUs.
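For reference, the datalist follows MONAI's datalist JSON convention (paths relative to "dataroot", with a "fold" key per training case). Here is a sketch of what such a file might contain — the case IDs and file names below are illustrative, not the actual BraTS names:

```lang-python
import json

# Sketch of a MONAI-style datalist with fold assignments.
# Case IDs and file names are illustrative placeholders only.
datalist = {
    "training": [
        {
            # the four MRI modalities of one BraTS case, relative to dataroot
            "image": [
                "GLI/TrainingData/case_0001/case_0001-t1c.nii.gz",
                "GLI/TrainingData/case_0001/case_0001-t1n.nii.gz",
                "GLI/TrainingData/case_0001/case_0001-t2f.nii.gz",
                "GLI/TrainingData/case_0001/case_0001-t2w.nii.gz",
            ],
            "label": "GLI/TrainingData/case_0001/case_0001-seg.nii.gz",
            "fold": 0,  # which of the 5 folds this case belongs to
        },
        # ... one entry per training case ...
    ],
    "testing": [
        {"image": ["GLI/ValidationData/case_0002/case_0002-t1c.nii.gz"]},
    ],
}

with open("brats23_gli_folds.json", "w") as f:
    json.dump(datalist, f, indent=2)
```

The attached json has the same shape, just with the real file names and all cases filled in.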
## About Auto3DSeg
Auto3DSeg automates many decisions for you (and is designed to work for any 3D segmentation task). We've been developing it as part of the MONAI open source project.
Underneath, it analyzes your data (check the datastats.yaml file it produces), configures itself for the task (ROI, network, etc., based on your data) and for your hardware (scaling to all your GPUs and to the available GPU memory), trains 5 folds (or however many you request), then runs inference and ensembling. It comes with multi-node support too.
In the example above, I specified to use only the "segresnet" algo (https://docs.monai.io/en/stable/networks.html#segresnet) via AutoRunner(algos="segresnet", ...).
If that option is not provided, by default AutoRunner will run all supported algos (currently: segresnet, dints, swinunetr), 5 folds each, which can take a longer time, and will use a subset of the best model checkpoints from the different algorithms for ensembling.
As the model trains, you can monitor the progress with TensorBoard (average and per-region WT, TC, ET Dice numbers):
```
tensorboard --bind_all --logdir=./brats23_workdir/segresnet_0/model/
```
You can also see a more detailed log in "./brats23_workdir/segresnet_0/model/training.log", where "brats23_workdir" is just the directory we specified in AutoRunner(work_dir="./brats23_workdir", ...). In the same folder you'll find the trained PyTorch checkpoints.
There are many options you can add to the "input_config" dict above (including num_folds, learning_rate, etc.). They are not well documented at the moment, but check "hyper_parameters.yaml" in (./brats23_workdir/segresnet_0/config/).
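As a sketch, passing the two options named above would look like this (I'm only using num_folds and learning_rate here, since those are the ones mentioned; verify any other keys against hyper_parameters.yaml before relying on them):

```lang-python
# Sketch: a few extra options added to input_config.
# num_folds and learning_rate are the options named in the post; check
# hyper_parameters.yaml in the work_dir for the full (current) list.
input_config = {
    "modality": "MRI",
    "dataroot": "/data/brats23",
    "datalist": "brats23_gli_folds.json",
    "num_folds": 3,          # train 3 folds instead of the default 5
    "learning_rate": 1e-4,   # override the auto-configured learning rate
}

# then run as before:
# runner = AutoRunner(input=input_config, algos="segresnet", work_dir="./brats23_workdir")
# runner.run()
```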
## Classes
In the example, you may have noticed we specified "class_names" with a list of 3 items (describing how to re-combine the input labels into the 3 regions BraTS expects: WT, TC, ET). Since these regions overlap, the network output is 3 channels followed by a sigmoid. Alternatively, you may want to train on non-overlapping classes (0, 1-NCR, 2-ED, 3-ET), as in the raw BraTS input labels, and only recombine into WT, TC, ET after training. The code is even simpler (with the network outputting 4 channels followed by a softmax):
```lang-python
from monai.apps.auto3dseg import AutoRunner

def main():
    input_config = {
        "modality": "MRI",
        "dataroot": "/data/brats23",
        "datalist": "brats23_gli_folds.json",
        "class_names": ["ncr", "ed", "et"],
    }
    runner = AutoRunner(input=input_config, algos="segresnet", work_dir="./brats23_workdir2")
    runner.run()

if __name__ == "__main__":
    main()
```
The class_names key above is optional and provides more readable TensorBoard labels (the system already knows you have 3 foreground classes + background, based on the data analysis step over the ground truth labels, but not their names). Training with non-overlapping (mutually exclusive) classes is the most common segmentation setup, and you can try this snippet on your own data or in another challenge.
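If you train on the non-overlapping classes, you can still recover the overlapping WT/TC/ET regions from a predicted label map for BraTS-style evaluation. A minimal numpy sketch (the helper name is my own, not part of MONAI):

```lang-python
import numpy as np

def labels_to_regions(labels):
    """Recombine a single-channel BraTS label map (0, 1-NCR, 2-ED, 3-ET)
    into the three overlapping evaluation regions."""
    labels = np.asarray(labels)
    wt = np.isin(labels, [1, 2, 3])  # whole tumor = NCR + ED + ET
    tc = np.isin(labels, [1, 3])     # tumor core = NCR + ET
    et = labels == 3                 # enhancing tumor
    return wt, tc, et
```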
_Note: if you train using the first snippet (with overlapping classes), the resulting inference files will be 3-channel binary NIfTI (WT, TC, ET), but the BraTS challenge requires a 1-channel integer label for submission, so you'll need to resave the output NIfTI files manually. If you train with non-overlapping classes (softmax), the inference result files will already be in the expected 1-channel format._
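For the manual conversion mentioned in the note, here is a minimal numpy sketch collapsing the three binary region masks into one integer label map (the helper name is my own; it assumes the standard BraTS hierarchy ET ⊆ TC ⊆ WT):

```lang-python
import numpy as np

def regions_to_labels(wt, tc, et):
    """Collapse overlapping binary region masks (WT, TC, ET) into a
    single-channel BraTS-style label map (0, 1-NCR, 2-ED, 3-ET).
    Paints from the outside in, letting inner regions overwrite outer ones."""
    wt, tc, et = (np.asarray(m) for m in (wt, tc, et))
    labels = np.zeros(wt.shape, dtype=np.uint8)
    labels[wt > 0] = 2  # WT voxels start as ED (2)
    labels[tc > 0] = 1  # TC voxels overwrite as NCR (1)
    labels[et > 0] = 3  # ET voxels overwrite as ET (3)
    return labels
```

Loading/saving the actual NIfTI files (e.g. with nibabel) is left out here to keep the sketch short.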
## Timing
On an 8x A100 NVIDIA machine, this example takes around 4 hours to train per fold, and on an 8x V100 (16 GB) machine around 10 hours per fold, with 300 epochs over the BraTS23 GLI training set.
## Development
[Auto3DSeg](https://monai.io/apps/auto3dseg) is open source and is being developed and improved over time, so all your suggestions are welcome. I hope it will help you as a baseline to build on and improve your results for this challenge, or to compare your per-fold numbers using the provided json split.
_PS: Special thanks to all the BraTS organizers for another year of this great challenge._
Cheers,
Andriy
Created by Andriy Myronenko (@amrn)

---
Hi @amrn,
Organizers have reached out to your team multiple times but have not received a reply. Could you please revert at ubaid@iu.edu?

---
Hi Andriy,
Thank you for sharing this information regarding Auto3DSeg.
Do you know if it is possible to run it with Windows 11 OS?
I have been having trouble getting it to recognize a GPU, although other MONAI tutorials are able to utilize my GPU.
Thanks!
Dominic

---
Hi Andriy,
Thanks for sharing this!
MONAI Auto3DSeg is really awesome and can be used in many medical imaging applications.
The above code snippet seems to work on the GLI challenge of BraTS 2023. I wonder how to adapt this code to other BraTS 2023 challenges and to other datasets as well.
Best,
Ramy