Hi Team,
My current understanding is that, during the validation phase, we will submit the NIfTI (.nii) predictions as suggested.
**What is to be expected for the testing phase?** [a] Are we supposed to submit a Docker container with training and inference support for evaluation, or will evaluation be carried out on your side using the code submission we make? [b] Will there be a codebase update specific to 2024, or can we use the old codebase? It has not been updated since FeTS 2022 as of this comment.
**Changes to model architecture and training procedure?** [a] Are we allowed to change the model architecture, e.g. swapping specific layers conducive to better aggregation, such as normalization or activation layers? If yes, what are the limits of such changes? [b] Are we allowed to make changes to the training procedure, i.e. the way model optimization is carried out, such as global-local distillation or zeroth-order optimization?
**Or are we only supposed to modify the server's aggregation procedure, with all local procedures held constant?**
Created by Joseph Geo Benjamin (jgeob)

**What is to be expected for the testing phase?**
[a] No Docker containers. We will only need a script that adheres to the instructions; we will run your code on our side.
[b] Yes, the codebase is the same. I just realized it still says 2022 in the GitHub repo (https://github.com/FETS-AI/Challenge/tree/main/Task_1), but this is the correct one.
**Changes to model architecture and training procedure?**
[a] Sadly, we don't yet have the flexibility to modify the model architecture. Changes should be constrained to those permitted in the relevant challenge script (https://github.com/FeTS-AI/Challenge/blob/main/Task_1/FeTS_Challenge.py).
[b] Same as above, and yes to your last question: you are only supposed to modify the server's aggregation procedure, with all local procedures held constant. I know how limiting that can be, and we want to relax it in the years to come as the challenge grows.
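To make the scope of allowed changes concrete, here is a minimal sketch of the kind of customization this implies: a server-side aggregation function that combines per-collaborator model weights. Note this is a generic FedAvg-style illustration, not the actual hook signature in FeTS_Challenge.py; the function name, arguments, and the weighting-by-example-count scheme are all assumptions for illustration only.

```python
import numpy as np

def weighted_average_aggregation(local_weights, num_examples):
    """Illustrative server-side aggregation: combine collaborators'
    weight arrays into one global array, weighting each collaborator
    by its local dataset size (FedAvg-style).

    local_weights: list of np.ndarray, one per collaborator (same shape)
    num_examples:  list of int, local dataset size per collaborator
    """
    total = sum(num_examples)
    # Per-collaborator mixing coefficients, proportional to data size.
    coeffs = [n / total for n in num_examples]
    # Weighted sum of the local weight arrays.
    return sum(c * w for c, w in zip(coeffs, local_weights))
```

The point of the constraint above is that your creativity lives entirely in a function like this one (e.g. replacing the size-proportional coefficients with performance-based or staleness-aware weights), while the local training loop on each collaborator stays untouched.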