Hello teams.
Could you tell me how to extract the 'communication cost' metric from the given Task 1 code? Previous challenge papers all report it in their results.
For example, Machler et al. 2023 report 'communication cost' values such as 0.3 or 0.7.
However, this challenge provides simulation time and a convergence score instead (which seems more appropriate, since it is based on actual time consumption in seconds).
So I am still wondering: are these metrics (simulation time and convergence score) newly adopted for this challenge, and if not, how should 'communication cost' be extracted from the code?
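To make the question concrete, here is my own minimal sketch of what I suspect 'communication cost' might mean, assuming it is the fraction of possible collaborator-round transmissions actually used (the function `communication_cost` and its inputs are placeholders I made up for illustration, not part of the challenge code):

```python
# Hypothetical sketch: deriving a normalized communication-cost metric
# from per-round participation, assuming "communication cost" means the
# fraction of possible model transmissions actually performed. The names
# `rounds_participants` and `total_collaborators` are my own placeholders,
# not identifiers from the Task 1 code.

def communication_cost(rounds_participants, total_collaborators):
    """Fraction of collaborator-round transmissions actually performed.

    rounds_participants: list of per-round participant counts,
        e.g. [5, 5, 3] means three rounds with 5, 5, and 3 collaborators.
    total_collaborators: number of collaborators available each round.
    """
    used = sum(rounds_participants)
    possible = len(rounds_participants) * total_collaborators
    return used / possible

# Example: selecting 7 of 10 collaborators in each of 20 rounds
# would give a communication cost of 0.7.
print(communication_cost([7] * 20, 10))  # -> 0.7
```

If that reading is wrong, a pointer to the intended definition would be much appreciated.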
Additionally, I would like to know whether MAX_SIMULATION_TIME was also applied in the previous 2022 challenge.
(refer to: https://github.com/raraduck/Challenge/blob/dbcebdfd9918e416edd6019dc7ae1d1fd8a1e954/Task_1/fets_challenge/experiment.py#L31)
Staying within MAX_SIMULATION_TIME makes it quite challenging to improve model performance in federated learning.
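To illustrate the constraint I mean, here is a simplified sketch of my own (not the actual experiment.py logic) of how a simulated time budget caps the number of useful federated rounds:

```python
# My own simplified illustration (not the actual experiment.py logic) of
# how a simulated time budget limits federated training: every round's
# simulated training and communication time is charged against the budget.
MAX_SIMULATION_TIME = 25_000.0  # hypothetical budget in simulated seconds

def run_federation(max_rounds, simulate_round):
    """Run federated rounds until the simulated time budget runs out.

    simulate_round(r) -> (seconds, score) stands in for one round of
    local training plus model upload/download.
    """
    elapsed, best_score = 0.0, 0.0
    for r in range(max_rounds):
        seconds, score = simulate_round(r)
        if elapsed + seconds > MAX_SIMULATION_TIME:
            break  # this round would exceed the budget, so stop here
        elapsed += seconds
        best_score = max(best_score, score)
    return elapsed, best_score

# Example: rounds that each cost 1,000 simulated seconds fit only 25 times,
# no matter how many rounds were requested.
elapsed, score = run_federation(100, lambda r: (1_000.0, 0.8))
print(elapsed)  # -> 25000.0
```

So every extra communication round trades directly against the remaining budget, which is why I am asking whether earlier editions used the same cap.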
One more question.
Previous challenge papers report and recommend Dice scores for WT, ET, and TC:
WT=label1+label2+label4
ET=label4
TC=label1+label4
(label1: NCR, label2: ED, label4: ET)
However, the Task 1 code calculates metrics on label1, label2, and label4 individually.
So I wonder which of these should be reported as the target Dice scores in the short paper's results?
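For reference, this is how I understand the composite regions would be derived from the per-voxel label maps; a minimal NumPy sketch of my own (the `dice` helper and the array names are illustrative, not taken from the challenge code):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Binary Dice score between two boolean masks."""
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def composite_dice(pred_labels, target_labels):
    """Dice for WT/TC/ET composed from integer label maps.

    pred_labels, target_labels: arrays with values in {0, 1, 2, 4}.
    Region definitions follow the convention quoted above:
    WT = label1 + label2 + label4, TC = label1 + label4, ET = label4.
    """
    regions = {"WT": (1, 2, 4), "TC": (1, 4), "ET": (4,)}
    return {
        name: dice(np.isin(pred_labels, labels), np.isin(target_labels, labels))
        for name, labels in regions.items()
    }

# Example usage with dummy 3D volumes:
pred = np.random.choice([0, 1, 2, 4], size=(8, 8, 8))
target = np.random.choice([0, 1, 2, 4], size=(8, 8, 8))
print(composite_dice(pred, target))
```

If the per-label scores are the intended target, I can of course report those instead; I just want to stay comparable with the previous papers.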
Thank you.