Hi Team,
The training seems to be taking unusually long, even on A100 GPUs. Just to rule out a dependency-related issue on our end, would it be possible to share the training time / resource usage on the specific GPUs you used to benchmark the baselines?
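For context, this is roughly how we're measuring time and memory on our side (a minimal sketch; `model`, `train_loader`, and `train_one_epoch` are placeholders for our actual training code):

```python
import time
import torch

def benchmark_epoch(model, train_loader, train_one_epoch):
    # Reset peak-memory tracking and make sure all queued GPU work is done
    # before starting the timer.
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.perf_counter()

    train_one_epoch(model, train_loader)  # our usual per-epoch training step

    # Synchronize again so the timer covers all GPU work from this epoch.
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    peak_mem_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"epoch time: {elapsed:.1f} s, peak GPU memory: {peak_mem_gb:.1f} GiB")
    return elapsed, peak_mem_gb
```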
Also, what does Simulated Time correspond to? Is the 7-day training time limit based on this simulated time, i.e. should we strive to keep it below 7 days?
Hey, I asked around for the information you requested regarding the original benchmarks, but sadly we have lost access to them.
You can find details of what Simulated Time represents in the README here: https://github.com/FeTS-AI/Challenge/tree/main/Task_1
It reflects a simulation of how long training would take if this were run as a real federation.
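Roughly speaking, you can think of it like the sketch below. This is a simplified illustration only, not the exact code the challenge harness uses; the assumption here is that each round's simulated time is driven by the slowest participating collaborator plus some aggregation overhead, and the `aggregation_overhead_s` parameter and collaborator names are made up for the example.

```python
def simulated_time(rounds, aggregation_overhead_s=0.0):
    """`rounds` is a list of rounds; each round is a dict mapping a
    collaborator name to its measured train+validate time in seconds."""
    total = 0.0
    for per_collaborator_times in rounds:
        # A round can only finish once the slowest collaborator is done,
        # then the server aggregates the updates.
        total += max(per_collaborator_times.values()) + aggregation_overhead_s
    return total

# Hypothetical example: two rounds with three collaborators each.
rounds = [
    {"col_1": 3600, "col_2": 5400, "col_3": 4200},
    {"col_1": 3500, "col_2": 5300, "col_3": 4100},
]
print(simulated_time(rounds, aggregation_overhead_s=60))  # 10820.0 seconds
```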
Regarding the unusually long training, can you elaborate on how many iterations/epochs you're running and how long that takes overall?