Hi,
I'd like to know how many resources (CPUs/GPUs and RAM) and how much computational time will be available for a single submission. This is crucial for my analysis design.
Best,
Lucas
Hi all,
Thank you very much for your extended patience.
After some testing, and given the limited size of the dataset for this Challenge, we are planning to limit individual runs to 8 CPUs with 64 GB of memory. The SLURM job for each container will be limited to 24 hours.
Each run will have access to a Tesla P100 as well.
I've added some instructions to the Docker Submission page describing the environment variables that should be set in your "/run.sh" script to make the required NVIDIA libraries available to the container when we run it.
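As a rough illustration (the authoritative variable list is on the Docker Submission page), a minimal "/run.sh" might export the standard nvidia-container-runtime variables before launching your code. The `predict.py` path and its flags below are placeholders, not part of the Challenge spec:

```shell
#!/bin/bash
# Sketch of a /run.sh entrypoint -- variable names below are the
# standard nvidia-container-runtime ones; confirm the exact set
# required by the Challenge on the Docker Submission page.
export NVIDIA_VISIBLE_DEVICES=all
export NVIDIA_DRIVER_CAPABILITIES=compute,utility

echo "GPU env configured: NVIDIA_VISIBLE_DEVICES=$NVIDIA_VISIBLE_DEVICES"

# Then launch your actual inference code, e.g.:
# python /app/predict.py --input /input --output /output
```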
We may have some flexibility to increase the runtime or other allocations if we find folks running into these limits.
Let me know if you have any questions or comments!
Thanks,
Robert
Hi @charzu and @stadlerm,
Thanks for reaching out and apologies for the delay.
We are still running baseline models to assess exactly how much compute we will be able to provision for each participant. I will update this thread as soon as possible with an answer.
@jelaiw - can you comment with regards to GPU availability? Does UAB have GPUs that can be provisioned, or will you provide CPUs only?
Thanks,
Robert
I second this - it would be good to know whether GPUs are available, and which models and drivers are installed, so that we can set up the Docker image accordingly (for example, to use TensorFlow's GPU functionality).
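Until the driver details are posted, one way to sanity-check GPU visibility from inside a running container is to probe for `nvidia-smi` (this is a generic check, not a Challenge-specific requirement; frameworks like TensorFlow have their own checks, e.g. `tf.config.list_physical_devices('GPU')`):

```shell
#!/bin/bash
# Returns "gpu" if nvidia-smi is on PATH and lists at least one
# device, "cpu" otherwise -- useful for choosing a code path at
# container start-up.
detect_device() {
    if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi -L 2>/dev/null | grep -q "GPU"; then
        echo "gpu"
    else
        echo "cpu"
    fi
}

DEVICE=$(detect_device)
echo "Running on: $DEVICE"
```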