What are the GPU specs on the scoring server? I.e. what model and how much DRAM?
Created by Lars Ericson (lars.ericson)

Hi Lars,
We are running the docker containers using Singularity with the `singularity run --nv` option - this should be more or less equivalent to running `docker run --gpus all`. We've also implemented a fast lane (RA2 Challenge Fastlane) that permits unlimited submissions on a tiny subset of the data; it does not return a score, but it does return logs, so you can verify that your container runs as expected and produces a valid prediction file at the end of the run - feel free to submit to that queue.
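Roughly speaking, the two invocations below are interchangeable as far as GPU access goes (a sketch only - the image tag is the placeholder used later in this thread, not an actual submission):

```
# Rough equivalence for exposing the host GPU to the container
# (placeholder image tag; pulling via docker:// is illustrative only)
docker run --gpus all docker.synapse.org/syn12345678/mymodel:001
singularity run --nv docker://docker.synapse.org/syn12345678/mymodel:001
```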
In my tests using TensorFlow, the logs will tell you whether the Tesla P100 is successfully registered, but your mileage may vary with other modeling toolkits.

@allawayr, when I am testing my Docker file, I run it like this, with a **--gpus all** flag:
```
docker run -it \
-v $STAGE/test:/test:ro \
-v $STAGE/train:/train:ro \
-v $STAGE/output:/output \
--gpus all \
--entrypoint=/bin/bash docker.synapse.org/syn12345678/mymodel:001
```
Does the scoring server run Docker images with the **--gpus all** flag or something similar? Without it, my image can't see the **nvidia-smi** utility or the GPUs.
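As a minimal check (reusing the placeholder image tag from above), something like this should list the GPU only when the flag is passed:

```
# With --gpus all, nvidia-smi inside the container should list the GPU;
# without the flag the binary is usually missing or reports no devices.
docker run --rm --gpus all --entrypoint nvidia-smi \
  docker.synapse.org/syn12345678/mymodel:001
```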
Each run has access to an NVIDIA Tesla P100 with 16 GB of GPU memory, in addition to 8 CPUs with 64 GB of RAM. The maximum run time as currently configured is 48 hours per container (a slight change from what was previously mentioned [here](https://www.synapse.org/#!Synapse:syn20545111/discussion/threadId=6270)). We can consider adjusting these limits if people are running into them, but based on our internal benchmarking I suspect they will be sufficient for most models; changes would likely require approval from UAB to access extra resources. Let me know if you have any additional questions!
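If you want to confirm what your container actually sees, a few standard commands in your entrypoint script will write the details to the returned logs (a sketch; it assumes these utilities are present in your image):

```
# Print the visible GPU, CPU count, and memory into the run logs
# (assumes nvidia-smi, nproc, and free exist inside the image)
nvidia-smi --query-gpu=name,memory.total --format=csv   # e.g. Tesla P100 with ~16 GB
nproc                                                    # CPUs available to the run
free -h                                                  # system RAM
```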
Have a good weekend,
Robert

@allawayr - OK, I'm in Done Validated status now on the Quick Lane.

Great! Glad you got it running.