Dear Dream Challenge,

My preprocessing has completed; however, I have the following errors:

```
STDERR: E1207 16:31:21.097940 16 common.cpp:113] Cannot create Cublas handle. Cublas won't be available.
STDERR: E1207 16:31:21.100627 16 common.cpp:120] Cannot create Curand generator. Curand won't be available.
```

I believe this occurs when the GPU is not mounted to the Docker container. For example, on my PC, I would run an image with

```shell
docker run -ti \
    --device /dev/nvidia0:/dev/nvidia0 \
    --device /dev/nvidiactl:/dev/nvidiactl \
    --device /dev/nvidia-uvm:/dev/nvidia-uvm \
    jcboyd/ubuntu-cuda-caffe
```

where `jcboyd/ubuntu-cuda-caffe` is my custom-built Docker stack of Ubuntu 16.04 + CUDA 8.0 + Caffe. Is there a particular setup I should be using? I've seen something about environment variables (and also nvidia-docker) in the Wiki, but I don't know whether this is compulsory.
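For what it's worth, before launching locally I check that the device nodes actually exist on the host; a minimal sketch (the paths are the usual defaults on my machine and may differ elsewhere):

```shell
#!/bin/bash
# Host-side sanity check: confirm the NVIDIA device nodes exist before
# passing them to `docker run --device ...`. The paths below are the
# common defaults and may differ on other machines.
for dev in /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm; do
    if [ -e "$dev" ]; then
        echo "present: $dev"
    else
        echo "absent:  $dev"
    fi
done
```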

Created by jcb

> Is there a particular setup I should be using?

When we run your container, we mount the character devices that you mention on your behalf. Further, for your convenience, we provide the environment variable `GPUS`, which is a list of the devices, as described under "Environment Variables" [here](https://www.synapse.org/#!Synapse:syn4224222/wiki/401759). Please note that the GPUs may not be `/dev/nvidia0`; e.g., your model might be run with `/dev/nvidia2` and `/dev/nvidia3`. To verify that the GPUs are mounted as expected, several participants have submitted a container that starts with a call to `nvidia-smi`. You are welcome to do this if you wish.

Hi Bruce, are the GPUs mounted only for training, though? The error occurred in preprocessing. I have not yet managed to run a training job on Synapse (as per the "Job Status" thread), so I haven't yet seen whether this happens for my training image.

> Are the GPUs mounted only for training though?

The GPUs are always mounted when running your containers: for preprocessing, training, and inference/scoring.
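A minimal sketch of such a start-up check, assuming `GPUS` is a comma-separated list of device paths (the wiki page linked above describes the exact format):

```shell
#!/bin/bash
# Hypothetical container entrypoint sketch: log GPU visibility before
# doing any work. Assumes GPUS is a comma-separated list of device
# paths, e.g. GPUS="/dev/nvidia2,/dev/nvidia3" -- check the wiki page
# for the exact format.

# Print the driver's view of the GPUs (warns loudly if none are visible).
nvidia-smi || echo "WARNING: nvidia-smi failed; are the GPUs mounted?" >&2

# Check that each device path listed in GPUS exists inside the container.
IFS=',' read -ra devices <<< "${GPUS:-}"
for dev in "${devices[@]}"; do
    if [ -e "$dev" ]; then
        echo "GPU device present: $dev"
    else
        echo "GPU device missing: $dev" >&2
    fi
done
```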

GPUs unavailable?