Hello,
Could you provide information about the shared memory setting used for our Docker container?
The default is too small to use multi-worker DataLoaders in PyTorch (the worker processes pass batches to the main process through shared memory). At least on our system, using multiple workers during inference saves quite a bit of processing time.
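For context, here is a minimal sketch of the kind of loader we mean (the dataset, batch size, and worker count are placeholders, not our actual pipeline):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset standing in for our actual inference data.
dataset = TensorDataset(torch.randn(256, 3, 224, 224))

# num_workers > 0 makes PyTorch spawn worker processes that hand
# batches to the main process through shared memory (/dev/shm inside
# the container). With Docker's default of 64 MB, this can fail with
# errors like "unable to write to file </torch_...>".
loader = DataLoader(dataset, batch_size=32, num_workers=4, pin_memory=True)

for (batch,) in loader:
    ...  # run inference on the batch here
```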
Could you maybe consider increasing it by adding `--shm-size=4gb` to the docker command?
See also the notes at https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/running.html and https://github.com/aws/sagemaker-python-sdk/issues/937
Have a great day,
team jabber
Created by Felix Zimmermann (fzimmermann89)

Hi, we are using the default value for the `--shm-size` option. It's OK to add `--shm-size=4gb` when running your Docker container; just feel free to remind us to add this option in your submission email.
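For reference, the flag just needs to appear in the run invocation, e.g. `docker run --shm-size=4gb <image> <command>` (the image and command here are placeholders for your actual submission).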