Insufficient shared memory (shm)

Dear admin,

We tried to submit our Docker image, but we always get the same bus error because the dataloader's workers run out of shared memory. In our local environment we can fix this by adding the parameter `--shm-size 12gb` to `docker run`, where 12 is arbitrary. We tried to change the shm size when building the Docker image, but it doesn't work; as far as we can tell, it is a runtime setting that cannot be baked into the image. The only solution we found was to use the `--shm-size` parameter on `docker run`. Do you know if there is a solution to this problem?
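For context, a minimal sketch of the local workaround (the image name is a placeholder, and 12g is simply a size that happened to be large enough for us):

```
# Raise the container's /dev/shm from the 64 MB default so the
# dataloader workers have room. "our-submission-image" is hypothetical.
docker run --shm-size=12g our-submission-image
```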

Created by Marc DEMOUSTIER marc_d
@marc_d , Oh great, thank you for the follow-up! We will keep this in mind for future errors as well.
Hi @vchung, We found a solution to our problem. It comes from PyTorch's `DataLoader` class: with multiple workers it uses a lot of shared memory, and the default 64 MB of shared memory in a Docker container is not enough. We set `num_workers=0` to fix this issue. Thanks for your help!
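For anyone hitting the same bus error, a minimal sketch of the fix, assuming a standard PyTorch training loop (the dataset here is a stand-in; only the `num_workers=0` argument matters):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset purely for illustration.
dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))

# With num_workers=0, batches are loaded in the main process, so no
# worker subprocesses hand tensors over via shared memory (/dev/shm),
# which defaults to only 64 MB inside a Docker container.
loader = DataLoader(dataset, batch_size=8, num_workers=0)

for inputs, targets in loader:
    pass  # training step goes here
```

This trades away data-loading parallelism, so it can slow training down; if the platform ever allows a larger `--shm-size`, multiple workers would work again.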
Hi @marc_d , We are looking into how we can help resolve your issue. To better understand: your model requires more than the default 64 MB of shared memory in order to run successfully? That is, it specifically needs 12 GB?
