The Docker process is getting killed even before training starts, i.e. during data loading. I am only loading file paths at that stage, so what could be causing this?
The error log says: `Possible error: Insufficient shared memory.` I suspect there are some defunct memory-hogging processes on the VM. Could you please check?
[Error log](https://drive.google.com/open?id=1bzfa56ppVB6B86EMfn7x6crYeG9y3WCK)
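One quick way to check whether shared memory is actually the bottleneck is to inspect `/dev/shm` from inside the container before starting the loaders. This is a minimal diagnostic sketch (the 64 MB figure is Docker's documented default, not something read from this task's config):

```python
import shutil

# Docker containers default to a 64 MB /dev/shm, which PyTorch DataLoader
# worker processes use to pass batches to the main process. If it is this
# small, the workers can exhaust it and get killed during data loading.
total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm total: {total / 2**20:.0f} MiB, free: {free / 2**20:.0f} MiB")
```

If the reported total is around 64 MiB, the container was started without an enlarged shared-memory segment.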
Created by BISPL

This is a known problem when running PyTorch inside a Docker container; see [Training crashes due to insufficient shared memory (shm)](https://discuss.pytorch.org/t/training-crashes-due-to-insufficient-shared-memory-shm-nn-dataparallel/26396).
So I adjusted the Docker shared-memory parameter and restarted your task `44f73ea7-bac6-4671-99b1-acdbe88b3261`.
The task is now running properly.
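For reference, the fix suggested in the linked thread is to enlarge the container's shared-memory segment when it is started (the flag values and image name below are illustrative, not this task's actual configuration):

```shell
# Raise /dev/shm above Docker's 64 MB default; 8g is an illustrative size.
docker run --shm-size=8g my-training-image

# Alternatively, share the host's IPC namespace (and its full /dev/shm):
docker run --ipc=host my-training-image
```

Either option gives the PyTorch DataLoader workers enough shared memory to pass batches between processes; reducing `num_workers` to 0 is a slower workaround when the container flags cannot be changed.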