Greetings,
I am getting RuntimeError: CUDA error: out of memory.
The code was working perfectly on a single 1080 Ti GPU with ~6 GB of memory consumption. Can you please post the server machine's configuration?
Regards,
Created by BISPL

No, that is not the problem. I am already using `loss.item()`. I think it looks similar to this issue:
https://discuss.pytorch.org/t/cuda-out-of-memory/449/13
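For context, here is a minimal sketch of the accumulation pattern that linked issue describes (the model, data, and hyperparameters are made up for illustration). Summing the raw `loss` tensor keeps every iteration's autograd graph alive, which grows memory each step; summing `loss.item()` (or `loss.detach()`) releases the graph:

```python
import torch

# Hypothetical tiny model and data, just to show the pattern.
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.MSELoss()

total_loss = 0.0
for step in range(100):
    x = torch.randn(8, 10)
    y = torch.randn(8, 1)

    opt.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    opt.step()

    # BAD: total_loss += loss
    #   accumulates a tensor that still references the graph,
    #   so graphs from all previous steps cannot be freed.
    # GOOD: take a plain Python float (or a detached tensor).
    total_loss += loss.item()

print(total_loss / 100)
```

The same fix applies to anything else you log or store across iterations (metrics, running averages): detach it from the graph first.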
Try `loss.detach()`.

Greetings,
Thank you for your reply.
I am able to run the code successfully on my PC with the sample data. Can you help me figure out the possible reason behind the OOM error?
[nvidia-smi(Local PC run)](https://drive.google.com/open?id=1DmMDxr9gorBtffz9cb-JwRTlWQFXCF_n)
The script was running on GPU-1. The memory usage was nearly 9 GB. So I guess I shouldn't be getting an OOM error on a 22GB GPU?
Regards

This is the result of `nvidia-smi` in the container:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.48 Driver Version: 390.48 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla P40 On | 00000000:08:00.0 Off | 0 |
| N/A 15C P8 10W / 250W | 0MiB / 22919MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
```
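Since `nvidia-smi` reports 0 MiB in use, it may help to check what PyTorch itself sees inside the container (e.g. whether it is actually binding to the P40, or whether `CUDA_VISIBLE_DEVICES` is hiding it). A small diagnostic sketch, assuming a reasonably current PyTorch:

```python
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        name = torch.cuda.get_device_name(i)
        # memory_allocated reports bytes held by live tensors on that device
        alloc_mib = torch.cuda.memory_allocated(i) / 2**20
        print(f"cuda:{i} {name}: {alloc_mib:.1f} MiB allocated by PyTorch")
else:
    print("CUDA is not visible to PyTorch inside this container")
```

If this prints nothing for the device you expect, the OOM may come from the process landing on a different (already busy) GPU rather than from the model itself.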