I've uploaded my Docker image for preliminary evaluation, but it fails with a CUDA out-of-memory error. The traceback portion of the log file is:

Traceback (most recent call last):
  File "/usr/local/bin/process.py", line 133, in <module>
    LandmarkDet().process()
  File "/usr/local/bin/process.py", line 124, in process
    landmarks_predicted = self.predict(input)
  File "/usr/local/bin/process.py", line 102, in predict
    output = test(mesh)
  File "/usr/local/bin/final.py", line 85, in test
    results.append(dio.store_uv_data_test(mesh, unwrap_type, device))
  File "/usr/local/bin/data_io.py", line 87, in store_uv_data_test
    barycentrics = gu.barycentric(qps3d, uv_tiles[:, 0], uv_tiles[:, 1], uv_tiles[:, 2])
  File "/usr/local/bin/geom_utils.py", line 195, in barycentric
    (q[:, 1] - b[:, 1]) * (c[:, 0] - b[:, 0])) /
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB. GPU 0 has a total capacity of 23.64 GiB of which 69.69 MiB is free. Process 5930 has 19.26 GiB memory in use. Process 8423 has 4.01 GiB memory in use. Of the allocated memory 3.54 GiB is allocated by PyTorch, and 30.01 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

I've tested my algorithm on my own laptop, where its maximum GPU memory usage was 4148 MB. Should I reduce the memory cost of my algorithm? Thanks for your help! @oussama.smaoui
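In case it helps while waiting for an answer: since the OOM happens inside gu.barycentric, one common workaround is to run that computation over the query points in chunks, so the large intermediate tensors are allocated one slice at a time. Below is a minimal sketch, not the challenge's actual code; the wrapper name, the chunk_size value, and the assumption that all four input tensors are aligned along dim 0 are mine.

```python
import torch

def barycentric_chunked(barycentric_fn, q, a, b, c, chunk_size=65536):
    """Evaluate `barycentric_fn` (e.g. gu.barycentric) piecewise along dim 0.

    Peak memory for the intermediate (q - b), (c - b), ... tensors is then
    bounded by chunk_size rows instead of the full point count.
    Assumes q, a, b, c all have one row per query point; if a/b/c are
    shared across all queries, drop their slicing below.
    """
    parts = []
    for start in range(0, q.shape[0], chunk_size):
        end = start + chunk_size
        parts.append(barycentric_fn(q[start:end], a[start:end],
                                    b[start:end], c[start:end]))
    return torch.cat(parts, dim=0)
```

The call in data_io.py could then (hypothetically) become `barycentric_chunked(gu.barycentric, qps3d, uv_tiles[:, 0], uv_tiles[:, 1], uv_tiles[:, 2])`, trading a little speed for a hard cap on peak allocation.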

Created by Peng Yan (CG_sayaka)

How much GPU memory is allowed for my algorithm?