Greetings,
My uploaded model for Test (Phase 2) has been running for the last 5 hours and is stuck at 70%. I would expect running only inference on the test files to be faster, shouldn't it?
Can you please check the current status of task ID 0ad3053c-dc05-4807-9f53-bdd1815a0ed2?
Regards
In Phase 1, CHD has 20 files and HCMP has 10 files.
But in Phase 2, CHD has 80 files and HCMP has 35 files.
Naturally, Phase 1 (30 files) should be quicker than Phase 2 (115 files).
Judging from the top and nvidia-smi output, the CPU usage of your code is very high (we provide 4 cores), and there are no processes attached to the GPU.
I am still not able to understand how it is taking 5 hours for just inference! The test phase just runs inference.sh on the Test Phase 2 files, right? The inference for the Test Phase 1 files was much quicker; it finished just before this one.
How many output files have been created so far? Can you please check that?
Yes, I am using the GPU during inference.
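As a side note on counting the outputs: a minimal sketch, assuming the results are written as one file per case into a hypothetical /output directory (the path and the .csv extension are placeholders for the actual submission layout):
```
from pathlib import Path

# Assumption: one result file per test case under /output -- both the
# directory and the extension are placeholders for illustration.
output_dir = Path("/output")
outputs = sorted(output_dir.glob("*.csv"))

print(f"{len(outputs)} output files created so far")

# The most recent file's timestamp hints at whether the job is still progressing.
if outputs:
    latest = max(outputs, key=lambda p: p.stat().st_mtime)
    print("latest:", latest.name, latest.stat().st_mtime)
```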
So the process is still running? This is the output of 'top':
```
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
10 root 20 0 24.223g 3.947g 425336 R 357.6 1.6 991:23.52 python
```
This is the output of 'nvidia-smi':
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.48 Driver Version: 390.48 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla P40 On | 00000000:0E:00.0 Off | 0 |
| N/A 38C P0 63W / 250W | 5891MiB / 22919MiB | 51% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
Do you use the GPU in your inference code?
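If it helps to double-check: a minimal sketch of how one might confirm the model and inputs actually run on the GPU, assuming a PyTorch inference script whose checkpoint stores the full model object (the filename and input shape are placeholders, not the actual script):
```
import torch

# Assumption: the checkpoint holds a full nn.Module, not just a state_dict;
# the filename is a placeholder for whatever inference.sh actually loads.
model = torch.load("model.pth", map_location="cpu")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()

# If this prints 'cpu' while a GPU is available, the .to(device) call
# is missing somewhere in the real script.
print("model device:", next(model.parameters()).device)

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224, device=device)  # placeholder input shape
    y = model(x)
    print("output device:", y.device)
```
If the model never leaves the CPU, that would match the symptoms reported above: several hundred percent CPU usage in top and an empty Processes table in nvidia-smi.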