My submission 9695711 is failing, and I believe it is a storage issue. I'm not doing anything exotic; this is just a parallel backend to standard sklearn methods. Please see the log below; I would greatly appreciate any advice.
  File "/usr/local/lib/python3.7/site-packages/sklearn/externals/joblib/pool.py", line 371, in send
    CustomizablePickler(buffer, self._reducers).dump(obj)
  File "/usr/local/lib/python3.7/site-packages/sklearn/externals/joblib/pool.py", line 240, in __call__
    for dumped_filename in dump(a, filename):
  File "/usr/local/lib/python3.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 484, in dump
    NumpyPickler(f, protocol=protocol).dump(value)
  File "/usr/local/lib/python3.7/pickle.py", line 437, in dump
    self.save(obj)
  File "/usr/local/lib/python3.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 278, in save
    wrapper.write_array(obj, self)
  File "/usr/local/lib/python3.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 93, in write_array
    pickler.file_handle.write(chunk.tostring('C'))
OSError: [Errno 28] No space left on device
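For context, joblib spills large arrays passed to workers into a scratch folder (the system temp dir by default, overridable via joblib's documented JOBLIB_TEMP_FOLDER environment variable), so Errno 28 can come from that filesystem even when the main data volume has plenty of room. A minimal stdlib sketch for checking the relevant free space:

```python
import shutil
import tempfile

# joblib memmaps large arrays it hands to worker processes into a scratch
# folder, which on some systems is a small tmpfs rather than the volume
# that holds the actual data.
def free_gib(path):
    """Free space in GiB on the filesystem containing `path`."""
    return shutil.disk_usage(path).free / 2**30

scratch = tempfile.gettempdir()
print(f"{scratch}: {free_gib(scratch):.1f} GiB free")
```

If that filesystem is small, pointing JOBLIB_TEMP_FOLDER at a larger volume before launching the job is one common workaround.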
Created by Ivan Brugere (ivanbrugere)

Hi @trberg, for now I ran it serially. Surprisingly, it completed in time on one core! The Python joblib backend can be tricky. I assume it was creating large temp files, but I'm not sure why, unless it's densifying my sparse matrix (that would definitely do it). If performance looks promising enough to warrant exploring hyperparameters further, I may revisit it.

Hi @ivanbrugere,
So we've looked into this and can't seem to find a cause. We ran a couple of test models and didn't see this error. We've also checked the available space on the NCATS server, and we aren't anywhere near out of space.
Is there another possible cause? What is happening at this point in your code? Do you have an estimate of the size of the file you're writing out?
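One stdlib way to answer the size question without writing anything to disk is to pickle into a counting sink. The `CountingSink`/`pickled_size` helpers below are hypothetical, not part of the submission code:

```python
import pickle

class CountingSink:
    """File-like object that counts bytes written instead of storing them."""
    def __init__(self):
        self.size = 0

    def write(self, data):
        self.size += len(data)
        return len(data)

def pickled_size(obj):
    """Bytes pickle.dump would write for `obj`, without touching disk."""
    sink = CountingSink()
    pickle.dump(obj, sink, protocol=pickle.HIGHEST_PROTOCOL)
    return sink.size

print(pickled_size(list(range(1_000_000))))
```

Note that joblib's numpy pickler writes raw array buffers (roughly `arr.nbytes` per array), so for numpy-heavy objects the true dump size can differ from a plain-pickle estimate.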
Thanks,
Tim
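Following up on the densification hypothesis in Ivan's reply: back-of-envelope arithmetic (with a hypothetical shape and density) shows how a dense float64 copy of a sparse matrix can dwarf its CSR form, which would easily exhaust a modest scratch partition:

```python
# Hypothetical shape and density, chosen only to show the scale of the blow-up.
rows, cols, density = 100_000, 50_000, 0.001
nnz = int(rows * cols * density)

# CSR storage: float64 data + int32 column index per nonzero, plus the indptr array.
sparse_bytes = nnz * (8 + 4) + (rows + 1) * 4
# Dense float64 copy: every cell materialized.
dense_bytes = rows * cols * 8

print(f"sparse ~{sparse_bytes / 2**30:.2f} GiB, dense ~{dense_bytes / 2**30:.2f} GiB")
```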