```
# get the task runner, passing the first data loader
for col in collaborator_data_loaders:
    # Insert logic to serialize train / val CSVs here
    transformed_csv_dict[col]['train'].to_csv(os.path.join(work, 'seg_test_train.csv'))
    transformed_csv_dict[col]['val'].to_csv(os.path.join(work, 'seg_test_val.csv'))
    task_runner = copy(plan).get_task_runner(collaborator_data_loaders[col])

if use_pretrained_model:
    print('Loading pretrained model...')
    if device == 'cpu':
        checkpoint = torch.load(f'{root}/pretrained_model/resunet_pretrained.pth', map_location=torch.device('cpu'))
        task_runner.model.load_state_dict(checkpoint['model_state_dict'])
        task_runner.optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    else:
        checkpoint = torch.load(f'{root}/pretrained_model/resunet_pretrained.pth')
        task_runner.model.load_state_dict(checkpoint['model_state_dict'])
        task_runner.optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
```
From the code snippet above, I wonder why the task_runner objects created inside the for-loop are not all used again in the subsequent logic that loads the model and optimizer.
It seems that only the last task_runner object is used in that later step.
I am asking because the detailed processes are wrapped inside open-source packages, and it is hard to follow them directly from the code outside of those packages.
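To make the concern concrete, here is a minimal sketch (the class and names are made up for illustration, not OpenFL's API) of the Python behaviour I mean: assigning inside the loop simply rebinds the variable, so only the object from the last iteration is visible afterwards.

```
# Hypothetical illustration only -- not the actual OpenFL / FeTS code.
class DummyTaskRunner:
    def __init__(self, name):
        self.name = name

task_runner = None
for col in ['site_1', 'site_2', 'site_3']:
    # each iteration rebinds the name task_runner; after the loop,
    # only the object from the last iteration is still referenced here
    task_runner = DummyTaskRunner(col)

print(task_runner.name)  # prints 'site_3'
```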
I reached out to the devs, and in summary:
- The get_task_runner function initializes the objects for the collaborators
  (https://github.com/securefederatedai/openfl/blob/41b175eaa361d98a2f670c9a90e920162416db6e/openfl/federated/plan/plan.py#L415,
  https://github.com/securefederatedai/openfl/blob/41b175eaa361d98a2f670c9a90e920162416db6e/openfl/federated/task/runner_pt.py#L300).
- The task_runner defines a single model and optimizer, which are also used for the training and aggregation processes; they are not initialized separately for each collaborator. The exact position can be found at this link:
[Code snippet](https://github.com/FeTS-AI/Challenge/blob/a7dda8abc6adbd7f700569e36638c2fc8d9fb188/Task_1/fets_challenge/experiment.py#L286)
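If I am reading the devs' answer correctly, the point is that there is effectively one model and one optimizer rather than one per collaborator. Here is a toy sketch of that idea (ToyTaskRunner and the shared objects are hypothetical, not the real OpenFL classes):

```
import torch

# Toy sketch only -- hypothetical classes, not the real OpenFL implementation.
shared_model = torch.nn.Linear(4, 2)
shared_optimizer = torch.optim.SGD(shared_model.parameters(), lr=0.01)

class ToyTaskRunner:
    def __init__(self, data_loader):
        self.data_loader = data_loader
        self.model = shared_model          # same object for every runner
        self.optimizer = shared_optimizer  # same object for every runner

runners = [ToyTaskRunner(dl) for dl in ('loader_a', 'loader_b')]
# there is one set of weights, not one per collaborator, so loading a
# checkpoint through any runner affects the weights used everywhere
assert runners[0].model is runners[1].model
```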
After this exchange, I came to understand that the loop is there to prepare the datasets so they can be loaded into RAM.
But I am still not sure why RAM usage goes up so high.
That is why I am asking this question: the cause of the high memory usage is the part I am most curious about.
If someone who knows this well could answer, it would be much appreciated.
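In case it helps narrow this down, here is a small sketch of how the growth could be measured with psutil (log_rss is just a helper name I made up; the commented-out loop refers to the snippet at the top):

```
import os
import psutil

def log_rss(tag):
    # print this process's resident set size so growth per collaborator is visible
    rss_gb = psutil.Process(os.getpid()).memory_info().rss / 1024 ** 3
    print(f'[{tag}] RSS = {rss_gb:.2f} GB')

log_rss('baseline')
# for col in collaborator_data_loaders:    # loop from the snippet above
#     ...
#     task_runner = copy(plan).get_task_runner(collaborator_data_loaders[col])
#     log_rss(col)                          # how much did this collaborator add?
```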
Thank you!

Could you point me to which script and line you found this in?
Also, what's the exact problem you think this is causing?