ljvmiranda921 / pyswarms

A research toolkit for particle swarm optimization in Python

Home Page: https://pyswarms.readthedocs.io/en/latest/


multiprocessing doesn't work as before

starfriend10 opened this issue · comments

Describe the bug
After a year of not running the same data/code (an ordinary deep learning optimization task), I used the same PC, system (Ubuntu), and Python (though some packages may have been updated or added), but the computation time has increased about sixfold. For example, last year running 100 particles cost about 5 minutes per iteration on average, but now it can cost 30 minutes per iteration.

To Reproduce
It's custom code, but it is generally based on DBN+PSO, optimizing 9 hyperparameters of a deep belief network with 4 hidden layers (using https://github.com/albertbup/deep-belief-network).
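For reference, the optimizer is driven roughly like the sketch below. This is not the original code: `evaluate_dbn` is a dummy stand-in for the actual DBN training/evaluation, and the PSO options and bounds are placeholders.

```python
import numpy as np
import pyswarms as ps

def evaluate_dbn(x):
    # x has shape (n_particles, 9): each row is one candidate set of the 9
    # DBN hyperparameters. The real code would train/evaluate the DBN here;
    # a dummy sphere cost is used as a stand-in so the sketch runs on its own.
    return np.sum(x ** 2, axis=1)

options = {"c1": 0.5, "c2": 0.3, "w": 0.9}   # placeholder PSO coefficients
bounds = (np.zeros(9), np.ones(9))           # placeholder hyperparameter bounds

optimizer = ps.single.GlobalBestPSO(
    n_particles=100, dimensions=9, options=options, bounds=bounds
)
# n_processes spreads the per-iteration particle evaluations over a multiprocessing pool
best_cost, best_pos = optimizer.optimize(evaluate_dbn, iters=50, n_processes=22)
```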

Expected behavior
I did a full comparison of the computation time (in seconds) for one iteration using 22 versus 11 processes, as shown below (np = n_processes):

| particles | np=22 | np=11 |
|----------:|------:|------:|
| 1 | 18 | 18 |
| 2 | 7 | 61 |
| 3 | 44 | 82 |
| 4 | 29 | 96 |
| 5 | 68 | 81 |
| 10 | 193 | 183 |
| 15 | 300 | 268 |
| 20 | 241 | 407 |
| 25 | 484 | 499 |
| 30 | 497 | 552 |
| 40 | 732 | 778 |
| 60 | 1163 | 1086 |
| 100 | 2393 | 1987 |

You can see that doubling the number of processes does not reduce the time (there can be some variation due to different random sampling of epochs), which is strange. If anyone has encountered a similar problem or knows why this happens, your help would be greatly appreciated!
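For anyone wanting to reproduce this kind of comparison, here is a minimal timing sketch. It reuses the dummy `evaluate_dbn` stand-in from the sketch above and only a subset of the particle counts in the table; it is not the original benchmarking code.

```python
import time
import numpy as np
import pyswarms as ps

def evaluate_dbn(x):
    # Stand-in objective (see the sketch above); the real code trains the DBN.
    return np.sum(x ** 2, axis=1)

def time_one_iteration(n_particles, n_processes):
    # Run a single PSO iteration and return the wall-clock time in seconds.
    optimizer = ps.single.GlobalBestPSO(
        n_particles=n_particles, dimensions=9,
        options={"c1": 0.5, "c2": 0.3, "w": 0.9},
    )
    start = time.perf_counter()
    optimizer.optimize(evaluate_dbn, iters=1, n_processes=n_processes)
    return time.perf_counter() - start

for n in (1, 5, 10, 25, 100):
    print(f"{n:4d} particles: np=22 -> {time_one_iteration(n, 22):.1f}s, "
          f"np=11 -> {time_one_iteration(n, 11):.1f}s")
```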

Environment (please complete the following information):

  • OS: Linux
  • Version: Ubuntu 18.04.6 LTS
  • PySwarms Version: 1.1.0
  • Python Version: 3.7.4

Additional context
I also tried pathos as suggested in #450, but the runtime was even longer; when passing `nodes` to `ProcessPool`, I got the error "invalid index to scalar variable".
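For anyone attempting the same workaround, below is a minimal sketch of doing the pathos parallelism inside the objective function itself rather than through pyswarms' `n_processes`. This is not the code from #450: `evaluate_one` is a hypothetical stand-in for the per-particle DBN evaluation.

```python
import numpy as np
from pathos.pools import ProcessPool

def evaluate_one(particle):
    # Hypothetical per-particle cost: `particle` is a 1-D array of the 9
    # hyperparameters. A dummy sphere cost stands in for DBN training here.
    return float(np.sum(particle ** 2))

def evaluate_swarm(positions, nodes=11):
    # `positions` has shape (n_particles, 9). Map the per-particle cost over a
    # pathos process pool and return an array of shape (n_particles,), which is
    # what pyswarms expects back from an objective function.
    pool = ProcessPool(nodes=nodes)
    try:
        costs = pool.map(evaluate_one, list(positions))
    finally:
        pool.close()
        pool.join()
    return np.array(costs)
```

Since this evaluates the whole swarm inside the objective, it would be passed as `optimizer.optimize(evaluate_swarm, iters=...)` without `n_processes`, so the two pools do not nest.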

commented

Is this still relevant? If so, what is blocking it? Is there anything you can do to help move it forward?

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.