spatie / async

Easily run code asynchronously

Home Page: https://spatie.be/en/opensource/php


Why is it slow?

Vicftz opened this issue

Hello everyone!
This package looks awesome, but I have a few problems with it, especially with the speed of the parallelism! Here is my test, with Laravel 8.7:

use Spatie\Async\Pool;

// Baseline: run the loop synchronously.
$start = now();
$number = 100;
echo 'Start of script <br><br>';
foreach (range(1, $number) as $i) {
    echo $i;
}
echo '<br> Time without parallelism : ' . now()->diffInMicroseconds($start) . ' microseconds <br>';

// Same loop, but each iteration runs in its own async process.
$start = now();
$pool = Pool::create();
foreach (range(1, $number) as $i) {
    $pool->add(function () use ($i) {
        return $i;
    })->then(function (int $output) {
        echo $output;
    });
}
$pool->wait();
echo '<br> Time with parallelism : ' . now()->diffInMicroseconds($start) . ' microseconds';

And here is the result:

Start of script

12345678910111213 ....
Time without parallelism : 16 microseconds
42137961551112810 ...
Time with parallelism : 707905 microseconds

So the code without parallelism is 44187 times faster than with parallelism, and that's with a loop of only 100 iterations.
Did I make a mistake?

Thanks for your help.

commented

I am getting the same kind of result.

commented

This is the overhead of creating PHP processes. Please see #97:
That's rule number one: this package is only useful if you're dealing with several tasks which take at least a few seconds each to process.
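
To put numbers on that: the benchmark above spent 707905 microseconds on 100 tasks, so roughly 7 ms of process startup per task, which dwarfs tasks that themselves take microseconds. Here is a minimal sketch (my own illustration, using the same Pool API as the benchmark above) where the work takes seconds and the overhead becomes negligible:

use Spatie\Async\Pool;

// Twenty tasks of ~2 s each: sequentially this would take ~40 s,
// but a pool of 20 concurrent processes finishes in roughly 2-3 s.
$start = microtime(true);

$pool = Pool::create()->concurrency(20);

foreach (range(1, 20) as $i) {
    $pool->add(function () use ($i) {
        sleep(2); // stand-in for real work that takes seconds
        return $i;
    })->then(function (int $output) {
        echo $output . ' ';
    });
}

$pool->wait();

echo PHP_EOL . 'Elapsed: ' . round(microtime(true) - $start, 2) . ' s' . PHP_EOL;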

The answer to your question: you are now running 100 instances of Laravel in the background, which is insane.
I had a similar problem. I ended up extending the Pool class and changing everything so that only one instance of Laravel boots, and I communicate through that one instance. That's a lot faster, but it took me a month to implement. It is possible, though, so if you really need it, go with my approach.
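
The commenter didn't share their implementation, but the general idea can be sketched: boot the heavy framework once in a long-lived worker process and stream jobs to it over pipes, instead of paying the bootstrap cost per task. Below is a minimal sketch using plain proc_open; the worker.php name and the JSON-lines protocol are illustrative assumptions, not the commenter's actual code:

// worker.php (hypothetical): bootstraps once, then serves jobs line by line.
// require __DIR__ . '/vendor/autoload.php'; // pay the bootstrap cost a single time
while (($line = fgets(STDIN)) !== false) {
    $job = json_decode($line, true);
    $result = $job['i'] * 2; // stand-in for real work
    fwrite(STDOUT, json_encode(['result' => $result]) . "\n");
}

// parent script (hypothetical): one warm child process handles all 100 jobs.
$descriptors = [0 => ['pipe', 'r'], 1 => ['pipe', 'w']];
$process = proc_open('php worker.php', $descriptors, $pipes);

foreach (range(1, 100) as $i) {
    fwrite($pipes[0], json_encode(['i' => $i]) . "\n");
    $reply = json_decode(fgets($pipes[1]), true);
    echo $reply['result'] . ' ';
}

fclose($pipes[0]);
fclose($pipes[1]);
proc_close($process);

To get parallelism back you would run several such workers side by side; the point is that each worker bootstraps once and then stays warm.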

Dear contributor,

Because this issue seems to have been inactive for quite some time now, I've automatically closed it. If you feel this issue deserves some attention from my human colleagues, feel free to reopen it.