Jeffail / tunny

A goroutine pool for Go

Dynamically increase number of workers

pjebs opened this issue · comments

commented

How can I dynamically increase the number of workers?

Hey @pjebs, I've been thinking about this and there are a number of ways this could be implemented depending on the use case. What in particular would you need a dynamic pool for? Any pseudocode examples would be great.

commented

This is my use-case:

https://github.com/pjebs/beanstalkd-worker

I have a constant, NUM_WORKERS_MULTIPLIER, which dictates how many workers get generated:
workers := make([]tunny.TunnyWorker, runtime.GOMAXPROCS(runtime.NumCPU())*NUM_WORKERS_MULTIPLIER)

I was just wondering whether it's possible to dynamically increase/decrease this number, so that if I measure CPU usage I can lower the number of workers, and vice versa.
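
For illustration, here is a minimal sketch of that kind of CPU-driven resizing, using the SetSize/GetSize methods from the rewrite discussed later in this thread; autoScale and cpuLoad are hypothetical names, and cpuLoad stands in for whatever load measurement you use:

// Hypothetical sketch: periodically resize the pool based on measured
// CPU load. cpuLoad() is a stand-in for a real measurement (e.g.
// sampling /proc/stat) and is assumed to return a 0.0-1.0 figure.
func autoScale(pool *tunny.Pool, maxWorkers int) {
    for range time.Tick(10 * time.Second) {
        switch size := pool.GetSize(); {
        case cpuLoad() > 0.8 && size > 1:
            pool.SetSize(size - 1) // under pressure: shed a worker
        case cpuLoad() < 0.5 && size < maxWorkers:
            pool.SetSize(size + 1) // headroom: add a worker
        }
    }
}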

commented

Good library by the way!

+1

@Jeffail An additional use case for this would be querying an API that allows n concurrent connections, where n changes based on the load the API is currently under and is reported with each API response.
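
With a resizable pool, that use case could be handled by feeding the reported limit straight back into the pool after each call; a sketch, assuming the limit arrives in a (made-up) X-Concurrency-Limit response header:

// Hypothetical sketch: the API reports how many concurrent
// connections it will currently accept; apply it to the pool.
func callAPI(pool *tunny.Pool, url string) error {
    resp, err := http.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if n, err := strconv.Atoi(resp.Header.Get("X-Concurrency-Limit")); err == nil && n > 0 {
        pool.SetSize(n)
    }
    return nil
}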

Also +1. I want to use this lib for managing external workers. I need to handle the situation where a worker dies, or where I need to dynamically connect more workers to process higher traffic.

If someone knows a good alternative with the ability to change concurrency, please write a comment.

Hey everyone, the tunny API couldn't have handled this at all gracefully without a significant restructure, so I've performed a full rewrite that adds this along with a few other fixes and QOL changes. It's currently on a branch found here: https://github.com/Jeffail/tunny/tree/feature/refactor-api
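
For anyone evaluating the branch, a minimal sketch of the reworked API as it eventually landed on master; pool sizing is exposed directly through SetSize:

package main

import (
    "runtime"

    "github.com/Jeffail/tunny"
)

func main() {
    // Each worker runs the given closure on the payloads it receives.
    pool := tunny.NewFunc(runtime.NumCPU(), func(payload interface{}) interface{} {
        return payload // do the real work here
    })
    defer pool.Close()

    result := pool.Process("hello") // blocks until a worker picks it up
    _ = result

    // Resize the pool at runtime; this is the dynamic behaviour
    // requested in this issue.
    pool.SetSize(2 * runtime.NumCPU())
}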

I'm going to tag the current state of tunny at 0.0.1 and eventually merge this new API into master under the tag 0.1.0. Obviously this will potentially have a huge impact on users who haven't vendored, so I'm interested in hearing any advice on doing this gracefully, if it's even possible.

commented

Not possible.

I'm giving it a week or so before committing the changes to master, and have added an issue for guiding anyone who is impacted: #19.

I also ran a quick benchmark to check for performance regression:

BenchmarkOld-8            100000             18496 ns/op
BenchmarkNew-8            200000              6373 ns/op

So we're looking good, as expected really, since I'm no longer using reflection.
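
The benchmark source isn't shown in the thread; for reference, a sketch of the sort of harness that would produce numbers like the above against the new API (the workload is a deliberate no-op so dispatch overhead dominates):

package tunny_test

import (
    "testing"

    "github.com/Jeffail/tunny"
)

func BenchmarkNew(b *testing.B) {
    pool := tunny.NewFunc(8, func(in interface{}) interface{} {
        return in // trivial payload: measures pool overhead only
    })
    defer pool.Close()

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        pool.Process(i)
    }
}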

This has now been merged into master.