scredis / scredis

Non-blocking, ultra-fast Scala Redis client built on top of Akka IO.

Home Page: https://scredis.github.io/scredis/

100+ Threads

eugenemiretsky opened this issue

I am running with the default config, and I can see 107 scredis threads - that seems like way too many.
I could not find a way to tune it. It seems like a PinnedDispatcher is used, so there will be a thread per actor, but I cannot find where the actors are created.

Stack trace report: https://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMjAvMDIvMTYvLS1hcGktaGgtY3huY2VlZjE0NWYtMDQ5ZC00YzkyLWJiMDctN2VlNzM4MzQ5NmIzLnR4dC0t&

[Screenshot: thread dump showing the scredis threads, 2020-02-16]

You created multiple Redis() clients; each of them creates its own threads, and that's why you have so many.

You can see, for example, that the scredis-10 threads belong to the 10th instance, and so on.
Your screenshot shows at least 13 instances of the Redis client, each with a few threads, which adds up to your 100+ threads.
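
For illustration, a minimal Scala sketch of the fix, assuming the standard zero-argument Redis() factory from the README (the SharedRedis wrapper is just an example name):

import scredis.Redis

// A single shared client instead of one Redis() per component or request.
// Each Redis() call brings up its own actor system and dispatcher threads,
// so reusing one instance keeps the thread count flat.
object SharedRedis {
  val redis: Redis = Redis()
}

// Elsewhere in the application:
//   import SharedRedis.redis
//   import redis.dispatcher   // the client's ExecutionContext, as in the README
//   redis.get("some-key").foreach(println)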

I am using a Redis cluster with about 20 nodes (10 masters, 10 slaves). So it seems like it is creating ~10 threads per master node? (I am reading only from the masters.)

Any recommendations on how to tune this? We have a large Redis cluster and end up with 1000 threads.

@kpbochenek Is this library still supported?
This issue makes it pretty much unusable for us.
[Screenshot: thread dump, 2020-03-30]

Hi @eugenemiretsky, could you try changing your config and see if it solves your issue?

Looking at the docs: https://scredis.github.io/scredis/Configuration.html

You can see the default dispatchers are:

io-dispatcher {
  executor = "thread-pool-executor"
  type = PinnedDispatcher
}

listener-dispatcher {
  executor = "thread-pool-executor"
  type = PinnedDispatcher
}

PinnedDispatcher creates a thread per actor.

Could you try with:

blocking-io-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 32
  }
  throughput = 1
}

With fixed-pool-size you can cap the maximum number of threads.

application.conf:

scredis {
  io {
    akka {
      io-dispatcher-path = "scredis.custom.cdispatcher"
      listener-dispatcher-path = "scredis.custom.cdispatcher"
      decoder-dispatcher-path = "scredis.custom.cdispatcher"
    }
  }
  custom.cdispatcher {
    type = Dispatcher
    executor = "thread-pool-executor"
    thread-pool-executor {
      fixed-pool-size = 32
    }
    throughput = 1
  }
}
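
No code change should be needed to pick this up, since scredis reads its settings from the scredis section of application.conf (see the Configuration page linked above). A rough sketch of how to double-check the effect, assuming the thread names keep the scredis prefix seen in the thread dump (Scala 2.13 for the collection converters):

import scala.jdk.CollectionConverters._

import scredis.Redis

object ThreadCountCheck extends App {
  // The client picks up the overridden dispatcher paths from application.conf.
  val redis: Redis = Redis()

  // (In a real check, issue a few commands first so all actors are spun up.)
  // Count live threads whose names start with "scredis". With the bounded
  // pool above this should stay around fixed-pool-size instead of growing
  // with the number of actors and cluster nodes.
  val scredisThreads = Thread.getAllStackTraces.keySet.asScala
    .count(_.getName.startsWith("scredis"))

  println(s"scredis threads: $scredisThreads")
}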

🤞

Thanks!
I looked at the code; a few questions:

  1. decoder-dispatcher-path doesn't seem to be used anywhere; the listener actor creates the decoder actor with akkaIODispatcherPath.
  2. IOActor (and to a lesser extent the listener actor) maintains a lot of state. Would the affinity dispatcher be a better match?
  3. I suspect that IOActors keep getting restarted (see my other post); I will confirm in the thread dump and may consider moving only those to a separate dispatcher.
  4. Decoder actors are stateless; would it make sense to run them on a separate dispatcher with increased throughput?

  1. Yes, I noticed that too; I need to fix it.
  2. What dispatcher do you mean?
  3. If you could provide me the exception thrown from IOActor I could take a look at what is wrong. I have no way to run a big Redis cluster and play with nodes going up and down. Maybe the client<->cluster protocol changed and needs updating in scredis, but I need to know what fails to be able to look at it.
  4. Yes, I took the snippet from the Akka docs page; in your case it should definitely be bigger (sketch below).
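
For what it's worth, once decoder-dispatcher-path is actually honoured (point 1), a decoder dispatcher tuned along the lines of points 2 and 4 might look roughly like this; the path name is illustrative, the affinity-pool-executor block is stock Akka config, and the numbers are untested guesses:

scredis {
  io.akka.decoder-dispatcher-path = "scredis.custom.decoder-dispatcher"

  custom.decoder-dispatcher {
    type = Dispatcher
    # Akka's affinity pool tries to keep an actor on the same thread;
    # a plain thread-pool-executor with a fixed pool would also work here.
    executor = "affinity-pool-executor"
    affinity-pool-executor {
      parallelism-min = 4
      parallelism-factor = 1.0
      parallelism-max = 16
    }
    # Decoder actors are stateless, so letting each one process more messages
    # before yielding the thread is reasonable.
    throughput = 100
  }
}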

@kpbochenek thanks for the response.

Why is a new ActorSystem created for every new connection? This is another reason for having so many threads (all the dispatchers and their threads are created for every new connection).

Never mind, I found the option; we were not using it.

@kpbochenek

As you have seen, I made a PR to create only one ActorSystem.
Even after that, we get a lot of threads. Our use case is many small instances (2 cores each) talking to a large Redis cluster (15 nodes). We end up with 6 actors per node (4 decoder, 1 listener, 1 IO), times 15 nodes = 90 actors, and therefore 90 threads, since a pinned dispatcher is used.
We switched to a regular thread pool and saw a huge performance increase (we are using 2x fewer instances now).
I feel like the default thread pool configs are optimized for a large instance talking to a single Redis, not a small instance talking to many.

I want to make a PR to fix it, but I don't want to mess with the default configs.

  1. I was thinking of allowing a decoder pool to be shared between Cluster Connection instances.
  2. Any other ideas on how to solve this? Or just add more documentation on how and why to override the thread pool configs (e.g. something like the sketch below)?
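
On the documentation idea, a hedged sketch of what a documented "many small instances, large cluster" profile could look like, reusing the path routing from the application.conf example above; the pool size and throughput are guesses for a 2-core box, not recommended defaults:

# Point io-dispatcher-path, listener-dispatcher-path and decoder-dispatcher-path
# (as in the application.conf example above) at one small shared pool:
scredis.small-instance-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    # a handful of threads per JVM instead of roughly 6 pinned threads per cluster node
    fixed-pool-size = 8
  }
  throughput = 10
}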

Thanks for working on that!

Yes, I think it would be ideal to state in the documentation that the default configuration works best for smaller clusters and that with bigger clusters you should adjust it accordingly. A clear example of how to override the config to avoid a large number of unnecessary threads would be a big win here.

Having two configurations in the documentation, one tuned for a small Redis cluster and one for a big one, would be ideal.

If anyone can contribute this change, that would be great, because I currently have no access to a big Redis cluster.

Gonna close this for now.