Reduce contention in timer driver
Darksonn opened this issue
I have often heard reports about performance issues due to large numbers of timers. Specifically, this is due to contention on the mutex in the timer driver.
Some ideas:
- If a `Sleep` instance is dropped before it is polled, its destructor will still lock the mutex to check whether it needs to be removed from the driver. This seems unnecessary, and will be a common case for timeouts.
- We could shard the timer into multiple instances with one per runtime thread.
- It's worth thinking about whether our timers are optimized for the case where most timers complete normally (often the case for `tokio::time::sleep`), or the case where most timers are cancelled before they complete (often the case for timeouts). Can we do anything to differentiate this in the driver?
- Perhaps a completely different implementation is appropriate?
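For context, a toy reproduction of the reported pattern, using only public tokio APIs (the task count is an arbitrary illustration): every registration and cancellation below goes through the single timer-driver mutex.

```rust
use std::time::Duration;
use tokio::time::timeout;

#[tokio::main]
async fn main() {
    let mut handles = Vec::new();
    for _ in 0..100_000 {
        handles.push(tokio::spawn(async {
            // The inner future finishes immediately, so the timer is
            // cancelled rather than fired -- the common timeout case.
            let _ = timeout(Duration::from_secs(1), async { 1 }).await;
        }));
    }
    for h in handles {
        h.await.unwrap();
    }
}
```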
Before spending a lot of time on a large change, please check with us beforehand by posting here or in Discord so that we can discuss whether your implementation idea is what we want.
PRs related to this issue:
I think sharding could go a long way with relatively little code. On `TimerEntry` creation, it acquires a random shard index, modulo the number of configured shards on the runtime. DashMap shards by default with `4 * num_cpus()`. I think starting with the number of worker threads would go a long way on its own in reducing the lock contention. Making it configurable later will help with any corner cases in the future.
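A minimal sketch of that shard-selection idea, not tokio's actual internals: the `ShardedTimer` and `Wheel` types here are made up, and a round-robin counter stands in for the random pick.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

struct Wheel; // stand-in for the real timer wheel

struct ShardedTimer {
    shards: Vec<Mutex<Wheel>>, // one independently locked wheel per shard
    next: AtomicUsize,         // counter used to spread entries across shards
}

impl ShardedTimer {
    // On timer-entry creation, pick a shard index modulo the number of
    // configured shards; each entry then only ever contends on its shard.
    fn pick_shard(&self) -> usize {
        self.next.fetch_add(1, Ordering::Relaxed) % self.shards.len()
    }
}
```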
On second thoughts, driving each shard adds more complexity...
Hi. I would like to provide a possible solution here.
I think for most web applications (or most applications, really), cancelling timeouts before they complete is the main source of performance issues.
(1) Background
In production environments, servers that handle network data requests often set timeouts for connections or requests.
See https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/
However, concurrent timeout operations across multiple threads can suffer from lock contention.
This is why some web servers use their own timeout crate instead of the timeout in tokio. See https://github.com/cloudflare/pingora/tree/main/pingora-timeout
(2) Solution
We can assign a `TimeHandle` or `Inner` in `time/mod.rs` to each worker thread. Then we move multiple `TimeHandle`s from `driver::Handle` to `worker::Shared`, and each worker thread is assigned a `TimeHandle` in the manner of `remote: Box<[Remote]>`.
The details here can be further discussed and considered; this is only a preliminary idea :)
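A rough sketch of that layout, mirroring how the scheduler already stores one `Remote` per worker; the type and field names here only paraphrase the proposal and are not tokio's actual definitions.

```rust
use std::sync::Mutex;

struct Wheel; // stand-in for the real timer wheel

// One timer handle per worker thread, each behind its own lock.
struct TimeHandle {
    wheel: Mutex<Wheel>,
}

// worker::Shared already holds `remote: Box<[Remote]>`; the idea is to
// hold the per-worker timer handles the same way.
struct Shared {
    time_handles: Box<[TimeHandle]>,
}
```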
Yes, we want to make the timer thread-local for every worker thread, to reduce the lock contention.
Since `sleep`/`timeout` must be executed in the worker thread, each `sleep` and `timeout` can find its thread-local timer.
(3) Performance improvement
- There is no global lock for creating `sleep`s or `timeout`s. Each timer is thread-local.
- Because of work-stealing, a `sleep` or `timeout` might be dropped in a worker thread other than the one that created it. One worker thread then needs to lock the thread-local timer of another worker thread. But we can assume that the task distribution is sufficiently uniform, so the acquisition of locks is also uniform. For that reason, we may need an index to tell which timer the `TimerShared` comes from.
- On the other hand, as the background says, under high concurrency most timers are cancelled before they complete (often the case for timeouts), as in the following code, which is a situation encountered by most web servers. Because the create and the drop are both local, we get good performance.
- In the `Driver.park_timeout` method, we take all the timer locks to find the smallest `Instant`, which is used to calculate the timeout for polling the driver. Because only the driver does this, the lock contention here is small in scale (a sketch of this follows the example below).
```rust
use std::time::Duration;
use tokio::time::timeout;

#[tokio::main]
async fn main() {
    let fut = async { 1 }; // a very quick async task, but it might time out
    let _to = timeout(Duration::from_secs(1), fut).await;
}
```
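And a minimal sketch of the `park_timeout` idea above, computing the poll deadline across shards; the `Wheel` type and `next_expiry` field are assumptions, not tokio's real internals.

```rust
use std::sync::Mutex;
use std::time::Instant;

struct Wheel {
    next_expiry: Option<Instant>, // earliest pending deadline in this shard
}

// The parking thread briefly locks each shard in turn and takes the
// earliest deadline across all of them as its sleep target.
fn next_wakeup(shards: &[Mutex<Wheel>]) -> Option<Instant> {
    shards
        .iter()
        .filter_map(|shard| shard.lock().unwrap().next_expiry)
        .min()
}
```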
> Since `sleep`/`timeout` must be executed in the worker thread, each `sleep` and `timeout` can find its thread-local timer.
This is not true. You can spawn a separate thread for the tokio runtime and register/poll the timeout on a different thread by using Handle::enter() on that thread.
This is how https://github.com/smol-rs/async-compat works
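A minimal demonstration of that caveat (here via `Handle::block_on`, which enters the runtime context internally, much like holding a `Handle::enter()` guard): the timer is created and polled on a plain OS thread, not a runtime worker.

```rust
use std::time::Duration;
use tokio::runtime::Runtime;

fn main() {
    let rt = Runtime::new().unwrap();
    let handle = rt.handle().clone();

    // A plain OS thread: not a worker, but the handle makes the sleep
    // register with the runtime's timer driver, while the future itself
    // is polled right here on this thread.
    let t = std::thread::spawn(move || {
        handle.block_on(async {
            tokio::time::sleep(Duration::from_millis(10)).await;
        });
    });
    t.join().unwrap();
}
```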
Yes, your view is correct. More precisely, `sleep` and `timeout` require the handle in the TLS.
I think we can achieve our goal this way:
- Assigning a thread-local timer to each worker thread can make the poll-timer logic straightforward.
- For those `sleep`s and `timeout`s created in other threads, we can determine which timer to place them in based on the calculated `Instant` (sharding).

The advantage of doing so is that there is no need to introduce the additional complexity of sharding.
Always using the `Instant` for sharding is another solution I came up with, but due to the loss of thread locality, the cost of lock contention might be higher. I prefer the first option, but the second option is also acceptable.
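If it helps, a sketch of what deadline-based shard selection could look like; the reference `start` instant and the millisecond granularity are assumptions of mine, not part of the proposal.

```rust
use std::time::Instant;

// Derive a shard index from the timer's deadline, so that timers created
// off the runtime still map deterministically onto some shard.
fn shard_for_deadline(start: Instant, deadline: Instant, num_shards: usize) -> usize {
    let ticks = deadline.duration_since(start).as_millis();
    (ticks as usize) % num_shards
}
```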
Here's what I'm thinking. We shard the timer into several shards. When timers are created from within the runtime, they register with whichever timer shard is associated with the current runtime thread. Timers from outside the runtime pick a timer shard randomly.
When a timer is cancelled before completion, it must lock the shard it was registered with. That may or may not be the one on the current thread, depending on whether it has been moved across threads. It could be worth asking someone with a running production system to instrument their timers to see how often that happens.
Anyway, when we poll the IO/timer driver, it's generally done on just one thread. To start with, I don't think we should try to change that. The driver can just lock all shards one after the other, get the minimum timer from each one, and use that for its sleep duration.
I'd like to avoid locking the timer driver mutex in the drop of `Sleep` when not necessary. For example, if the `Sleep` has not yet been polled, or if it has completed, then I think we can probably avoid taking the mutex.
None of this should happen in one PR. I think these changes can be done incrementally.
I would like to solve this issue through a few incremental PRs. It might start with reducing the unnecessary lock contention when timers are cancelled before they complete (often the case for timeouts).
That's great! Specifically, what change do you intend to start with in the first PR?
Hi. I think the most pressing issue currently is that even if a timeout has never been registered with the driver, its cancellation still takes the lock. This will be my first change.
I agree. That would also be my first change. Go for it. :)
I tried removing the lock call on drop, but it wasn't super trivial due to the happens-before requirements; Loom was not happy. The not-yet-polled state can be hardcoded. The deregistered state could perhaps be set when the sleep returns `Ready`, though I'm not sure that will solve the Loom problems, since that path also only reads the deregistered state atomic.
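For illustration, a sketch of the intended fast path; the state constant, field layout, and locking details here are hypothetical, not tokio's actual `TimerEntry`.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};

const STATE_UNREGISTERED: usize = 0; // entry was never handed to the driver

struct Wheel; // stand-in for the real timer wheel

struct TimerEntry {
    state: AtomicUsize,
    driver: Arc<Mutex<Wheel>>,
}

impl Drop for TimerEntry {
    fn drop(&mut self) {
        // Acquire pairs with the Release store done at registration, so
        // we only skip the lock when the driver can never have seen us.
        if self.state.load(Ordering::Acquire) == STATE_UNREGISTERED {
            return; // never registered: nothing to remove, no lock taken
        }
        let _wheel = self.driver.lock().unwrap();
        // ...remove the entry from the wheel while holding the lock...
    }
}
```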
> Here's what I'm thinking. We shard the timer into several shards. When timers are created from within the runtime, they register with whichever timer shard is associated with the current runtime thread. Timers from outside the runtime pick a timer shard randomly.
>
> When a timer is cancelled before completion, it must lock the shard it was registered with. That may or may not be the one on the current thread, depending on whether it has been moved across threads. It could be worth asking someone with a running production system to instrument their timers to see how often that happens.
>
> Anyway, when we poll the IO/timer driver, it's generally done on just one thread. To start with, I don't think we should try to change that. The driver can just lock all shards one after the other, get the minimum timer from each one, and use that for its sleep duration.
>
> I'd like to avoid locking the timer driver mutex in the drop of `Sleep` when not necessary. For example, if the `Sleep` has not yet been polled, or if it has completed, then I think we can probably avoid taking the mutex. None of this should happen in one PR. I think these changes can be done incrementally.
This strategy sounds good to me.