VegasLimit without slow start is unusable
nvartolomei opened this issue · comments
Thank you for making this library public.
VegasLimit is almost unusable in its current implementation for any highly concurrent workload (CPU-bound with a high core count, or IO-bound) because the limit ramps up very slowly.
I see slow start was removed in #16 for being too aggressive and overshooting; my experiments with smoothing and exponential growth show good results. Another thing to note: Vegas has an additional parameter, gamma (default 1), that was not implemented here. When the estimated queue size exceeds gamma, Vegas stops slow start (setting a new ssthresh) and enters congestion avoidance. Some papers say to decrease the limit by 1/8 at that point; Linux does something interesting too: https://github.com/torvalds/linux/blob/master/net/ipv4/tcp_vegas.c#L219
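A minimal sketch of the idea, not the library's API: gamma gates slow start, and crossing it sets ssthresh and backs the limit off by 1/8 (the names gamma and ssthresh follow TCP Vegas; the exact update rule here is my assumption for illustration):

```go
package main

import "fmt"

// vegasLimit is a hypothetical, simplified limiter state.
type vegasLimit struct {
	limit    float64
	ssthresh float64 // slow-start threshold; exponential growth stops here
	gamma    float64 // queue-size threshold that ends slow start (default 1)
}

// update takes the Vegas queue estimate: queue = limit * (1 - minRTT/sampleRTT).
func (v *vegasLimit) update(queue float64) {
	switch {
	case queue > v.gamma:
		// Queuing detected: back off by 1/8 (as some papers suggest)
		// and remember ssthresh so slow start does not resume.
		v.limit -= v.limit / 8
		v.ssthresh = v.limit
	case v.limit < v.ssthresh:
		// Slow start: exponential growth, one doubling per update.
		v.limit *= 2
	default:
		// Congestion avoidance: additive increase.
		v.limit++
	}
}

func main() {
	v := &vegasLimit{limit: 2, ssthresh: 1 << 20, gamma: 1}
	for _, q := range []float64{0, 0, 0, 2, 0.5} {
		v.update(q)
		fmt.Printf("queue=%.1f -> limit=%.0f\n", q, v.limit)
	}
}
```

The trace doubles 2 → 4 → 8 → 16 while the queue is empty, backs off to 14 when queue > gamma, then switches to +1 per update.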
ymmv
No PRs yet, since I'm experimenting with this idea in Go at the moment.
I'm also looking at a Go implementation and am a bit worried about the slow start problem.
I've noticed the smoothing function here:
https://github.com/Netflix/concurrency-limits/blob/master/concurrency-limits-core/src/main/java/com/netflix/concurrency/limits/limit/VegasLimit.java#L174
It seems to wash out all changes. The limit is only ever incremented or decremented by 1, so the int truncation applied to this smoothing keeps it at the same value every time: trunc(0.8n + 0.2(n+1)) = trunc(n + 0.2) = n. I must be missing something here.