quickfixgo / quickfix

The Go FIX Protocol Library :rocket:

Home Page: https://www.quickfixgo.org/

Rate limiting requirement

smulube opened this issue · comments

Hi, we're currently in the middle of going live with a project that involves FIX connections to a very large stock exchange. We are currently testing in their UAT environment, but will be going live to production systems very shortly. In the exchange's UAT environment, clients using the trading facility are limited to no more than 10 msg/second/session; once we activate in production this limit will increase to 1000 msg/second/session (under the agreement we currently have in place with them). If we breach this limit when sending messages to the exchange, they temporarily disconnect us, which interrupts our processing and raises concerns about losing messages in that situation.

This is not so much a concern for production, as there is little danger of us breaching the 1000 msg/sec/session limit in the near future, but while testing in UAT we have been experiencing rate-limit disconnections despite the precautions we put in place to avoid calling quickfix.Send faster than the limit.

After looking at the library, what we think is happening is that when we call Send from our application code, quickfixgo buffers those messages in an internal slice of byte slices, which are then sent out via the writeLoop inside connection.go. Our suspicion is that this buffer sometimes accumulates enough messages that, when they are sent, they go out in a burst that pushes us over the limit and gets us disconnected.
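For context, here is a simplified illustration of the kind of channel-drained write loop being described (a hypothetical sketch, not the library's actual source): each queued message is written as soon as it is pulled from the channel, so anything that accumulated while an earlier write was in flight goes out back-to-back as a burst.

```go
package fixconn

import "io"

// Simplified, hypothetical write loop: it drains the outgoing channel and
// writes each message immediately, with nothing pacing the sends. Messages
// that queued up behind a slow write are flushed back-to-back as a burst.
func writeLoop(conn io.Writer, messageOut <-chan []byte) {
	for msg := range messageOut {
		if _, err := conn.Write(msg); err != nil {
			return
		}
	}
}
```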

We have forked the library and patched in a rate limit at the lowest level (i.e. inside connection.go) to ensure that we never accidentally burst over the specified limit. Would there be any interest in us cleaning up our fork and submitting a PR with these changes?

Here's an outline of how we've solved this on our side:

  1. Add a new session config value called MaxMessagesPerSecond to the internal.Settings struct, so that when building sessions from config we are able to specify per-environment/per-session limits.
  2. Inside the initiator or acceptor, after the connection is established and we spawn the goroutine that runs the writeLoop function in connection.go, we pass in the MaxMessagesPerSecond config value. Inside the loop, after we pull a message from the channel to send, we either proceed immediately (as is done at the moment) or use a rate limiter to ensure we never burst above the specified session limit (see the sketch after this list).
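For illustration, a minimal sketch of what a throttled write loop along these lines could look like, using golang.org/x/time/rate. The function shape and parameter names are assumptions made for the sketch, not quickfix's actual connection.go code; a value of 0 (or less) for maxMessagesPerSecond keeps the original unthrottled behavior.

```go
package fixconn

import (
	"context"
	"io"

	"golang.org/x/time/rate"
)

// writeLoop drains queued FIX messages and writes them to the connection,
// throttled so sends never exceed the configured per-session rate.
// maxMessagesPerSecond <= 0 preserves the original unthrottled behavior.
// (Sketch only: names and signature are assumptions, not quickfix's code.)
func writeLoop(conn io.Writer, messageOut <-chan []byte, maxMessagesPerSecond int) {
	var limiter *rate.Limiter
	if maxMessagesPerSecond > 0 {
		// A burst of 1 means queued messages can never be flushed
		// faster than the configured rate.
		limiter = rate.NewLimiter(rate.Limit(maxMessagesPerSecond), 1)
	}

	for msg := range messageOut {
		if limiter != nil {
			// Block until the limiter grants permission to send.
			if err := limiter.Wait(context.Background()); err != nil {
				return
			}
		}
		if _, err := conn.Write(msg); err != nil {
			return
		}
	}
}
```

A burst of 1 is deliberately conservative: it spaces every send evenly rather than letting short bursts through at the average rate, which is the safer choice against an exchange that disconnects on any breach.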

These changes seemed to work well in testing - we have stopped seeing any rate-limit disconnections. Is this a change that would be of interest as a pull request back here, as I can't believe we're the only people to face this restriction, or should we just continue to maintain our own fork?

many thanks

@smulube please PR this back here, this is definitely something of interest!

If you're going to proceed with this, please allow 0 to default to the original unlimited behavior.

> @smulube please PR this back here, this is definitely something of interest!

OK, we'll clean things up on our side and open a PR back here for you to have a look at 👍

> If you're going to proceed with this, please allow 0 to default to the original unlimited behavior.

Yep, this is exactly how we implemented it; we must have had the same thought process around this 😄

Pull request opened for review here #506

That works for backing off, but usually you then want to apply backpressure further upstream to prevent your producer from building a deeper and deeper backlog. If it's market data, you might want to coalesce updates at the instrument level and probably prioritise the latest ones. If it's order data, you need to rethink your approach if you can't handle the flow. Using a goroutine with an atomic queue depth is pretty much what I've done for a basic handler too, but it's quite naive in the long run and I know I will need something smarter.
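As a rough sketch of the market-data coalescing idea (the names and types below are illustrative assumptions, not part of quickfix): keep only the latest unsent update per instrument, so a slow, rate-limited sender drains a bounded set of fresh quotes instead of an ever-growing backlog of stale ones.

```go
package fixconn

import "sync"

// Quote is a hypothetical market-data update keyed by instrument symbol.
type Quote struct {
	Symbol string
	Bid    float64
	Ask    float64
}

// Conflator keeps only the most recent unsent update per instrument, so a
// slow downstream sender drains fresh quotes rather than a growing backlog.
type Conflator struct {
	mu      sync.Mutex
	latest  map[string]Quote
	pending []string // symbols with an update waiting to be sent
}

func NewConflator() *Conflator {
	return &Conflator{latest: make(map[string]Quote)}
}

// Push records an update, overwriting any unsent update for the same symbol.
func (c *Conflator) Push(q Quote) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, waiting := c.latest[q.Symbol]; !waiting {
		c.pending = append(c.pending, q.Symbol)
	}
	c.latest[q.Symbol] = q
}

// Pop returns the oldest pending symbol's latest update, or false if
// nothing is waiting to be sent.
func (c *Conflator) Pop() (Quote, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.pending) == 0 {
		return Quote{}, false
	}
	sym := c.pending[0]
	c.pending = c.pending[1:]
	q := c.latest[sym]
	delete(c.latest, sym)
	return q, true
}
```

The sending goroutine would call Pop in its loop, applying the same rate limiter before each write, so backpressure shows up as conflation rather than as an unbounded queue.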