alisaifee / flask-limiter

Rate Limiting extension for Flask

Home Page: https://flask-limiter.readthedocs.org

Limiter gives unexpected results when running with gunicorn

vulcan25 opened this issue · comments

EDIT: solved, see next comment.

Flask-Limiter==1.0.1
limits==1.3

I have Flask-Limiter set up on my /one endpoint, with a limit applied to the whole blueprint as per the docs:

limiter.limit('4/hour')(bp)

With this config I would expect four 200 responses, followed by 429s until an hour has passed.

With Flask running in dev mode, using werkzeug's run_simple:

from werkzeug.serving import run_simple
#[snip]
if __name__ == '__main__':
    run_simple('localhost', 9999, app, use_reloader=True, use_debugger=True, use_evalex=True)

I can issue the following command to hit the endpoint 20 times.

for var in `seq 20` ;
do curl -w '%{http_code}-' http://localhost:9999/one -o /dev/null -s;
done

The following is the output of this command run 5 times, restarting the dev server between runs. (This is because I'm using Flask-Limiter's default in-memory storage rather than Redis, etc.; restarting the dev server resets the rate limits.)

200-200-200-200-429-429-429-429-429-429-429-429-429-429-429-429-429-429-429-429-%
200-200-200-200-429-429-429-429-429-429-429-429-429-429-429-429-429-429-429-429-%
200-200-200-200-429-429-429-429-429-429-429-429-429-429-429-429-429-429-429-429-%
200-200-200-200-429-429-429-429-429-429-429-429-429-429-429-429-429-429-429-429-%
200-200-200-200-429-429-429-429-429-429-429-429-429-429-429-429-429-429-429-429-%

This is what I expected: four 200 responses and the rest 429s.

However, when I launch the same application with gunicorn (again restarting it between tests):

gunicorn --bind 0.0.0.0:9999 --workers 4 run:app   --log-level debug        

The results are as follows.

200-200-200-200-200-429-200-200-200-200-200-200-200-200-429-429-200-429-429-429-%
200-200-200-200-200-200-200-200-200-429-429-429-429-429-429-429-200-429-200-200-%
200-200-200-200-200-200-200-200-200-200-200-200-200-200-429-429-200-429-429-429-%
200-200-200-200-200-200-200-429-200-200-200-200-200-200-200-429-429-200-429-429-%
200-200-200-200-200-200-200-200-200-200-429-200-200-429-200-429-429-429-429-429-%

Clearly the 4/hour rate limit is not being enforced, and whether I get a 429 or a 200 seems to be quite random.

I ran another test without restarting gunicorn between runs. With 100 requests issued as above, the output eventually tends towards all 429s:

200-200-200-200-200-200-200-200-429-200-429-200-429-200-200-200-200-200-429-429-%
429-429-429-429-429-429-429-200-429-429-429-429-429-429-429-429-429-429-429-429-%
[snip - 3 more rows, all 429]

I tried hacking the Flask Limiter code to output more info to gunicorn's debug console.

I added a detail method to wrappers.Limit to output some information (it uses pprint, imported at the top of the module):

def detail(self):
    # Dump the limit's configuration for debugging
    d = {'limit': self.limit,
         '__scope': self.__scope,
         'per_method': self.per_method,
         'methods': self.methods,
         'error_message': self.error_message,
         'exempt_when': self.exempt_when}
    return pprint.pformat(d)

Then I updated extension.py, adding a loop at the top of the __evaluate_limits function, which I understand to be where the limits are enforced:

def __evaluate_limits(self, endpoint, limits):
    for limit in limits:
        print(limit.detail())
    failed_limit = None
    ...

This at least confirms that the string '4/hour' is being parsed correctly:

{'__scope': None,
 'error_message': None,
 'exempt_when': None,
 'limit': 4 per 1 hour,
 'methods': None,
 'per_method': False}

For better diagnostics, I suspect my function should pull information from somewhere else in the codebase, but I don't know where. I'm also not sure whether some gunicorn configuration value is required, or whether some sort of internal caching is causing this?

Okay, so I figured this out. Running gunicorn with --workers 4 means each worker is a separate process with its own memory, so each worker keeps its own private rate-limit counters, hence the random results.
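To illustrate (a stdlib sketch with hypothetical names, not Flask-Limiter code): each worker process holds a private copy of the in-memory storage, so 20 requests spread across 4 workers can pass up to 4 requests *per worker*, not 4 in total. Assuming, for simplicity, that connections are handed out roughly round-robin:

```python
LIMIT = 4       # the "4/hour" limit
WORKERS = 4     # gunicorn --workers 4
REQUESTS = 20

# One independent counter per worker, like one memory:// store per process
counters = [0] * WORKERS

statuses = []
for i in range(REQUESTS):
    worker_id = i % WORKERS          # simplified round-robin dispatch
    counters[worker_id] += 1         # only this worker's counter moves
    statuses.append(200 if counters[worker_id] <= LIMIT else 429)

print(statuses.count(200))  # 16 requests pass, not the intended 4
```

In reality gunicorn's dispatch is not a strict round-robin, which is why the real output above looks random rather than neatly interleaved, but the total number of 200s still overshoots the limit in the same way.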

do you have some fix to get it working?

use memcached or redis?
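For reference, pointing Flask-Limiter at a shared backend is a small change, assuming a Redis server running on localhost:6379 and the redis client package installed (the storage_uri value follows the limits package's URI scheme; the constructor signature shown is from recent Flask-Limiter releases, while 1.x took the app as the first argument):

```python
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)

# All gunicorn workers now share one set of counters in Redis
limiter = Limiter(
    get_remote_address,
    app=app,
    storage_uri="redis://localhost:6379",  # or "memcached://localhost:11211"
)
```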