Type error on version 0.8.0
elrik75 opened this issue
My `pytest` tests fail since I upgraded `starlette_exporter` to 0.8.0 (from 0.7.0). I use FastAPI.
Here is the error:
```
    labels = [method, path, status_code, self.app_name]
    self.request_count.labels(*labels).inc()
>   self.request_time.labels(*labels).observe(end - begin)
E   TypeError: unsupported operand type(s) for -: 'NoneType' and 'float'

venv/lib/python3.7/site-packages/starlette_exporter/middleware.py:105: TypeError
```
It seems that there are cases where `end` is never set.
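To illustrate, here is a minimal, self-contained sketch of the failure mode; the names and control flow are simplified stand-ins, not the middleware's actual code. If the wrapped call raises before `end` is assigned, the subtraction hits `None - float`:

```python
import asyncio
import time

async def timed_call(app):
    """Simplified timing wrapper mirroring the reported pattern."""
    begin = time.perf_counter()
    end = None  # set only when the call completes normally
    try:
        await app()
        end = time.perf_counter()  # skipped if app() raises
    finally:
        # If app() raised, `end` is still None, so this subtraction
        # fails with the same TypeError reported above.
        print(f"observed {end - begin:.6f}s")

async def failing_app():
    raise RuntimeError("handler failed before the response completed")

try:
    asyncio.run(timed_call(failing_app))
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for -: 'NoneType' and 'float'
```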
The full trace:
```
venv/lib/python3.7/site-packages/requests/sessions.py:590: in post
    return self.request('POST', url, data=data, json=json, **kwargs)
venv/lib/python3.7/site-packages/starlette/testclient.py:431: in request
    json=json,
venv/lib/python3.7/site-packages/requests/sessions.py:542: in request
    resp = self.send(prep, **send_kwargs)
venv/lib/python3.7/site-packages/requests/sessions.py:655: in send
    r = adapter.send(request, **kwargs)
venv/lib/python3.7/site-packages/starlette/testclient.py:243: in send
    raise exc from None
venv/lib/python3.7/site-packages/starlette/testclient.py:240: in send
    loop.run_until_complete(self.app(scope, receive, send))
venv/lib/python3.7/site-packages/nest_asyncio.py:70: in run_until_complete
    return f.result()
/usr/lib/python3.7/asyncio/futures.py:178: in result
    raise self._exception
/usr/lib/python3.7/asyncio/tasks.py:223: in __step
    result = coro.send(None)
venv/lib/python3.7/site-packages/fastapi/applications.py:199: in __call__
    await super().__call__(scope, receive, send)
venv/lib/python3.7/site-packages/starlette/applications.py:112: in __call__
    await self.middleware_stack(scope, receive, send)
venv/lib/python3.7/site-packages/starlette/middleware/errors.py:181: in __call__
    raise exc from None
venv/lib/python3.7/site-packages/starlette/middleware/errors.py:159: in __call__
    await self.app(scope, receive, _send)
```
Hi @elrik75, thanks for reporting an issue. Can you give me a little more info about your endpoint?
In hindsight, initializing the `end` variable as `None` might not be a great idea; on the other hand, I think `0` or a value from `time.perf_counter()` would silently give misleading results.
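One defensive option is to fall back to the current time whenever `end` was never recorded, so the histogram observes the elapsed time up to the failure instead of crashing. A sketch of that idea follows; it is hypothetical and not necessarily what the actual fix does, and `request_time`, `observe_duration`, and `labels` are illustrative stand-ins for the middleware's own objects:

```python
import time
from prometheus_client import Histogram

# Hypothetical metric mirroring the middleware's histogram.
request_time = Histogram(
    "request_duration_seconds", "Request duration",
    ["method", "path", "status_code", "app_name"],
)

def observe_duration(begin, end, labels):
    # If the response never completed, `end` was never set; measure up
    # to now rather than raising on None - float.
    if end is None:
        end = time.perf_counter()
    request_time.labels(*labels).observe(end - begin)
```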
I published a simple fix in 0.8.1. Please let me know if this fixes your issue.
Hi! Thanks a lot, my tests pass now without errors.
I've not checked the metric values yet. I'll post here if I see something weird on production.