Server freezes and CPU usage increases when using multiple workers
dinosaurtirex opened this issue
Is there an existing issue for this?
- I have searched the existing issues
Describe the bug
Hello! It's kind of a rare thing, but it has already happened maybe 4 times in my backend. Normally I run Sanic in production with these options:
sanic main:app -H -p --fast
So the --fast flag means Sanic creates as many workers as possible, one per available CPU core.
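Roughly, the programmatic equivalent looks like this (a minimal sketch; 0.0.0.0 and 8000 are placeholder values, not the redacted ones from the command above):

from sanic import Sanic

app = Sanic("app")

if __name__ == "__main__":
    # fast=True mirrors the --fast CLI flag: Sanic spawns one worker
    # per available CPU core instead of a fixed number.
    # Host and port here are placeholders, not my real values.
    app.run(host="0.0.0.0", port=8000, fast=True)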
Most of the time it works, but sometimes, when I reload the server, it just freezes the entire system and I don't know why.
This glitch shows up only with big, complicated projects like mine, but if you set something like 3-4 workers on 2 cores, you get exactly the same CPU usage spike.
Normally my project runs with 2 workers and uses 25-40% of the CPU. But sometimes, and I can't figure out when, CPU usage just climbs and the entire server freezes. The only way to fix it is to run on a single worker.
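Concretely, pinning the worker count instead of using --fast looks like this (a minimal sketch; the CLI equivalents are -w 1 or, on recent Sanic versions, --single-process; host and port are placeholders again):

from sanic import Sanic

app = Sanic("app")

if __name__ == "__main__":
    # workers=1 forces a single worker; on Sanic 22.9+ you can also pass
    # single_process=True to bypass the worker manager entirely.
    # Host and port are placeholder values.
    app.run(host="0.0.0.0", port=8000, workers=1)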
I don't know what it is or how to reproduce the error exactly, but it happens from time to time.
I'm not waiting for a fix; I'm just reporting behavior that someone else may have run into too. Maybe it's a problem in my back-end codebase, but since my code works most of the time, I don't know what to think.
Code snippet
I'm sorry, but the actual code belongs to a private company and I cannot share it. But I can show roughly what the main.py file looks like:
import asyncio
import platform

from sanic import Sanic, Request
from sanic.response import json, HTTPResponse
from sanic_ext import Extend  # needed for the Extend(app) call below
from tortoise.contrib.sanic import register_tortoise

from orm.models import User
from orm.db import TORTOISE_ORM, _DB_URL
# BASE_STATIC_PATH, IP and PORT were undefined in the original snippet;
# assuming they live in settings.constants alongside the other constants
from settings.constants import CODES, LOGGER_PATH, BASE_STATIC_PATH, IP, PORT
from exceptions.logger import add_info_to_logger
# check_auth, get_user_from_request and write_performance_information_to_database
# were also undefined; the module paths below are assumptions
from auth.helpers import check_auth, get_user_from_request
from monitoring.helpers import write_performance_information_to_database

"""<60 BLUEPRINT IMPORTS>"""

DB_URL: str = _DB_URL
# TORTOISE_ORM["apps"]["models"] is indexed with ["models"] further down,
# so it is a dict, not the list[str] the original annotation claimed
MODULES_GLOBAL: dict = TORTOISE_ORM["apps"]["models"]
PLATFORM = platform.platform()

app = Sanic("app")

# OpenAPI docs are enabled only on the Windows dev machine
if "Windows" in PLATFORM:
    app.config.OAS = True
else:
    app.config.OAS = False

app.static(BASE_STATIC_PATH, BASE_STATIC_PATH)
app.config.CORS_ORIGINS = "http://localhost:1234,https://productionwebsite.com"
Extend(app)

"""
<REGISTER 60 BLUEPRINT SECTION>
"""


@app.on_request
async def run_before_handler(request: Request):
    # Record the request start time and attach auth info to the request context
    request.ctx.start_time = asyncio.get_event_loop().time()
    is_authenticated = await check_auth(request)
    request.ctx.is_authenticated = is_authenticated
    if is_authenticated:
        request.ctx.user: User = await get_user_from_request(request)


@app.on_response
async def run_after_handler(request: Request, response: HTTPResponse):
    # Persist per-request timing information on every response
    end_time = asyncio.get_event_loop().time()
    execution_time = end_time - request.ctx.start_time
    await write_performance_information_to_database(
        request.url,
        execution_time
    )


if "Windows" in PLATFORM:
    ...
else:
    # Catch-all exception handler, registered everywhere except the dev machine
    @app.exception(Exception)
    async def catch_anything(request, exception):
        try:
            await add_info_to_logger(
                LOGGER_PATH,
                str({
                    "Text": "An error occurred",
                    "ErrorInfo": exception,
                    "ErrorUrl": request.raw_url,
                    "UserGot": request.ctx.user.serialize()
                })
            )
        except Exception:
            # request.ctx.user may be missing for unauthenticated requests,
            # so log again without the user info
            await add_info_to_logger(
                LOGGER_PATH,
                str({
                    "Text": "An error occurred",
                    "ErrorInfo": exception,
                    "ErrorUrl": request.raw_url,
                    "UserGot": ""
                })
            )
        return json({"status": CODES[4002]}, status=500)


@app.route("/")
async def hello_world(request: Request) -> HTTPResponse:
    # json() returns an HTTPResponse, so the handler is annotated accordingly
    return json({"status": "system is fine"})


register_tortoise(
    app,
    db_url=DB_URL,
    modules={"models": MODULES_GLOBAL["models"]},
    generate_schemas=True
)

if __name__ == "__main__":
    # dev mode (auto-reload, debug) only on the Windows dev machine
    dev = "Windows" in PLATFORM
    app.run(
        host=IP,
        port=PORT,
        dev=dev,
        access_log=False
    )
Expected Behavior
Like I said, if I use more workers than I have cores, the system just freezes. But sometimes it also freezes with the --fast parameter.
How do you run Sanic?
Sanic CLI
Operating System
Ubuntu 22.04
Sanic Version
Sanic 23.3.0; Routing 22.8.0
Additional context
No response
By the way, big respect and thanks to all the Sanic developers. I think it's the best async server framework out there right now. Love you!
Is this still happening? Does it happen in a local Docker container? Local machine? Production?
Luckily it hasn't happened again; everything works just fine. Probably some Linux glitch.
If you notice it again, please let me know.