tiangolo / uvicorn-gunicorn-machine-learning-docker

Docker image for high-performance Machine Learning web applications. With Uvicorn managed by Gunicorn in Python 3.7 and 3.6, using Conda, with CUDA and TensorFlow variants.


MAX_WORKERS parameter is not used

guillaumebu opened this issue · comments

I was trying to set MAX_WORKERS to 1, but it didn't seem to have any effect.

It seems that the latest image is a bit outdated and therefore does not include the latest changes to the gunicorn_conf.py file found in the code and documentation of uvicorn-gunicorn-docker.

Below is the /gunicorn_conf.py file found in the running Docker service:

import json
import multiprocessing
import os

workers_per_core_str = os.getenv("WORKERS_PER_CORE", "1")
web_concurrency_str = os.getenv("WEB_CONCURRENCY", None)
host = os.getenv("HOST", "0.0.0.0")
port = os.getenv("PORT", "80")
bind_env = os.getenv("BIND", None)
use_loglevel = os.getenv("LOG_LEVEL", "info")
if bind_env:
    use_bind = bind_env
else:
    use_bind = f"{host}:{port}"

cores = multiprocessing.cpu_count()
workers_per_core = float(workers_per_core_str)
default_web_concurrency = workers_per_core * cores
if web_concurrency_str:
    web_concurrency = int(web_concurrency_str)
    assert web_concurrency > 0
else:
    web_concurrency = max(int(default_web_concurrency), 2)

# Gunicorn config variables
loglevel = use_loglevel
workers = web_concurrency
bind = use_bind
keepalive = 120
errorlog = "-"

# For debugging and testing
log_data = {
    "loglevel": loglevel,
    "workers": workers,
    "bind": bind,
    # Additional, non-gunicorn variables
    "workers_per_core": workers_per_core,
    "host": host,
    "port": port,
}
print(json.dumps(log_data))
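
For comparison, the updated gunicorn_conf.py in uvicorn-gunicorn-docker caps the computed worker count with MAX_WORKERS. The snippet below is a sketch of that logic based on the upstream repo; the exact code there may differ slightly:

import multiprocessing
import os

# Sketch of the MAX_WORKERS handling added in uvicorn-gunicorn-docker;
# the rest of the file is essentially the same as the one quoted above.
workers_per_core_str = os.getenv("WORKERS_PER_CORE", "1")
max_workers_str = os.getenv("MAX_WORKERS")
use_max_workers = None
if max_workers_str:
    use_max_workers = int(max_workers_str)
web_concurrency_str = os.getenv("WEB_CONCURRENCY", None)

cores = multiprocessing.cpu_count()
workers_per_core = float(workers_per_core_str)
default_web_concurrency = workers_per_core * cores
if web_concurrency_str:
    web_concurrency = int(web_concurrency_str)
    assert web_concurrency > 0
else:
    web_concurrency = max(int(default_web_concurrency), 2)
    if use_max_workers:
        # Never exceed the MAX_WORKERS cap, e.g. MAX_WORKERS=1 pins a single worker
        web_concurrency = min(web_concurrency, use_max_workers)

# Gunicorn config variable
workers = web_concurrency

Until the image is rebuilt with that change, note that the config shown above does read WEB_CONCURRENCY, so setting WEB_CONCURRENCY=1 on the container is one way to pin a single worker with the current image.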

Hey there! I'm sorry, but I'm currently not using this Docker image, so I won't be able to fix, update, and maintain it. I'm sorry for that.

I just added a note about it here: https://github.com/tiangolo/uvicorn-gunicorn-machine-learning-docker#deprecation-warning-

Sorry for the long delay! 🙈 I wanted to personally address each issue/PR and they piled up through time, but now I'm checking each one in order.

Assuming the original issue was solved, it will be automatically closed now. But feel free to add more comments or create new issues.