Only one CPU core is being used on a 4-core machine
laxmimerit opened this issue · comments
Hi,
I am hosting a web application on a 4-core CPU. When I make concurrent requests, only one core is used and the other three sit at 3 to 4% utilization. How can I get higher parallelism and concurrency?
My docker-compose file is as follows:
```yaml
version: '3'
services:
  web:
    build:
      context: .
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    environment:
      - WORKERS=16
    command: bash -c "uvicorn main:app --reload --host 0.0.0.0 --port 80"
```
I have tried setting `WORKERS`, but it does not seem to take effect: only one CPU core is used regardless of the `WORKERS` value.
How can I set this correctly?
I think the problem is that you're overriding `command:`. Take a look at the `start.sh` script from the base image.
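To make that concrete, here is a sketch (not tested against your app) of the compose file with the `command:` override removed, so the base image's `start.sh` can launch Gunicorn with Uvicorn workers across all cores. Note that the tiangolo/uvicorn-gunicorn images document `WEB_CONCURRENCY` and `WORKERS_PER_CORE` rather than `WORKERS` — check the docs for your image tag — and that `--reload` runs a single process anyway, so it should be dropped in production:

```yaml
version: '3'
services:
  web:
    build:
      context: .
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    environment:
      # Assumption: fix the worker count at 4 for a 4-core machine.
      # WEB_CONCURRENCY is the env var this base image documents, not WORKERS.
      - WEB_CONCURRENCY=4
    # No `command:` override — the image's default entrypoint runs start.sh,
    # which starts Gunicorn with multiple Uvicorn worker processes.
```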
Thanks for the help here @ThibaultLemaire ! 👏 🙇
Thanks for closing the issue @laxmimerit 👍
Hi @ThibaultLemaire,
My Dockerfile is as follows, but it seems it's not utilising even 10% of the CPU, and latency with more than 20 concurrent users climbs above 2 to 3 seconds.
Could you please take a look at this Docker image?
```dockerfile
# Python base image
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.10

# Install system dependencies: zbar, ffmpeg, and OpenCV runtime libs
RUN apt update && apt upgrade -y && \
    apt install -y zbar-tools ffmpeg libsm6 libxext6 libgl1-mesa-glx

# Install Python dependencies
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt

# Get the working app
COPY ./server /app

# Create dirs under /tmp
RUN mkdir -p /tmp/data /tmp/media

# Execute the server
CMD ["uvicorn", "main:app", "--proxy-headers", "--host", "0.0.0.0", "--port", "80"]
```
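For reference, the fix for a Dockerfile like this is usually either to delete the `CMD` entirely (the base image's default starts Gunicorn with Uvicorn workers via `start.sh`), or to make the multi-process setup explicit. A sketch of the explicit variant, where the worker count of 4 is an assumption for a 4-core machine:

```dockerfile
# Sketch: replace the single-process uvicorn CMD with Gunicorn managing
# multiple Uvicorn worker processes (one per core is a common starting point).
CMD ["gunicorn", "main:app", \
     "--worker-class", "uvicorn.workers.UvicornWorker", \
     "--workers", "4", \
     "--bind", "0.0.0.0:80"]
```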
At first glance, it looks like you too are overriding the `CMD`. I'm sorry, but I will not help you any further.
I consider it good practice on GitHub (and other collaborative platforms) not to tag people on issues three years after they have been closed.
If you think your problem is related, maintainers will usually ask that you open a new issue referencing this one. Thank you.