tiangolo / meinheld-gunicorn-flask-docker

Docker image with Meinheld and Gunicorn for Flask applications in Python.

Error: upstream prematurely closed connection

huwentao1 opened this issue

Recently, our test environment has kept reporting the following error, always on the same endpoint. Even after redeploying, it still reports this error. Error info:

[error] 11#11: *1 upstream prematurely closed connection while reading response header from upstream, client: 192.168.160.1, server: , request: "GET /index HTTP/1.1", upstream: "http://127.0.0.1:8080/index", host: "127.0.0.1:3000"

So I wrote a test script locally to reproduce the problem, but I don't know why it happens. This is our base image; we manage the processes in our image through supervisord:

FROM tiangolo/meinheld-gunicorn-flask:python3.7

I wrote a Flask endpoint and set the Gunicorn timeout to 10 seconds (a sketch of the timeout config follows the app code):

import configparser
import datetime
import logging
import time
from decimal import Decimal

from flask.json import JSONEncoder
from flask_api import FlaskAPI
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import Column, Integer

cf = configparser.ConfigParser()
cf.read("./app.ini")
app = FlaskAPI(__name__)


class CustomJSONEncoder(JSONEncoder):
    # Serialize Decimals as floats, datetimes as Unix timestamps,
    # and any other iterable as a list.
    def default(self, obj):
        try:
            if isinstance(obj, Decimal):
                return float(obj)

            if isinstance(obj, datetime.datetime):
                return time.mktime(obj.timetuple())

            iterable = iter(obj)
        except TypeError:
            pass
        else:
            return list(iterable)
        return JSONEncoder.default(self, obj)


app.json_encoder = CustomJSONEncoder
app.config["SQLALCHEMY_COMMIT_ON_TEARDOWN"] = True
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
app.config["SQLALCHEMY_DATABASE_URI"] = cf.get("sqlalchemy", "pool")
app.config["SQLALCHEMY_POOL_SIZE"] = 100
app.config["SQLALCHEMY_POOL_RECYCLE"] = 280

app.config["DEFAULT_RENDERERS"] = ["flask_api.renderers.JSONRenderer"]
app.config["TESTING"] = cf.get("env", "is_testing")
app.logger.setLevel(logging.INFO)

db = SQLAlchemy(app)


@app.teardown_appcontext
def shutdown_session(exception=None):
    # Return the connection to the pool when the app context ends.
    db.session.remove()


class User(db.Model):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)


@app.route("/index")
def index():
    User.query.all()
    # Sleep well past the 10-second Gunicorn timeout to force a worker abort.
    time.sleep(100)
    return {"index": "ok"}

A timeout was reported during the first request, and then an error was reported. That part is expected, since the handler itself exceeds the timeout. But why was the connection closed during the second request?

[2020-04-24 03:36:44 +0000] [10] [CRITICAL] WORKER TIMEOUT (pid:16)
[2020-04-24 03:36:44 +0000] [16] [ERROR] Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2309, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/app/app/__init__.py", line 61, in index
    time.sleep(100)
  File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 201, in handle_abort
    sys.exit(1)
SystemExit: 1

2020/04/24 03:36:56 [error] 12#12: *3 upstream prematurely closed connection while reading response header from upstream, client: 192.168.176.1, server: , request: "GET /index HTTP/1.1", upstream: "http://127.0.0.1:8080/index", host: "127.0.0.1:3000"
[2020-04-24 03:36:56 +0000] [20] [INFO] Booting worker with pid: 20
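
From the traceback, my understanding of the kill path is: the arbiter decides the worker has been silent for more than the timeout, sends it SIGABRT, and the worker's handle_abort calls sys.exit(1) in the middle of the request, so the socket to Nginx is closed before any response is written. Here is a standalone sketch of that signal pattern (my own simplified reconstruction, not Gunicorn's actual code; Unix only):

import os
import signal
import sys
import threading
import time

TIMEOUT = 10  # same value as the Gunicorn timeout above


def handle_abort(signum, frame):
    # Mirrors gunicorn.workers.base.Worker.handle_abort from the traceback:
    # exit immediately, without finishing the response.
    sys.exit(1)


signal.signal(signal.SIGABRT, handle_abort)

# Stand-in for the Gunicorn arbiter, which aborts a worker that has not
# reported a heartbeat within the timeout.
threading.Timer(TIMEOUT, os.kill, (os.getpid(), signal.SIGABRT)).start()

try:
    time.sleep(100)  # the view's time.sleep(100); never completes
except SystemExit:
    print("worker died mid-request; the proxy sees a closed connection")
    raise

Each request that sleeps past the timeout repeats this cycle (note the second error and a new worker booting twelve seconds later), which would explain why the error recurs on every attempt.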

What exactly does "upstream prematurely closed connection" mean?
I still have this problem after removing the db.session.remove() call, as suggested in a previous issue. If anyone knows why, I'd appreciate the help.

@huwentao1 did you ever find the issue? I have the same problem.

Hey there! Is the error coming from the Docker image or from something else like an Nginx on top? I suspect that could be it.

Sorry for the long delay! 🙈 I wanted to personally address each issue/PR and they piled up through time, but now I'm checking each one in order.

Assuming the original issue was solved, it will be automatically closed now. But feel free to add more comments or create new issues.