Optimize server memory usage
laike9m opened this issue
Maybe it's time to switch to a server with smaller memory usage.
According to the dev team, zeabur measures the memory usage of the whole container. I doubt a simple Flask server can use that much memory on its own; I will add some measurements.
In the meantime, it seems this is how they build an image for Python apps (like this one). Is that correct, @pan93412 @ScmTble?
https://github.com/zeabur/zbpack/blob/main/internal/python/python.go
If so, we may be able to find ways to optimize it.
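As a starting point for those measurements, here is a minimal sketch (my addition, not from the thread) that reports the Flask process's own peak RSS using only the standard library. Note this measures just the Python process, whereas zeabur reportedly bills for the whole container, so the two numbers can legitimately differ.

```python
import resource
import sys

def peak_rss_mib() -> float:
    """Return this process's peak resident set size in MiB (Unix only)."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in KiB on Linux but in bytes on macOS.
    if sys.platform == "darwin":
        return rss / (1024 * 1024)
    return rss / 1024

print(f"peak RSS: {peak_rss_mib():.1f} MiB")
```

Logging this value before and after each request would show whether the Flask process itself grows, or whether the container overhead dominates.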
Correct. For the variable part (`planMeta["..."]`), you can get it by calling `zbpack --plan`.
The memory leak possibly happens in mypy:
```
==579== 305,344 bytes in 286 blocks are still reachable in loss record 1,942 of 1,942
==579== at 0x48407B4: malloc (vg_replace_malloc.c:381)
==579== by 0x49A308C: UnknownInlinedFun (obmalloc.c:99)
==579== by 0x49A308C: UnknownInlinedFun (obmalloc.c:572)
==579== by 0x49A308C: UnknownInlinedFun (obmalloc.c:1966)
==579== by 0x49A308C: _PyObject_Malloc (obmalloc.c:1959)
==579== by 0x49A5778: UnknownInlinedFun (obmalloc.c:685)
==579== by 0x49A5778: new_keys_object (dictobject.c:600)
==579== by 0x49A5E0F: dictresize (dictobject.c:1242)
==579== by 0x49AB5CA: UnknownInlinedFun (dictobject.c:1060)
==579== by 0x49AB5CA: insertdict (dictobject.c:1103)
==579== by 0x63FC890: CPyDef_nodes___SymbolTable___deserialize (in /usr/local/lib/python3.10/site-packages/ced4bbd844d3a34b6fc2__mypyc.cpython-310-x86_64-linux-gnu.so)
==579== by 0x642B6DF: CPyDef_nodes___TypeInfo___deserialize (in /usr/local/lib/python3.10/site-packages/ced4bbd844d3a34b6fc2__mypyc.cpython-310-x86_64-linux-gnu.so)
==579== by 0x642F610: CPyPy_nodes___TypeInfo___deserialize (in /usr/local/lib/python3.10/site-packages/ced4bbd844d3a34b6fc2__mypyc.cpython-310-x86_64-linux-gnu.so)
==579== by 0x49BCBCE: cfunction_vectorcall_FASTCALL_KEYWORDS (methodobject.c:446)
==579== by 0x63FB37A: CPyDef_nodes___SymbolNode___deserialize (in /usr/local/lib/python3.10/site-packages/ced4bbd844d3a34b6fc2__mypyc.cpython-310-x86_64-linux-gnu.so)
==579== by 0x643498F: CPyDef_nodes___SymbolTableNode___deserialize (in /usr/local/lib/python3.10/site-packages/ced4bbd844d3a34b6fc2__mypyc.cpython-310-x86_64-linux-gnu.so)
==579== by 0x63FC859: CPyDef_nodes___SymbolTable___deserialize (in /usr/local/lib/python3.10/site-packages/ced4bbd844d3a34b6fc2__mypyc.cpython-310-x86_64-linux-gnu.so)
```
Testing command (`--leak-check=yes` is redundant once `--leak-check=full` is given, so it is dropped here):

```
valgrind --leak-check=full --show-leak-kinds=all python m1.py
```
m1.py:

```python
from mypy import api
from memory_profiler import profile

code = """
import typing

def foo(x: typing.Any):
    pass

def should_pass():
    foo(1)
    foo("10")

def should_fail():
    foo(1, 2)
"""

@profile
def run():
    for i in range(20):
        result = api.run(["-c", code])
        print(result)

run()
```
I think the best and fastest solution is to just run mypy in a subprocess, starting a new one every time.
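A minimal sketch of that idea (my addition, not the thread's implementation): run each check in a throwaway child process, so whatever memory mypy leaks is reclaimed by the OS when the child exits and the server process stays small. The `-m mypy` invocation assumes mypy is installed in the environment.

```python
import subprocess
import sys

def run_in_subprocess(args: list[str]) -> tuple[int, str]:
    """Run a command in a fresh child process and return (exit code, stdout).

    Any memory the child allocates, leaked or not, is released when it
    exits, which sidesteps the leak observed under valgrind above.
    """
    proc = subprocess.run(args, capture_output=True, text=True)
    return proc.returncode, proc.stdout

# The server would call this per request, roughly:
#   rc, out = run_in_subprocess([sys.executable, "-m", "mypy", "-c", code])
# Demonstrated here with a trivial child so the sketch runs anywhere:
rc, out = run_in_subprocess([sys.executable, "-c", "print('hello from child')"])
print(rc, out.strip())
```

The trade-off is startup cost: each request pays for a fresh interpreter and mypy import, so latency goes up while resident memory goes down.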