When max_buffering is set too large, the memory will not be released.
lzle opened this issue
A reproduction is below. After the request runs, the nginx worker process holds about 10 GB of memory and does not release it until the process exits. In theory the memory should drop once the request finishes, because the buffer table is empty by then. Can you explain this? Thanks!
location / {
    content_by_lua '
        local kafka_buffer = require("resty.kafka.ringbuffer")
        -- batch_num = 200, max_buffering = 10240
        local buffer = kafka_buffer:new(200, 10240)

        -- a 1 MB payload
        local message = string.rep("a", 1024 * 1024)

        for i = 1, 10240 do
            local ok, err = buffer:add("topic", "key", message .. i)
            if not ok then
                ngx.say("add err: ", err)
            end
        end

        -- drain the buffer completely
        for i = 1, 10240 do
            buffer:pop()
        end

        ngx.say("ok")
    ';
}
@lzle have you tried forcing a GC? After the pops, call collectgarbage("collect").
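Roughly like this, as a minimal sketch (the printed numbers are just for observation, not measured results):

    -- drain the buffer, then force a full collection cycle
    for i = 1, 10240 do
        buffer:pop()
    end
    ngx.say("before collect: ", collectgarbage("count"), " KB")
    collectgarbage("collect")  -- full GC cycle; reclaims any now-unreferenced message strings
    ngx.say("after collect: ", collectgarbage("count"), " KB")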
Yes, calling collectgarbage("collect") does release the memory, but I don't know whether forcing a full GC will cause other problems, such as a stop-the-world pause that stalls request processing. Is there a better way? And why didn't the automatic garbage collector reclaim the memory? Thanks!
@lzle this is normal GC behavior. By default, Lua waits until memory in use grows to twice the amount in use after the previous collection before starting a new GC cycle (the GC pause defaults to 200). It is worth reading up on how the Lua GC works.
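If the default pause is too lazy for this workload, one option (a sketch, not a tested configuration; the pause value 120 is an arbitrary example) is to tune the collector or drive it incrementally:

    -- make the GC start sooner: a pause of 120 means a new cycle begins once
    -- memory grows to 1.2x the amount in use after the previous collection
    -- (the default pause of 200 waits for memory to double)
    collectgarbage("setpause", 120)

    -- or amortize the work instead of one big stop-the-world collect:
    -- run a small incremental step periodically, e.g. after each batch of pops
    collectgarbage("step", 100)  -- a larger argument does more GC work per call

Either approach trades the single long pause of a full collectgarbage("collect") for more frequent, smaller chunks of GC work.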