MySQL memory continues to grow and eventually results in OOM
rcwwilliams opened this issue
I have found during load testing of an application on OpenShift Online v3 that MySQL memory continues to grow until eventually the container is killed due to OOM.
I am using the mysql57 image.
After doing some internet research, I found a report saying that MySQL does not release memory when Linux transparent huge pages (THP) are enabled, and that the issue does not occur when THP is disabled:
https://bugs.mysql.com/bug.php?id=84003
(final comment at bottom of page)
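For reference, my understanding is that the THP state can be checked on the Linux host via sysfs; this needs access to the node itself rather than the container, so I cannot run it on OpenShift Online myself, but roughly:

# Show the current THP setting; the value in brackets is the active one,
# e.g. "[always] madvise never"
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

# Disabling THP needs root on the node, not inside the container:
# echo never > /sys/kernel/mm/transparent_hugepage/enabled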
Please could someone look into this, as it is potentially a very serious issue for anyone running MySQL in OpenShift Online.
Thanks
@hhorak ?
I really don't follow. As I understand the bug, this needs to be changed in the kernel, and I see nothing we can do in the container itself. Am I missing something?
Hi Honza
Thanks for your comment.
I am an application developer and this stuff is outside my area of expertise, so please bear with me.
Having read up on how containers work in a multi-tenant environment, I now understand what you are saying.
However, we have to migrate an application from OpenShift Online v2 to v3, and the fact that the MySQL container crashes after a while with OOM is a show stopper for us unless we can find a way around it.
This is something that could potentially affect all OpenShift Online users of this image.
Do you have any thoughts on how this could be addressed (if not within the container itself)?
Please be aware that although I am experiencing MySQL RSS continuing to increase, I am not in a position to verify whether it goes away with THP disabled.
Thanks
IIUIC, THP is basically a setting of the deployment, so I'd say this issue should be reported to OpenShift Online. @bparees, what would be the best place to report such a request?
As for MySQL, there is a bunch of variables that may be set and might change the total memory used; however, as the last comment on the upstream bug (https://bugs.mysql.com/bug.php?id=84003) says: "It may be due to our long lived connections, thousands of prepared statements on those connections, and the inability of memory allocator to clean up huge pages in a timely manner."
So I'd also recommend taking a look at whether the number of connections and prepared statements is sane. In the worst case, there is always the possibility of increasing the resource limits if the load is too high for the current ones.
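As a rough sketch of what I mean (the credentials and the dc/mysql name are just placeholders, adjust them to your deployment):

# Inside the pod: check connection and prepared-statement counters
mysql -u root -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"
mysql -u root -e "SHOW GLOBAL STATUS LIKE 'Prepared_stmt_count';"

# If the load really needs more memory, raise the limit on the deployment config
oc set resources dc/mysql --limits=memory=1Gi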
I am using a really minimalist tuning configuration for MySQL (see below) and it's a small database (<50MB for a full mysqldump). Maybe I have missed something significant in the configuration, but if not then maybe it's the THP thing that's causing the issue. Meanwhile I am continuing to run load tests.
innodb_buffer_pool_size=5M
innodb_log_buffer_size=256K
query_cache_size=0
max_connections=15
key_buffer_size=8
thread_cache_size=0
host_cache_size=0
innodb_ft_cache_size=0
innodb_ft_total_cache_size=0
#per thread or per operation settings
thread_stack=262144
sort_buffer_size=32K
read_buffer_size=8200
read_rnd_buffer_size=8200
max_heap_table_size=16K
tmp_table_size=1K
bulk_insert_buffer_size=0
join_buffer_size=128
net_buffer_length=16384
innodb_sort_buffer_size=64K
#settings that relate to the binary log (if enabled)
binlog_cache_size=4K
binlog_stmt_cache_size=4K
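During the load tests I am watching the mysqld RSS roughly like this (assuming mysqld runs as PID 1 in the container and the usual cgroup v1 paths; <mysql-pod> is a placeholder for the actual pod name):

# RSS of mysqld (PID 1 in this image) in kB
oc exec <mysql-pod> -- grep VmRSS /proc/1/status

# Memory usage as seen by the container's cgroup
oc exec <mysql-pod> -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes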