ceph / ceph-container

Docker files and images to run Ceph in containers

OSDs crash under load after receiving SIGCHLD

tobiaslangner opened this issue

Bug Report

What happened:
Several times a day, OSDs in our Ceph cluster crash (interestingly, mostly on one of the involved hosts), and as far as I can tell there is no log output detailing the reason for the crash. The most telling lines in the log are

osd1_1     | teardown: managing teardown after SIGCHLD
osd1_1     | teardown: Sending SIGTERM to PID 67

after which the OSD process shuts down. I enabled detailed logging with --debug-ms 1 --debug-osd 10 and uploaded extensive logs of two crashes here. The log output before the crash is quite different between the two cases, so I'm not sure whether it is related to the crash at all.

Please let me know how I can find out more information about why SIGCHLD is sent to begin with.
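For context, the "teardown: ..." lines come from the container's entrypoint wrapper. Below is a minimal sketch of the trap-and-forward pattern such a wrapper typically uses; it is only an illustration, not the actual /opt/ceph-container/bin/docker_exec.sh, and the variable names are made up:

#!/usr/bin/env bash
# Illustrative sketch only -- NOT the real docker_exec.sh.
# The wrapper starts the daemon in the background; a SIGCHLD (any child of the
# wrapper exiting) or a SIGTERM (e.g. docker stop) triggers a teardown that
# forwards SIGTERM to the daemon and then lets the container exit.

teardown() {
  echo "teardown: managing teardown after $1"
  echo "teardown: Sending SIGTERM to PID ${child_pid}"
  kill -TERM "${child_pid}" 2>/dev/null || true
  echo "teardown: Waiting PID ${child_pid} to terminate"
  wait "${child_pid}"
  echo "teardown: Process ${child_pid} is terminated"
  exit 0
}

# Note: ANY child of the wrapper exiting raises SIGCHLD, not just the daemon.
trap 'teardown SIGCHLD' SIGCHLD
trap 'teardown SIGTERM' SIGTERM

"$@" &            # e.g. ceph-osd -f --debug-ms 1 --debug-osd 10 ...
child_pid=$!
wait "${child_pid}"

If the real script follows this pattern, the interesting question becomes which child of the wrapper exits and raises the SIGCHLD in the first place.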

What you expected to happen:
OSDs should not crash.

How to reproduce it (minimal and precise):
No idea, it just seems to happen here...

Environment:

  • OS (e.g. from /etc/os-release):
    Host: Ubuntu 20.04
    Ceph Container: CentOS Linux 8
    Docker Image: quay.io/ceph/daemon:master-8ebf635a-pacific-centos-8-x86_64

  • Kernel (e.g. uname -a):
    Linux obelix 5.4.0-99-generic #112-Ubuntu SMP Thu Feb 3 13:50:55 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

  • Docker version (e.g. docker version):
    20.10.12

  • Ceph version (e.g. ceph -v):
    ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)

  • Ceph setup:
    3 hosts, two OSDs each, erasure-coded pool with 2+1
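    For reference, a 2+1 erasure-coded profile and pool like this can be set up with commands along the following lines (profile/pool names and PG counts are placeholders, not necessarily what we used):

    # hypothetical names: ec-2-1, ecpool; failure domain per host, matching 3 hosts
    ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host
    ceph osd pool create ecpool 32 32 erasure ec-2-1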

For some reason, the OSD process that used to crash every now and then is now in a boot loop. It starts up and reproducibly terminates with the following lines:

osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4568] Recovered from manifest file:db/MANIFEST-000033 succeeded,manifest_file_number is 33, next_file_number is 40, last_sequence is 1343759, log_number is 37,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 0
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4577] Column family [default] (ID 0), log number is 31
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4577] Column family [m-0] (ID 1), log number is 31
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4577] Column family [m-1] (ID 2), log number is 31
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4577] Column family [m-2] (ID 3), log number is 31
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4577] Column family [p-0] (ID 4), log number is 31
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4577] Column family [p-1] (ID 5), log number is 31
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4577] Column family [p-2] (ID 6), log number is 31
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4577] Column family [O-0] (ID 7), log number is 31
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4577] Column family [O-1] (ID 8), log number is 31
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4577] Column family [O-2] (ID 9), log number is 31
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4577] Column family [L] (ID 10), log number is 31
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [version_set.cc:4577] Column family [P] (ID 11), log number is 37
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1647255244100804, "job": 1, "event": "recovery_started", "log_files": [34, 37]}
osd1_1     | 2022-03-14T11:54:04.099+0100 7f6e5fe10080  4 rocksdb: [db_impl/db_impl_open.cc:760] Recovering log #34 mode 2

>>> about two minutes elapse here <<<

osd1_1     | teardown: managing teardown after SIGTERM
osd1_1     | teardown: Sending SIGTERM to PID 58
osd1_1     | /opt/ceph-container/bin/docker_exec.sh: line 32:    58 Terminated              "$@"
osd1_1     | teardown: Waiting PID 58 to terminate .....
osd1_1     | teardown: Process 58 is terminated

It looks like this was caused by a rogue sniper script running on my host. No need to investigate further.
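For anyone who lands here with a similar symptom: one way to find out which host process delivers such a signal is to audit the kill syscall, assuming auditd is installed on the host (the rule key below is arbitrary):

# Log every kill() syscall that delivers SIGTERM (signal 15):
auditctl -a always,exit -F arch=b64 -S kill -F a1=15 -k osd-sigterm
# After the next teardown, see which executable/PID sent the signal:
ausearch -k osd-sigterm -i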