moby / moby

The Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

Home Page: https://mobyproject.org/


dockerd produces 180GB syslog file in one night

intelliIT opened this issue

Description

I have a 5-member Docker Swarm running.
Last night, one of the nodes produced a 180 GB syslog file, which ultimately filled the disk to 100%.
dockerd was repeatedly logging either peerDbDelete or peerDbAdd:

Jul  7 00:05:14 dockerm5 dockerd[1037]: time="2024-07-07T00:05:14.076396990Z" level=warning msg="peerDbAdd transient condition - Key:10.0.8.191 02:42:0a:00:08:bf cardinality:101 db state:Set{{6462a062dbb924b0e2ae05f445980bfbbf368e02f37662c855ce967b89a19988 xxx.xxx.xxx.xxx 24 32
false}, {79c186f13fb74147ce08c86ead0cd7e761613f8d80b0b701631dac887abf020d xxx.xxx.xxx.xxx 24 32 false}, {ca434cc31bfaa9be0b25e7994d295dcfbd8026aa1149d79e0ecd0ee782902141 xxx.xxx.xxx.xxx 24 32 false}, {2a46c198491ccf7a73e9ccc2cdca196d78a56578b55865cd627e0f4212561a9a xxx.xxx.xxx.xxx 24 32 fa
lse}, {573c3b18b745d5fb01410f8e2fa107caee009bc00f81025cb065ea838671fb1f xxx.xxx.xxx.xxx 24 32 false}, {aaa989bba855d05af93965bd07656e2adddb289d989ca5925745ac2e5f1e3111 xxx.xxx.xxx.xxx 24 32 false}, {1ae10c51d3b7185dd294449caad5627cf0e5eecc399ef2a066203a76ee40feaa xxx.xxx.xxx.xxx 24 32 fals
e}, {20442b09b2dcad18d3023074fe355ba7ca02027692b05bdf68614ef1f7b125e8 xxx.xxx.xxx.xxx 24 32 false}, {91e3f45399e0a393c677a6faf047e60f918ed5cf269bb35936df2f907a52be0b xxx.xxx.xxx.xxx 24 32 false}, {2fa7d3c5981a41f0f42f5e69b50909259a5215914960a3cdfc5193a33c8dfba6 xxx.xxx.xxx.xxx 24 32 false}
, {1ee594c01d9ae4467e49d0055b4ab8c22d936c5a7caf74a4a5c622909036ed21 xxx.xxx.xxx.xxx 24 32 false}, {f28e1575b62b07a482f9535ead68b4d8e2f5b4a285a2d4aad82389d2a4ac72e0 xxx.xxx.xxx.xxx 24 32 false}, {60a6c5ce6c7cc999e66a9806b49b2b904e42675e412cfd55ddc11b3254ab045d xxx.xxx.xxx.xxx 24 32 false},
{f8d1dc61be5a9e492951be85e71020c5c1b16b2cb11690021a87daffb7c83be0 xxx.xxx.xxx.xxx 24 32 false}, {fc37d32ebd7b15ca34e12c7efa0ccc8448ef5a3d0af17c531c9c40fd43b5af58 xxx.xxx.xxx.xxx 24 32 false}, {d9075ce700d17c21cffd11f72b7ec2851f334a64adfc0c24a5fbd99172243453 xxx.xxx.xxx.xxx 24 32 false}, {b
e17eaf30266a8e17d54b8bd33d658c01fecef8e9b93de558eaac090ea26cac6 xxx.xxx.xxx.xxx 24 32 false}, {39551fe84ce393d58ee3ee00f0c611bed755623396f493bd9a37c3079d7f642a xxx.xxx.xxx.xxx 24 32 false}, {a7b06bc1eb9697f9501f8c3ce4aa4bdaf9d2621e527f7dad0215b5c1e6dfd641 xxx.xxx.xxx.xxx 24 32 false}, {a4c
9d4e82cad66b27366996237c51f8b58f13aa3881f27088949522aa2657be9 xxx.xxx.xxx.xxx 24 32 false}, {6c4857a3b6bf5489825ff5040bc033390052f15c89b01dc67af3aa0c633a39dd xxx.xxx.xxx.xxx 24 32 false}, {7bb554d238eec5c758286a63b88c3d330210fa0489237087c062fcc064cca917 xxx.xxx.xxx.xxx 24 32 false}, {84747
33a3778fb27ef411bdefa7eea234b56983d9bf37c3084aa1edd066f955c xxx.xxx.xxx.xxx 24 32 false}, {2775226c539251c51cf27e4e22a874ba32b118f485af2e63404743a0871bea74 xxx.xxx.xxx.xxx 24 32 false}, {b4becf0e70c4cc57bf28b5e17e44a0f864dfea525093a6e3dc03935c87b8b743 xxx.xxx.xxx.xxx 24 32 false}, {8c15cff
cd8eaab03e9f3e12eae53c7539b252fde76b865977710e1adceb8cef8 xxx.xxx.xxx.xxx 24 32 false}, {ffec13ff42109ee43cd5f93a52225fd33a93d921fbceb79a0f1ebdfa9836832b xxx.xxx.xxx.xxx 24 32 false}, {651811fc4929b8baa768dad26b47041dc03961e12ca74b81609fbc36f4063c13 xxx.xxx.xxx.xxx 24 32 false}, {b1a8cd6a9
c63093c22340ef84eaca2bff200992c233cc8642ffa3451bdcaf617 xxx.xxx.xxx.xxx 24 32 false}, {ef372684d488566bb7479be0f220308dfd0ca8452adbb6b578cbefbfc58017ff xxx.xxx.xxx.xxx 24 32 false}, {9349b8127df1716c8d2b4b123771f931e8eb0710b75a350dc391998eed1734fc xxx.xxx.xxx.xxx 24 32 false}, {442427a18d2
46b8cd4fd03c6ca39fff94197de194b71287eff15272bd31b202b xxx.xxx.xxx.xxx 24 32 false}, {431060dc6fbccdd97ed409c4df8b52ea6443932fa295fd5459040b9b4e398071 xxx.xxx.xxx.xxx 24 32 false}, {4d4fee33c56861fdfc9843200db48c06f8319af1e7496b29d64503991015f65a xxx.xxx.xxx.xxx 24 32 false}, {81a0ce8d097a5
d1e5ceff9960705229851b0fa9d71de663de33c85422032d27f xxx.xxx.xxx.xxx 24 32 false}, {50059d7139259412b140f7d9f5a75e69741c56edd3cc3322daee32d7bc66c4ec xxx.xxx.xxx.xxx 24 32 false}, {c0cbb54e3b2ac8772dff87a8301cb70cd720ce751f562930de11214ef10765dd xxx.xxx.xxx.xxx 24 32 false}, {3b2fb6c61ad2e44
4b155a04c9e3e555f3c1f995af46cf7fa83042b16e36cc5ca xxx.xxx.xxx.xxx 24 32 false}, {653089d2c4626d015c973fbefdebf36a20ed6b8c4150927fd5321e7ed04bba6f xxx.xxx.xxx.xxx 24 32 false}, {d7cb29ef8fec9f04970f6b5b1a0c3d2945a445869b29f341387d2c4f231f1d05 xxx.xxx.xxx.xxx 24 32 false}, {1a4f29bfaf62af6ea
47ba6f3ee58070f80bab2732c1b323ea2fcb7da939bb005 xxx.xxx.xxx.xxx 24 32 false}, {331fdef6cc7ff121a3ddc8e85beddc3895c6d7d3c6816208568bb41cffd0c72d xxx.xxx.xxx.xxx 24 32 false}, {040dff476623be20bbebebb4c22591d29a685f07813cfbc18c35ec31a659df9c xxx.xxx.xxx.xxx 24 32 false}, {1ef1f28678b4fbc4aa4
0c62f4bbbf3f65967b6f84d4087e12abf17fffe66b289 xxx.xxx.xxx.xxx 24 32 false}, {81305c00aa5d69f79d28419aae40d60f8fe27f988b286a451b903e5f3b65e3a9 xxx.xxx.xxx.xxx 24 32 false}, {4f2cbc5d27d4fd25382be1139839d6c63c1f24a52a32ac54960f94519933e286 xxx.xxx.xxx.xxx 24 32 false}, {605de15df7ab67dbe5ada
ed133ece9246b9208b0c0c05cf2aa5f10302cf58275 xxx.xxx.xxx.xxx 24 32 false}, {a69a8f67a800b7830694387fd58b500a72a48e66f6e0b4b5514b9587ac2fe349 xxx.xxx.xxx.xxx 24 32 false}, {65e52196bd378677a9ed78d4f358c9587578e179666bd022257d47c2c0290052 xxx.xxx.xxx.xxx 24 32 false}, {59d78001912800cf755aa71
a6096bf75b9143352b2d08f1d8b5af0b90baeb965 xxx.xxx.xxx.xxx 24 32 false}, {04a0811edf6c242a19df87fb1ee7b1d6e3a101b4db984e330a9d1d35da9b7d8c xxx.xxx.xxx.xxx 24 32 false}, {b0169425fd1ee3804a5e12e0b6f84b54c4b611234d6a2cc3cdd085c2385eb98c xxx.xxx.xxx.xxx 24 32 false}, {608eee11a19759e3e00f508d0
4ad93b332ec0a0e6932d835b7de54c004bed892 xxx.xxx.xxx.xxx 24 32 false}, {56768e1fdd2b9590a5510b6fe8c38832e59bcc8525c19a10241513afa495c057 xxx.xxx.xxx.xxx 24 32 false}, {4a6c595da70af1ad35e7d25e9ab22ced2e93ca37d7b4e5f8b81f6858d08ed751 xxx.xxx.xxx.xxx 24 32 false}, {d2ae2de081c19fcf9527500fd78
6741470e88ad9562b4cba26cce7b9b5a4572c xxx.xxx.xxx.xxx 24 32 false}, {f009db36c090f83b1df47b0d4873a250afd298e2c46c676f7780167bc32ee1e2 xxx.xxx.xxx.xxx 24 32 false}, {95b868fe7e605aafca35f6c41b3d8c020d74136e254a6c9510314ba305a7e314 xxx.xxx.xxx.xxx 24 32 false}, {7c7591331926de2ede92ef37a4ba5
d78f28bf1572c8c38317ab44097de6d7d99 xxx.xxx.xxx.xxx 24 32 false}, {40ab485635e30010747a4ed271b36948b7c2d22823eeff4a55f954de095af502 xxx.xxx.xxx.xxx 24 32 false}, {237e61677147abc875f6df745fb40d94aef5a43b4f85d7be4fa076a0fa0d2abf xxx.xxx.xxx.xxx 24 32 false}, {870d3564ddb1b4b9813a58f605cf4a4
44ca54f54ac66f33f130976722968628e xxx.xxx.xxx.xxx 24 32 false}, {f1c290ceee9daf885d65168350653e460dafda1c40ad44b25de7a45ffd54eb6a xxx.xxx.xxx.xxx 24 32 false}, {13bf748bf3b6235c707b035fae0756821e754f3211dfeb69834d77af7cb70d31 xxx.xxx.xxx.xxx 24 32 false}, {ecea06262a2a6412d6280c8b92a28d8f4
222ddbdb916d8fceb37676bcba8c0e9 xxx.xxx.xxx.xxx 24 32 false}, {d82a08156dae6973831e3b994f38bf24fe6f89b98b055b71c445a532725c415e xxx.xxx.xxx.xxx 24 32 false}, {9e870eb32a3727c965da425c90b56a35923910e0c4dd2912d716cd433ff0f0a5 xxx.xxx.xxx.xxx 24 32 false}, {f0c713006ae6e8c192c279c3fd1c3b4d96f
143eb1b01c6ab561ac93565169b62 xxx.xxx.xxx.xxx 24 32 false}, {2ee3c4a70fe138a2828e8d1bb1f42a6de8744683d5b3521ea460c54baa7edea5 xxx.xxx.xxx.xxx 24 32 false}, {cedb015949e13c643ceb2992780086ea2938e07fa3cecd5e2c1182366ef3500e xxx.xxx.xxx.xxx 24 32 false}, {823bc10b6011a9784baf52f86e01f4125241e
40cf59e467551852388c6b2bc2e xxx.xxx.xxx.xxx 24 32 false}, {dbe18743293b07ab945909616fe4c2948757a3bee11a1f201589981786799a3f xxx.xxx.xxx.xxx 24 32 false}, {738bcdb11d1f2ca7196003d175121788055a86eb2fde00802092cbbb6d69c018 xxx.xxx.xxx.xxx 24 32 false}, {4ace8ac0f1512145a30052fd27043da4ce95017
8b33ddb674fb3c6631fae1afe xxx.xxx.xxx.xxx 24 32 false}, {2f2ff08a8c47e944696113c5c09fcc5234751d2c823c2a17ded025d214883d34 xxx.xxx.xxx.xxx 24 32 false}, {0f2e9fbb571669048f33cbe27d4c9264a9e03f37c5025f9bd3533657070e149e xxx.xxx.xxx.xxx 24 32 false}, {008a76a63061523c9f248e074333a99e82cd5282a
c9ee4c08ac3c9d1596873f4 xxx.xxx.xxx.xxx 24 32 false}, {787abdf2b1de2216844b1cf1b5d60f5a116ef03dedaeda3c4d71ff253d0bf09c xxx.xxx.xxx.xxx 24 32 false}, {3f895b3057350365d4b4a159ea91ca9750e6e48a6771b9a2c092ec2738babd74 xxx.xxx.xxx.xxx 24 32 false}, {b15db2a211a8a2690c38e98b29c00c2846b65e8e93f
382f45a2f2d7093b1b50b xxx.xxx.xxx.xxx 24 32 false}, {198a4ad9e541472c80b491aa9ffe903050fb849e8b3279f1c0d770a5f77f04f9 xxx.xxx.xxx.xxx 24 32 false}, {673de7800c9ea7f23f1c8234b4826bdadc463f8b0361d89e523caaf3a5a49d08 xxx.xxx.xxx.xxx 24 32 false}, {912614ebf6add5b0f26edaf7c71c6c639db1842cf2d20
7ca5cbf248f664b2e49 xxx.xxx.xxx.xxx 24 32 false}, {7cf9bf49965706398fc62c2f6bbec4c061795df5d33da36a876ea9446155d6c9 xxx.xxx.xxx.xxx 24 32 false}, {ce55b8836f724a4acde59efbceb35dd8c2fbdd297c4acb075b0876c4efe5c877 xxx.xxx.xxx.xxx 24 32 false}, {7f1039e68c7cd8494cde918f85dac5a1d7f9a484c4231d4
354e00a8ae4be3e54 xxx.xxx.xxx.xxx 24 32 fal

Jul  7 00:05:14 dockerm5 dockerd[1037]: time="2024-07-07T00:04:41.277174008Z" level=warning msg="Neighbor entry already present for IP 10.0.8.191, mac 02:42:0a:00:08:2e neighbor:&{dstIP:[0 0 0 0 0 0 0 0 0 0 255 255 10 0 8 191] dstMac:[2 66 10 0 8 46] linkName:vx-001008-26rtl link
Dst:vxlan0 family:0} forceUpdate:false"


Jul  7 00:05:14 dockerm5 dockerd[1037]: time="2024-07-07T00:05:14.076537095Z" level=warning msg="rmServiceBinding 00565edd351a061b1c234b91e8837d241b5d7527bd39f96526e822a26c8d04d0 possible transient state ok:true entries:9 set:true Set{ffa6371ee2dce155effbe6f05393adbe69fc6e0fd84
7547425c391d55fe6a230, 4ace8ac0f1512145a30052fd27043da4ce950178b33ddb674fb3c6631fae1afe, 672d50d259e76c45828c6c0c5b628a57bc72cf25a5ad7310e578f33a280afff7, c96c2327fc06285e5c6096768121667e36acf60560c963ed0fb8be0c0b91f67c, e9cfca9a92d7f6e57e156219c6bb497327c782f13df5b321b71ee6c83
6babe82, ce05a1d200cb9875"

where xxx.xxx.xxx.xxx is a random mixture of the other swarm nodes' IP addresses.

I have not noticed any service issues or downtime.
This is also the only evidence I have found so far, so if there are other places to look for potential logs, please tell me.
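
For reference, a quick way to quantify which of these warnings dominate (a sketch, assuming systemd and the default docker.service unit):

# Count the offending warnings per message type over the last day:
journalctl -u docker.service --since "24 hours ago" \
  | grep -oE 'peerDbAdd|peerDbDelete|rmServiceBinding|addServiceBinding' \
  | sort | uniq -c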

this "ramp-like" structure gets build up over time, when more lines get added to the log-output.
(screenshots showing the syslog size ramping up over time)
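
As a stopgap against the disk filling up again, journald's per-unit rate limiting can drop the excess before it is written out. This is only a sketch and assumes the usual Ubuntu setup where dockerd's output flows through systemd-journald into rsyslog's /var/log/syslog; the interval and burst values are arbitrary examples, not recommendations:

# Messages beyond RateLimitBurst per RateLimitIntervalSec are dropped per unit:
sudo mkdir -p /etc/systemd/journald.conf.d
printf '[Journal]\nRateLimitIntervalSec=30s\nRateLimitBurst=1000\n' \
  | sudo tee /etc/systemd/journald.conf.d/10-ratelimit.conf
sudo systemctl restart systemd-journald

This only suppresses the log lines; it does not address whatever is causing the peerDb churn.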

Reproduce

Expected behavior

No response

docker version

Client: Docker Engine - Community
 Version:           26.1.2
 API version:       1.45
 Go version:        go1.21.10
 Git commit:        211e74b
 Built:             Wed May  8 13:59:59 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          26.1.2
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.10
  Git commit:       ef1912d
  Built:            Wed May  8 13:59:59 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.31
  GitCommit:        e377cd56a71523140ca6ae87e30244719194a521
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

docker info

Client: Docker Engine - Community
 Version:    26.1.2
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.14.0
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.27.0
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 38
  Running: 2
  Paused: 0
  Stopped: 36
 Images: 18
 Server Version: 26.1.2
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: active
  NodeID: m5brphqxj7c259yz8k7wg3478
  Is Manager: true
  ClusterID: 844lcq64at7r32kr638tos5wn
  Managers: 5
  Nodes: 5
  Default Address Pool: 10.0.0.0/8
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 1
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: xx.x.xx
  Manager Addresses:
   xxx....:2377
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e377cd56a71523140ca6ae87e30244719194a521
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.15.0-113-generic
 Operating System: Ubuntu 22.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.34GiB
 Name: dockerm5
 ID: 44b82ca6-f607-48ff-9edc-7c625da191be
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Additional Info

No response

Are the peerDbAdd / peerDbDelete logs only being spammed by the one node, or all of them? Do any Swarm services have statically-assigned IP addresses? One possible explanation for the logs is if multiple containers are connected to the same overlay network with the same IP address assigned.
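
One way to check for such duplicates (a sketch: feaplat stands in for the affected network name, and since docker network inspect only lists the containers local to the node it runs on, the command has to be run on every node and the outputs combined; all-nodes.txt is a hypothetical file holding that combined output):

# On each node, emit "<IP> <container>" pairs for the overlay network:
docker network inspect feaplat \
  --format '{{range .Containers}}{{printf "%s %s\n" .IPv4Address .Name}}{{end}}'

# After collecting the output from all nodes into one file,
# print any IP address that appears more than once:
sort all-nodes.txt | awk '{print $1}' | uniq -d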

@corhere It is only one of the 5 members. No statically-assigned IPs; I have not manually configured any address-specific settings for any of my overlay networks.
I identified the overlay network, and therefore the stack, behind the logged IP addresses, but I cannot find any static configs or discrepancies.

network setting

  default:
    name: feaplat
    driver: overlay
    attachable: true

docker network inspect feaplat

    {
        "Name": "feaplat",
        "Id": "u2dycw390zqa1vesq6kilt2gw",
        "Created": "2024-07-24T13:56:17.443160093+08:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.1.0/24",
                    "Gateway": "10.0.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "282095cc88cc4bb0205094a0b4cf9f6041310572b6fcb9608f6ffdc1014196ea": {
                "Name": "feapder_backend",
                "EndpointID": "85eb3d3f7d65fe41b5ea4398f2aecea5e1503f62573688ac63e52a16292c334e",
                "MacAddress": "02:42:0a:00:01:bd",
                "IPv4Address": "10.0.1.189/24",
                "IPv6Address": ""
            },
            "90727748b74b4698737169bfcf27f48da977b9976ef1d25bc4a7fdb962ff8a4f": {
                "Name": "feapder_backend_mysql",
                "EndpointID": "41ddc3ddad8bade53408862416a50af7cc64699ddcffae25a83e2af8df2715d1",
                "MacAddress": "02:42:0a:00:01:88",
                "IPv4Address": "10.0.1.136/24",
                "IPv6Address": ""
            },
            "c00b29e316ecb02ac14437e4979f0df98f67d46ab2547b967459866f03af1d2d": {
                "Name": "feapder_influxdb",
                "EndpointID": "8ce3ee412648d5d2531d3f633717cc35cd14f38c0731564c2bc52361f858748b",
                "MacAddress": "02:42:0a:00:01:37",
                "IPv4Address": "10.0.1.55/24",
                "IPv6Address": ""
            },
            "df33dcbb8794dcffdcbaf46673614cc9c07ab1271a08c29354e899387fc20a90": {
                "Name": "feapder_front",
                "EndpointID": "de3345d498787636a4d45fe8620c337917f3dfc7afd7c8b9eb9a352abb65a5fe",
                "MacAddress": "02:42:0a:00:01:84",
                "IPv4Address": "10.0.1.132/24",
                "IPv6Address": ""
            },
            "f1f1d1c208314d267a00cd4f5c433ad0d2f2b31eb47117c00354a2982a95bb1d": {
                "Name": "feapder_backend_redis",
                "EndpointID": "2993e562319a7627bd754c3fa71547b66c4157a770f77bd1f9ff1ff8e038eaf1",
                "MacAddress": "02:42:0a:00:01:3d",
                "IPv4Address": "10.0.1.61/24",
                "IPv6Address": ""
            },
            "lb-feaplat": {
                "Name": "feaplat-endpoint",
                "EndpointID": "bcabb15ff0d9c7278e1434d433170720535267300de413896f7a1fe3c137fc8c",
                "MacAddress": "02:42:0a:00:01:a7",
                "IPv4Address": "10.0.1.167/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {
            "com.docker.compose.network": "feaplat",
            "com.docker.compose.project": "feaplat",
            "com.docker.compose.version": "1.29.2"
        },
        "Peers": [
            {
                "Name": "2598b1a2405b",
                "IP": "20.20.1.10"
            },
            {
                "Name": "de5803250b3b",
                "IP": "20.20.1.11"
            },

My network config is as shown above. I have 20 nodes, and all of them print these logs:

level=warning msg="peerDbAdd transient condition - Key:10.0.1.217 02:42:0a:00:01:d9 cardinality:63 db state:Set{{440800134a5f4768ed3acd26940aac709bf5a6f475091074822608c6c77d93b6 10.111...1a24edf4c10325eebab9aaf
 level=warning msg="Neighbor entry already present for IP 10.0.1.217, mac 02:42:0a:00:01:d9 neighbor:&{dstIP:[0 0 0 0 0 0 0 0 0 0 255 255 10 0 1 217] dstMac:[2 66 10 0 1 217] linkName:v...y:0} forceUpdate:false"
 level=warning msg="peerDbDelete transient condition - Key:10.0.1.144 02:42:0a:00:01:90 cardinality:144 db state:Set{{f1d4f43b052a2c4f2aa78d708564f41146795e66072bb3ec486f9733bf995a93 10...69d03763b95f1a214cdfc92
 level=warning msg="addServiceBinding 0f6d32b8590f812262de1ceb50a23a418a1454245fdfe9330253c952d0405272 possible transient state ok:true entries:7 set:true Set{82eade33ffe2f223fd19bce701...78e2f4cc2db86281e72a823
 level=warning msg="peerDbAdd transient condition - Key:10.0.1.190 02:42:0a:00:01:be cardinality:252 db state:Set{{4e77d3b3b06651cfdb4ce4a5ef2ed3db531c39a0c56871a89ddf8e11e7ad97fa 10.11...e605b715dc6bf880bf40e8b
 level=warning msg="Neighbor entry already present for IP 10.0.1.190, mac 02:42:0a:00:01:be neighbor:&{dstIP:[0 0 0 0 0 0 0 0 0 0 255 255 10 0 1 190] dstMac:[2 66 10 0 1 190] linkName:v...y:0} forceUpdate:false"
 level=warning msg="peerDbAdd transient condition - Key:10.0.1.186 02:42:0a:00:01:ba cardinality:54 db state:Set{{fe0045e9ca0283502ed655cae8ae9cfea3c6998d85d68ee9d643c6390dda6f35 10.111...d65cb0b8a2bcf5f92b19887
 level=warning msg="Neighbor entry already present for IP 10.0.1.186, mac 02:42:0a:00:01:ba neighbor:&{dstIP:[0 0 0 0 0 0 0 0 0 0 255 255 10 0 1 186] dstMac:[2 66 10 0 1 186] linkName:v...y:0} forceUpdate:false"
 level=warning msg="peerDbAdd transient condition - Key:10.0.1.147 02:42:0a:00:01:93 cardinality:169 db state:Set{{f8d9a1b0790e906edb95fa11502a998fd94f8c9f3aaa77212c308e9dc7a72fd3 10.11...5967b4a688e7ffed0b92fc8
 level=warning msg="Neighbor entry already present for IP 10.0.1.147, mac 02:42:0a:00:01:93 neighbor:&{dstIP:[0 0 0 0 0 0 0 0 0 0 255 255 10 0 1 147] dstMac:[2 66 10 0 1 147] linkName:v...y:0} forceUpdate:false"

Also, some of my containers restart frequently. I don't know whether the problem is a container error or an overlay network error.
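
To check whether container churn lines up with the spam, the engine's event stream can be watched while the warnings appear (a sketch using only stock CLI commands):

# Which containers are currently restart-looping on this node?
docker ps --all --filter status=restarting

# Stream lifecycle events to correlate with the peerDb warnings in real time
# (runs until interrupted):
docker events --filter type=container --filter event=die --filter event=start

Frequent die/start cycles on containers attached to the overlay network would plausibly keep the peer database churning on every node.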

In my Docker Swarm, all nodes print "peerDbAdd", "peerDbDelete", and "Neighbor entry already present", and the key IPs include containers that are still running as well as ones that stopped on error. Are the IP state and modification signals sent from the node where the container is running, or is this something like an IP broadcast storm? Can anyone help?

I have a hunch this issue might be related to #47728