linux-nvme / nvme-stas

NVMe STorage Appliance Services

nvme-stas does not disconnect PDC on receipt of mDNS goodbye packet

martin-gpy opened this issue · comments

I noticed a change in behavior from SLES15 SP4's nvme-stas-1.1.9-150400.3.9.3 to SP5's nvme-stas-2.2.2-150500.3.6.1 in how the persistent discovery controller (PDC) is handled on receipt of an mDNS goodbye packet.

SLES15 SP5 Config:

# uname -r
5.14.21-150500.53-default

# rpm -qa|grep nvme
libnvme-devel-1.4+18.g932f9c37e05a-150500.4.3.1.x86_64
nvme-cli-2.4+17.gf4cfca93998a-150500.4.3.1.x86_64
libnvme-mi1-1.4+18.g932f9c37e05a-150500.4.3.1.x86_64
python3-libnvme-1.4+18.g932f9c37e05a-150500.4.3.1.x86_64
nvme-cli-bash-completion-2.4+17.gf4cfca93998a-150500.4.3.1.noarch
libnvme1-1.4+18.g932f9c37e05a-150500.4.3.1.x86_64
nvme-stas-2.2.2-150500.3.6.1.x86_64
nvme-cli-zsh-completion-2.4+17.gf4cfca93998a-150500.4.3.1.noarch

Whenever an NVMe/TCP link went down, the respective PDC entries would be removed from the staf cache (as seen in 'stafctl ls') with SP4's nvme-stas-1.1.9-150400.3.9.3, but that is not the case with SP5's nvme-stas-2.2.2-150500.3.6.1.

In the presence of mDNS goodbye packets:

  1. The NVMe/TCP link down will cause an immediate cache expiration in avahi.
  2. The host will start reconnecting to the PDC and any I/O controllers that were disconnected as the result of the link down for 10 minutes (i.e. the default ctrl_loss_tmo).
  3. nvme-stas will disconnect from the existing PDC upon receipt of the goodbye packet and immediate cache expiration. All PDC reconnects will stop. But IO controller reconnects will continue.
  4. After the link comes back online, an mDNS announcement is issued and the host should connect back to the PDC and to all the IO subsystems returned in the DLPE of the PDC.

Step 3 above is where the behavior has changed from SP4 to SP5: nvme-stas no longer disconnects the PDC on receipt of the mDNS goodbye packet.

So is this change in behavior intentional? What necessitated it?

More details below.

With SP5's nvme-stas-2.2.2-150500.3.6.1:

An example of the 'stafctl ls' output when all NVMe/TCP links are up:

# stafctl ls | grep 192
  'traddr': '192.168.1.4',
  'traddr': '192.168.1.1',
  'traddr': '192.168.1.2',
  'traddr': '192.168.1.8',
  'traddr': '192.168.1.3',
  'traddr': '192.168.1.0',
  'traddr': '192.168.1.9',
  'traddr': '192.168.1.5',
  'traddr': '192.168.3.54',
  'traddr': '192.168.3.55',
  'traddr': '192.168.3.57',
  'traddr': '192.168.3.56',

But after the NVMe/TCP links are taken down, the 192.168.3.* entries still remain in the staf cache:

#  stafctl ls | grep 192
  'traddr': '192.168.1.4',
  'traddr': '192.168.1.1',
  'traddr': '192.168.1.2',
  'traddr': '192.168.1.8',
  'traddr': '192.168.1.3',
  'traddr': '192.168.1.0',
  'traddr': '192.168.1.9',
  'traddr': '192.168.1.5',
  'traddr': '192.168.3.54',
  'traddr': '192.168.3.55',
  'traddr': '192.168.3.57',
  'traddr': '192.168.3.56',

avahi-browse -a shows the services removed, and stafd logs the corresponding removals:

2023-07-16T13:32:52.383716-04:00 ssan-rx2530-13 stafd[20311]: Avahi._service_removed()           - interface=8 (vlan200), protocol=IPv4, stype=_nvme-disc._tcp, domain=local, flags=4 (mcast),       name=vs2 dd92d8b9-23f4-11ee-883a-00a098fd5d4f
2023-07-16T13:32:52.610378-04:00 ssan-rx2530-13 stafd[20311]: Avahi._service_removed()           - interface=8 (vlan200), protocol=IPv4, stype=_nvme-disc._tcp, domain=local, flags=4 (mcast),       name=vs2 ddd17425-23f4-11ee-883a-00a098fd5d4f
2023-07-16T13:32:52.883374-04:00 ssan-rx2530-13 stafd[20311]: Avahi._service_removed()           - interface=8 (vlan200), protocol=IPv4, stype=_nvme-disc._tcp, domain=local, flags=4 (mcast),       name=vs2 de113d25-23f4-11ee-b05e-d039ea010200
2023-07-16T13:32:53.096977-04:00 ssan-rx2530-13 stafd[20311]: Avahi._service_removed()           - interface=8 (vlan200), protocol=IPv4, stype=_nvme-disc._tcp, domain=local, flags=4 (mcast),       name=vs2 df617474-23f4-11ee-b05e-d039ea010200
2023-07-16T13:32:53.394254-04:00 ssan-rx2530-13 stacd[20341]: ServiceABC._config_ctrls()
2023-07-16T13:32:53.394394-04:00 ssan-rx2530-13 stacd[20341]: Stac._config_ctrls_finish()        - configured_ctrl_list = []
2023-07-16T13:32:53.405316-04:00 ssan-rx2530-13 stacd[20341]: Stac._config_ctrls_finish()        - discovered_ctrl_list = [(tcp, 192.168.3.57, 4420, nqn.1992-08.com.netapp:sn.c9da3ab223e811ee883a00a098fd5d4f:subsystem.s1, vlan200), (tcp, 192.168.3.56, 4420, nqn.1992-08.com.netapp:sn.c9da3ab223e811ee883a00a098fd5d4f:subsystem.s1, vlan200), (tcp, 192.168.3.55, 4420, nqn.1992-08.com.netapp:sn.c9da3ab223e811ee883a00a098fd5d4f:subsystem.s1, vlan200), (tcp, 192.168.3.54, 4420, nqn.1992-08.com.netapp:sn.c9da3ab223e811ee883a00a098fd5d4f:subsystem.s1, vlan200)]
2023-07-16T13:32:53.405705-04:00 ssan-rx2530-13 stacd[20341]: Stac._config_ctrls_finish()        - controllers_to_add   = []
2023-07-16T13:32:53.405798-04:00 ssan-rx2530-13 stacd[20341]: Stac._config_ctrls_finish()        - controllers_to_del   = []
2023-07-16T13:32:53.406094-04:00 ssan-rx2530-13 stacd[20341]: Stac._config_ctrls_finish()        - no_disconnect=False, match_trtypes=False, svc_conf.disconnect_trtypes=['tcp']
2023-07-16T13:32:53.406315-04:00 ssan-rx2530-13 stacd[20341]: Stac._dump_last_known_config()     - IOC count = 4
2023-07-16T13:32:54.599319-04:00 ssan-rx2530-13 stafd[20311]: ServiceABC._config_ctrls()
2023-07-16T13:32:54.599902-04:00 ssan-rx2530-13 stafd[20311]: Staf._config_ctrls_finish()        - configured_ctrl_list = []
2023-07-16T13:32:54.600152-04:00 ssan-rx2530-13 stafd[20311]: Staf._config_ctrls_finish()        - discovered_ctrl_list = [(tcp, 192.168.1.1, 8009, nqn.1992-08.com.netapp:sn.ddd7b562225311ee85af00a098fd6a1d:discovery, eth1), (tcp, 192.168.1.9, 8009, nqn.1992-08.com.netapp:sn.e159f4dd225311ee85af00a098fd6a1d:discovery, eth1), (tcp, 192.168.1.3, 8009, nqn.1992-08.com.netapp:sn.de68790d225311ee963d00a098fd6359:discovery, eth1), (tcp, 192.168.1.5, 8009, nqn.1992-08.com.netapp:sn.de7fe907225311ee85af00a098fd6a1d:discovery, eth1), (tcp, 192.168.1.0, 8009, nqn.1992-08.com.netapp:sn.ddd7b562225311ee85af00a098fd6a1d:discovery, eth1), (tcp, 192.168.1.8, 8009, nqn.1992-08.com.netapp:sn.e159f4dd225311ee85af00a098fd6a1d:discovery, eth1), (tcp, 192.168.1.2, 8009, nqn.1992-08.com.netapp:sn.de68790d225311ee963d00a098fd6359:discovery, eth1), (tcp, 192.168.1.4, 8009, nqn.1992-08.com.netapp:sn.de7fe907225311ee85af00a098fd6a1d:discovery, eth1)]
2023-07-16T13:32:54.600368-04:00 ssan-rx2530-13 stafd[20311]: Staf._config_ctrls_finish()        - referral_ctrl_list   = []
2023-07-16T13:32:54.604356-04:00 ssan-rx2530-13 stafd[20311]: Staf._config_ctrls_finish()        - must_remove_list     = []
2023-07-16T13:32:54.604538-04:00 ssan-rx2530-13 stafd[20311]: Staf._config_ctrls_finish()        - controllers_to_add   = []
2023-07-16T13:32:54.604744-04:00 ssan-rx2530-13 stafd[20311]: Staf._config_ctrls_finish()        - controllers_to_del   = []
2023-07-16T13:32:54.606921-04:00 ssan-rx2530-13 stafd[20311]: Staf._dump_last_known_config()     - DC count = 12

As seen above, the discovered controller list looks right and is consistent with avahi, but controllers_to_del is not updated.
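For context, these logs suggest controllers_to_del is derived by comparing the set of controllers stafd currently has configured against the set still being advertised. A minimal sketch of that reconciliation, with hypothetical tuples standing in for the real transport-id objects used by nvme-stas:

```python
# Sketch of the reconciliation step suggested by the logs above.
# The tuples (transport, traddr, trsvcid, nqn, iface) are a stand-in
# for the real transport-id objects used by nvme-stas.

def reconcile(configured, discovered, referrals=()):
    """Return (to_add, to_del) given the currently connected controllers
    and the ones still advertised via mDNS or learned through referrals."""
    wanted = set(discovered) | set(referrals)
    have = set(configured)
    to_add = wanted - have
    to_del = have - wanted
    return to_add, to_del

# Once the mDNS entries are gone, a still-connected controller lands in
# to_del -- this is the SP4 behavior shown in the second log excerpt.
have = {("tcp", "192.168.3.54", 8009, "nqn.discovery", "vlan200")}
to_add, to_del = reconcile(configured=have, discovered=set())
```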

For comparison, with SLES15 SP4's nvme-stas-1.1.9-150400.3.9.3 one sees something like this:

2023-07-16T13:22:25.760028-04:00 ssan-rx2530-24 kernel: [2245972.396801][T25121] nvme nvme2: Reconnecting in 10 seconds...
2023-07-16T13:22:26.011178-04:00 ssan-rx2530-24 stafd[22855]: Avahi._service_removed()           - interface=8 (eth6), protocol=IPv4, stype=_nvme-disc._tcp, domain=local, flags=4 (mcast),       name=vs1 dc880912-23f4-11ee-883a-00a098fd5d4f
2023-07-16T13:22:26.268572-04:00 ssan-rx2530-24 stafd[22855]: Avahi._service_removed()           - interface=8 (eth6), protocol=IPv4, stype=_nvme-disc._tcp, domain=local, flags=4 (mcast),       name=vs1 dcc9dd33-23f4-11ee-883a-00a098fd5d4f
2023-07-16T13:22:26.521389-04:00 ssan-rx2530-24 stafd[22855]: Avahi._service_removed()           - interface=8 (eth6), protocol=IPv4, stype=_nvme-disc._tcp, domain=local, flags=4 (mcast),       name=vs1 dd0a310e-23f4-11ee-b05e-d039ea010200
2023-07-16T13:22:26.756617-04:00 ssan-rx2530-24 stafd[22855]: Avahi._service_removed()           - interface=8 (eth6), protocol=IPv4, stype=_nvme-disc._tcp, domain=local, flags=4 (mcast),       name=vs1 dd4e2ea9-23f4-11ee-b05e-d039ea010200
2023-07-16T13:22:27.074280-04:00 ssan-rx2530-24 stacd[22873]: ServiceABC._config_ctrls()
2023-07-16T13:22:27.074515-04:00 ssan-rx2530-24 stacd[22873]: Stac._config_ctrls_finish()        - configured_ctrl_list = []
2023-07-16T13:22:27.080964-04:00 ssan-rx2530-24 stacd[22873]: Stac._config_ctrls_finish()        - discovered_ctrl_list = []
2023-07-16T13:22:27.081314-04:00 ssan-rx2530-24 stacd[22873]: Stac._config_ctrls_finish()        - controllers_to_add   = []
2023-07-16T13:22:27.081584-04:00 ssan-rx2530-24 stacd[22873]: Stac._config_ctrls_finish()        - controllers_to_del   = []
2023-07-16T13:22:28.258572-04:00 ssan-rx2530-24 stafd[22855]: ServiceABC._config_ctrls()
2023-07-16T13:22:28.258784-04:00 ssan-rx2530-24 stafd[22855]: Staf._config_ctrls_finish()        - configured_ctrl_list = []
2023-07-16T13:22:28.258935-04:00 ssan-rx2530-24 stafd[22855]: Staf._config_ctrls_finish()        - discovered_ctrl_list = []
2023-07-16T13:22:28.259113-04:00 ssan-rx2530-24 stafd[22855]: Staf._config_ctrls_finish()        - referral_ctrl_list   = []
2023-07-16T13:22:28.259488-04:00 ssan-rx2530-24 stafd[22855]: Staf._config_ctrls_finish()        - controllers_to_add   = []
2023-07-16T13:22:28.259792-04:00 ssan-rx2530-24 stafd[22855]: Staf._config_ctrls_finish()        - controllers_to_del   = [(tcp, 192.168.3.52, 8009, nqn.1992-08.com.netapp:sn.c38686db23e811ee883a00a098fd5d4f:discovery, eth6), (tcp, 192.168.3.51, 8009, nqn.1992-08.com.netapp:sn.c38686db23e811ee883a00a098fd5d4f:discovery, eth6), (tcp, 192.168.3.53, 8009, nqn.1992-08.com.netapp:sn.c38686db23e811ee883a00a098fd5d4f:discovery, eth6), (tcp, 192.168.3.50, 8009, nqn.1992-08.com.netapp:sn.c38686db23e811ee883a00a098fd5d4f:discovery, eth6)]
2023-07-16T13:22:28.260186-04:00 ssan-rx2530-24 stafd[22855]: Controller.disconnect()            - (tcp, 192.168.3.52, 8009, nqn.1992-08.com.netapp:sn.c38686db23e811ee883a00a098fd5d4f:discovery, eth6) | nvme0
2023-07-16T13:22:28.260657-04:00 ssan-rx2530-24 stafd[22855]: Controller.disconnect()            - (tcp, 192.168.3.51, 8009, nqn.1992-08.com.netapp:sn.c38686db23e811ee883a00a098fd5d4f:discovery, eth6) | nvme1
2023-07-16T13:22:28.261058-04:00 ssan-rx2530-24 stafd[22855]: Controller.disconnect()            - (tcp, 192.168.3.53, 8009, nqn.1992-08.com.netapp:sn.c38686db23e811ee883a00a098fd5d4f:discovery, eth6) | nvme2
2023-07-16T13:22:28.261447-04:00 ssan-rx2530-24 stafd[22855]: Controller.disconnect()            - (tcp, 192.168.3.50, 8009, nqn.1992-08.com.netapp:sn.c38686db23e811ee883a00a098fd5d4f:discovery, eth6) | nvme3
2023-07-16T13:22:28.261838-04:00 ssan-rx2530-24 stafd[22855]: ServiceABC.remove_controller()
2023-07-16T13:22:28.262119-04:00 ssan-rx2530-24 stafd[22855]: ServiceABC._remove_ctrl_from_dict()- (tcp, 192.168.3.52, 8009, nqn.1992-08.com.netapp:sn.c38686db23e811ee883a00a098fd5d4f:discovery, eth6) | nvme0
2023-07-16T13:22:28.262395-04:00 ssan-rx2530-24 stafd[22855]: ControllerABC.kill()               - (tcp, 192.168.3.52, 8009, nqn.1992-08.com.netapp:sn.c3

Here one can see controllers_to_del is properly updated with SP4, unlike in the SP5 case described earlier.

That is correct. We do not want to disconnect too quickly from discovery controllers because that also flushes the Discovery Log Pages (DLP) and would result in I/O controllers being removed. So it's very important to not react too quickly to missing mDNS information.

The new behavior requires that both the mDNS advertisement be missing and the connection to the Discovery Controller (DC) be lost before the DC is considered unreachable. When both conditions are met, a timer is started (default 3 days), after which (and only after which) the DC and all its associated data (DLP) are removed from the system.
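The combined condition can be sketched as follows (hypothetical names; the real stafd implementation differs, but the logic is the same: the grace period only starts once both signals are lost, and removal only happens after it expires):

```python
import time

# Hypothetical sketch of the v2.x removal policy described above:
# a DC is dropped only when (a) its mDNS advertisement is gone AND
# (b) the connection to it is lost, and even then only after a grace
# period (default 3 days, per zeroconf-connections-persistence).

PERSISTENCE_SEC = 3 * 24 * 3600  # default: 3 days

class DiscoveryController:
    def __init__(self):
        self.unreachable_since = None  # timestamp, or None

    def update(self, mdns_advertised, connected, now=None):
        now = time.monotonic() if now is None else now
        if mdns_advertised or connected:
            self.unreachable_since = None   # either signal back -> reset timer
        elif self.unreachable_since is None:
            self.unreachable_since = now    # both lost -> start timer

    def should_remove(self, now=None):
        now = time.monotonic() if now is None else now
        return (self.unreachable_since is not None
                and now - self.unreachable_since >= PERSISTENCE_SEC)

dc = DiscoveryController()
dc.update(mdns_advertised=False, connected=True, now=0)    # goodbye alone:
assert not dc.should_remove(now=PERSISTENCE_SEC + 1)       # never removed
dc.update(mdns_advertised=False, connected=False, now=10)  # both lost
assert not dc.should_remove(now=10)                        # timer just started
assert dc.should_remove(now=10 + PERSISTENCE_SEC)          # removed after 3 days
```

This matches the log excerpts: the goodbye packet alone (mDNS missing, DC connection still retrying) no longer triggers a disconnect.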

Imagine a CDC that fails (for whatever reason) and is no longer publishing mDNS information. You don't want to just discard it right away (along with all the associated DLP and I/O controller connections made from the DLP) without giving someone a chance to fix it. We're giving technicians some time (default 3 days) to fix things before automatically cleaning up.

Ref: stafd.conf: zeroconf-connections-persistence
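For reference, that knob lives in /etc/stas/stafd.conf; something like the following (section and option names as I recall them from the stafd.conf man page, so double-check against your installed version):

```ini
[Discovery controller connection management]
# How long to keep zeroconf-discovered (mDNS) Discovery Controller
# connections after they become unreachable. The 3-day default is the
# "default 3 days" timer mentioned above.
zeroconf-connections-persistence=72hours
```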

Closing. Works as expected.