007revad / Synology_HDD_db

Add your HDD, SSD and NVMe drives to your Synology's compatible drive database and a lot more

Failed to re-add Samsung SSD after upgrading to DSM 7.2.1-69057 Update 3 and running the script

jk1z opened this issue

After upgrading to DSM 7.2.1-69057 Update 3 and re-running the script, the SSD is somehow no longer included in the storage pool. How can I re-add the SSD?

[screenshots]

Did you do the Online Assemble that the warning mentioned?

See how to do an online assemble

@007revad No, because Online Assemble is only available for an "available storage pool", and that option wasn't there.

Try running the script again then rebooting.

@jk1z is not responding.

@007revad Hi, I have tried downloading the new binary and running it. Still no luck. I have even taken the NVMe drive out and formatted it.

I think it's stuck in a place where it's "in" the storage pool config. However, because it's not in the UI, I cannot remove it and then repair it.

Is there a way I can SSH in and remove the SSD's storage config?

> I have even taken the NVMe drive out and formatted it.

As you have no data on the drive now, you could try https://github.com/007revad/Synology_M2_volume, which will create the DSM system and swap partitions and create a storage pool. After a reboot, the Online Assemble option should appear.
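
Before rebooting you can sanity-check what the script created. A minimal sketch, assuming the drive shows up as /dev/nvme0n1 (the usual Synology layout being p1 = DSM system, p2 = swap, p3 = storage pool data):

ls /dev/nvme0n1p*            # the script should have created nvme0n1p1, p2 and p3
grep nvme /proc/partitions   # partition sizes in 1 KiB blocks; p3 should cover most of the drive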

@007revad I'm getting this error :'(
[screenshot]

[screenshots] I have successfully executed the script but did not get the new storage pool.

DSM thinks that NVMe drive is part of a cache group. Maybe a read/write cache with 1 NVMe drive missing.

Did you previously have a cache setup for volume 1?

If you go to "Storage Manager > Storage" and click on "Create > Volume" is the NVMe drive available?

Yes, I had one in DSM 7.2.1 Update 2, but once I upgraded to Update 3 the NVMe drive disappeared from the cache group.

A couple of people have reported that they needed to run the script and reboot, 2 or 3 times to get their NVMe drives back.

Try the following command:
sudo -i synostorage --unlock-disk /dev/nvme0

Then reboot.

Apparently it can take a few hours for things to appear normal.
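
One hedged way to check whether the unlock took effect after the reboot is to re-check how udev classifies the drive; SYNO_DEV_DISKPORTTYPE=CACHE (as seen in the udevadm output in this thread) appears to mean DSM is still treating it as a cache device:

udevadm info /dev/nvme0n1 | grep SYNO_DEV_DISKPORTTYPE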

When you're referring to the script, which script is it? The one to use NVMe as an SSD drive, or the one adding third-party NVMe drives to the db?

Still no luck, but I will perform a data scrub to see if it does any good.

> When you're referring to the script, which script is it? The one to use NVMe as an SSD drive, or the one adding third-party NVMe drives to the db?

The syno_hdd_db script.

> The syno_hdd_db script.

I have tried 5 times. The drive is still stuck in the detected state, and I cannot reassemble it.

You could try shutting down the NAS, removing the NVMe drive, booting up, shutting down, re-inserting the NVMe drive, and booting up again to see if that clears the error.

What do the following commands return? (A sketch that collects them all into one file follows the list.)

sudo nvme list

udevadm info /dev/nvme0n1

cat /proc/mdstat | grep -E -A 2 'nvme|unused'

ls /run/synostorage/disk_cache_target

for f in $(ls /run/synostorage/disks/nvme0n1); do echo -n "${f}: " && cat /run/synostorage/disks/nvme0n1/$f && echo; done
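
If it's easier, here's a minimal sketch that collects all of the above into one file (run it from sudo -i; the drive name nvme0n1 and the output path /tmp/nvme_diag.txt are just assumptions for this example):

out=/tmp/nvme_diag.txt
{
  echo "== nvme list ==";        nvme list
  echo "== udevadm info ==";     udevadm info /dev/nvme0n1
  echo "== mdstat ==";           grep -E -A 2 'nvme|unused' /proc/mdstat
  echo "== cache target ==";     ls /run/synostorage/disk_cache_target
  echo "== synostorage flags =="
  for f in /run/synostorage/disks/nvme0n1/*; do
    echo -n "$(basename "$f"): "; cat "$f"; echo
  done
} > "$out" 2>&1
echo "Diagnostics saved to $out"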

Yesterday, a DS918+ user who was already using two Micron 1100 2TB SSDs included in Synology's compatibility list hit a problem after the drive database information was updated.
I will open a separate issue with the details.

[Screenshot 2023-12-29 11:19 AM]

It seems to me that this issue is also relevant.
This problem seems to have been caused by a merge update to DB information already included in the compatibility list.

[Screenshot 2023-12-29 12:21 PM]

ash-4.4# sudo nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
/dev/nvme0n1p1   S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
/dev/nvme0n1p2   S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
/dev/nvme0n1p3   S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
ash-4.4# udevadm info /dev/nvme0n1
P: /devices/pci0000:00/0000:00:1b.0/0000:01:00.0/nvme/nvme0/nvme0n1
N: nvme0n1
E: DEVNAME=/dev/nvme0n1
E: DEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:01:00.0/nvme/nvme0/nvme0n1
E: DEVTYPE=disk
E: MAJOR=259
E: MINOR=0
E: PHYSDEVBUS=pci
E: PHYSDEVDRIVER=nvme
E: PHYSDEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:01:00.0
E: SUBSYSTEM=block
E: SYNO_ATTR_SERIAL=S3X4NB0K300311M
E: SYNO_DEV_DISKPORTTYPE=CACHE
E: SYNO_INFO_PLATFORM_NAME=apollolake
E: SYNO_KERNEL_VERSION=4.4
E: SYNO_SUPPORT_USB_PRINTER=yes
E: SYNO_SUPPORT_XA=no
E: TAGS=:systemd:
E: USEC_INITIALIZED=973961

ash-4.4# cat /proc/mdstat | grep -E -A 2 'nvme|unused'
unused devices: <none>
ash-4.4# ls /run/synostorage/disk_cache_target
ash-4.4# for f in $(ls /run/synostorage/disks/nvme0n1); do echo -n "${f}: " && cat /run/synostorage/disks/nvme0n1/$f && echo; done
adv_damage_weight: 0
adv_status: not_support
bad_sec_ct: -1
below_remain_life_mail_notify_thr: 0
below_remain_life_show_thr: 0
below_remain_life_thr: 0
compatibility: disabled
compatibility_action: {"alert":false,"hide_alloc_status":false,"hide_is4Kn":false,"hide_remain_life":false,"hide_sb_days_left":false,"hide_serial":false,"hide_temperature":false,"hide_unc":false,"notification":false,"notify_health_status":true,"notify_lifetime":true,"notify_unc":true,"selectable":true,"send_health_report":true,"show_lifetime_chart":true,"ui_compatibility":"support"}
compatibility.lock:
container:
critical_warning: 0
dsl_cmd_support: 0
erase_time: 1
firm: 3B7QCXE7
firm_status_from_db: do_nothing
firm_status_from_db.lock:
force_compatibility: support
id: M.2 Drive 1
ironwolf: 0
is_bundle_ssd: 0
is_syno_drive: 1
low_perf_in_raid: normal
low_perf_in_raid_disk_list:
m2_pool_support: 1

mask_serial: 0
model: Samsung SSD 960 EVO 500GB
predict_status: not_support
predict_weight: 0
read_only: 0
remain_life: 99
remain_life_danger: 0
reset_fail_status: normal
reset_fail_weight: 0
sct_cmd_support: 0
seq_status: normal
serial: S3X4NB0K300311M
smart: normal
smart_attr_ignore: 1
smart_damage_weight: 0
smart_selftest_log_type: 0
smart_test_ignore: 1
smart_test_support: 0
ssd_bad_block_over_thr: 0
temperature: 32
timeout_status: normal
timeout_weight: 0
type: SSD
ui_serial: S3X4NB0K300311M
unc_status: normal
unc_weight: 0
vendor: Samsung
wdda_status: not_support
wdda_support: 0

These 2 stand out to me (a quick re-check loop follows the list):

compatibility: disabled
is_syno_drive: 1
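
A quick re-check loop for just those two flags after each script run and reboot, reading the same files the dump above came from:

for f in compatibility is_syno_drive; do
  echo -n "$f: "; cat /run/synostorage/disks/nvme0n1/$f; echo
done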

Have you previously run Synology_enable_M2_volume?

What do these commands return?

ls -l /usr/lib/libhwcontrol.so.*

md5sum -b /usr/lib/libhwcontrol.so.1

The last command should return:
afdcbf2ca3aa188cd363e276a1f89754 */usr/lib/libhwcontrol.so.1
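
If it helps, the comparison can be scripted against the value quoted above; a mismatch only tells you the library has been modified, not by what:

expected=afdcbf2ca3aa188cd363e276a1f89754
actual=$(md5sum -b /usr/lib/libhwcontrol.so.1 | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "libhwcontrol.so.1 is unmodified"
else
  echo "libhwcontrol.so.1 has been modified: $actual"
fi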

Also try the following (the whole sequence is sketched after the list):

  1. Disable any syno_hdd_db schedules you have.
  2. Run sudo -i syno_hdd_db.sh --restore, then reboot.
  3. Run this version of syno_hdd_db.sh with the -nr options: https://github.com/007revad/Synology_HDD_db/releases/tag/v3.3.74
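
The same steps as a minimal sketch; /volume1/scripts is a hypothetical path, substitute wherever you saved the v3.3.74 script:

# hypothetical location; use your actual script path
sudo -i /volume1/scripts/syno_hdd_db.sh --restore
sudo reboot
# after the NAS comes back up, run the v3.3.74 build with the -nr options
sudo -i /volume1/scripts/syno_hdd_db.sh -nr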

[screenshot]

> Have you previously run Synology_enable_M2_volume?

I don't think so. Should I?

[screenshot]
Looks like this file has been modified.

I will restore all of the files and try v3.3.74.

I did the following. It's still stuck.
[screenshot]

I ran the debug commands again. Here is the output.

overlord@Synology:~$ sudo nvme list
Password:
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
/dev/nvme0n1p1   S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
/dev/nvme0n1p2   S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
/dev/nvme0n1p3   S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
overlord@Synology:~$ udevadm info /dev/nvme0n1
P: /devices/pci0000:00/0000:00:1b.0/0000:01:00.0/nvme/nvme0/nvme0n1
N: nvme0n1
E: DEVNAME=/dev/nvme0n1
E: DEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:01:00.0/nvme/nvme0/nvme0n1
E: DEVTYPE=disk
E: MAJOR=259
E: MINOR=0
E: PHYSDEVBUS=pci
E: PHYSDEVDRIVER=nvme
E: PHYSDEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:01:00.0
E: SUBSYSTEM=block
E: SYNO_ATTR_SERIAL=S3X4NB0K300311M
E: SYNO_DEV_DISKPORTTYPE=CACHE
E: SYNO_INFO_PLATFORM_NAME=apollolake
E: SYNO_KERNEL_VERSION=4.4
E: SYNO_SUPPORT_USB_PRINTER=yes
E: SYNO_SUPPORT_XA=no
E: TAGS=:systemd:
E: USEC_INITIALIZED=382185

overlord@Synology:~$ cat /proc/mdstat | grep -E -A 2 'nvme|unused'
unused devices: <none>
overlord@Synology:~$ ls /run/synostorage/disk_cache_target
overlord@Synology:~$ for f in $(ls /run/synostorage/disks/nvme0n1); do echo -n "${f}: " && cat /run/synostorage/disks/nvme0n1/$f && echo; done
adv_damage_weight: 0
adv_status: not_support
bad_sec_ct: -1
below_remain_life_mail_notify_thr: 0
below_remain_life_show_thr: 0
below_remain_life_thr: 0
compatibility: disabled
compatibility_action: {"alert":false,"hide_alloc_status":false,"hide_is4Kn":false,"hide_remain_life":false,"hide_sb_days_left":false,"hide_serial":false,"hide_temperature":false,"hide_unc":false,"notification":false,"notify_health_status":true,"notify_lifetime":true,"notify_unc":true,"selectable":true,"send_health_report":true,"show_lifetime_chart":true,"ui_compatibility":"support"}
compatibility.lock:
container:
critical_warning: 0
dsl_cmd_support: 0
erase_time: 1
firm: 3B7QCXE7
firm_status_from_db: do_nothing
firm_status_from_db.lock:
force_compatibility: support
id: M.2 Drive 1
ironwolf: 0
is_bundle_ssd: 0
is_syno_drive: 1
low_perf_in_raid: normal
low_perf_in_raid_disk_list:
m2_pool_support: 1

mask_serial: 0
model: Samsung SSD 960 EVO 500GB
predict_status: not_support
predict_weight: 0
read_only: 0
remain_life: 99
remain_life_danger: 0
reset_fail_status: normal
reset_fail_weight: 0
sct_cmd_support: 0
seq_status: normal
serial: S3X4NB0K300311M
smart: normal
smart_attr_ignore: 1
smart_damage_weight: 0
smart_selftest_log_type: 0
smart_test_ignore: 1
smart_test_support: 0
ssd_bad_block_over_thr: 0
temperature: 31
timeout_status: normal
timeout_weight: 0
type: SSD
ui_serial: S3X4NB0K300311M
unc_status: normal
unc_weight: 0
vendor: Samsung
wdda_status: not_support
wdda_support: 0

> I did the following. It's still stuck. [screenshot]

I should have asked if you were using Xpenology. Hopefully PeterSuh-Q3 can help you.

Ah, OK, I see. I might replace the NVMe drive with another one. It looks like this config is permanently stuck.