007revad / Synology_HDD_db

Add your HDD, SSD and NVMe drives to your Synology's compatible drive database and a lot more

DSM 7.2.1 Diskstation 918+ NVME volume not supported

ManelGT1956 opened this issue · comments

Previously everything was working well, no problems, and I was happy with your scripts... Then I made the stupid decision to upgrade from the previous DSM (7.2-64570 Update 3, all good...) to the new DSM 7.2.1-69057. Really a bad move: poof, volumes went read-only and, above all, NVMe drives are no longer accepted as volumes, "not supported in this DSM version..."
Any workaround or solution to this, please?

Even when also using your Enable M2 volume script, it seems...

Correction: as the volumes are in a read-only state, that is probably what is causing the scripts to be ineffective, since they couldn't change (write to) any files created or updated by the DSM upgrade process...

The volumes being in a read-only state shouldn't prevent the scripts from working... unless the DSM system partition was in a read-only state.

Why are your volumes currently read only?

Fucking shitty update, I suppose... my DiskStation had its 4 bays plus a DX517 expansion with 5 more HDDs, and also 2 NVMe drives set up as volumes, all working well until now with your scripts, no problems as far as I could see. Then I fired up the upgrade and, when it finished, the machine rebooted with the NVMe drives no longer accepted for volumes (only for cache) and all the other volumes set to read-only, even though all HDDs are healthy (meanwhile, as suggested by the system, I also did a partition repair, to no avail...).

I am making a (painfully slow) backup of the volume(s) to an external USB drive (they are all 16 TB HDDs). Then I will try to convert each volume, one at a time, from read-only to read-write, as that option appears in the Storage Manager menu (I don't know how it will work, or whether it's a good option for my data...).
We'll see... as a second resort I will try your script to "downgrade" DSM to the previous version and, as a last resort, reset/reinstall everything and restore as much data as possible from backup...
Wish me luck... (and sorry, English is not my native language)

I'm about to install DSM 7.2.1 on a DS720+

Then I'll see if my Synology_DSM_reinstall script will let me downgrade to DSM 7.2 Update 1.

O.K., thanks... let's see how it goes... hope it goes well.

It is possible to downgrade from 7.2.1 to 7.2, but the DSM_reinstall script needs to be updated to handle downgrading from 7.2.1 to 7.2.

So I manually edited /etc/defaults/VERSION and was able to downgrade to DSM 7.2

Steps I used to downgrade from 7.2.1 to "7.2 with Update 1"

NOTE: If you follow these steps, disable Secure Signin first.

  1. Edit /etc/defaults/VERSION to change these lines (a scripted sketch of this edit follows after the steps):
    micro="0"
    buildnumber="64569"
    base="64569"
    productversion="7.2"

  2. Manual DSM Install to "7.2 with Update 1".

  3. Once the NAS has finished rebooting, refresh the browser and log in. It didn't auto-refresh like normal.

  4. Uninstall incompatible packages from Package Center:
    Synology Photos
    Synology Application Service

  5. Uninstall incompatible default packages via SSH:
    synopkg uninstall StorageManager
    synopkg uninstall QuickConnect
    synopkg uninstall ScsiTarget
    synopkg uninstall SecureSignIn

  6. Re-install "7.2 with Update 1" again to install the correct default package versions.
    Edit /etc/defaults/VERSION again.
    Manual DSM Install to "7.2 with Update 1".

  7. Re-install uninstalled packages:
    Synology Photos
    Synology Application Service
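
For reference, one way to script step 1 over SSH. This is only a minimal sketch (editing the file with vi works just as well) and it assumes the four keys already exist as key="value" lines in /etc/defaults/VERSION:

sudo -i                                              # the file is only writable by root
cp /etc/defaults/VERSION /etc/defaults/VERSION.bak   # keep a backup of the original
sed -i 's/^micro=.*/micro="0"/' /etc/defaults/VERSION
sed -i 's/^buildnumber=.*/buildnumber="64569"/' /etc/defaults/VERSION
sed -i 's/^base=.*/base="64569"/' /etc/defaults/VERSION
sed -i 's/^productversion=.*/productversion="7.2"/' /etc/defaults/VERSION

Then continue with the manual DSM install in step 2.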

Thank you very much!
I will try it as soon as I finish backing up volume 1, among the alternative steps I mentioned.
Then, when done, I will give feedback.
Again, thank you for your effort and attention, see ya then... ;)

Notes:

  • the backup of volume 1, around 12 TB of data (there are 9 volumes, plus two 1 TB NVMe volumes - those are not accessible for now...), is going to an externally connected USB HDD and will take at least a day, as far as I can see;
  • the upgrade not only put my volumes in read-only mode but also left the NAS without most of its packages, even the so-called "standard" ones...
  • so, I'm waiting for it to finish and will report again in a couple of days ;)

Someone in issue #150 had a similar problem.

They solved it by doing the following steps.

  1. Shutdown NAS.
  2. Remove all M2 devices.
  3. Power on NAS.
  4. Shutdown NAS.
  5. Add M2 devices Kingston and Adata.
  6. Power on NAS.
  7. Run syno_hdd_db.sh -nfr (an SSH sketch of this step follows the note below).
  8. All M2 devices are detected.

Note: Step 5 with 2 different M2 drives was because DSM refused to see their Kingston NVMe drives (and they had a spare Adata NVMe drive).
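
Over SSH, that last run would look something like this (a minimal sketch; the script path is an assumption - use wherever your copy of syno_hdd_db.sh lives):

sudo -i
bash /volume1/scripts/syno_hdd_db.sh -nfr   # path assumed; same -nfr options as in step 7
reboot                                      # reboot afterwards, as mentioned in the comment below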

An observation to share: I installed 7.2.1-69057 on my DS918+ with two M.2 Intel SSDPEKNW010T8 drives in RAID 1 and got the incompatible-drive warning. After running the latest syno_hdd_db.sh from the main branch (without any options) and rebooting, the volumes became healthy and everything is working fine, as before.

A note to clarify my problem (after reboot(s), and with the HDD and Enable M2 scripts run...), even after following the steps to downgrade to the previous DSM version, as described by 007revad... (it now shows as DSM 7.2-64570 Update 3...):

1 -
In fact, all drives now appear in Storage Manager as healthy (as was already the case with the HDDs; only the NVMe drives previously showed as not supported, with no further information and no access...). To be clear: the drives themselves all appear(ed) healthy, as before, and now the NVMe drives do too...

IT'S JUST THE VOLUMES THAT APPEAR TO BE IN A READ-ONLY STATE, and consequently the drives in them as well.

(ODDLY ENOUGH, after leaving the NAS powered off a few times and turning it back on after a while - maybe related to having the 007revad scripts set up as scheduled/triggered tasks per their instructions (note: with the optional flags set; should I also try without them?...) - ONE NVMe, and therefore its corresponding volume - I have 2 NVMe drives, each with its own volume and data - CAME BACK AS HEALTHY, in good shape and fully usable, with no read-only state or any further problem, apparently... the odd thing is that this happened with only one of them and not the other, although they are the exact same model, etc... - and the HDD volumes, like the other NVMe, still appear as read-only.)

2 -
ALSO, when I upgraded to DSM 7.2.1, my issues may well be related to an apparently (?) incomplete or partially failed upgrade, as not all packages seemed to be present, upgraded, or visible in Package Center; most appeared to be missing... AND after performing the "downgrade" as instructed by 007revad, this problem persisted... most packages were absent or did not appear to be fully installed (e.g. Storage Manager, although "working" and accessible through the main menu or similar, doesn't appear in Package Center, and the same goes for a lot of the usual packages...), even after re-installing DSM.
Could this be related?
I think I will (have to) try the process again (editing VERSION and re-installing DSM 7.2), after finishing some backups.
Hope this improves things further, wish me luck...
And thanks again for your patience with my lengthy reports and for your help.

Sorry, another note... although Storage Manager offers the option to switch volumes from read-only to read-write, that function failed every time I tried it (even at the risk, probably, of erasing/losing their data).

Correction, maybe: I can't recall for sure now... but I have the impression that Storage Manager not appearing in Package Center is actually normal; I can't remember whether it appeared there before... I suppose not, after all.

But anyway, the rest of my notes is still valid, as far as I can check...

Before DSM 7.2.1, Storage Manager was built into DSM. In DSM 7.2.1, Storage Manager is now a package that Synology can update without needing to release a new DSM update.
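
If you want to confirm that on your own NAS, a quick check over SSH (just a sketch; the StorageManager package name is the same one used in the uninstall steps above):

synopkg list | grep -i storagemanager   # on 7.2.1 this should list the StorageManager package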

You are right, I got a bit confused... anyway, a lot of other packages were wiped from my installed packages during the update, as far as I could tell.

Still, as I already said, although the drives are healthy, the volumes remain stubbornly in a read-only state... reading other posts elsewhere, I also suspect it could be related to a RAM problem (I have 16 GB of non-Synology RAM installed; I hope the problem isn't there, but...).

Anyway, I still haven't found an easier way to solve this (converting or repairing the volumes from read-only back to read-write without losing data, etc...), so in the meantime I've had to resort to backing up volume 1, then deleting it and recreating it from scratch, recovering whatever settings and data I can from the backups.

A shitty week of trouble so far: almost a week trying to figure out the problem and what possible solutions I had at hand, almost 2 days to back up and salvage what I could from that disk, then volume re-creation and data scrubbing running now, probably for almost another day, then restoring everything I can from the backup, which means enduring at least another 2 days with fingers crossed, then testing and redoing settings, packages and such for probably another day or two, and hoping it all finally recovers and comes out good again. All while keeping my "real world" life chugging along (work, family, we all know what it is...).

Most probably I will have to do this with all the other eight HDD volumes and at least one NVMe volume as well (eight 16 TB HDDs/volumes and one 1 TB NVMe drive/volume - the other NVMe, as reported before, is the only volume that came back normal by itself, healthy and read-write... weird as that is).
Well, sometimes life sucks, indeed... wish me luck for the next couple of months of painful work I will have to endure, if no other solution comes up in the meantime. ;)

I hate spending days checking my backups are up to date, and more days restoring everything from backups.

This is why I never set up a drive as a single BASIC storage pool. For HDDs I put them all in one SHR or SHR-2 storage pool. For NVMe drives I like to put them in one storage pool as RAID 1 (though the small size of NVMe drives makes them easier to back up and restore).

I would completely agree with you for most use cases, but...

(Note that I use Btrfs on all volumes - and SHR as much as I can, here.)

In my case, and just regarding the NVMe drives (I'll come back to the HDDs later; this is already long enough for now, it's late and I have to get ready for work, I'll be back by the end of the day...), it was simply this:

  • my NAS, a DS918+, can only take 2 of them, as we know, and since they are smaller, quicker and, as such, "easier" to back up and manage, in the end, after weighing the pros and cons, I decided to have:

    • one almost exclusively for a Plex server, which already takes up most of that space and keeps wanting more as time goes on, dealing with a quickly and continuously growing library of 90+ terabytes of movies, documentaries, TV series and music... it's easier to keep an eye on it as it grows, with fewer worries about space/resource conflicts with other apps, and less hassle if I ever need to replace it (failing/worn out/end of life, the need and/or availability of a bigger one, etc...);

    • the other for Docker (now Container Manager), where I run a handful of containers to handle my torrent downloads (Deluge), DC++ (serving some DC++ hubs, plus a dockerized AirDC++ as one of my DC++ clients), virtual machines (for now just a Windows 10 VM running a few other DC++ clients) and a couple more things, but not much else, due to space constraints and the DiskStation's limited resources and processing power.
      Same reasoning about compartmentalization, and less hassle with apps, settings and so on if I need or want to replace it later.

    This way, not only can I manage them independently, in a compartmentalized/modular manner, but I'm also prioritising space management over the real-world "benefits", as far as I can see, of using any RAID here (the realistic choices being only SHR/RAID 1/RAID 0) - with only 2 drives and the bandwidth/resources/CPU power the DS918+ gives them, I can't see any real added value, overall, in any RAID option for this scenario.

O.k., thanks, see you later in the evening ;)

Just chiming in that I've also run into this issue, but I'm actually not using NVMe drives in mine. I was using the script for the memory and drive-checking features. My main volume switched to read-only after updating to 7.2.1, and I'm working on downgrading back to 7.2 now, with some success at least in the downgrade itself. So far nothing has allowed me to unlock the volume, even though it seems completely healthy.

The error I receive is "Volume 1 has become read-only. Some of its features may not be supported by the current DSM version or your DiskStation model."

I'm also working on backing up my data, like ManelGT1956 is, since downgrading doesn't seem to sort out the issue.

@Colboto What model Synology do you have?

When I downgraded from DSM 7.2.1 to 7.2 I had packages that needed repairing but failed to repair. The optional packages needed to be uninstalled and re-installed. The default packages had to be uninstalled via SSH.

I documented the steps I had to perform to downgrade and get it all working:
https://github.com/007revad/Synology_DSM_reinstall/wiki/How-to-roll-back-from-7.2.1-to-7.2

I also lost my Hyper Backup tasks, and only realised today when I noticed my backups hadn't run recently.

I have a DS1819+. I forgot that I had also enabled immutability with the script, and that was working, but I wonder if 7.2.1 detects that the DS1819+ isn't supposed to be using immutability and, based on the error it gives, marks the volume as not meant for my device.

I did perform the downgrade successfully but DSM just wouldn't let go of the read-only lock. It would just fail every time.

[screenshot: read-only volume warning]

Did you have any immutable snapshots?

You could try running the following command:
synosetkeyvalue /etc.defaults/synoinfo.conf support_worm "no"

After that, don't run syno_hdd_db.sh with the -i or --immutable option. If you have it scheduled, make sure the scheduled task is not using the -i or --immutable option.
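
If you want to confirm the change took effect, you can check the value (a small sketch):

grep support_worm /etc.defaults/synoinfo.conf   # should now show support_worm="no"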

Then reboot and see if your volume is healthy.

You could also try the following:

Run this command to find the md# for the array that contains volume 1.
cat /proc/mdstat

Then, assuming it's md2, run:
mdadm --readwrite md2
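
If it isn't obvious from /proc/mdstat which array holds volume 1, this may help narrow it down (a sketch; device names vary, and on SHR the volume can sit on a /dev/mapper device layered on top of the md array):

mount | grep volume1   # shows which block device /volume1 is mounted from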

My backup finished, so I tried both fixes and neither seemed to do anything to the volume. After verifying that md2 is my volume, running --readwrite gives me the following message:

"mdadm: failed to set writable for md2: Device or resource busy"

What does the following command return?
sudo mdadm --detail /dev/md2

Also:

sudo mdadm --detail /dev/md0
sudo mdadm --detail /dev/md1

I've actually already rebuilt the volume to reload my backup. But when I ran mdadm --detail /dev/md2 before, nothing looked unusual. However, md0 and md1 both look like they're still untouched, and they actually show unusual signs.

mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Tue Nov 29 17:38:21 2022
Raid Level : raid1
Array Size : 8388544 (8.00 GiB 8.59 GB)
Used Dev Size : 8388544 (8.00 GiB 8.59 GB)
Raid Devices : 8
Total Devices : 7
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Fri Nov  3 13:28:46 2023
      State : clean, degraded

Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0

       UUID : 6cf4b99b:8243785d:3017a5a8:c86610be
     Events : 0.1291178

Number   Major   Minor   RaidDevice State
   0       8        1        0      active sync   /dev/sda1
   1       8       17        1      active sync   /dev/sdb1
   2       8       33        2      active sync   /dev/sdc1
   3       8       65        3      active sync   /dev/sde1
   4       8       81        4      active sync   /dev/sdf1
   5       8       97        5      active sync   /dev/sdg1
   6       8      113        6      active sync   /dev/sdh1
   -       0        0        7      removed

mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Tue Nov 29 17:38:24 2022
Raid Level : raid1
Array Size : 2097088 (2047.94 MiB 2147.42 MB)
Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
Raid Devices : 8
Total Devices : 7
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Fri Nov  3 11:04:00 2023
      State : clean, degraded

Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0

       UUID : 60eceb63:1a80d6b5:3017a5a8:c86610be
     Events : 0.120590

Number   Major   Minor   RaidDevice State
   0       8        2        0      active sync   /dev/sda2
   1       8       18        1      active sync   /dev/sdb2
   2       8       34        2      active sync   /dev/sdc2
   3       8       66        3      active sync   /dev/sde2
   4       8       82        4      active sync   /dev/sdf2
   5       8      114        5      active sync   /dev/sdh2
   6       8       98        6      active sync   /dev/sdg2
   -       0        0        7      removed

My md0 and md1 also show "clean, degraded" and empty bays as "removed".

Yours has Creation Time as Nov 29 2022 and Update Time as Nov 3 2023 so they look like they got updated when you re-created the volume.