Stefan311 / ZfsSpy

Tool to explore internal data structures from ZFS devices and to recover data from damaged pools

Sector outside device!

meshr-net opened this issue

I get this error:

Sector outside device!
java.lang.Exception: Sector outside device!
at io.ZFSIO.getBlock(ZFSIO.java:107)

Please help

Java gets a zero size here: "dev.size = f.length();" for a device-mapper device. I made a disk image to work around that, but I get "Magic Number 0 invalid" errors in the Uberblocks. It says invalid everywhere in "Browse Blocks". "Browse Filesystem" says "Uberblock with last transaktion group not found."
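The zero from f.length() is expected on Linux: File.length() relies on stat(), and st_size is reported as 0 for block devices such as /dev/mapper entries, so dev.size ends up as zero. One possible workaround (a sketch; the class and method names are mine, not ZfsSpy's) is to probe for the end of the device instead of trusting stat():

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Binary-search for the first offset that cannot be read; for a block
// device that offset equals the device size in bytes.
class DeviceSizeProbe {
    static long probeSize(RandomAccessFile raf) throws IOException {
        long lo = 0;            // every byte below lo is known readable
        long hi = 1L << 62;     // assumed upper bound on the device size
        while (lo < hi) {
            long mid = lo + (hi - lo) / 2;
            if (readableAt(raf, mid)) {
                lo = mid + 1;   // byte at mid exists, size is larger
            } else {
                hi = mid;       // mid is at or past the end
            }
        }
        return lo;              // first unreadable offset == size in bytes
    }

    static boolean readableAt(RandomAccessFile raf, long off) {
        try {
            raf.seek(off);
            return raf.read() != -1;
        } catch (IOException e) {
            return false;       // some devices reject seeks past the end
        }
    }
}
```

The search needs only about 62 single-byte reads, so it is cheap even on a multi-terabyte device.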

Sorry for late reply.

Magic Number "0" means this is not a valid Uberblock. The Magic Number must be "bab10c" or "cb1ba0000000000" (the same value with the wrong endianness). Are the other Uberblock fields also 0? In that case the Uberblock was overwritten with zeroes.
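For reference, both quoted strings are the same 64-bit magic value, 0x00bab10c, seen in the two possible byte orders. A minimal sketch of the check (the helper names are mine, not ZfsSpy's):

```java
// Classify the first 8 bytes of an uberblock slot, read little-endian.
class UberblockMagic {
    static final long UB_MAGIC = 0x00bab10cL; // ZFS uberblock magic

    static String classify(long raw) {
        if (raw == UB_MAGIC) return "valid";
        // same magic written by a machine of the other endianness
        if (Long.reverseBytes(raw) == UB_MAGIC) return "valid (byte-swapped)";
        if (raw == 0L) return "zeroed";        // slot overwritten with zeroes
        return "invalid";
    }
}
```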

I have some questions:

  1. What was the pool geometry? Single disk? Mirror? Striped?
  2. What system do you use? (interesting to find the zero size issue)
  3. Does the vDev summary (the first page in "Browse Blocks") display plausible values? There must be 4 disklabel rows for every disk, all with equal, non-zero data. If one disklabel displays something else, that disklabel may be corrupt. Since all disklabels are exact copies, you can try another one.
  4. Does the disklabel detail page (if you click "explore" on the vDev summary page) display plausible data? There must be something in the "Name/Value Pairs" table, and something blue in the Uberblock table! If not, that disklabel is corrupt.
  5. Have you got at least one valid disklabel?
  6. What happened to this pool? Maybe I have an idea how to recover the data.
  7. Are you sure you did the disk imaging right? I only ask because if you also copied the partition table, it would not work...

Thank you for your reply. I can pay for your time if you can help me recover my data, as I lost valuable data.

  1. Single disk
  2. Ubuntu 16
    3, 4. It has values everywhere except the L2 row (it says 'not set!' there). Actually, "Magic Number 0 invalid" is shown only for label nr L2 everywhere. For the other labels: Uberblock0 says 'Magic Number bab10c valid', but the pointers say 'OBJSET (11), invalid invalid invalid'. If I click on the 1st invalid, it says:

Block pointer #1 invalid blockpointer
Errors:
null
java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:540)
at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:139)

If I click on the 2nd or 3rd invalid, it says:
Block pointer #0 DNODE (10), invalid valid valid
Block pointer #1 DNODE (10), valid valid valid
If I click 'Use target as Filesystem':
Errors:
Failed to load objectlist block #128 (objects 4096-4127)
blockpointer #1 ist not a MASTER NODE (21)
java.lang.Exception: blockpointer #1 ist not a MASTER NODE (21)
at httpHandlers.HttpHandlerDataset.createHtmlBody(HttpHandlerDataset.java:138)

Other Uberblocks there say 'Magic Number 0 invalid' and 'Pointers empty'.
Nothing is blue in the Uberblocks.
  5. The Uberblocks are invalid everywhere.
  6. It was in VMware and didn't survive a hard reset.
  7. I used dd; the image is about 500GB. I can see some file contents if I look through the raw data, so there must be a way to recover them.

'Magic Number bab10c valid'

So the uberblock array hasn't been overwritten. Good.
Let me explain the uberblock mechanism: when ZFS finishes a write, it increases the transaction number, writes the transaction number to the label, chooses a spot in the uberblock list, and writes the transaction number and the pointers to the most recent metadata structures to that spot.
So the entry with the highest number in the uberblock list has the highest chance of holding valid data. If the pool is healthy, the highest number in the uberblock list should match the transaction number in the label. That one is coloured blue for easy viewing. I would suggest trying out the uberblocks with the highest and the second-highest transaction numbers.
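The selection rule above can be sketched as a scan over a label's uberblock array. This assumes the classic layout of 1 KiB slots with ub_magic at offset 0 and ub_txg at offset 16, stored little-endian; it is an illustration, not ZfsSpy's actual code:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Pick the uberblock slot with the highest transaction group number.
class UberblockScan {
    static final long UB_MAGIC = 0x00bab10cL;

    static int bestSlot(byte[] ubArray) {
        ByteBuffer buf = ByteBuffer.wrap(ubArray).order(ByteOrder.LITTLE_ENDIAN);
        int best = -1;
        long bestTxg = -1;
        for (int slot = 0; slot < ubArray.length / 1024; slot++) {
            int off = slot * 1024;
            if (buf.getLong(off) != UB_MAGIC) continue; // empty or damaged slot
            long txg = buf.getLong(off + 16);           // ub_txg
            if (txg > bestTxg) {
                bestTxg = txg;
                best = slot;
            }
        }
        return best; // -1 if no slot carries a valid magic
    }
}
```

If the returned slot's pointers turn out to be invalid, the same scan makes it easy to fall back to the slot with the second-highest transaction number.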
If these have no valid block pointers, we can try to find out what exactly is invalid about the pointers. Please click on "Options Blockpointer: Short->Long". This will expand the blockpointer row. Then we can see where the block is, how it is compressed, how it is checksummed, and so on. I think your system may use a checksum or compression type that is not yet supported. Maybe I have to implement that first.
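The fields that "Short->Long" exposes live in the blk_prop word of the 128-byte block pointer. The offset and bit positions below follow my reading of the ZFS on-disk format and are an assumption for illustration, not ZfsSpy's code:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Decode sizes and algorithm ids from a block pointer's blk_prop word
// (assumed at byte offset 48, after the three 16-byte DVAs).
class BlkptrProps {
    static String describe(ByteBuffer blkptr) {
        long prop = blkptr.order(ByteOrder.LITTLE_ENDIAN).getLong(48);
        long lsize = (((prop       ) & 0xFFFF) + 1) * 512; // logical size
        long psize = (((prop >>> 16) & 0xFFFF) + 1) * 512; // physical size
        int comp   = (int) ((prop >>> 32) & 0x7F);         // compression id
        int cksum  = (int) ((prop >>> 40) & 0xFF);         // checksum id
        return "lsize=" + lsize + " psize=" + psize
             + " comp=" + comp + " cksum=" + cksum;
    }
}
```

An unsupported compression or checksum would show up here as an id outside the range the tool knows how to handle.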
Finally, I have to say: since ZFS uses copy-on-write for all data structures, and keeps redundant copies of the most important metadata structures, there is a high chance of getting at least some data back. ZfsSpy was written to recover some of my very private data, and it finally worked! It seems there is little (or no) interest in a ZFS data recovery tool, so I had suspended development until now. If I can help you, I will resume the work.