fcorbelli / zpaqfranz

Deduplicating archiver with encryption and paranoid-level tests. Swiss army knife for the serious backup and disaster recovery manager. Ransomware neutralizer. Win/Linux/Unix

Create archives with volumes

KamikazeePL opened this issue · comments

Is it possible to create an archive with multi-part volumes?
Something like the 7-zip command 7z a a.7z *.txt -v100M

Quote from 7-zip help

-v (Create Volumes) switch
Specifies volume sizes.

Syntax
-v{Size}[b | k | m | g]

{Size}[b | k | m | g]
Specifies volume size in Bytes, Kilobytes (1 Kilobyte = 1024 bytes), Megabytes (1 Megabyte = 1024 Kilobytes) or Gigabytes (1 Gigabyte = 1024 Megabytes). If you specify only {Size}, 7-zip will treat it as bytes.
It's possible to specify several -v switches.

NOTE: Please don't use volumes (and don't copy volumes) before finishing archiving. 7-Zip can change any volume (including first volume) at the end of archiving operation.

Examples
7z a a.7z *.txt -v10k -v15k -v2m

creates multivolume a.7z archive. First volume will be 10 KB, second will be 15 KB, and all others will be 2 MB.

Yes, you can.
In fact, it was the very first change I suggested for zpaq, which was later integrated.

The easiest way is a multivolume archive, using as many "?" as you want (in this example 4):

zpaqfranz a "z:\a_????.zpaq" c:\data\

You can see some examples with

zpaqfranz -he (name of the command)

In this case

zpaqfranz -he a

a means... add

With

zpaqfranz h h

you get the list of all commands.

The 2nd way is using the backup command.

zpaqfranz backup z:\mygooddata c:\whatever d:\you e:\want

The backup command will create

  • 1 file for each version (z:\mygooddata_00000001.zpaq, z:\mygooddata_00000002.zpaq...)
  • 1 mygooddata_00000000_backup.index
  • 1 mygooddata_00000000_backup.txt
    The last 2 files can be used for faster integrity checks
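
For instance (a hedged sketch, reusing the paths above): re-running the same command later adds the new version as a new numbered file, with the index/txt files kept up to date for the quick checks:

zpaqfranz backup z:\mygooddata c:\whatever d:\you e:\want
  (second run -> z:\mygooddata_00000002.zpaq, plus refreshed index/txt)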

Read the wiki or

zpaqfranz -he backup

I want to limit the size of the zpaq archive.

You cannot 😄
I could theoretically do this, but it would really require a lot of work


My first question was about that; something got lost in translation.

Is it really that important to divide into pieces of a fixed size?
It would indeed require quite complex development
I'd rather spend more time on new features, like backups over TCP

Closing for now.

You can limit the size of the archive using split. I think it would be nice to have this in the program, but in the meantime it's reasonable to handle it with external utilities.
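
For example, on Linux something like this would work (a sketch with hypothetical file names; split and cat are standard utilities):

split -b 1000m backup.zpaq backup.zpaq.part_      split into roughly 1 GB pieces
cat backup.zpaq.part_* > backup_joined.zpaq       rejoin before working on the archive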

zpaqfranz does (now) create "chunked" archives of fixed size,
starting from version 59.1

https://sourceforge.net/projects/zpaqfranz/files/59.1/

-chunk

After (quite a lot of) digging, here is the fixed-chunk version, a long overdue improvement. Of course there is a lot of complexity to take into account (aka: nothing is easy with zpaq).

zpaq does not allow creating archives of a fixed size: this means that, generally, the first file is gigantic and the subsequent ones much smaller. This prevented splitting onto optical media (e.g., Blu-ray), and this is bad (incidentally, that's "where" I'm going to use it). zpaq's operating logic doesn't really provide for multiple output files, but now (maybe) it does 😄

The operation, which is still not fully integrated (for example, it is not supported in the backup command), is easy to activate. It works like a normal multipart archive, but with a switch that indicates the maximum size of the pieces. Getting it to run with encrypted archives has been difficult, and it is still not 101% tested.

zpaqfranz a z:\ronko_?? whatever-you-like -chunk 1G
zpaqfranz a z:\ronko_?? who-knows -chunk 500m

The -chunk switch takes a plain number (2000000), K/M/G (100M), or KB/MB/GB (500MB). The chunks are not guaranteed to be 100% exact; typically they should be a multiple of 64KB.

59.2 is underway (minor bug fixes, interaction with the Windows GUI, PAKKA).

https://github.com/fcorbelli/zpaqfranz/releases/tag/59.1

Basically, the size can be indicated in bytes, in multiples of 1000 (K, M, G), or in multiples of 1024 (KB, MB, GB):

15000000
10M
3GB
100KB

In version 59.1 there are not many checks, so it is best not to set a size that is too small (it depends on 100 different factors). With the default settings I would not go below 100KB, just in case.
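
To make the two unit families concrete (simple arithmetic based on the rule above):

15000000 = 15,000,000 bytes (plain number)
10M      = 10  x 1,000,000     = 10,000,000 bytes
3GB      = 3   x 1,073,741,824 = 3,221,225,472 bytes
100KB    = 100 x 1,024         = 102,400 bytes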

I decided to implement it to allow splitting for Blu-ray burning.

Note: the last piece will typically be SMALLER than the requested chunk size. The next version will NOT alter that file, but will create a new one.

TRANSLATION

If your archive becomes (first version, just an example):

part_01.zpaq 10000000
part_02.zpaq 10000000
part_03.zpaq 300

Adding a second version will produce something like:

part_01.zpaq 10000000
part_02.zpaq 10000000
part_03.zpaq 300
part_04.zpaq 10000000
part_05.zpaq 10000000
part_06.zpaq 10000000
(...)
part_10.zpaq 55555
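
A hedged sketch of the commands that would produce a layout like the one above (hypothetical paths, assuming -chunk is passed again on the later run):

zpaqfranz a "z:\part_??" c:\data -chunk 10M      first version
zpaqfranz a "z:\part_??" c:\data -chunk 10M      second version, new parts only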

It is possible to convert a chunked file to a normal file, with the m (merge) command.
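
The exact syntax is shown by zpaqfranz -he m; a plausible invocation (the argument order here is an assumption, not taken from the docs) would look something like:

zpaqfranz m "z:\part_??.zpaq" z:\merged.zpaq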

Unlike with normal multipart files, it is NOT possible to perform operations such as dump (the command to see the internal structure) on individual chunks (the explanation is obvious: chunks no longer begin at version boundaries).