Faalagorn / zopfli

Advanced Zopfli fork: ZIP support, multi-threading, compression cache (zopfliDB), auto/manual block split, compression and speed control with thread affinity locking and more, experimental switches.

Home Page: http://encode.su/threads/2176-Zopfli-amp-ZopfliPNG-KrzYmod

Zopfli KrzYmod is a fork that heavily modifies the original Zopfli
project to give power users more control and additional features.

Features:
- Multi-threaded operation with thread affinity locking,
- ZIP container support,
- GZIP output with the original filename optionally stored,
- Stores the file modification time inside the archive,
- Compression of multiple files into one ZIP container,
- Support for multiple recompression passes after the last block split,
- A directory/file based database of best statistics for resuming iterations,
- Additional switches to fine-tune compression and block splitting,
- Ability to use dumb block-size splitting,
- Ability to use predefined split points.

Without passing any of these special switches the program should run as usual.

If you wish, you can support me via PayPal: http://paypal.me/MrKrzYch00
Greatly appreciated.

Description of the new switches in Zopfli KrzYmod:

1. --zip

   Output a ZIP container. The file will be slightly bigger than the GZIP
   format but allows Zopfli to store more than one file inside.
   Attributes are NOT supported!
   Empty files are NOT supported! [official Zopfli behavior]
   Empty directories are NOT supported! Actual directory entries
   are NOT stored inside the ZIP format as they would add extra overhead.

2. --gzipname

   Store the original file name in the GZIP output, which will increase its size a bit.

3. --dir

   Accept a directory containing subdirectories and files as input.
   This parameter requires --zip to be used.
   See the --zip switch for limitations.
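
   For illustration, compressing a whole directory into a single ZIP archive
   could look like the line below (an assumed invocation form; the original
   Zopfli style of "zopfli [options] file" is presumed to carry over to this
   fork):

   E.g: zopfli --zip --dir somedirectory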

4. --mb#

   Maximum number of blocks the stream is allowed to be split into. The
   default is 15, as in the original. 0 means unlimited.

5. --mui#

   Maximum number of unsuccessful iterations after the last best result
   before the result is considered satisfying. This switch works together
   with the maximum number of iterations (--i) and should always be lower
   than that value.
   This parameter is automatically set to 1 when CTRL+C is pressed, to
   finish the work as soon as possible while still allowing some final
   reductions to land.

6. --lazy

   Use lazy matching in the greedy LZ77 pass. Unlike in the original
   Zopfli, it is disabled by default in Zopfli KrzYmod because not using
   it usually produces better results, though not always.

7. --ohh

   Use optimized Huffman headers, programmed by Frédéric Kayser.
   This option changes how Huffman trees are encoded in dynamic blocks:
   it records 8 as 4+4 and 7 as 4+3 instead of 6+single+single and
   6+single. Enabling it usually improves compression by a few bits per
   block.

8. --brotli

   Use the Brotli compressor's optimized Huffman RLE. The result may
   sometimes be smaller.

9. --rc

   Sort the lit/len and dist symbol counts in descending order. This has
   an impact on compression.

10. --all

   Run all 16 combinations of --lazy, --ohh, --brotli and --rc per block
   and pick the smallest result. This switch is deprecated and may be
   removed or changed in the future; instead, every combination should be
   tested in a separate Zopfli run.

11. --n#

   Split the uncompressed stream into this many blocks without doing any cost calculations.

12. --b#

   Split the uncompressed stream into blocks containing this many bytes
   each, without doing any cost calculations.

13. --cbs#

   Use predefined uncompressed-stream split points without doing any cost
   calculations. The list must be in hexadecimal format. Split points must
   be separated by commas. The first split point is always omitted and can
   be left blank.
   E.g: ,513,5555,fe89,14532.
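
   For reference, the hexadecimal offsets in the example above correspond to
   the decimal byte positions 1299, 21845, 65161 and 83250 in the
   uncompressed stream. Below is a minimal standalone C sketch of how such a
   list could be parsed; it only illustrates the format (the function name
   and error handling are made up here, this is not the fork's actual
   parser):

      #include <stdio.h>
      #include <stdlib.h>

      /* Parse a comma-separated list of hexadecimal split points such as
         ",513,5555,fe89,14532". Blank entries (like the leading one) are
         skipped. Returns the number of points written to out. */
      static size_t ParseSplitPoints(const char* list, size_t* out, size_t max) {
        size_t n = 0;
        const char* p = list;
        while (*p && n < max) {
          if (*p == ',') { p++; continue; }       /* skip separators/blank entry */
          char* end;
          unsigned long v = strtoul(p, &end, 16); /* split points are hexadecimal */
          if (end == p) break;                    /* unexpected character, stop */
          out[n++] = (size_t)v;
          p = end;
        }
        return n;
      }

      int main(void) {
        size_t points[16];
        size_t n = ParseSplitPoints(",513,5555,fe89,14532", points, 16);
        for (size_t i = 0; i < n; i++) printf("%zu\n", points[i]);
        return 0;   /* prints 1299, 21845, 65161, 83250 */
      }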

14. --cbsfile#

   Same as above, but the values are read from a text file. No newlines or
   characters other than [0-f] and [,] are allowed.

15. --cbd#

   Save the best block start positions to file # after compression.

16. --aas

   Use automatic splitting between predefined split points. This switch can
   be used with --n, --b, --cbs or --cbsfile.

17. --maxrec

   This switch changes how recursion is performed in the block splitter.
   By default the splitter uses a recursion value of 9, which is the
   original's default and can be changed with the --bsr switch.
   With this switch enabled the recursion varies and is calculated as
   LZ77 store range size / bsr. For example 900 / 9 = 100, 450 / 9 = 50.

18. --testrec#

   Tries recursion settings from 2 up to [the 20th failed attempt after the
   last best] in the splitter before compression, to find split points that
   give a lower estimated size based on the parameters used to calculate
   them.
   When --maxrec is used it tries powers of 2, from the largest value below
   the LZ77 store size down to 2.
   Passing # (optional) changes the default of 20 failed attempts to another
   number. If 0 is used, the last best estimated cost is divided by 16384
   and used as the maximum number of tries, with an additional 20 failed
   attempts allowed if it exceeds this maximum. # has no effect when
   --maxrec is used.

19. --bsr#

   Block splitting recursion. Changing this option alters the block
   splitting model. The default is 9. Setting this value too high makes
   Zopfli spend more time splitting the stream into blocks and may also
   result in fewer blocks without properly checking whether the decisions
   are optimal.
   When --maxrec is used this switch's behavior changes: instead of being
   used directly for every LZ77 store range, the recursion becomes dynamic,
   based on the LZ77 store range size divided by the number set with this
   switch.

20. --mls#

   Changes the scoring model for length-distance pairs used by the greedy
   LZ77 implementation. The default value is 1024. Different values may
   sometimes improve compression.

21. --sb#

   Minimum LZ77 store range size for which a byte-by-byte check for split
   points is performed. The default value is 1024. Increasing it slows down
   the splitter.

22. --nosplitlast

   Don't split last (don't re-split blocks after compression), which is
   useful when passing predefined split points.

23. --pass#

   Re-evaluate and re-compress the stream this many times after the last
   compression attempt, to find better split points based on the LZ77 store
   data produced by the previous attempt. No further attempts are made if
   the most recent result is bigger than the previous one. In most cases
   this makes the final result smaller but increases the overall
   compression time.

24. --slowfix

   Performs expensive fixed-block calculations when evaluating split points.
   This slows down block splitting but may give better results, because
   actual fixed-block compression is used while splitting instead of a
   rough estimate of the result.

25. --slowdyn#

   Performs expensive LZ77 Optimal (iteration mode) dynamic-block
   calculations when evaluating split points. This slows down block
   splitting A LOT, especially when a high # is used; # sets a separate
   maximum number of unsuccessful iterations after the last best for this
   mode.

26. --si#

   Weight of the current stats relative to the last stats in the weight
   calculations. By default in Zopfli the current statistics are twice as
   important as the previous ones (100% : 50%) for mixing and
   randomization. This has an impact on iteration progress.
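
   For reference, a simplified sketch of that weighted mix as it appears in
   the original Zopfli (current counts at full weight, previous counts at
   half weight; the function and variable names below are illustrative, and
   --si# presumably adjusts the 0.5 factor):

      #include <stddef.h>

      /* Mix the symbol counts of the current iteration with those of the
         previous one, giving the current counts twice the weight
         (100% : 50%). */
      static void MixStats(const double* cur, const double* last,
                           double* out, size_t n) {
        for (size_t i = 0; i < n; i++) {
          out[i] = 1.0 * cur[i] + 0.5 * last[i];
        }
      }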

27. --cwmc

   Use the Complementary-Multiply-With-Carry random number generator instead
   of the default Multiply-With-Carry. Both algorithms were created by
   G. Marsaglia. This provides a different iteration progression.
   To read more: https://en.wikipedia.org/wiki/Multiply-with-carry

28. --rm#

   Random number generator modulo. Changes the random number sequence.
   Default: 3.

29. --rw# and --rz#

   Initial random W and Z values for the iterations. The defaults are 1 for
   W and 2 for Z. They are used as a seed.
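
   For reference, Marsaglia's Multiply-With-Carry generator, as used in the
   original Zopfli for randomizing statistics, can be sketched as follows
   (a simplified standalone version; the fork's --rm modulo and the CMWC
   variant are not shown here):

      #include <stdint.h>

      /* Marsaglia's Multiply-With-Carry generator; W and Z correspond to
         the seeds set with --rw and --rz (defaults 1 and 2). */
      typedef struct { uint32_t w, z; } Ran;

      static uint32_t RanNext(Ran* r) {
        r->z = 36969u * (r->z & 65535u) + (r->z >> 16);
        r->w = 18000u * (r->w & 65535u) + (r->w >> 16);
        return (r->z << 16) + r->w;  /* 32-bit result */
      }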

30. --statsdb

   Write to and read from the ZopfliDB directory, which contains a
   cache/database of the best statistics found per data range (Deflate
   block). Each result is stored in a separate file, where each directory
   with its subdirectories represents the 4 hexadecimal bytes of the CRC32
   checksum, one byte per directory. The file name consists of the
   compression modes used (any combination of --lazy, --ohh, --brotli and
   --rc), the block size in bytes and the dat extension. The files are
   x86/x64 portable and store information about the last iteration, so work
   can be continued from them, or the best statistics found can be used
   instantly when the number of iterations to use is less than or equal to
   the number stored in the cache file.
   The structure doesn't hold information about other switches that may
   impact compression, such as --mls or the random number generator
   settings, so the outcome of resuming work from such a file will differ
   from running from scratch until the same number of iterations has been
   performed.
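
   As an illustration of the directory layout described above (an
   interpretation only; the real path and file naming used by the fork may
   differ), a block with CRC32 0xAB12CD34 could map to a cache path like
   this:

      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical path construction for the ZopfliDB cache: one
         directory per CRC32 byte, file name built from the
         compression-mode combination and the block size in bytes. */
      static void StatsDbPath(uint32_t crc, const char* mode,
                              size_t blocksize, char* out, size_t outsize) {
        snprintf(out, outsize, "ZopfliDB/%02X/%02X/%02X/%02X/%s-%zu.dat",
                 (unsigned)((crc >> 24) & 0xFF), (unsigned)((crc >> 16) & 0xFF),
                 (unsigned)((crc >> 8) & 0xFF), (unsigned)(crc & 0xFF),
                 mode, blocksize);
      }
      /* e.g. StatsDbPath(0xAB12CD34, "lazy-ohh", 65536, buf, sizeof buf)
         could yield "ZopfliDB/AB/12/CD/34/lazy-ohh-65536.dat" */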

31. --t#

   When # is > 0, use threaded compression. Each thread has a block assigned
   to work on, so naturally more blocks to compress results in better
   multithreading performance. The default is 1 (one MASTER controller
   thread and one SLAVE compressor thread).
   Setting this value to 0 runs Zopfli KrzYmod in compatibility mode, making
   MASTER do all the work alone and printing verbose iteration output as in
   the original.

32. --aff#

   Specify the affinity masks to use per thread, separated by commas. This
   switch may help on systems where the scheduler moves a thread from one
   CPU die to another. It may also perform better on Ryzen CPUs, which use
   multiple CCXs with Infinity Fabric to transfer cache from one CCX to
   another, resulting in small execution delays.
   E.g: --aff63,4032 - use the first 6 system-visible cores for every odd
   thread and the next 6 system-visible cores for every even thread
   (threads counted from 1).
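
   To make the example above concrete: each affinity mask is a bit mask over
   the system-visible logical CPUs, so 63 and 4032 select two disjoint
   groups of six cores. A tiny C illustration of the arithmetic behind those
   decimal values (only an illustration, not code from this project):

      #include <stdio.h>

      int main(void) {
        /* 63   = binary 000000111111 -> logical CPUs 0..5  ("first 6 cores") */
        /* 4032 = binary 111111000000 -> logical CPUs 6..11 ("next 6 cores")  */
        unsigned long first_six = (1UL << 6) - 1;
        unsigned long next_six  = first_six << 6;
        printf("%lu %lu\n", first_six, next_six);  /* prints: 63 4032 */
        return 0;
      }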

33. --idle

   Runs Zopfli at the lowest possible priority. This avoids slowing the
   computer down when all threads are assigned to Zopfli.

34. --v#

   Known as -v in the original. Controls the verbosity of Zopfli KrzYmod.
   The default is 2.
   * 0 - quiet mode, doesn't display anything except some errors,
   * 1 - program title, percentage progress and the file added when --zip &
         --dir are used,
   * 2 - displays block progression and bytes left to compress next to the
         percentage progress, as well as a summary after every successfully
         compressed file,
   * 3 - displays the fixed/dynamic block comparison + a per-block summary,
   * 4 - additionally displays block split points, best iterations on
         separate lines and the tree size (same functionality as -v in the
         original release),
   * 5 - additionally displays the current iteration being processed on the
         same line until a bit reduction occurs,
   * 6 - displays the best block result, useful only with --all.
------------------------------------

Longer descriptions of ZopfliPNG commands that were slimmed down in the last version:

1. --prefix=[fileprefix]

   Adds a prefix to the output filenames. It may also contain a directory
   path. When using a prefix, multiple input files can be given and the
   output filenames are generated from the prefix.
   If --prefix is specified without a value, 'zopfli_' is used.
   If input file names already contain the prefix, they are not processed
   but treated as output from previous runs. This is handy when using
   *.png wildcard expansion over multiple runs.

2. --lossy_transparent

   remove colors behind alpha channel 0. No visual
   difference, removes hidden information.

4. --always_zopflify

   always output the image encoded by Zopfli, even if
   it's bigger than the original, for benchmarking the algorithm. Not
   good for real optimization.

5. -q

   use quick, but not very good, compression
   (e.g. for only trying the PNG filter and color types)

6. --iterations=[number]

   number of iterations; more iterations make it slower but provide
   slightly better compression.
   Default: 15 for small files, 5 for large files.

7. --alpha_cleaners=[types]

   remove colors behind alpha channel 0. No
   visual difference, removes hidden information.
   b: black
   h: horizontal
   v: vertical
   a: average
   p: paeth
   w: white

8. --filters=[types]

   filter strategies to try:
   0-4: give all scanlines PNG filter type 0-4
   m: minimum sum
   y: distinct bytes
   w: distinct byte pairs
   e: entropy
   b: brute force (slow)
   i: incremental brute force (very slow)
   p: predefined (keep from input, this likely overlaps another strategy)
   g: genetic algorithm*
   By default, if this argument is not given, all strategies are tried.

9. Genetic algorithm filter options:

   --ga_population_size: number of genomes in the pool. Default: 19
   --ga_max_evaluations: overall maximum number of evaluations (0 for
   unlimited). Default: 0
   --ga_stagnate_evaluations: number of sequential evaluations to try
   without improvement. Default: 15
   --ga_mutation_probability: probability of mutation per gene per
   generation. Default: 0.01
   --ga_crossover_probability: probability of crossover per generation.
   Default: 0.9
   --ga_number_of_offspring: number of offspring per generation.
   Default: 2
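
   For illustration, a run that tries only the genetic-algorithm filter
   strategy with a custom population size could look like the line below
   (an assumed invocation form, following the usual ZopfliPNG
   "zopflipng [options] infile.png outfile.png" pattern; the parameter
   values are arbitrary):

   E.g: zopflipng --filters=g --ga_population_size=25 --ga_stagnate_evaluations=30 in.png out.png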

10. --zopfli_filters

   by default, if this argument is not given, the filter that is most
   likely the best for this image is chosen by trying faster compression
   with each given type. If this argument is used, all given filter types
   are tried with slow compression and the best result is retained.

11. --palette_priorities=[types]

   palette priorities to try:
   p: popularity
   r: RGB
   y: Y'UV
   l: L*a*b*
   m: MSB
   By default, if this argument is not given, all strategies are tried.

12. --palette_directions=[types]

   palette directions to try:
   a: ascending
   d: descending
   By default, if this argument is not given, all strategies are tried.

13. --palette_transparencies=[types]

   palette transparencies to try:
   i: ignore
   s: sort
   f: first
   By default, if this argument is not given, all strategies are tried.

14. --palette_orders=[types]

   palette orders to try:
   p: none
   g: global
   d: distance
   w: distance, weighted by popularity
   n: distance, weighted by neighbor popularity
   By default, if this argument is not given, all strategies are tried.

15. --try_paletteless_size=[number]

   number of bytes under which to try
   non-paletted version of image that would normally use a palette.
   Default: 2048

16. --keepchunks=nAME,nAME,...

   keep metadata chunks with these names that would normally be removed,
   e.g. tEXt,zTXt,iTXt,gAMA, ...
   Due to adding extra data, this increases the result size. Keeping
   bKGD or sBIT chunks may additionally worsen compression by forcing a
   certain color type; it is advised not to keep these for web images
   because web browsers do not use these chunks. By default, ZopfliPNG
   only keeps (and losslessly modifies) the following chunks because they
   are essential: IHDR, PLTE, tRNS, IDAT and IEND.

------------------------------------


In addition to the options mentioned above, the KrzYmod version of Zopfli also fixes a few
issues found in the original release, for example the incorrect Deflate stream size being
reported. It also supports Unix (gzip) and MS-DOS (zip) timestamps inside archives.

Bitcoin: 1KrzY1CwE532e6YjN4aCzgV19gtAzQMatJ
Paypal:  https://www.paypal.me/MrKrzYch00
^ in case you want to reward my work.

====================================
           by Mr_KrzYch00
====================================

Zopfli Compression Algorithm was created by Lode Vandevenne and Jyrki
Alakuijala, based on an algorithm by Jyrki Alakuijala. Further modifications
as described in the document above were done by Mr_KrzYch00.

For more information on Zopfli please refer to:
- README.zopfli
- README.zopflipng
- original zopfli project at https://github.com/google/zopfli
- frkay's fork: https://github.com/frkay/zopfli
- all other forks this fork may include changes from:
  https://github.com/frkay/zopfli/network

License: Apache License 2.0

