kspalaiologos / bzip3

A better and stronger spiritual successor to BZip2.


segfault with "-t", version 1.1.2

marcin-github opened this issue

Hi

gdb -q /usr/bin/bzip3
Reading symbols from /usr/bin/bzip3...
Reading symbols from /usr/lib/debug//usr/bin/bzip3.debug...
(gdb) set args -t xxxx.json.bz3
(gdb) run
Starting program: /usr/bin/bzip3 -t 2018-12-26-1545843950-rdns.json.bz3
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".

Program received signal SIGSEGV, Segmentation fault.
__GI___fileno (fp=fp@entry=0x4) at fileno.c:35
35      fileno.c: No such file or directory.
(gdb) bt
#0  __GI___fileno (fp=fp@entry=0x4) at fileno.c:35
#1  0x0000555555555aeb in main (argc=3, argv=<optimized out>) at src/main.c:215
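
For context, the backtrace is consistent with fileno() being handed something that isn't a live stream: fp=0x4 is not a valid FILE pointer, and glibc's fileno() dereferences its argument. Below is a minimal sketch of that failure mode, assuming an unchecked fopen() result; it is illustrative only, not the actual src/main.c code.

#include <stdio.h>

int main(void) {
    FILE *fp = fopen("does-not-exist.bz3", "rb");
    /* Buggy pattern matching the backtrace above:
     *
     *     int fd = fileno(fp);   // fp may be NULL or garbage -> SIGSEGV
     *                            // inside glibc, as in frame #0
     *
     * Safe pattern: check the fopen() result first. */
    if (fp == NULL) {
        perror("fopen");          /* graceful path: report and bail out */
        return 1;
    }
    printf("fd = %d\n", fileno(fp)); /* safe only after the NULL check */
    fclose(fp);
    return 0;
}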

Can't reproduce.

% bzip3 -t LICENSE.bz3
% bzip3 -t xxxx.json.bz3
fopen: No such file or directory

That said, you can try pulling from master, where -t automatically implies the -c option.

It looks like on 1.1.2, testing a file that isn't a bzip3 archive gave a segfault. On 1.1.3 I can't reproduce it (but performance is very low, about 1 MB/s of reading on an i5-6500T).

You're either using gcc rather than clang, supplying your own (wrong) set of flags, or benchmarking bzip3 incorrectly.
Performance this low hasn't been reported by anyone so far.

The decompression/testing speed on my machine is somewhere around 21 MiB/s; the Perl benchmark takes around 14 minutes on a 17.7 GiB original file (~1 GiB compressed). That's 17.7/14 = 1.26 GiB/min, or 21.504 MiB/s.

Note that if you benchmark how quickly it reads the compressed file, you'd have to divide the 21.504 MiB/s by the compression ratio of ~17.7 (17.7 GiB original vs ~1 GiB compressed); that gives us 1.21 MiB/s, which lines up with the roughly 1 MB/s of reading you observed.

bzip3 -d -j 4, on the other hand, takes 4 min 6 s to decompress the same amount of data, giving us (17.7 * 1024) / 246 = 73.6 MiB/s of write throughput, or about 4.16 MiB/s of reading the compressed input.
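
If it helps to re-check the arithmetic, here is a small C snippet that recomputes the figures above from the thread's own numbers (17.7 GiB original, ~1 GiB compressed, ~14 min single-threaded, 4 min 6 s with -j 4); small deviations from the quoted values are just rounding.

#include <stdio.h>

int main(void) {
    const double original_mib = 17.7 * 1024.0; /* 17.7 GiB original file   */
    const double ratio = 17.7;                 /* ~1 GiB compressed, so the
                                                  compression ratio is ~17.7x */
    const double single_s = 14.0 * 60.0;       /* single-threaded: ~14 min */
    const double mt_s = 4.0 * 60.0 + 6.0;      /* -j 4: 4 min 6 s          */

    /* Write throughput is decompressed bytes over wall time; read
     * throughput on the compressed input is that divided by the ratio. */
    printf("single-threaded: %.1f MiB/s out, %.2f MiB/s in\n",
           original_mib / single_s, original_mib / single_s / ratio);
    printf("-j 4:            %.1f MiB/s out, %.2f MiB/s in\n",
           original_mib / mt_s, original_mib / mt_s / ratio);
    return 0;
}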