pobrn / mktorrent

A simple command line utility to create BitTorrent metainfo files

Segmentation fault

mrmadsen opened this issue

Hashing xxx.file.r21.
Segmentation fault

with:
mktorrent -l 22 -a announce.url -o taken.torrent /folder/destination/

I'll need a bit more information than that.

  • Which OS are you using?
  • Which architecture are you on, e.g. x86 or x86_64?
  • How did you build mktorrent, e.g. which options did you enable in the Makefile?
  • Does this happen all the time, or only on single files, big files, or directories with big files?
  • Does it happen when hashing the very first file, or only after hashing a few files?

Btw. I can make mktorrent segfault if I first run make, and then make USE_OPENSSL=1 USE_PTHREADS=1, so always do 'make clean' before you build with new options.
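
For reference, a minimal rebuild sketch when switching options (adjust the flags to whatever you actually need):

make clean
make USE_OPENSSL=1 USE_PTHREADS=1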

Running Debian Lenny 5.0.1 on x86.

It happens both with single large files and with large directories containing RAR'd files.

I had a friend try it out on an Ubuntu 9.xx server; the same thing happened. He's running x86_64.

OK, but you still need to describe how you compiled mktorrent, and whether it always happens or only sometimes.

From your original message it seems the segfault occurs while hashing without pthread support, so could you try to compile with USE_PTHREADS=1, just to see whether the bug is in hash.c and not hash_pthreads.c?
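
If you can reproduce it reliably, a backtrace would also help. A minimal sketch, assuming gdb is installed and mktorrent was built with debugging symbols (e.g. with -g added to CFLAGS):

gdb --args ./mktorrent -l 22 -a announce.url -o taken.torrent /folder/destination/
(gdb) run
(gdb) backtrace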

I was having the same issue creating a torrent from a 15GB directory. I recompiled without using USE_PTHREADS=1 and it is working for me now.

Which version of mktorrent are you using, and did you add USE_LARGE_FILES=1 on both compiles?

Oh, and can you reproduce the error?

I was using 1.0, and had USE_LARGE_FILES=1 on both compiles. I just copied the make command from your site.

I don't really have any other large files to test it with, but if I see anything I'll try it out.

I am getting the same error on Debian Lenny 64-bit with pthreads enabled; it works without pthreads.

I would like to use pthreads because I have SAS drives and many cores.

I have managed to get pthreads to work, though I don't know how.

It is MUCH faster; I use 4 threads.

This is still an issue. I'm on OS X 10.6 and see it with your recommended make invocation. I'd also like to use pthreads.

And yes, if I remove the pthreads option it works.

Same problem on 64-bit Ubuntu 9.10 (kernel 2.6.31-20):
segmentation fault when compiled with USE_PTHREADS=1.

openSUSE 11.2 x86_64: segfault when compiled with USE_PTHREADS=1 USE_OPENSSL=1; it worked as expected when compiled with only USE_OPENSSL=1. ----felipe1982

Same here.
Debian stable - mktorrent 1.0 and latest git

make USE_PTHREADS=1 USE_OPENSSL=1 USE_LONG_OPTIONS=1

mktorrent -p -a http://tracker/announce.php -d -l 23 -p -v Paul.Kalkbrenner.A.Live.Documentary.2010.foo

mktorrent 1.0 (c) 2007, 2009 Emil Renner Berthing

Options:
Announce URLs:
http://tracker:80/announce.php
Torrent name: foo.bar
Metafile: /home/user/foo.torrent
Piece length: 8388608
Be verbose: yes
Write date: no
Web Seed URL: none
Comment: none

Adding finest-pklive_a_1080p.mkv
Adding finest-pklive_1080p.nfo
Adding finest-pklive_b_1080p.mkv

10356460678 bytes in all.
That's 1235 pieces of 8388608 bytes each.

Segmentation fault

OpenSUSE 11.3

Linux desk 2.6.34.7-0.7-desktop #1 SMP PREEMPT 2010-12-13 11:13:53 +0100 x86_64 x86_64 x86_64 GNU/Linux

Location : /usr/local/src/mktorrent-1.0

make USE_PTHREADS=1 USE_OPENSSL=1 USE_LONG_OPTIONS=1 USE_LARGE_FILES=1

Run from /usr/local/src/mktorrent-1.0:

./mktorrent -v -a http://tracker/announce.php -o test.torrent -p /home/nfs_download/Test

Metafile: /usr/local/src/mktorrent-1.0/test.torrent
Piece length: 262144
Be verbose: yes
Write date: yes
Web Seed URL: none
Comment: none

2035376440 bytes in all.
That's 7765 pieces of 262144 bytes each.

Hashed 7765 of 7765 pieces.
Writing metainfo file... done.

Now,

make install
cc -O2 -Wall -DPRIoff=\"ld\" -DVERSION=\"1.0\" -c sha1.c
cc -O2 -Wall -DPRIoff=\"ld\" -DVERSION=\"1.0\" -c hash.c
cc -O2 -Wall ftw.o init.o sha1.o hash.o output.o main.o -o mktorrent
install -d /usr/local/bin
install -m755 mktorrent /usr/local/bin/mktorrent

/usr/local/bin/mktorrent -v -a http://tracker/announce.php -o test.torrent -p /home/nfs_download/Test

Metafile: /usr/local/src/mktorrent-1.0/test.torrent
Piece length: 262144
Be verbose: yes
Write date: yes
Web Seed URL: none
Comment: none

2035376440 bytes in all.
That's 7765 pieces of 262144 bytes each.

Segmentation fault

The only difference is that it does not work after I run make install; it still works when run from the source directory without installing.

Does anyone else have the same issue, or can anyone reproduce it?

The reproduction steps that k2patel gave give this in gdb:

Program received signal SIGSEGV, Segmentation fault.
0x0804b13e in open (m=0xbfffec74) at /usr/include/bits/fcntl2.h:54
54        return __open_alias (__path, __oflag, __va_arg_pack ());
(gdb) backtrace
#0  0x0804b13e in open (m=0xbfffec74) at /usr/include/bits/fcntl2.h:54
#1  make_hash (m=0xbfffec74) at hash.c:91
#2  0x0804b8f9 in main (argc=8, argv=0xbfffed74) at main.c:169
(gdb) 

The line at hash.c:91 is

if ((fd = open(f->path, OPENFLAGS)) == -1) {

The issue seems to be with accessing members of the f struct.
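
One plausible explanation, offered as a guess rather than a confirmed diagnosis: USE_LARGE_FILES=1 presumably defines -D_FILE_OFFSET_BITS=64, which makes off_t 8 bytes instead of 4 on 32-bit systems. If make install recompiles some files without that define (as the log above shows for sha1.c and hash.c) and links them against stale objects built with it, the two halves of the binary disagree about the layout of the file-list struct, and f->path is read from the wrong offset. A hypothetical struct illustrating the layout shift (not mktorrent's actual definition):

/* Hypothetical file-list node, for illustration only. */
#include <stdio.h>
#include <sys/types.h>

struct flist {
        off_t size;          /* 4 or 8 bytes depending on -D_FILE_OFFSET_BITS */
        char *path;          /* its offset shifts with sizeof(off_t) */
        struct flist *next;
};

int main(void)
{
        /* Compile this once with and once without -D_FILE_OFFSET_BITS=64
         * on a 32-bit system: the two builds print different sizes, so
         * objects from the two builds cannot safely share the struct. */
        printf("sizeof(off_t) = %zu, sizeof(struct flist) = %zu\n",
               sizeof(off_t), sizeof(struct flist));
        return 0;
}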

Error found:

If USE_PTHREADS=1 USE_OPENSSL=1 USE_LONG_OPTIONS=1 USE_LARGE_FILES=1 is used as flags for make, then the same flags must also be used for make install; otherwise, as the make install output above shows, make recompiles files like sha1.c and hash.c without them and links the result against stale objects that were built with them.

So correct compile is:

make USE_PTHREADS=1 USE_OPENSSL=1 USE_LONG_OPTIONS=1 USE_LARGE_FILES=1 
make install USE_PTHREADS=1 USE_OPENSSL=1 USE_LONG_OPTIONS=1 USE_LARGE_FILES=1

This should solve the issue.
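
To keep the two invocations consistent without retyping the flags, a small sketch using a shell variable (assuming a POSIX shell):

OPTS="USE_PTHREADS=1 USE_OPENSSL=1 USE_LONG_OPTIONS=1 USE_LARGE_FILES=1"
make clean
make $OPTS
make install $OPTS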

If it does not solve the issue, please report:
* Which OS are you using?
* Which architecture are you on, e.g. x86 or x86_64?
* How did you build mktorrent, e.g. which options did you enable in the Makefile?
* Does this happen all the time, or only on single files, big files, or directories with big files?
* Does it happen when hashing the very first file, or only after hashing a few files?

If possible, please run mktorrent with the options you have plus -v, and then paste the output.

This issue is very old; it should probably be closed by now.

Since there have been no significant updates since 2011, in agreement with @Calinou, I will close this issue. However, if anyone is still affected, don't hesitate to reopen it.