kartorz / utils

Some small utility applications, most of which are written in scripting languages such as Python.


Feature request 3

mbusb opened this issue · comments

Sorry, I unknowingly closed issue 2, so I have opened it again here :-))

Update to my previous comment. The md5sum mentioned in the link below and that of the downloaded ISO seem to be the same, but I am not sure why 7zip indicated an integrity failure. The ISO did not even boot after installing it to the USB drive.

http://ftp.vim.org/knoppix/md5-old/ADRIANE_KNOPPIX_V7.0.4CD-2012-08-20-EN.iso.md5
[sundar@localhost ~]$ md5sum /media/Fun/distros/ADRIANE_KNOPPIX_V7.0.4CD-2012-08-20-EN.iso
e1773a007c9eca89448bd3165297d6c4 /media/Fun/distros/ADRIANE_KNOPPIX_V7.0.4CD-2012-08-20-EN.iso
[sundar@localhost ~]$
Therefore I assume that you have implemented the last pending feature as well. 👍
If the above is true, then your script is faster than the famous 7zip :-)) I have done a three-way comparison with 7zip and md5sum, and here are the results:-
RESULT OF ISODUMP.PY TEST
[sundar@localhost multibootusb]$ time python2.7 ./test.py
real 0m0.264s
user 0m0.034s
sys 0m0.012s
RESULT OF MD5SUM TEST
[sundar@localhost multibootusb]$ time md5sum /home/sundar/Downloads/distros/salix64-xfce-14.1RC1.iso
real 0m9.027s
user 0m2.627s
sys 0m0.360s
RESULT OF 7ZIP TEST
[sundar@localhost multibootusb]$ time ./tools/7zip/linux/7z t /home/sundar/Downloads/distros/salix64-xfce-14.1RC1.iso
real 0m6.676s
user 0m0.058s
sys 0m0.546s

Therefore you are the clear winner:-))

Hi,

There is no official method to check the integrity of the data inside an ISO image. In my opinion, checking the last file record is an economical method: because the last file is recorded at the last position, if it is intact then the whole image should be intact, and there may be no need to check every file.

Also, the ISO standard records files block by block. If the damaged part is a dummy area used for padding, the ISO will not be reported as a broken image. You can extract a file to check whether it is broken.
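The last-file check described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual isodump.py code: the extent location and file size would normally come from parsing the ISO 9660 directory records, and last_file_intact is a hypothetical helper name.

```python
SECTOR = 2048  # ISO 9660 logical block size

def last_file_intact(iso_path, extent_lba, file_size):
    """Return True if the byte range of the last recorded file is readable.

    extent_lba and file_size are assumed to come from the directory record
    of the file stored last in the image.
    """
    end = extent_lba * SECTOR + file_size
    with open(iso_path, 'rb') as f:
        f.seek(0, 2)                # seek to the end of the image
        if f.tell() < end:          # image is truncated before the last file
            return False
        f.seek(end - 1)
        return len(f.read(1)) == 1  # last byte of the last file is present
```

Since only one seek and one read are needed, this explains why the check finishes in a fraction of a second while md5sum and 7zip must read the whole image.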


Regards

Thanks for the info.
Just now I checked isodump.py as pre-release testing under Windows. It seems to have an issue when extracting to a USB drive: it took a very long time to complete the job, although the size of the ISO is just 30 MB. Here are the results of extraction to the hard disk and to the USB drive:-

C:\Users\Sundar\Documents\multibootusb>old_files\ptime.exe isodump.py iso:/ -r -o l:\multibootusb\slitaz-cooking ..\slitaz-cooking.iso

ptime 1.0 for Win32, Freeware - http://www.pc-tools.net/
Copyright(C) 2002, Jem Berkes jberkes@pc-tools.net

=== isodump.py iso:/ -r -o l:\multibootusb\slitaz-cooking ..\slitaz-cooking.iso ===

RRIP: rrip_offset 0
writeDir(/)->(l:\multibootusb\slitaz-cooking) with pattern()

Execution time: 71.543 s

C:\Users\Sundar\Documents\multibootusb>old_files\ptime.exe isodump.py iso:/ -r -o test ..\slitaz-cooking.iso

ptime 1.0 for Win32, Freeware - http://www.pc-tools.net/
Copyright(C) 2002, Jem Berkes jberkes@pc-tools.net

=== isodump.py iso:/ -r -o test ..\slitaz-cooking.iso ===
RRIP: rrip_offset 0
writeDir(/)->(test) with pattern()

Execution time: 1.958 s

C:\Users\Sundar\Documents\multibootusb>old_files\ptime.exe isodump.py iso:/ -r -o test ..\slitaz-cooking.iso

It seems to take a long time to write to the USB drive.


Regards

That is exactly what I was talking about. Is it possible to reduce the write time? I had the same issue when using shutil.copy.

If the extraction is this slow, then it is impossible to implement this under Windows.

I have tested the extraction time with an Ubuntu ISO; it took more than 30 min.

Could you tell me the time offset between "cp iso to USB" and "extract
iso to USB" ?

Regards

Sorry. What is offset time?

Sorry, I mean the "cp" time minus the "extracting" time, i.e. how much slower extracting the files is compared to copying the ISO image.

Regards

OK. Will report soon.

Here are the results:-
On Linux:-
[sundar@sundar-pc multibootusb]$ time cp /media/Windows/Users/Sundar/Documents/slitaz-cooking.iso /media/SUNDAR/multibootusb/slitaz-cooking/
real 0m2.246s
user 0m0.000s
sys 0m0.085s
[sundar@sundar-pc multibootusb]$ time python ./isodump.py iso:/ -r -o /media/SUNDAR/multibootusb/slitaz-cooking/ /media/Windows/Users/Sundar/Documents/slitaz-cooking.iso
RRIP: rrip_offset 0
writeDir(/)->(/media/SUNDAR/multibootusb/slitaz-cooking/) with pattern()
real 0m2.948s
user 0m0.100s
sys 0m0.107s

On Windows:-
C:\Users\Sundar\Documents\multibootusb>old_files\ptime.exe xcopy ..\slitaz-cooking.iso L:\multibootusb\slitaz-cooking
ptime 1.0 for Win32, Freeware - http://www.pc-tools.net/
Copyright(C) 2002, Jem Berkes jberkes@pc-tools.net
=== xcopy ..\slitaz-cooking.iso L:\multibootusb\slitaz-cooking ===
..\slitaz-cooking.iso
1 File(s) copied
Execution time: 5.542 s
C:\Users\Sundar\Documents\multibootusb>old_files\ptime.exe isodump.py iso:/ -r -
o L:\multibootusb\slitaz-cooking ..\slitaz-cooking.iso
ptime 1.0 for Win32, Freeware - http://www.pc-tools.net/
Copyright(C) 2002, Jem Berkes jberkes@pc-tools.net
=== isodump.py iso:/ -r -o L:\multibootusb\slitaz-cooking ..\slitaz-cooking.iso ===
RRIP: rrip_offset 0
writeDir(/)->(L:\multibootusb\slitaz-cooking) with pattern()
Execution time: 70.764 s

Therefore the offset under Linux is ~0.7 s and under Windows it is ~65 s. I am not sure why this is happening. Is it something to do with buffering when writing files?

Maybe it's related to the write buffer.

Regards

I was thinking the same, but I have no clue how to solve it.
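The buffering theory can be tested directly: Python's open() accepts a buffer size, and with a small buffer every write() reaches the OS separately, which is exactly what hurts on slow removable media. A minimal sketch, with timed_copy a hypothetical helper (not part of isodump.py):

```python
import os
import time

def timed_copy(src, dst, buf_size):
    """Copy src to dst in buf_size chunks; return elapsed wall-clock seconds.

    Comparing a small buf_size with a large one on the same USB target
    shows how much of the slowdown comes from per-write overhead.
    """
    start = time.time()
    with open(src, 'rb') as f_in, open(dst, 'wb', buf_size) as f_out:
        while True:
            buf = f_in.read(buf_size)
            if not buf:
                break
            f_out.write(buf)
        f_out.flush()
        os.fsync(f_out.fileno())  # force the data out of the OS cache too
    return time.time() - start
```

The os.fsync call matters for a fair measurement: without it, a fast-looking copy may only have reached the OS cache, not the device.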

Is there any progress on removing the Windows-specific bug?

It should be fixable. I need a little time.


Regards

OK. The day you fix isodump, the next day I will release multibootusb.

Hi,
Sorry for replying so late; I am a little busy nowadays. Anyway, I tried to fix the buffering for writing files, which can improve the USB write speed on Windows. But it seems the best this script can do is that writing a 30 MB ISO file to USB takes 30 s.

Regards

30 seconds is a bit too much, I think. Let me check the time Windows takes to copy a file to the USB disk. The time will probably differ if the ISO is larger; normally ISOs are more than 650 MB.

But why is there this huge difference between Windows and Linux write speeds? After implementing it, we can compare different ISOs on Linux as well.

As I told you earlier, I am just waiting for this bug to be fixed, so that I can make a stable release to the public. And this release is going to be awesome for you and me. ;-))

I tested on Win7 in VirtualBox; could you test with different ISO files?
It is faster on Linux, but when you unmount the USB it takes a while to write the cache out to the USB.
In my test case, writing a 30 MB ISO file to USB takes about 15 s on Linux, while on Win7 it is about 30 s.
I am not sure whether this script can do better on Windows.
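The Linux caching effect mentioned above is easy to observe from a shell: cp returns as soon as the data is in the page cache, and the real device write is paid at sync/unmount time. A rough way to measure it (paths are illustrative):

```shell
# The copy lands in the page cache almost instantly...
time cp slitaz-cooking.iso /media/SUNDAR/multibootusb/
# ...the real USB write is paid here
time sync
```

Adding the two times together gives a number closer to the true device write time, which makes the Linux and Windows results comparable.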


Regards

OK, tested on both Linux and Windows with a 30 MB ISO file. On Linux it takes around 7-8 s and on Windows around 14-15 s. 7zip surprisingly took more time, 15-18 s. The Windows system command "xcopy" took only 5 s, though.

Tested again with a 1.46 GB file. On Windows, the system command "xcopy" took 271 s to copy the file and isodump took 383 s to extract the files. For comparison, 7zip took 306 s to extract the same 1.46 GB ISO to the USB drive.

Could it be due to buffer flushing?

Added "f_output.flush()" after the "f_output.write(buf)" line.

Used a Zenwalk ISO for the test, which is 650 MB in size. This ISO contains packages that are smaller but larger in number. 7zip took 561 s and isodump took only 371 s.

CentOS (658 MB) took 179 s with isodump, while 7zip took 135 s.

What I have understood is that isodump takes more time to write only when a single file is huge: in the CentOS case "squashfs.img", which is about 630 MB out of 658 MB, and in Zorin OS "filesystem.squashfs", which amounts to 1.4 GB out of 1.46 GB.

Now the bug has been narrowed down to large file sizes. If you can provide a solution for this, then the problem will be solved.

Thanks for your information. I will fix it.

Regards

Your test has pinpointed the problem. I set a large 100 MB buffer for writing and passed this argument to the open function. I made an ISO containing only a 36 MB file. Here is the code with the time each step took:

iso9660fs.writeDir("/", "F:/xx")

r_size = 100M
f_output = open(detFile, 'wb', r_size)

    buf = self.isoFile.read(r_size)
    ---- time 0.395061722609
    f_output.write(buf)
    ---- time 0.442821993308
    # while True end.
f_output.close()
---- time 15.3392208048

That means reading from and writing to the buffer takes very little time; most of the time is spent flushing to the USB stick.
If I add f_output.flush() after f_output.write(buf), the f_output.flush() takes as much time as f_output.close() does.
I don't know how to decrease the time spent in file.flush() yet.

Regards

Hi,
Tested the script with the increased buffer (100 MB) and with f_output.flush() included. There is a huge performance increase. With the above two enabled, the Ubuntu ISO (880 MB) was successfully extracted in 202 s; 7zip did it in ~250 s.

Next I tested with the Zorin ISO (1.46 GB) with the same buffer and flush enabled. This time it took 249 s and 7zip took 303 s.
When I removed f_output.flush() and retested the script, it took more time, about 60 s extra.

Therefore I consider the problem solved. You may include these changes in the original script.

At last, a line of contribution from my side. :-))

Hi,

Thanks for your test.

You mean appending f_output.flush() before f_output.close(), like the following code:

    buf = self.isoFile.read(r_size)
    f_output.write(buf)
    # while True end.
f_output.flush()
f_output.close()


Regards

Just after the line f_output.write(buf), because as you know, indentation matters a lot in Python. ;-) And increase the buffer size to 100 MB.
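Putting the two agreed changes together, the extraction loop would look roughly like the sketch below. The names iso_file, det_file and r_size follow the snippets quoted in this thread, but the surrounding function is an illustration, not the actual isodump.py code:

```python
# 100 MB write buffer, the value agreed on in the discussion above
R_SIZE = 100 * 1024 * 1024

def extract_file(iso_file, det_file, length, buf_size=R_SIZE):
    """Copy `length` bytes from the open ISO file object into det_file.

    Sketch of the fixed extraction loop: a large write buffer plus a
    flush() immediately after each write(), which the tests above
    measured as roughly 60 s faster on a 1.46 GB ISO.
    """
    with open(det_file, 'wb', buf_size) as f_output:
        remaining = length
        while remaining > 0:
            buf = iso_file.read(min(buf_size, remaining))
            if not buf:
                break
            f_output.write(buf)
            f_output.flush()   # flush right after write, per the fix
            remaining -= len(buf)
```

Flushing inside the loop pushes each chunk toward the device as soon as it is written instead of accumulating everything until close(), which is where the large-file slowdown was observed.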

Ok, I got it.


Regards