OpenRCE / sulley

A pure-Python, fully automated and unattended fuzzing framework.

Too high memory usage / memory leak?

edevil opened this issue · comments

I'm running a simple fuzzing script (fu.py):

from sulley import *
from requests import http_get, http_header, http_post

sess = sessions.session(session_filename="tmp2.log", sleep_time=0.1)

target = sessions.target('XXX',8000)
target.netmon = pedrpc.client("XXX", 26001)
target.procmon = pedrpc.client("XXX", 26002)
target.procmon_options = {
    "proc_name"      : "ZZZ",
    "stop_commands"  : ['ZZZ'],
    "start_commands" : ['ZZZ'],
}

sess.add_target(target)
sess.connect(s_get("HTTP VERBS"))
sess.connect(s_get("HTTP METHOD"))
sess.connect(s_get("HTTP REQ"))
sess.connect(s_get("HTTP HEADER ACCEPT"))
sess.connect(s_get("HTTP HEADER ACCEPTCHARSET"))
sess.connect(s_get("HTTP HEADER ACCEPTDATETIME"))
sess.connect(s_get("HTTP HEADER ACCEPTENCODING"))
sess.connect(s_get("HTTP HEADER ACCEPTLANGUAGE"))
sess.connect(s_get("HTTP HEADER AUTHORIZATION"))
sess.connect(s_get("HTTP HEADER CACHECONTROL"))
sess.connect(s_get("HTTP HEADER CLOSE"))
sess.connect(s_get("HTTP HEADER CONTENTLENGTH"))
sess.connect(s_get("HTTP HEADER CONTENTMD5"))
sess.connect(s_get("HTTP HEADER COOKIE"))
sess.connect(s_get("HTTP HEADER DATE"))
sess.connect(s_get("HTTP HEADER DNT"))
sess.connect(s_get("HTTP HEADER EXPECT"))
sess.connect(s_get("HTTP HEADER FROM"))
sess.connect(s_get("HTTP HEADER HOST"))
sess.connect(s_get("HTTP HEADER IFMATCH"))
sess.connect(s_get("HTTP HEADER IFMODIFIEDSINCE"))
sess.connect(s_get("HTTP HEADER IFNONEMATCH"))
sess.connect(s_get("HTTP HEADER IFRANGE"))
sess.connect(s_get("HTTP HEADER IFUNMODIFIEDSINCE"))
sess.connect(s_get("HTTP HEADER KEEPALIVE"))
sess.connect(s_get("HTTP HEADER MAXFORWARDS"))
sess.connect(s_get("HTTP HEADER PRAGMA"))
sess.connect(s_get("HTTP HEADER PROXYAUTHORIZATION"))
sess.connect(s_get("HTTP HEADER RANGE"))
sess.connect(s_get("HTTP HEADER REFERER"))
sess.connect(s_get("HTTP HEADER TE"))
sess.connect(s_get("HTTP HEADER UPGRADE"))
sess.connect(s_get("HTTP HEADER USERAGENT"))
sess.connect(s_get("HTTP HEADER VIA"))
sess.connect(s_get("HTTP HEADER WARNING"))
sess.connect(s_get("HTTP HEADER XATTDEVICEID"))
sess.connect(s_get("HTTP HEADER XDONOTTRACK"))
sess.connect(s_get("HTTP HEADER XFORWARDEDFOR"))
sess.connect(s_get("HTTP HEADER XREQUESTEDWITH"))
sess.connect(s_get("HTTP HEADER XWAPPROFILE"))
sess.connect(s_get("HTTP VERBS POST"))
sess.connect(s_get("HTTP VERBS POST ALL"))
sess.connect(s_get("HTTP VERBS POST REQ"))
sess.fuzz()

This has been running for a few hours and the process is currently using 1.8GB of RAM(!).

$ ps up 47192
USER    PID  %CPU %MEM      VSZ    RSS   TT  STAT STARTED      TIME COMMAND
andre 47192 100.0 22.4  5335628 1879092 s002  R+    3:56PM 1039:44.05 python fu.py

Is this expected? Am I doing something wrong?

commented

I believe this is expected. IIRC, test cases aren't generated dynamically right now, so Sulley loads every potential test case into memory up front.

It's very silly, and going to be fixed in Sulley 2.
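To illustrate why eager generation balloons memory the way described above, here's a minimal sketch (hypothetical names, not Sulley's actual API): building every mutated case up front keeps them all alive at once, while a generator yields one case at a time so peak memory stays roughly constant regardless of the total case count.

```python
def eager_mutations(seed, count):
    # All `count` cases exist in memory simultaneously -- with 271214 cases
    # of non-trivial size, this is how RSS climbs into the gigabytes.
    return [seed + b"A" * n for n in range(count)]


def lazy_mutations(seed, count):
    # Each case is produced on demand and can be garbage-collected as soon
    # as the fuzz loop is done sending it.
    for n in range(count):
        yield seed + b"A" * n


def fuzz(cases, send):
    # The fuzz loop itself is identical either way; only the memory
    # profile of `cases` differs.
    for case in cases:
        send(case)


sent = []
fuzz(lazy_mutations(b"GET / HTTP/1.1\r\n", 5), sent.append)
```

Both variants drive the same loop; the lazy one is what "generated dynamically" would buy.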

Btw, I don't think this next one is Sulley's fault, but it is related. After running the fuzzer for a few days it crashed:

Traceback (most recent call last):
  File "fu.py", line 59, in <module>
    sess.fuzz()
  File "/Users/andre/work/vc/sulley/sulley/sessions.py", line 539, in fuzz
    self.export_file()
  File "/Users/andre/work/vc/sulley/sulley/sessions.py", line 356, in export_file
    fh.write(zlib.compress(cPickle.dumps(data, protocol=2)))
OverflowError: size does not fit in an int
(discosite)[andre@anarres ~/work/vc/sulley]$ python fu.py 
[2014-11-17 10:18:39,769] [INFO] -> current fuzz path:  -> HTTP VERBS
[2014-11-17 10:18:39,770] [INFO] -> fuzzed 0 of 271214 total cases

It looks like zlib's fault, but it's still something to watch out for. The session log file was not written either.

commented

Eesh, yeah, that's a nasty bug in 32-bit zlib.

Apparently these guys ran into it too: joblib/joblib#122

A 2 GB ceiling on how much data can be compressed is not great. It's probably worth moving off zlib for the next release, then. I'll cut another issue for that.

Thanks for the info! Am I cool to close this out, or do you have any other questions?

No, I'm good. :)