JusticeRage / Manalyze

A static analyzer for PE executables.


manalyze memory/CPU time exhaustion

rc0r opened this issue

Hi,

fuzzing manalyze discovered the following crash:

original sample - DoS.dll (28K) (md5: acf1bffb70226d182bc0fef847f5c867)

The crash surfaced because afl-fuzz enforces a memory limit during fuzzing. Running manalyze directly on the provided sample did not crash the process on my fairly powerful box, but it consumed massive amounts of virtual memory (>80 GB) while processing the file. This probably only avoided causing real havoc because I have quite a large swap partition. Nevertheless, the whole process bogged my box down for several minutes:

$ time manalyze DoS.dll
# ...
manalyze   170.76s user 280.52s system 65% cpu 11:28.31 total

To simulate a less powerful machine, I used ulimit -v 10000000 to cap virtual memory at roughly 10 GB. With this setup, manalyze aborts with SIGABRT very quickly:

$ ulimit -v 10000000 # kbytes
$ time manalyze DoS.dll
# ...
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
[2]    17055 abort (core dumped)  ./manalyze 
manalyze   7.59s user 8.62s system 14% cpu 1:52.14 total

$ ls -l core.17055 
-rw------- 1 rc0r rc0r 9.5G Oct 24 11:20 core.17055

I did not try running this on a system with much less memory available than I have, but at best I'd expect the memory allocation to fail, as in the ulimit-constrained test above.
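
For what it's worth, the same cap can also be imposed from inside a process. Below is a minimal sketch using setrlimit, the syscall behind the shell's ulimit -v, assuming a Linux/glibc environment; the helper name limit_virtual_memory is made up. Once the limit is reached, allocations fail and operator new throws std::bad_alloc, which matches the abort shown above.

// Minimal sketch, assuming Linux/glibc. limit_virtual_memory() is a made-up helper.
#include <sys/resource.h>
#include <cstdint>

bool limit_virtual_memory(std::uint64_t bytes)
{
    rlimit lim{};
    lim.rlim_cur = bytes; // soft limit
    lim.rlim_max = bytes; // hard limit
    return setrlimit(RLIMIT_AS, &lim) == 0; // RLIMIT_AS: total virtual address space
}

// limit_virtual_memory(10000000ULL * 1024); // ~10 GB, same as `ulimit -v 10000000`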

Let me know if you need any further info or assistance in order to diagnose the problem!

Thanks for the report!
I will look into this as soon as I can!

Hi again. I've finally looked into this sample and was able to trace the issue back.
Basically, the resource table contains garbage and declares an insane number of items to parse. Resources are loaded lazily in Manalyze, which means that an object describing each one (offset in the file, size, etc.) is created even if the data later turns out to be bogus. The broken resource table causes a huge number (millions?) of resource objects to be created here:

res = boost::make_shared<Resource>(type,
...which will exhaust the heap on many systems.
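
To make the failure mode concrete, here is a heavily simplified sketch of a resource-directory walk that trusts the declared entry counts. The types and field names are placeholders rather than Manalyze's actual ones, and std::make_shared stands in for the boost::make_shared call quoted above.

// Hypothetical, simplified sketch -- not Manalyze's actual code.
#include <cstdint>
#include <memory>
#include <vector>

struct resource_directory {
    std::uint16_t number_of_named_entries;
    std::uint16_t number_of_id_entries;
};

struct Resource {
    std::uint32_t offset_in_file; // where the raw data is declared to start
    std::uint32_t size;           // declared size; the data itself is only read lazily
};

// A parser that trusts the declared counts creates one Resource object per entry,
// even when the entries are garbage. Since the data is read lazily, nothing fails
// early: a corrupted table with huge counts (multiplied across the nested
// type/name/language directory levels) causes millions of allocations and
// exhausts the heap.
void parse_directory(const resource_directory& dir,
                     std::vector<std::shared_ptr<Resource>>& resources)
{
    const std::uint32_t total = dir.number_of_named_entries + dir.number_of_id_entries;
    for (std::uint32_t i = 0; i < total; ++i) {
        resources.push_back(std::make_shared<Resource>()); // no sanity check on the entry
    }
}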

Now, about countermeasures. As far as the PE specification is concerned, the file is valid: AFAIK it's perfectly legal to have millions of resources, so the file can't be rejected outright. What I've tried to do is add additional sanity checks around resources, in the following form:

  • Offsets located outside the file are immediately discarded in a few places.
  • Making sure that no duplicate resources (same size and offset in the file) can be created. It's likely that only corrupted files would contain such exact duplicates anyway.

These checks are a little weak, as someone could manually craft a PE in such a way that they are circumvented while still blowing up the memory. They should, however, do a good job of rejecting any random modifications to the resource table.
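
Here is roughly the shape of those checks; the names (accept_resource, seen, file_size) are made up for illustration, and this is not the exact code that was committed.

// Rough illustration only -- names are made up and this is not the actual patch.
#include <cstdint>
#include <set>
#include <utility>

struct Resource {
    std::uint32_t offset_in_file;
    std::uint32_t size;
};

bool accept_resource(const Resource& res,
                     std::uint64_t file_size,
                     std::set<std::pair<std::uint32_t, std::uint32_t>>& seen)
{
    // 1. Discard entries whose declared data lies (even partially) outside the file.
    if (res.offset_in_file >= file_size ||
        static_cast<std::uint64_t>(res.offset_in_file) + res.size > file_size) {
        return false;
    }
    // 2. Refuse to create two resources with the same offset and size: exact
    //    duplicates strongly suggest a corrupted resource table.
    return seen.insert(std::make_pair(res.offset_in_file, res.size)).second;
}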
Any thoughts?

In any case, thanks again for bringing this up!