fortra / nanodump

The swiss army knife of LSASS dumping

Home Page: https://www.coresecurity.com/core-labs/articles/nanodump-red-team-approach-minidumps


No error handling with append() for large dumps, corrupted data returned

williamknows opened this issue · comments

Hey, really appreciate the project.

On systems with large memory dumps (>DUMP_MAX_SIZE), you get the "dump is too big" errors produced by append() as expected (around line 30 in nanodump.c). However, append() doesn't set any state to indicate that an error has occurred.

This means you can end up with a flood of dump-size errors, and when it's time to send the data back, it still does so (resulting in a lot of data being exfiltrated), but the dump is corrupted because it's incomplete, so you can't analyse it.

You make a valid point here, I could return a BOOL in append and check that on every call.
I will give that a try and let you know.
I appreciate your interest and comments!

Just curious, did this actually happen to you? If so, what was the size of the dump?

Yeah, happened to me yesterday, twice actually; I increased the limit once, then had to do it again. The dumps were huge: one 110MB and one 183MB (downloaded size).

Wow that's crazy, I had no idea they could get that big.
Ok, now I check every "append" call and terminate if it is false.
Thank you!