0vercl0k / wtf

wtf is a distributed, code-coverage guided, customizable, cross-platform snapshot-based fuzzer designed for attacking user and/or kernel-mode targets running on Microsoft Windows and Linux user-mode (experimental!).

[Question] Memory for testcase

RosenZhu opened this issue

Hi there, me again.

In order to insert a testcase, wtf uses Backend_t::VirtWrite(). If I understand it correctly, wtf writes to the next page if the size of the remaining testcase is larger than Page::Size (64k).
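
For context, a fuzzer module inserts its testcase from a callback along these lines (a minimal sketch against the module API; the address is hypothetical, and VirtWriteDirty, the dirty-tracking variant the bundled samples use, is an assumption here):

```cpp
// Sketch of a fuzzer module's testcase-insertion callback. TestcaseGva is a
// hypothetical guest virtual address captured when the snapshot was taken;
// a single call suffices even for multi-page testcases because the backend
// walks page boundaries itself.
bool InsertTestcase(const uint8_t *Buffer, const size_t BufferSize) {
  const Gva_t TestcaseGva = Gva_t(0x19'0000); // hypothetical
  return g_Backend->VirtWriteDirty(TestcaseGva, Buffer, BufferSize);
}
```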

My question is: for whv, the size of the whole RAM cannot be changed during fuzzing, right? So, should I be concerned about the number of pages? For example, say that when the whv backend is originally set up, we use 2 pages (128k) to hold the testcase. But, through mutation, the size of a testcase may grow larger than 2 pages. Then, there's not enough space to hold the testcase.

Do we need to add pages for testcases?

wtf has the option max_len to limit the maximum size of a testcase. Is it related to this question? (I cannot find anywhere that max_len is related to the size of the RAM.)

Best,

Hello 👋🏽

That's correct, the RAM is not resizable at runtime, and VirtWrite emulates a memory write, so it crosses page boundaries if it needs to (pages are 4k, not 64k, though).

If you write your testcase into a region of memory with a fixed size, I'd recommend putting your target into a state where it allocates the biggest testcase you will ever insert, and taking the crashdump at that point. You can then insert smaller testcases at the end of the page and update the size/start pointers in your target directly. I do that in the HEVD sample; you should check it out if you haven't.
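
As a rough illustration, the fixed-region approach might look like this in a module's insertion callback (a sketch only; both GVAs and the capacity constant are hypothetical values you'd capture when taking the snapshot):

```cpp
// Sketch: the snapshot was taken after the target allocated its biggest
// buffer, so every fuzzing iteration reuses that region. Both addresses
// are hypothetical snapshot-time captures.
constexpr uint64_t MaxTestcaseSize = 0x20000;  // capacity at snapshot time
const Gva_t BufferGva = Gva_t(0x19'0000);      // start of the region
const Gva_t SizeGva = Gva_t(0x19'8000);        // target's size variable

bool InsertTestcase(const uint8_t *Buffer, const size_t BufferSize) {
  if (BufferSize > MaxTestcaseSize) {
    return false; // can't exceed what the snapshot allocated
  }
  if (!g_Backend->VirtWriteDirty(BufferGva, Buffer, BufferSize)) {
    return false;
  }
  // Patch the size the target believes it received.
  const uint64_t Size = BufferSize;
  return g_Backend->VirtWriteDirty(SizeGva, (const uint8_t *)&Size,
                                   sizeof(Size));
}
```

Writing at BufferGva is the simple version; per the advice above, you could instead write at the end of the region (BufferGva + MaxTestcaseSize - BufferSize) and patch the target's start pointer accordingly, so out-of-bounds accesses reach a page boundary sooner.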

Another way, if you're lucky enough: you could also grab a crashdump right before the malloc that allocates memory for your testcase; this way you can insert a new size at runtime for every testcase, and it should work just fine.
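
A sketch of what that could look like, assuming the snapshot's RIP sits right on the allocation call (the register accessor, breakpoint API, and Init signature shown here are assumptions modeled on the module interface, and the return-site GVA is hypothetical):

```cpp
// Sketch: RIP is parked on the allocator call, so the requested size goes
// into the first argument (RCX, Windows x64 calling convention). A
// breakpoint on a hypothetical return site copies the testcase into the
// freshly returned buffer.
std::vector<uint8_t> SavedTestcase;

bool InsertTestcase(const uint8_t *Buffer, const size_t BufferSize) {
  SavedTestcase.assign(Buffer, Buffer + BufferSize);
  g_Backend->SetReg(Registers_t::Rcx, BufferSize); // size for this run
  return true;
}

bool Init(const Options_t &Opts, const CpuState_t &State) {
  const Gva_t ReturnGva = Gva_t(0x7ff6'1234'5678); // hypothetical
  return g_Backend->SetBreakpoint(ReturnGva, [](Backend_t *Backend) {
    // The allocator returned the buffer pointer in RAX; fill it in.
    const Gva_t BufferGva = Gva_t(Backend->Rax());
    Backend->VirtWriteDirty(BufferGva, SavedTestcase.data(),
                            SavedTestcase.size());
  });
}
```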

max_len is the size of the biggest testcase the generators can generate, so it's not related to the RAM or anything like that.
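
For reference, max_len is a flag on the node that generates and mutates testcases; an invocation might look like this (the paths and values are placeholders, and other flags may be needed depending on your setup):

```
wtf.exe master --max_len=1028 --runs=10000000 --target .
```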

Hope this helps!

Cheers

> I do that in the HEVD sample; you should check it out if you haven't.

Thanks for this information. I will check this.

> Another way, if you're lucky enough: you could also grab a crashdump right before the malloc that allocates memory for your testcase; this way you can insert a new size at runtime for every testcase, and it should work just fine.

This is where I'm confused. If we insert a new size, will it be a problem if it exceeds the limit of the RAM? Or, if we can catch the malloc, will the hypervisor dynamically increase the memory size when a new size is too large?

Good question!

The idea is that when you take a snapshot of your target, the entirety of the RAM isn't in use, which means the OS can service allocations from inside wtf just fine, assuming you aren't allocating a big amount of memory; in that case, you could see the kernel trying to swap some memory out to disk, which will fail because wtf doesn't have any I/O.

In any case, there is no support for dynamically increasing the memory size. It would technically be very easy for the backends to allocate a larger region, etc., but the issue is how you let the guest Windows know that it has a bigger space it can use.

A similar problem is being able to allocate memory from outside the execution environment; in theory it wouldn't be hard to materialize a virtual memory range by modifying the page tables ourselves, but the Windows kernel would not be aware of our shenanigans, and as a result its logic could clash with our modifications. I don't know how to do this in a bulletproof way, which is why it isn't supported.

I hope this helps!

Cheers

Thanks for your patience in answering my questions!