OMH4ck / mufuzz


mufuzz can't detect the bug, but AFL can

jiliguluss opened this issue · comments

Here is the code, named 'readfile.c', which is built into a binary named 'readfile':

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_LENGTH 10

void processString(const char* str) {
    char buffer[MAX_LENGTH];
    strcpy(buffer, str); // buffer overflow
    buffer[MAX_LENGTH - 1] = '\0';
    printf("Processed string: %s\n", buffer);
}

int main(int argc, char* argv[]) {
    if (argc < 2) {
        printf("Usage: %s <filename>\n", argv[0]);
        return 1;
    }
    const char* filename = argv[1];
    FILE* file = fopen(filename, "r");
    if (file == NULL) {
        printf("Error opening file: %s\n", filename);
        return 1;
    }
    char line[1024];
    while (fgets(line, sizeof(line), file)) {
        line[strcspn(line, "\n")] =
            '\0';  // Remove newline character, if present
        processString(line);
    }
    fclose(file);
    return 0;
}

The initial corpus has one file whose content is 'text'.

When I use AFL, it finds the crash very quickly. The AFL command is:

./afl-fuzz -i corpus -o result -- ./readfile @@


When I use Mufuzz, Mufuzz can't find the crash. The Mufuzz command is:

taskset -c 0-9 cargo run --release -- -c "./readfile @@" -i corpus -o result --core 10


The machine configuration is as follows:
8 CPUs, 15 GB RAM, Ubuntu 20.04.2

So, what's wrong with the result? Am I using mufuzz incorrectly?

commented

I tested it and mufuzz found the bug super fast.

I wonder whether you use the same binary for AFLpp and mufuzz, because when I compile the program using:

AFLplusplus/afl-cc -o readfile readfile.c

The program will not crash even if we give it a very long input.

If I use AFLplusplus/afl-cc -O0 -o readfile readfile.c instead, it crashes. So could you please check the binary you give to mufuzz by running readfile input_txt, where input_txt contains a super long string, to see whether it crashes?

See my test output:

➜  test_bins git:(main) ✗ ../../AFLplusplus/afl-cc --version
afl-cc++4.06a by Michal Zalewski, Laszlo Szekeres, Marc Heuse - mode: LLVM-PCGUARD
Ubuntu clang version 15.0.7
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/lib/llvm-15/bin
➜  test_bins git:(main) ✗ ../../AFLplusplus/afl-cc -o readfile readfile.c
afl-cc++4.06a by Michal Zalewski, Laszlo Szekeres, Marc Heuse - mode: LLVM-PCGUARD
SanitizerCoveragePCGUARD++4.06a
[+] Instrumented 7 locations with no collisions (non-hardened mode) of which are 0 handled and 0 unhandled selects.
➜  test_bins git:(main) ✗ ./readfile alongstring
Processed string: aaaaaaaaa
➜  test_bins git:(main) ✗ ../../AFLplusplus/afl-cc -O0 -o readfile readfile.c
afl-cc++4.06a by Michal Zalewski, Laszlo Szekeres, Marc Heuse - mode: LLVM-PCGUARD
SanitizerCoveragePCGUARD++4.06a
[+] Instrumented 6 locations with no collisions (non-hardened mode) of which are 0 handled and 0 unhandled selects.
➜  test_bins git:(main) ✗ ./readfile alongstring
Processed string: aaaaaaaaa
[1]    162713 segmentation fault  ./readfile alongstring

When I compile with afl-gcc, the binary can crash. The command is:
afl-gcc -o readfile readfile.c

However when I compile with afl-clang, the binary can't crash. The command is:
afl-clang -o readfile readfile.c

I don't set any environment variables. I don't know what causes this difference.

Besides this question, I have another two questions:

  1. If I set the number of cores to less than 5, mufuzz errors out with:
    thread 'main' panicked at 'assertion failed: sender_num % 5 == 0 || sender_num < 5'
    What's the reason for requiring the number of cores to be a multiple of 5?

  2. The readme says saving the corpus/crashes to disk has not been implemented yet. It seems that crash deduplication has not been implemented either.
    Are there any plans to implement these two features in the future? If so, when will they be completed? If not, could you give me some guidance on where to modify the code?

Thank you for providing such an efficient fuzzing tool. It does seem to be much faster than AFL.

commented

However when I compile with afl-clang, the binary can't crash. The command is:
afl-clang -o readfile readfile.c

You need to pass -O0, otherwise afl will add an optimization flag, I guess.

If I set the number of cores to less than 5, mufuzz errors out with:
thread 'main' panicked at 'assertion failed: sender_num % 5 == 0 || sender_num < 5'
What's the reason for requiring the number of cores to be a multiple of 5?

This is an ad-hoc design choice. No special reason :P

The readme says saving the corpus/crashes to disk has not been implemented yet. It seems that crash deduplication has not been implemented either.
Are there any plans to implement these two features in the future? If so, when will they be completed? If not, could you give me some guidance on where to modify the code?

We actually have the functionality here; it just hasn't been integrated yet. You can use it to save crashes as you find them. (https://github.com/OMH4ck/mufuzz/blob/main/src/monitor/output_writer.rs)
We will integrate it, just not sure when :(

Thanks for your suggestion. But I have tried afl-clang with both -O0 and -O1, and neither produces a crash.

Will the integration be complicated? I don't know how to write Rust, but I need to save crashes while running mufuzz.

commented

Thanks for your suggestion. But I have tried afl-clang with both -O0 and -O1, and neither produces a crash.

Interesting. It is probably due to some compiler optimization.

Will the integration be complicated? I don't know how to write Rust, but I need to save crashes while running mufuzz.

It should not.

commented

I am supporting it with this PR (#2), which should solve your problem.

Thank you very much for your update.

I compared the results saved by mufuzz and afl:

$ mufuzz/readfile$ ls -al result_afl
total 96
drwx------ 5 sqa sqa 4096 Nov 17 09:16 .
drwxrwxr-x 7 sqa sqa 4096 Nov 17 09:46 ..
drwx------ 2 sqa sqa 4096 Nov 17 09:09 crashes
-rw------- 1 sqa sqa 6 Nov 17 09:16 .cur_input
-rw------- 1 sqa sqa 65536 Nov 17 09:15 fuzz_bitmap
-rw------- 1 sqa sqa 794 Nov 17 09:16 fuzzer_stats
drwx------ 2 sqa sqa 4096 Nov 17 09:09 hangs
-rw------- 1 sqa sqa 4038 Nov 17 09:16 plot_data
drwx------ 3 sqa sqa 4096 Nov 17 09:15 queue
$ mufuzz/readfile$ ls -al result_mufuzz/
total 312
drwxrwxr-x 5 sqa sqa 4096 Nov 17 09:24 .
drwxrwxr-x 7 sqa sqa 4096 Nov 17 09:46 ..
drwxrwxr-x 2 sqa sqa 278528 Nov 17 09:24 crash
-rw-rw-r-- 1 sqa sqa 4 Nov 17 09:24 .cur_input0
-rw-rw-r-- 1 sqa sqa 1 Nov 17 09:24 .cur_input1
-rw-rw-r-- 1 sqa sqa 4 Nov 17 09:24 .cur_input2
-rw-rw-r-- 1 sqa sqa 51 Nov 17 09:24 .cur_input3
drwxrwxr-x 2 sqa sqa 4096 Nov 17 09:24 hang
-rw-rw-r-- 1 sqa sqa 274 Nov 17 09:24 plot_data
drwxrwxr-x 2 sqa sqa 4096 Nov 17 09:24 queue

AFL: the crashes folder saves the unique crashes (2 files), and the queue folder saves the interesting seeds (8 files).
MuFuzz: the crash folder saves all crashes (10,000+ files), and the queue folder is empty.
It seems mufuzz doesn't deduplicate crashes and doesn't save the interesting seeds.

Then there is a more serious problem. When I compile with afl-gcc, ASan, and -O, the binary can't reproduce the crash, and mufuzz runs much slower.

Time: 2, Total exec: 0, current speed: 0/s, average speed: 0/s, per core: 0/s, timeout exec: 0, crash: 0, Interesting inputs 0, timeout rate: 0.000%, cpu usage: 0%, average cpu usage 0.0%
Time: 4, Total exec: 3182, current speed: 1590/s, average speed: 795/s, per core: 198/s, timeout exec: 0, crash: 0, Interesting inputs 1, timeout rate: 0.000%, cpu usage: 100%, average cpu usage 100.0%
Time: 6, Total exec: 3182, current speed: 0/s, average speed: 530/s, per core: 132/s, timeout exec: 0, crash: 0, Interesting inputs 1, timeout rate: 0.000%, cpu usage: 100%, average cpu usage 100.0%
Time: 8, Total exec: 7030, current speed: 1922/s, average speed: 878/s, per core: 219/s, timeout exec: 0, crash: 0, Interesting inputs 1, timeout rate: 0.000%, cpu usage: 100%, average cpu usage 100.0%
Time: 10, Total exec: 7030, current speed: 0/s, average speed: 702/s, per core: 175/s, timeout exec: 0, crash: 0, Interesting inputs 1, timeout rate: 0.000%, cpu usage: 99%, average cpu usage 99.8%
Time: 12, Total exec: 9802, current speed: 1385/s, average speed: 816/s, per core: 204/s, timeout exec: 0, crash: 0, Interesting inputs 1, timeout rate: 0.000%, cpu usage: 98%, average cpu usage 99.4%
Time: 14, Total exec: 10802, current speed: 499/s, average speed: 771/s, per core: 192/s, timeout exec: 2, crash: 0, Interesting inputs 1, timeout rate: 0.019%, cpu usage: 99%, average cpu usage 99.3%
Time: 16, Total exec: 12650, current speed: 923/s, average speed: 790/s, per core: 197/s, timeout exec: 2, crash: 0, Interesting inputs 1, timeout rate: 0.016%, cpu usage: 99%, average cpu usage 99.3%
Time: 18, Total exec: 14536, current speed: 943/s, average speed: 807/s, per core: 201/s, timeout exec: 4, crash: 0, Interesting inputs 1, timeout rate: 0.028%, cpu usage: 99%, average cpu usage 99.2%
Time: 20, Total exec: 16346, current speed: 904/s, average speed: 816/s, per core: 204/s, timeout exec: 4, crash: 0, Interesting inputs 1, timeout rate: 0.024%, cpu usage: 99%, average cpu usage 99.2%

commented

You might want to optimize it a little bit according to the need. Feel free to leave this issue open and I might find some time to finish it (not in the near future).

Ok, looking forward to your further updates, thanks.