nbs-system / naxsi

NAXSI is an open-source, high-performance, low-rules-maintenance WAF for NGINX

Nginx workers segfault with nginx 1.9.5, naxsi 0.54 if http2 is enabled

selivan opened this issue · comments

UPD: The problem occurs only if http2 is enabled in the listen directive.

Here are the core dump files, the apport crash report and the manually built packages I used on Ubuntu 14.04: https://yadi.sk/d/6m32n8IFjRqrc

naxsi 0.54

nginx -V
nginx version: nginx/1.9.5
built with OpenSSL 1.0.1f 6 Jan 2014
TLS SNI support enabled
configure arguments: --add-module=/home/selivan/work/naxsi/nginx-1.9.5-custom/debian/modules/naxsi/naxsi_src/ --add-module=/home/selivan/work/naxsi/nginx-1.9.5-custom/debian/modules/nginx-upstream-fair/ --with-cc-opt='-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_sub_module --with-http_xslt_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-http_v2_module

The only module I used besides naxsi is nginx-upstream-fair.

The problem appeared after a couple of minutes of nginx running under production load; synthetic tests did not reproduce it.
nginx.conf:

http {
        # Naxsi
        include /etc/nginx/naxsi_core.rules;
...
server {
        server_name *********;
        include           /etc/nginx/naxsi.rules;
        set $naxsi_extensive_log 1;
        location /RequestDenied {
            return 418;
            access_log /var/log/nginx/denied.log;
        }
...

naxsi_core.rules was taken unchanged from the source distribution.

naxsi.rules:

#Enables learning mode
LearningMode;
SecRulesEnabled;
#SecRulesDisabled;
DeniedUrl "/RequestDenied";
## check rules
CheckRule "$SQL >= 8" BLOCK;
CheckRule "$RFI >= 8" BLOCK;
CheckRule "$TRAVERSAL >= 4" BLOCK;
CheckRule "$EVADE >= 4" BLOCK;
CheckRule "$XSS >= 8" BLOCK;

The problem also appeared with naxsi 0.54rc3, which I had accidentally built first.

gdb /usr/sbin/nginx -c /var/lib/nginx/cores/core
...
Reading symbols from /usr/sbin/nginx...Reading symbols from /usr/lib/debug/.build-id/98/189920b31b8739442099c12774ed5f5a5e04f7.debug...done.
done.
[New LWP 20339]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `nginx: worker process                           '.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007feb1dabaf25 in nx_find_wl_in_hash (mstr=mstr@entry=0x7feb1f4449e8, cf=cf@entry=0x7feb1f049018, zone=zone@entry=HEADERS)
    at /home/selivan/work/naxsi/nginx-1.9.5-custom/debian/modules/naxsi/naxsi_src//naxsi_runtime.c:352
352         mstr->data[i] = tolower(mstr->data[i]);

Backtrace:

(gdb) bt
#0  0x00007feb1dabaf25 in nx_find_wl_in_hash (mstr=mstr@entry=0x7feb1f4449e8, cf=cf@entry=0x7feb1f049018, zone=zone@entry=HEADERS)
    at /home/selivan/work/naxsi/nginx-1.9.5-custom/debian/modules/naxsi/naxsi_src//naxsi_runtime.c:352
#1  0x00007feb1dabb4ca in ngx_http_dummy_is_rule_whitelisted_n (req=req@entry=0x7feb1f443c90, cf=cf@entry=0x7feb1f049018, 
    r=r@entry=0x7feb1f030178, name=name@entry=0x7feb1f4449e8, zone=zone@entry=HEADERS, target_name=target_name@entry=0)
    at /home/selivan/work/naxsi/nginx-1.9.5-custom/debian/modules/naxsi/naxsi_src//naxsi_runtime.c:641
#2  0x00007feb1dabcad5 in ngx_http_apply_rulematch_v_n (r=r@entry=0x7feb1f030178, ctx=ctx@entry=0x7feb1f45dff8, req=req@entry=0x7feb1f443c90, 
    name=name@entry=0x7feb1f4449e8, value=value@entry=0x7feb1f4449f8, zone=zone@entry=HEADERS, nb_match=1, target_name=target_name@entry=0)
    at /home/selivan/work/naxsi/nginx-1.9.5-custom/debian/modules/naxsi/naxsi_src//naxsi_runtime.c:1049
#3  0x00007feb1dabd40f in ngx_http_basestr_ruleset_n (pool=<optimized out>, name=name@entry=0x7feb1f4449e8, value=value@entry=0x7feb1f4449f8, 
    rules=0x7feb1f01ce20, req=req@entry=0x7feb1f443c90, ctx=ctx@entry=0x7feb1f45dff8, zone=zone@entry=HEADERS)
    at /home/selivan/work/naxsi/nginx-1.9.5-custom/debian/modules/naxsi/naxsi_src//naxsi_runtime.c:1429
#4  0x00007feb1dabd7f3 in ngx_http_dummy_headers_parse (main_cf=main_cf@entry=0x7feb1f008e08, cf=cf@entry=0x7feb1f049018, 
    ctx=ctx@entry=0x7feb1f45dff8, r=r@entry=0x7feb1f443c90)
    at /home/selivan/work/naxsi/nginx-1.9.5-custom/debian/modules/naxsi/naxsi_src//naxsi_runtime.c:2130
#5  0x00007feb1dabef8e in ngx_http_dummy_data_parse (ctx=ctx@entry=0x7feb1f45dff8, r=r@entry=0x7feb1f443c90)
    at /home/selivan/work/naxsi/nginx-1.9.5-custom/debian/modules/naxsi/naxsi_src//naxsi_runtime.c:2154
#6  0x00007feb1dac1020 in ngx_http_dummy_access_handler (r=0x7feb1f443c90)
    at /home/selivan/work/naxsi/nginx-1.9.5-custom/debian/modules/naxsi/naxsi_src//naxsi_skeleton.c:1247
#7  0x00007feb1da5c743 in ngx_http_core_rewrite_phase (r=0x7feb1f443c90, ph=0x7feb1f0ca1f8) at src/http/ngx_http_core_module.c:894
#8  0x00007feb1da57f05 in ngx_http_core_run_phases (r=r@entry=0x7feb1f443c90) at src/http/ngx_http_core_module.c:840
#9  0x00007feb1da57ff7 in ngx_http_handler (r=r@entry=0x7feb1f443c90) at src/http/ngx_http_core_module.c:823
#10 0x00007feb1da6401e in ngx_http_process_request (r=0x7feb1f443c90) at src/http/ngx_http_request.c:1901
#11 0x00007feb1da95958 in ngx_http_v2_run_request (r=0x7feb1f443c90) at src/http/v2/ngx_http_v2.c:3445
#12 ngx_http_v2_state_header_complete (h2c=0x7feb1f3de700, pos=0x7feb1f12ab33 "", end=0x7feb1f12ab33 "") at src/http/v2/ngx_http_v2.c:1704
#13 0x00007feb1da96ad1 in ngx_http_v2_state_header_block (h2c=0x7feb1f3de700, pos=0x7feb1f12ab33 "", end=0x7feb1f12ab33 "")
    at src/http/v2/ngx_http_v2.c:1266
#14 0x00007feb1da92e6d in ngx_http_v2_read_handler (rev=0x7feb1f0fe790) at src/http/v2/ngx_http_v2.c:357
#15 0x00007feb1da4d2d1 in ngx_epoll_process_events (cycle=0x7feb1f005020, timer=<optimized out>, flags=<optimized out>)
    at src/event/modules/ngx_epoll_module.c:822
#16 0x00007feb1da43a7a in ngx_process_events_and_timers (cycle=cycle@entry=0x7feb1f005020) at src/event/ngx_event.c:242
#17 0x00007feb1da4aa85 in ngx_worker_process_cycle (cycle=cycle@entry=0x7feb1f005020, data=data@entry=0x0) at src/os/unix/ngx_process_cycle.c:753
#18 0x00007feb1da4949a in ngx_spawn_process (cycle=cycle@entry=0x7feb1f005020, proc=proc@entry=0x7feb1da4aa30 <ngx_worker_process_cycle>, 
    data=data@entry=0x0, name=name@entry=0x7feb1dace476 "worker process", respawn=respawn@entry=-3) at src/os/unix/ngx_process.c:198
#19 0x00007feb1da4ad30 in ngx_start_worker_processes (cycle=cycle@entry=0x7feb1f005020, n=1, type=type@entry=-3)
    at src/os/unix/ngx_process_cycle.c:358
#20 0x00007feb1da4bb4f in ngx_master_process_cycle (cycle=0x7feb1f005020) at src/os/unix/ngx_process_cycle.c:130
#21 0x00007feb1da27ac4 in main (argc=<optimized out>, argv=<optimized out>) at src/core/nginx.c:415

Hi,

From the full stack trace, it seems that you are using HTTP/2, and naxsi has not been tested with it yet (we will start working on it soon).
Can you try removing the http2 parameter from the listen directive in your configuration and see if the crash still occurs?

Removing http2 seems to have fixed the problem, thank you :)

Maybe related to the "Bugfix: a segmentation fault might occur in a worker process when using HTTP/2." entry in the latest nginx version.
http://nginx.org/en/CHANGES

I'll test tomorrow.

I just tested it and unfortunately it is still not working.

Same issue with nginx 1.9.7

EDIT : same with 1.9.8 & 1.9.9

@blotus : Hello, is there any news on this issue? I really want to use naxsi with http2 :)

Hi @rfnx :)

We haven't started working on http2 compatibility yet.
I hope we will find some time in January to work on it, but it is not going to come that soon; http/2 brings a lot of new things that will definitely have serious impacts :)

@buixor Thanks for the answer !

@buixor: maybe you could emit a warning, or even an error, if naxsi is used with HTTP/2? It would be a small and easy change; right now it may be confusing for new users.

This is so sad :(

Hi,

Sorry for delay :)
Yes, we are going to add a warning at least. Sorry, I haven't really had time to look at the subject in depth; I already know it's going to be tricky, as http/2 (judging from the RFC) has a lot of subtleties that might have security implications.

I'll keep you posted ;)

Hate to make this thread sound like a broken record, but do you have any news concerning http2 compatibility?

Hi,

Don't worry, your request is totally legit.
No news yet, except that I have started looking at it (no code written yet). I will try to squeeze out some time to work on it; it's definitely in my plans :)

Can you let me know the status of this now that nginx 1.10.0 stable is released? I believe the bug affects this version too.

Yep; unlike 1.9.x, which was a mainline version, 1.10 is a stable version, like 1.8 was. Is there anything you guys are going to do about it? I would even pay money for it (if I could convince my bosses that it's a good investment).

Hello,

This is indeed something we need to work on, but as you might understand, we'd better not take the topic lightly. I plan to work on it as soon as we are done with release 0.55 (we have a few open bugs to close first), but to be honest we haven't started working on http2 yet.

It will come; I know some of you would have hoped for it sooner, so please have a bit more patience.
I will come back to you as soon as we have something testable ;)

Hi,

I just started doing some basic tests with http2, and it seems to be working so far.
If any of you can submit a test that triggers a bug, please do so!

I am using vers=0.55rc1 with nginx-1.10.0, with http2 enabled in an SSL vhost.

So far I have not seen any crashes, and a test request was blocked by naxsi just fine:

2016/04/28 22:02:00 [error] 1463#1463: *891 NAXSI_FMT: ip=122.174.198.241&server=domain.net&uri=/index.php&learning=0&vers=0.55rc1&total_processed=1&total_blocked=1&block=1&cscore0=$XSS&score0=8&zone0=ARGS&id0=1302&var_name0=a, client: 122.174.198.241, server: domain.net, request: "GET /index.php?a=%3C%3E HTTP/2.0", host: "domain.net"

The initial comment clarified that the problem appeared only after a couple of minutes of nginx running under production load, and that synthetic tests did not reproduce it. This makes the issue difficult to debug.

@Promaethius : yes exactly, that's why I'm begging for some test cases.
@AnoopAlias : thanks, my initial tests seem to pass as well

@selivan : any chance you have more indications of what caused the crash?

@buixor: it was a long time ago; all that I saved is in this ticket. I can't test a new build under production load right now, but I'll see if I can run it for 5-10 minutes on one of the backends in a few days.

@buixor Hello, thanks again for your work. I tried it and naxsi still crashed. This is very easy to reproduce: for me, it happens every time I click the "connection" button on my WordPress site to go to the admin login page (default /wp-admin).

Configuration :

  • nginx version : 1.10.0;
  • naxsi version : 0.55rc1;
  • http/2 enabled;
  • Linux kernel with grsecurity.

My system log after the crash :

kernel: grsec: From 127.0.0.6: Segmentation fault occurred at 0000000000ad37e4 in /usr/bin/nginx[nginx:11381] uid/euid:33/33 gid/egid:33/33, parent /usr/bin/nginx[nginx:11379] uid/euid:0/0 gid/egid:0/0
kernel: grsec: From 127.0.0.6: bruteforce prevention initiated for the next 30 minutes or until service restarted, stalling each fork 30 seconds.  Please investigate the crash report for /usr/bin/nginx[nginx:11381] uid/euid:33/33 gid/egid:33/33, parent /usr/bin/nginx[nginx:11379] uid/euid:0/0 gid/egid:0/0
systemd-coredump[11419]: Process 11381 (nginx) of user 33 dumped core.

                                                        Stack trace of thread 11381:
                                                        #0  0x00000000004c5964 nx_find_wl_in_hash (nginx)
                                                        #1  0x00000000004c5e85 ngx_http_dummy_is_rule_whitelisted_n (nginx)
                                                        #2  0x00000000004c7599 ngx_http_apply_rulematch_v_n (nginx)
                                                        #3  0x00000000004c7ee2 ngx_http_basestr_ruleset_n (nginx)
                                                        #4  0x00000000004c82c5 ngx_http_dummy_headers_parse (nginx)
                                                        #5  0x00000000004c9980 ngx_http_dummy_data_parse (nginx)
                                                        #6  0x00000000004cbc9e n/a (nginx)
                                                        #7  0x00000000004563fc ngx_http_core_rewrite_phase (nginx)
                                                        #8  0x0000000000451895 ngx_http_core_run_phases (nginx)
                                                        #9  0x000000000045c973 ngx_http_process_request (nginx)
                                                        #10 0x00000000004890a6 n/a (nginx)
                                                        #11 0x0000000000489d76 n/a (nginx)
                                                        #12 0x0000000000489fbe n/a (nginx)
                                                        #13 0x0000000000488855 n/a (nginx)
                                                        #14 0x000000000043e750 ngx_event_process_posted (nginx)
                                                        #15 0x0000000000444921 n/a (nginx)
                                                        #16 0x0000000000443350 ngx_spawn_process (nginx)
                                                        #17 0x0000000000444c84 n/a (nginx)
                                                        #18 0x00000000004455a4 ngx_master_process_cycle (nginx)
                                                        #19 0x0000000000421838 main (nginx)
                                                        #20 0x0000031d74329710 __libc_start_main (libc.so.6)
                                                        #21 0x0000000000421d09 _start (nginx)

                                                        Stack trace of thread 11392:
                                                        #0  0x0000031d761333e8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0)
                                                        #1  0x00000000006f3095 n/a (nginx)
                                                        #2  0x00000000006efb3a n/a (nginx)
                                                        #3  0x000000000050b86e n/a (nginx)
                                                        #4  0x00000000006f4588 n/a (nginx)
                                                        #5  0x0000031d7612d424 start_thread (libpthread.so.0)
                                                        #6  0x0000031d743f0cbd __clone (libc.so.6)

Thanks :)

Any chance you could provide a dump of the http request?

@buixor Do you want the headers?

I have also had a crash. I don't know how to reproduce it yet.

Reading symbols from /usr/sbin/nginx...Reading symbols from /usr/lib/debug/.build-id/7c/6f589235091c270b13731c005969fc22f4c29d.debug...done.
done.
[New LWP 30858]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/i386-linux-gnu/libthread_db.so.1".
Core was generated by `nginx: worker process                   '.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x080f880e in nx_find_wl_in_hash (mstr=mstr@entry=0x8823150, cf=cf@entry=0x86ea644, zone=zone@entry=HEADERS) at /home/bbigras/nginx/nginx-1.9.15/debian/modules/naxsi/naxsi_src/naxsi_runtime.c:342
342         mstr->data[i] = tolower(mstr->data[i]);
(gdb) bt
#0  0x080f880e in nx_find_wl_in_hash (mstr=mstr@entry=0x8823150, cf=cf@entry=0x86ea644, zone=zone@entry=HEADERS) at /home/bbigras/nginx/nginx-1.9.15/debian/modules/naxsi/naxsi_src/naxsi_runtime.c:342
#1  0x080f8d5c in ngx_http_dummy_is_rule_whitelisted_n (req=req@entry=0x88225c8, cf=cf@entry=0x86ea644, r=r@entry=0x86c32a4, name=name@entry=0x8823150, zone=zone@entry=HEADERS,
    target_name=target_name@entry=0) at /home/bbigras/nginx/nginx-1.9.15/debian/modules/naxsi/naxsi_src/naxsi_runtime.c:629
#2  0x080fa5b7 in ngx_http_apply_rulematch_v_n (r=r@entry=0x86c32a4, ctx=ctx@entry=0x882330c, req=req@entry=0x88225c8, name=name@entry=0x8823150, value=value@entry=0x8823158, zone=zone@entry=HEADERS,
    nb_match=0, target_name=target_name@entry=0) at /home/bbigras/nginx/nginx-1.9.15/debian/modules/naxsi/naxsi_src/naxsi_runtime.c:1030
#3  0x080faf0c in ngx_http_basestr_ruleset_n (pool=0x88225a0, name=name@entry=0x8823150, value=value@entry=0x8823158, rules=0x86bd8c0, req=req@entry=0x88225c8, ctx=ctx@entry=0x882330c,
    zone=zone@entry=HEADERS) at /home/bbigras/nginx/nginx-1.9.15/debian/modules/naxsi/naxsi_src/naxsi_runtime.c:1397
#4  0x080fb2e3 in ngx_http_dummy_headers_parse (main_cf=main_cf@entry=0x85b8598, cf=cf@entry=0x86ea644, ctx=ctx@entry=0x882330c, r=r@entry=0x88225c8)
    at /home/bbigras/nginx/nginx-1.9.15/debian/modules/naxsi/naxsi_src/naxsi_runtime.c:2078
#5  0x080fcd98 in ngx_http_dummy_data_parse (ctx=ctx@entry=0x882330c, r=r@entry=0x88225c8) at /home/bbigras/nginx/nginx-1.9.15/debian/modules/naxsi/naxsi_src/naxsi_runtime.c:2102
#6  0x080fed3f in ngx_http_dummy_access_handler (r=0x88225c8) at /home/bbigras/nginx/nginx-1.9.15/debian/modules/naxsi/naxsi_src/naxsi_skeleton.c:1228
#7  0x080973c3 in ngx_http_core_rewrite_phase (r=0x88225c8, ph=0x879df94) at src/http/ngx_http_core_module.c:901
#8  0x08092e29 in ngx_http_core_run_phases (r=0x88225c8) at src/http/ngx_http_core_module.c:847
#9  0x080c62f8 in ngx_http_v2_process_request_body (r=0x88225c8, pos=pos@entry=0x87bf711 "{\"Nom\":\"M05021\",\"Type\":\"NonPeinte\"}\320b\r&=LtA\352\373$\343\261\005L\034\067\341Y\357Ր؂ڇ",
    size=size@entry=35, last=1) at src/http/v2/ngx_http_v2.c:3587
#10 0x080c7513 in ngx_http_v2_state_read_data (h2c=0x86a5720, pos=0x87bf711 "{\"Nom\":\"M05021\",\"Type\":\"NonPeinte\"}\320b\r&=LtA\352\373$\343\261\005L\034\067\341Y\357Ր؂ڇ",
    end=0x87bf734 "\320b\r&=LtA\352\373$\343\261\005L\034\067\341Y\357Ր؂ڇ") at src/http/v2/ngx_http_v2.c:922
#11 0x080c8a0a in ngx_http_v2_read_handler (rev=<optimized out>) at src/http/v2/ngx_http_v2.c:362
#12 0x0807f78c in ngx_event_process_posted (cycle=<optimized out>, cycle@entry=0x85b64b8, posted=<optimized out>) at src/event/ngx_event_posted.c:33
#13 0x0807f35a in ngx_process_events_and_timers (cycle=cycle@entry=0x85b64b8) at src/event/ngx_event.c:259
#14 0x08085976 in ngx_worker_process_cycle (cycle=0x85b64b8, data=0x0) at src/os/unix/ngx_process_cycle.c:753
#15 0x080843a6 in ngx_spawn_process (cycle=cycle@entry=0x85b64b8, proc=0x80858f0 <ngx_worker_process_cycle>, data=0x0, name=0x8171fbb "worker process", respawn=respawn@entry=2)
    at src/os/unix/ngx_process.c:198
#16 0x080869f2 in ngx_reap_children (cycle=0x85b64b8) at src/os/unix/ngx_process_cycle.c:621
#17 ngx_master_process_cycle (cycle=<optimized out>, cycle@entry=0x85b2498) at src/os/unix/ngx_process_cycle.c:174
#18 0x080633de in main (argc=3, argv=0xbf8d5dd4) at src/core/nginx.c:367

@rfnx : yes, if you can! It seems to be where the trouble happens :)

+1 - Looking forward to getting NAXSI deployed with HTTP2 in production.

Yep, really looking forward as well.

However, even assuming I find and fix the mentioned issue, you should all treat it with caution (and with no guarantees) until we have performed serious testing and can declare it free of obvious potential bypasses; http/2 is very tricky.

:)

@buixor the request :

Host: www.domain.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:44.0) Gecko/20100101 Firefox/44.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate, br
DNT: 1
Referer: https://www.domain.com/
Cookie: _pk_id.1.7821=31a0039524567e4f.454213867.255.14211887.1462687252.; _pk_ref.1.7821=%5B%22%22%2C%2C%2C%2C1462641772%2C%22https%3A%2C%22stats.domain.com%2Findex.php%3Fmodule%3DCoreHome%26action%3Dindex%26idSite%3D1%26period%3Dday%26date%3Dyesterday%22%5D; wp-settings-time-2=1467806457; wp-settings-1=unfold%3D1%26mfold%3Do%26post_dfw%3Doff%26posts_list_mode%3Dlist%26libraryContent%3Dbrowse%26editor%3Dtinymce%26hidetb%3D1; wp-settings-time-1=1467806457; wp-settings-2=post_dqs%3Ddsf; _pk_ses.1.7821=*
Connection: keep-alive

Hello,

thanks @rfnx !
From a first look, it seems that the headers are now stored (already lower-cased) in a read-only memory zone, hence the segfault when naxsi tries to lower-case them in place.

I will start patching the http2 branch; however, please don't consider it production ready ;p

@buixor thank YOU.

I'm leaving this one open to track progress on http2 support.

I am using it on this server, https://cyberguerrilla.info, with no problems.

so are we good to go? :)

So far it seems good, but I haven't had the time to properly fuzz http2 and naxsi, so it's even more "at your own risk" than before :D

The issue seems not to be fixed in 0.55rc2. I have a person complaining about this happening on his server, which I have no access to (so I can't get you any more details): https://support.sysally.net/boards/1/topics/3161 . But the important thing is that the issue does not seem to be fixed.

@AnoopAlias : the fix should be on the http2 branch; I haven't merged it into master yet :)
It will come in 0.56, I guess!

I was getting workers segfaulting 100% of the time. I applied the fix and the segfaults have stopped.

Hello,

For people playing around http2 / naxsi, have a look at : #294

Nginx 1.11.1 with ssl http2, or just with ssl, throws a segfault.

How to get rid of it?

Change to:
#SecRulesEnabled;
SecRulesDisabled;

Which is really not an option. So it seems naxsi fails to work with ANY SSL connection (not sure if disabling http2 when compiling nginx would help).

Anyway, if naxsi cannot work in SSL mode while we're all turning to SSL, it's a disaster...

Are you saying that a base nginx 1.11.1, or an nginx 1.11.1 compiled with naxsi, throws a segfault? And are you using the http2 naxsi branch?

Nginx 1.11.1 compiled with naxsi throws a segfault. I compiled it myself using the default compilation options for Debian Jessie plus the naxsi module.

OK. At what point does it segfault? What is it doing? And are you using the http2 branch for naxsi?

I'm sorry, what do you mean by http2 branch? I use Nginx 1.11.1 with http2 enabled in an ssl http2 setup.

About when it happens:
It is a server block with listen xxx ssl http2 and naxsi enabled via SecRulesEnabled.
The site is WordPress. When I browse the website, there is no trouble at all. But when I log in and then try to log out, the connection terminates and I get a segfault.
If I try to add a post, a few connections to the WordPress editor scripts are terminated (also a segfault).

This happens both with and without wordpress.rules.

That is similar to most of the problems people have posted here. buixor came up with a version of naxsi that, for the time being, works with nginx. The issue had to do with naxsi rewriting a read-only memory location holding the headers. You can find that code under the http2 branch.

Thank you!!! Compiling to test :)

It has worked for a lot of folks, including me, but be aware that it is highly experimental. And be sure to read issue #294, which buixor mentions above.

Works!!! You're savior!!!! 👍 Thanks a lot!!!

#294 should be fixed now; waiting for feedback before closing the issue :)

Hi,

We're still getting segfaults here. The specific request that reliably breaks nginx is this:

curl 'https://example.com/' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36' --compressed -vso/dev/null -H "cookie: __atuvc=1%7C27%2C0%7C28%2C0%7C29%2C"

This last cookie header is something sent by Chrome. There are other headers, but this is the one that triggers it.

We merged naxsi's master and http2 branches and are testing on our staging server with no traffic. I'll try to upload a core dump somewhere for you.

Thanks!

Here are the relevant parts:

Program received signal SIGSEGV, Segmentation fault.
0x00000000004fca04 in nx_find_wl_in_hash (mstr=0x7ffeb547e6a0, cf=0x39bfa28, zone=HEADERS)
    at /usr/src/debug/nginx-1.10.1/naxsi/naxsi_src/naxsi_runtime.c:354
354     mstr->data[i] = tolower(mstr->data[i]);
(gdb) bt
#0  0x00000000004fca04 in nx_find_wl_in_hash (mstr=0x7ffeb547e6a0, cf=0x39bfa28, zone=HEADERS)
    at /usr/src/debug/nginx-1.10.1/naxsi/naxsi_src/naxsi_runtime.c:354
#1  0x00000000004fce77 in ngx_http_dummy_is_rule_whitelisted_n (req=0x4987060, cf=0x39bfa28, r=0x4f6f9a0,
    name=0x7ffeb547e6a0, zone=HEADERS, target_name=0)
    at /usr/src/debug/nginx-1.10.1/naxsi/naxsi_src/naxsi_runtime.c:641
#2  0x00000000004fd0ab in ngx_http_apply_rulematch_v_n (r=0x4f6f9a0, ctx=0x4f5a2b0, req=0x4987060,
    name=0x7ffeb547e6a0, value=0x4987cd8, zone=HEADERS, nb_match=1, target_name=0)
    at /usr/src/debug/nginx-1.10.1/naxsi/naxsi_src/naxsi_runtime.c:1042
#3  0x00000000004fddb2 in ngx_http_basestr_ruleset_n (pool=<value optimized out>, name=0x7ffeb547e6a0,
    value=0x4987cd8, rules=0x4ec2190, req=0x4987060, ctx=0x4f5a2b0, zone=HEADERS)
    at /usr/src/debug/nginx-1.10.1/naxsi/naxsi_src/naxsi_runtime.c:1455
#4  0x00000000004fdf5a in ngx_http_dummy_headers_parse (main_cf=0x51d9ed0, cf=0x39bfa28, ctx=0x4f5a2b0, r=0x4987060)
    at /usr/src/debug/nginx-1.10.1/naxsi/naxsi_src/naxsi_runtime.c:2140
#5  0x00000000004ffa06 in ngx_http_dummy_data_parse (ctx=0x4f5a2b0, r=0x4987060)
    at /usr/src/debug/nginx-1.10.1/naxsi/naxsi_src/naxsi_runtime.c:2164
#6  0x0000000000503554 in ngx_http_dummy_access_handler (r=0x4987060)
    at /usr/src/debug/nginx-1.10.1/naxsi/naxsi_src/naxsi_skeleton.c:1176
#7  0x0000000000484f72 in ngx_http_core_rewrite_phase (r=0x4987060, ph=0x45831b8)
    at src/http/ngx_http_core_module.c:901
#8  0x0000000000481bfd in ngx_http_core_run_phases (r=0x4987060) at src/http/ngx_http_core_module.c:847

By the way, it has something to do with cookie escaping.

Cookies like 'cookie: bla=%20', 'cookie: bla=', etc. trigger the segfault. Other headers don't.

I'm using the standard Naxsi rules.

Hello @marcelomd ,

I didn't manage to reproduce the bug.

Can you confirm you can reproduce it like this? (from the naxsi_src directory)

make clean all && /tmp/nginx/objs/nginx && curl -k 'https://127.0.0.1:4242/' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36' --compressed -vso/dev/null -H "cookie: __atuvc=1%7C27%2C0%7C28%2C0%7C29%2C"

Thanks :)

Hey @buixor

The config used in this test triggers the segfault on our staging servers, but not in a VM, even with our full nginx build.

I'm trying to pinpoint exactly what the difference is. Hold on.

Thanks!

Nope, I was wrong. Nginx segfaults with your test.

My local curl/libcurl in the dev VM did not support HTTP/2, so it was falling back to HTTP/1, which works.

After updating curl, nginx segfaults =\

Ah, probably I'm the dumb one, my curl was probably not up to date etc.
I will do some more testing and come back to you soon :)

Hehehe Happens to me all the time.

I submitted a PR with a fix. Not beautiful, but I spent the day testing and it seems to work.

I should mention, for anyone willing to try: the above-mentioned fix implies a performance hit (which I did not have time to measure), as we allocate and copy strings for every match in the header zone. Fair warning =)

Hey folks...

I'm having the very same segfault issue in my dev environment using the http2 naxsi branch. The most intriguing part is that the issue only happens when I'm using Firefox (latest version)... I've tested the very same call using Chrome / IE and even Edge (all latest versions as well) and they all work fine!
As soon as I disable http2 on Firefox the call works as expected.
The call that triggers the segfault is a response from ADFS containing a very large token in the SAMLResponse variable.

I don't know whether this is a poor http2 implementation in Firefox or a Naxsi issue.

Does anyone have any clue about this?

This segfault is triggered when Naxsi checks if a matching rule is whitelisted (seems to be specific to the header zone). So probably this specific ADFS header is matching some rule.

My guess is that if Naxsi blocks this request with HTTP 1 (you may need to lower scores), you are seeing this issue.

What if you issue the same request with another browser? With curl?

Hey @marcelomd, thanks for the quick reply.

I just (30 seconds ago) figured out that Chrome removed support for NPN and only accepts ALPN now. In sum, Chrome is falling back to HTTP/1.1 because ALPN is only available with OpenSSL 1.0.2+, which almost none of the upstream distros ship yet, except Ubuntu 16.04. That's why it works on Chrome / IE / Edge: they fall back to HTTP/1.1, and Firefox does not.

I have naxsi in learning mode and no whitelists set.

If you use a Debian-based distro, you can use OpenSSL 1.0.2 easily with nginx without installing it system-wide. Uncompress OpenSSL 1.0.2h somewhere but don't build it (you'll build it along with nginx via ./configure --with-openssl). It's easy since we only rebuild the nginx package with an extra configure flag. You need deb-src lines in your sources to fetch the source with apt-get source.

Warning: you'll need to track OpenSSL's security updates, and update and rebuild if you want to stay secure.

export DEBFULLNAME='My Name'
export DEBEMAIL='my email'

# to have dch and debuild
apt-get install devscripts

# install the build dependencies
apt-get build-dep nginx

apt-get source nginx
cd nginx-1.11.3

# add --with-openssl=/opt/openssl-1.0.2h \ to COMMON_CONFIGURE_ARGS in debian/rules,
# then increment the package version number:
dch -i
# build without signing (you can sign it if you have a gpg key matching your email):
debuild -i -us -uc -b
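For reference, the debian/rules edit mentioned above amounts to adding one line to the configure flags. This is an illustrative fragment only; the exact list of surrounding flags differs per distro release:

```
# debian/rules (illustrative fragment)
COMMON_CONFIGURE_ARGS := \
        --prefix=/usr/share/nginx \
        --conf-path=/etc/nginx/nginx.conf \
        --with-openssl=/opt/openssl-1.0.2h \
        --with-http_v2_module
```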

Just my two cents: http2 support in nginx is still buggy. I had to disable it because it would stop serving sites altogether (just blank pages) even without naxsi. Then I read this:
https://blog.crashed.org/fixing-nginx-bugs-with-nghttp2/

:{

Hello everybody,

Can you please collect core dumps of those segfaults for me?
Don't expect any update on this next week, I will be chilling in the sun :/
However, I plan to have some time for this when I come back.
In the meantime, people can use @marcelomd's MR #309 (branch http2), which should address the issue.

Cheers,

I'll just fall back to http/1.1 for now and go back to the naxsi master branch. Once we have a fully supported platform for http2, I'll get back to it. Thanks everyone.

Hello,

Any crashes after applying the #309 patch?
If the solution seems OK, I'll look at how to do it without the memory copy.

cheers :)

Hey,

We've been running #309 in production with a few clients for a couple of weeks. Not a single error =)

Thanks a lot!

Sorry if I'm asking an absolutely absurd or dumb question, but is it possible to have the ssl module compiled as a dynamic module?
I'd bet the answer is a big and sound no... just wondering.
That would let me have openssl 1.0.2 with ALPN support + HTTP2 without having to recompile the whole nginx.

@danlsgiga NGiNX-wise, no, not that I am aware of.
For Naxsi I have no idea; the module itself can be added as a dynamic module.
If I may suggest, use LibreSSL as the openssl library... but that's just what I did

I think I read that even if you want it as a dynamic module, you still have to build it at the same time as nginx.

@BrunoQC true, most debian-based distros should have them

The SSL module can be loaded dynamically (it was in the first batch of dynamic modules, here). Plus, Nginx uses the system's openssl, so you can play around with that.
I'm not sure about the HTTP/2 module.
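For illustration, modules built as dynamic are loaded with load_module at the top of nginx.conf. This is a hedged sketch; the geoip module is just an example of one that supports dynamic builds, and the path depends on your --modules-path:

```
# nginx.conf, top level (illustrative module path)
load_module modules/ngx_http_geoip_module.so;
```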

Ok, so from my understanding: if I take an Ubuntu 16.04 box, compile nginx with the ssl module as a dynamic module, and drop only that module into a CentOS 7 installation... would that work?

@marcelomd the stream module is used for proxying, if I'm correct.
I may be mistaken... https://nginx.org/en/docs/stream/ngx_stream_ssl_module.html

I compiled an nginx with dynamic modules; without stream I can still use SSL.

@danlsgiga I have no idea, I compile nginx for debian-based distros.
There might be differences in version dependencies for the files; you can see them via ldd [filename.so].
So it might crash there.

@combro2k Duh... my mistake =)

What is the status of this?

+1 for the good work here <3

Hello !

It seems "stable", and it will get merged for the next release.
We do need to do some extra testing though, to ensure that http2 doesn't open any potential partial/complete bypasses (which I didn't extensively test).

If anyone is up for help & feedback, please ping me :)

We haven't played with Naxsi for a while as it has been stable for months. #309 doesn't seem to degrade performance as much as I expected.

Anyway, we're always willing to help. =)

The nginx-upstream-fair module itself triggers a segfault on 1.11.8+

@viktor-zhuromskyy: hello, can you provide more info on that crash? :)

I've compiled nginx 1.11.12 + naxsi (http2 branch) + openssl 1.1.0e, and after around 3 hours of troubleshooting why Chrome breaks when I enable http2 on the listen directive, I've found that one MainRule is causing the whole http2 implementation to fail.

MainRule "str:0x" "msg:0x, possible hex encoding" "mz:BODY|URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:2" id:1002;

Once I commented out this rule in core.rules, everything started working just fine. I have no idea why.

P.S.: I'm also using proxy_protocol on the listen directive, but it doesn't seem to be the root cause.

UPDATE: Another finding... it looks like $HEADERS_VAR:Cookie is the piece causing the whole issue. Even if I change Cookie to another string it works; only with Cookie does http2 crash.

I have the same problem with nginx 1.10.3 + naxsi 0.55.3 when http2 is enabled.
Will this be fixed?

I actually had to remove all the $HEADERS_VAR:Cookie entries from all MainRules in the core.rules file. For some reason, having them with HTTP2 causes the segfaults. I removed them and have had no issues for 2 days.

I know that removing it is not ideal, but having HttpOnly; Secure in the headers mitigates most cookie attacks, so I'll leave it out for now.

Does anyone have more insight into the risks and implications of leaving that out of the core rules?
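Instead of deleting the rule globally, one option (an untested sketch using naxsi's whitelist syntax; whether it avoids the crash depends on where the faulty write happens, so treat it as a mitigation to test, not a guaranteed fix) is to keep id 1002 intact in core.rules and whitelist only its cookie match in the affected locations:

```
# inside a location block (illustrative)
# disable the 0x hex-encoding check for the cookie header only,
# keeping it active for BODY, URL and ARGS
BasicRule wl:1002 "mz:$HEADERS_VAR:cookie";
```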

Hello,

@danlsgiga / @Louvremaster : did this happen with the http2 branch ?

Thanks !

ps: sorry for delay, I was kinda busy, http2 branch is going for merge in next major

Hey @buixor, yeah, http2 branch and the latest nginx release.

That is bad news :( I'm going to look at it when I come back from holidays, end of April.

Hello,
I have the same problem: nginx 1.10.3, naxsi 0.55.2 and http2 enabled :|

@danlsgiga @ManuelRighi Did you guys try PR #309 on top of the http2 branch?

We've been running HTTP/2 + Naxsi for months with no errors.

The segfaults were caused by Naxsi trying to modify the header part of the request, which lives in read-only memory. See previous posts in this thread.
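The read-only header issue can be illustrated in isolation with a toy percent-unescaper. This is a hedged sketch; the function names and logic here are mine, not naxsi's, and only mirror the general shape of the bug and fix:

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical illustration, NOT naxsi code: percent-unescaping a string
 * in place writes through the input pointer. That is fine when the caller
 * owns a writable buffer, but crashes when the pointer refers to
 * read-only memory, as header data can with HTTP/2. */
static void unescape_inplace(char *s)
{
    char *w = s;
    while (*s) {
        if (s[0] == '%' && isxdigit((unsigned char)s[1])
                        && isxdigit((unsigned char)s[2])) {
            char hex[3] = { s[1], s[2], '\0' };
            *w++ = (char)strtol(hex, NULL, 16);  /* decode %XX */
            s += 3;
        } else {
            *w++ = *s++;  /* pass other bytes through */
        }
    }
    *w = '\0';
}

/* The approach taken by the fix: duplicate the value first and modify
 * the copy, at the cost of an allocation and a copy per match. */
static char *unescape_copy(const char *s)
{
    char *dup = malloc(strlen(s) + 1);
    if (dup != NULL) {
        strcpy(dup, s);
        unescape_inplace(dup);
    }
    return dup;
}
```

Calling unescape_inplace() on memory mapped read-only (e.g. a string literal on most platforms) faults the same way the backtraces above do, while the copy-first variant never writes to the original buffer.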

Unless these are new segfaults, #309 should solve them =)

Thanks @marcelomd ... I try http2 branch ;)

Eh, I guess I should really merge this one into the http2 branch. @marcelomd: no updates to make on this one?

@buixor Nothing new. Works beautifully for us as it is. Go ahead =)

@buixor No problem at all with http2 branch plus MR #309, would be nice to have it in master. 👍

Hi @marcelomd, yeah, I was just using the http2 branch without the suggested PR #309. @buixor it would be nice to have that merged into http2, since I have a job that builds nginx automatically from that branch.
Thanks for the patches!!

Merged into http2, which itself should make its way to master for 0.56 :)

Beautiful! Thanks @buixor