reactor / reactor-netty

TCP/HTTP/UDP/QUIC client/server with Reactor over Netty

Home Page: https://projectreactor.io


Connection: close header causes TCP RST over https

mchristiansen opened this issue · comments

When a TLS connection is not kept alive, reactor-netty adds the ChannelFutureListener.CLOSE listener. This calls Channel.close(), which immediately closes both the outbound and inbound pipes, since the channel is a DuplexChannel. Because the connection is TLS, the client attempts to respond to the server's close_notify with its own close_notify, but the server's inbound pipe is already closed, so the client receives a TCP RST.

This is an issue for me because many of our healthchecks do not re-use connections (either HTTP 1.0 or the Connection: close header set), so we see a consistent flood of target TCP resets in our load balancer metrics.

Expected Behavior

The TCP connection should be terminated without a RST from the server

Actual Behavior

The server winds up sending a TCP RST to the client

(Screenshot from 2023-06-20: packet capture showing the TCP RST)

Steps to Reproduce

Run the reactor-netty server:

import io.netty.handler.ssl.util.SelfSignedCertificate;
import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.netty.DisposableServer;
import reactor.netty.http.Http11SslContextSpec;
import reactor.netty.http.server.HttpServer;

public class ReactorNettyMain {

  public static void main(final String[] args) throws Exception {

    SelfSignedCertificate ssc = new SelfSignedCertificate();

    DisposableServer server =
        HttpServer.create()
            .port(8443)
            .secure(spec -> spec.sslContext(Http11SslContextSpec.forServer(ssc.key(), ssc.cert())))
            .route(
                routes ->
                    routes.get(
                        "/ping",
                        (request, response) ->
                            response.sendString(
                                Mono.delay(Duration.ofMillis(200))
                                    .map(l -> "pong")
                                    .log("http-server"))))
            .bindNow();
    server.onDispose().block();
  }
}

Hit the server with a request via curl:

curl -k --header 'Connection: close' https://localhost:8443/ping

Watch the traffic using tcpdump and observe the RST.
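For example, a capture filter like the following shows only segments with the RST flag set on port 8443 (assuming macOS, where the loopback interface is `lo0`; on Linux use `lo`):

```shell
# Show only TCP segments with the RST flag set, to/from port 8443
sudo tcpdump -i lo0 'tcp port 8443 and (tcp[tcpflags] & tcp-rst != 0)'
```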

Possible Solution

Only the outbound side of the connection should be closed immediately by the server; it should then keep reading inbound until EOF, and only then fully close the channel (a TCP half-close).
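As a rough sketch of that idea (this is not reactor-netty's actual code): Netty's half-close API is `DuplexChannel.shutdownOutput()`, and for the inbound side to stay readable after the peer half-closes, the channel would also need `ChannelOption.ALLOW_HALF_CLOSURE` set. A hypothetical replacement for `ChannelFutureListener.CLOSE` could look like:

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.socket.DuplexChannel;

public final class HalfCloseOnComplete {

  // Hypothetical listener: shut down only the outbound direction so the
  // peer's close_notify can still be read. The channel is fully closed
  // later, once the peer closes its side (inbound EOF / channelInactive).
  static final ChannelFutureListener HALF_CLOSE =
      future -> {
        Channel ch = future.channel();
        if (ch instanceof DuplexChannel) {
          ((DuplexChannel) ch).shutdownOutput();
        }
        else {
          // Non-duplex channels: fall back to a full close
          ch.close();
        }
      };
}
```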

Your Environment

  • Reactor version(s) used: 1.1.7
  • Other relevant libraries versions (eg. netty, ...):
  • JVM version (java -version): openjdk 17.0.4
  • OS and version (eg. uname -a): Ventura 13.4

@mchristiansen Can you try configuring the close_notify read timeout and tell us whether you still see this behaviour?
Please take a look at the documentation (https://projectreactor.io/docs/netty/snapshot/reference/index.html#http-server-ssl-tls-timeout) for an example of how to set this configuration.
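Following the linked documentation, the configuration looks roughly like this, adapted to the reproduction server above (the 3-second timeout values are illustrative, not a recommendation):

```java
import java.time.Duration;

import io.netty.handler.ssl.util.SelfSignedCertificate;
import reactor.netty.http.Http11SslContextSpec;
import reactor.netty.http.server.HttpServer;

// Sketch: give the client up to 3 seconds to send its close_notify
// before the connection is fully closed.
SelfSignedCertificate ssc = new SelfSignedCertificate();
HttpServer.create()
    .port(8443)
    .secure(spec ->
        spec.sslContext(Http11SslContextSpec.forServer(ssc.key(), ssc.cert()))
            .closeNotifyFlushTimeout(Duration.ofSeconds(3))
            .closeNotifyReadTimeout(Duration.ofSeconds(3)));
```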

@violetagg I updated the close_notify read timeout as you suggested, and it does indeed fix the issue. Thank you so much!

I'm curious about the default being 0ms. The TLS RFC does say that the initiator of the close is not required to wait for the responding close_notify, but it does require the other party to respond, so the behavior I saw resulting in a TCP RST would occur with any compliant client under the default setting. Is there some disadvantage I'm not thinking of in changing this to something > 0?

@mchristiansen With a close_notify read timeout > 0, connections will stay open longer when clients do not respond immediately. Whether the additional open connections affect the system where the server is located depends on the use case.