reactor / reactor-netty

TCP/HTTP/UDP/QUIC client/server with Reactor over Netty

Home Page: https://projectreactor.io


reactor.netty.http.server.tls.handshake.time metrics missing

jtorkkel opened this issue · comments

Current situation:

The following TLS metrics are available:

  • TCP server: reactor.netty.tcp.server.tls.handshake.time
  • HTTP client: reactor.netty.http.client.tls.handshake.time
  • TCP client: reactor.netty.tcp.client.address.resolver

There is no HTTP server metric for the TLS handshake, although it should be available in ChannelMetricsRecorder.

Desired situation:

The reactor.netty.http.server.tls.handshake.time metric is available.

Environment:

  • SpringBoot 3.1.4
  • Windows, Linux
  • Reactor Netty 1.1.11

@jtorkkel This is intentional. PTAL #1075

Thanks, but I fail to understand why reactor.netty.http.server.tls.handshake.time was removed as well.
Why not just remove REMOTE_ADDR from it instead?

The same REMOTE_ADDRESS issue exists in reactor.netty.tcp.server.tls.handshake.time

Hi, I did not fully understand the comment.

I would be happy just to get TLS session counts and summed timings to see the socket creation activity. Too many would hint at an issue in the caller's connection pool handling or in an intermediate proxy/load balancer.

Current metrics:

  • reactor_netty_http_server_connections_total
  • reactor_netty_http_server_connections_active

do not show the number of opened and closed connections, just the current count.

For the HTTP client, connection counts can be seen from:

  • reactor.netty.http.client.connect.time (all)
  • reactor.netty.http.client.tls.handshake.time (TLS only)

Often calls come from outside our monitoring system, so similar counters would be valuable on the server side.

So I think the metric removal was unnecessary; it would have been enough to remove just remote_address.

In addition, it would be valuable to add:

  • reactor_netty_http_server_connections_opened (counter, no timing)

to also count non-TLS connections created.

@jtorkkel

Do not show the number of opened and closed connections, just the current count.

It was decided that monitoring the currently opened/active connections on the server is beneficial for most use cases. Using the monitoring system's capabilities, one can track how those values change over time.
We do provide an API for use cases where different metrics are needed: a custom implementation of reactor.netty.http.server.HttpServerMetricsRecorder can be provided.

reactor.netty.http.server.HttpServerMetricsRecorder#recordServerConnectionActive
reactor.netty.http.server.HttpServerMetricsRecorder#recordServerConnectionInactive
reactor.netty.http.server.HttpServerMetricsRecorder#recordServerConnectionOpened
reactor.netty.http.server.HttpServerMetricsRecorder#recordServerConnectionClosed
reactor.netty.http.server.HttpServerMetricsRecorder#recordStreamOpened
reactor.netty.http.server.HttpServerMetricsRecorder#recordStreamClosed
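For example, a custom recorder could keep cumulative open/close counters in the connection callbacks listed above. A minimal sketch of just the counting logic (the class name is hypothetical, and the wiring into HttpServerMetricsRecorder and Micrometer is omitted):

```java
import java.net.SocketAddress;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch: cumulative counters a custom recorder could maintain
// from recordServerConnectionOpened / recordServerConnectionClosed.
class ConnectionCounters {
    final LongAdder opened = new LongAdder();
    final LongAdder closed = new LongAdder();

    void recordServerConnectionOpened(SocketAddress localAddress) {
        opened.increment(); // monotonic: total connections ever opened
    }

    void recordServerConnectionClosed(SocketAddress localAddress) {
        closed.increment(); // monotonic: total connections ever closed
    }

    long currentlyOpen() {
        return opened.sum() - closed.sum();
    }
}
```

Exposing `opened` and `closed` as monotonic counters gives exactly the "connections opened" metric requested above, alongside the current count.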

So I think the metric removal was unnecessary; it would have been enough to remove just remote_address.

It was decided to remove the whole metric because, without the remote address, the information is not meaningful for most use cases. We do provide an API for use cases where a TLS timing metric is needed: a custom implementation of reactor.netty.http.server.HttpServerMetricsRecorder can be provided.

reactor.netty.http.server.HttpServerMetricsRecorder#recordTlsHandshakeTime

Details and an example of how a custom reactor.netty.http.server.HttpServerMetricsRecorder can be provided are available in the reference documentation: https://projectreactor.io/docs/netty/release/reference/index.html#_metrics_4

Hi,

I think proper customization is not possible due to the use of final in some classes.

I created my own MicrometerHttpServerMetricsRecorder by forking reactor.netty.http.server.MicrometerHttpServerMetricsRecorder

and changing

    @Override
    public void incrementErrorsCount(SocketAddress remoteAddress) {
        // noop
    }

to a method that also removes the port:

    @Override
    public void recordTlsHandshakeTime(SocketAddress remoteAddress, Duration time, String status) {
        String address = reactor.netty.Metrics.formatSocketAddress(remoteAddress);
        address = address.split(":")[0];  // remove the port; works only for IPv4
        Timer timer = getTlsHandshakeTimer(name + TLS_HANDSHAKE_TIME, address, status);
        if (timer != null) {
            timer.record(time);
        }
    }
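As a side note on the port stripping: splitting on ':' truncates IPv6 addresses such as 0:0:0:0:0:0:0:1:41962 down to "0". If the SocketAddress is an InetSocketAddress, getHostString() returns the host part without the port for both IPv4 and IPv6, and without a reverse DNS lookup. A sketch (helper and class names hypothetical):

```java
import java.net.InetSocketAddress;
import java.net.SocketAddress;

// Hypothetical helper: strips the port without string splitting,
// so it also handles IPv6 remote addresses correctly.
final class AddressFormat {
    static String hostOnly(SocketAddress remoteAddress) {
        if (remoteAddress instanceof InetSocketAddress) {
            // getHostString() never contains the port and avoids DNS resolution
            return ((InetSocketAddress) remoteAddress).getHostString();
        }
        return String.valueOf(remoteAddress);
    }
}
```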

Now the TLS metrics are recorded, but remote_address still contains the source IP address and port.

reactor_netty_http_server_tls_handshake_time_seconds_count{remote_address="0:0:0:0:0:0:0:1:41962",status="SUCCESS",} 1.0
reactor_netty_http_server_tls_handshake_time_seconds_sum{remote_address="0:0:0:0:0:0:0:1:41962",status="SUCCESS",} 0.0226016
reactor_netty_http_server_tls_handshake_time_seconds_count{remote_address="127.0.0.1:41793",status="SUCCESS",} 1.0
reactor_netty_http_server_tls_handshake_time_seconds_sum{remote_address="127.0.0.1:41793",status="SUCCESS",} 0.0248838
reactor_netty_http_server_tls_handshake_time_seconds_count{remote_address="127.0.0.1:41831",status="SUCCESS",} 1.0
reactor_netty_http_server_tls_handshake_time_seconds_sum{remote_address="127.0.0.1:41831",status="SUCCESS",} 0.0264458

I noticed that the public void recordTlsHandshakeTime(SocketAddress remoteAddress, Duration time, String status) method I added is not called at all; it is only used when registering the exposed metrics.

TLS recording happens in MicrometerChannelMetricsRecorder.java, in the method public final Timer getTlsHandshakeTimer(String name, String address, String status), which is called directly from ReactorNettyTimerObservationHandler#onStop; the corresponding onStart just starts a timer and puts it into the context. Here, too, public void recordTlsHandshakeTime(SocketAddress remoteAddress, Duration time, String status) is not called.

This means I cannot customize away the remote_address by passing a fixed address without the port, because the method is not used.

I can only use a MeterFilter to filter out remote_address, but that is very expensive, as it is called for all metrics!

import static java.util.stream.Collectors.toList;

import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.config.MeterFilter;

public class ReactorNettyServerTLSFilter implements MeterFilter {

    private static final String REACTOR_NETTY_HTTP_SERVER_TLS = "reactor.netty.http.server.tls.handshake";

    @Override
    public Meter.Id map(Meter.Id id) {
        String metricName = id.getName();
        if (metricName.startsWith(REACTOR_NETTY_HTTP_SERVER_TLS)) {
            return id.replaceTags(
                    id.getTags()
                            .stream()
                            .filter(tag -> !"remote.address".equalsIgnoreCase(tag.getKey()))
                            .collect(toList())
            );
        }
        return id;
    }
}

Let me check this one

Just to clarify: you do not use the package reactor.netty.http.server and the class name MicrometerHttpServerMetricsRecorder for your custom implementation, do you?

I copied and renamed MicrometerHttpServerMetricsRecorder to CustomMicrometerHttpServerMetricsRecorder and made the following changes:

// import from same package instead of importing from reactor.netty
import static com.example.example.CustomHttpServerMeters.CONNECTIONS_ACTIVE;
import static com.example.example.CustomHttpServerMeters.STREAMS_ACTIVE;

class CustomMicrometerHttpServerMetricsRecorder extends MicrometerHttpMetricsRecorder implements HttpServerMetricsRecorder {

    final String name;
    final String protocol;

    public CustomMicrometerHttpServerMetricsRecorder() {
        super(HTTP_SERVER_PREFIX, PROTOCOL_VALUE_HTTP);
        this.name = HTTP_SERVER_PREFIX;
        this.protocol = PROTOCOL_VALUE_HTTP;
    }

    @Override
    public void recordTlsHandshakeTime(SocketAddress remoteAddress, Duration time, String status) {
        String address = reactor.netty.Metrics.formatSocketAddress(remoteAddress);
        address = address.split(":")[0];  // remove the port; works only for IPv4
        Timer timer = getTlsHandshakeTimer(name + TLS_HANDSHAKE_TIME, address, status);
        if (timer != null) {
            timer.record(time);
        }
    }
}

I also copied the HttpServerMeters enum to resolve the references to CONNECTIONS_ACTIVE and STREAMS_ACTIVE, as an alternative to importing it.

Then I activated it:

@Configuration
public class NettyServerMetricsCustomizer implements WebServerFactoryCustomizer<NettyReactiveWebServerFactory> {

    @Override
    public void customize(NettyReactiveWebServerFactory factory) {
        factory.addServerCustomizers(server ->
                server.metrics(true, CustomMicrometerHttpServerMetricsRecorder::new));
    }
}
@jtorkkel I created an example application and I do not see any issue: https://github.com/violetagg/GH-2940

Could you please try it? You need to set up a key store in the application.properties file; then, when you start the application, request curl https://localhost:8080/actuator/metrics/com.example.gh0.tls.handshake.time -v -k. You should see the corresponding metric.

wait I think I see the issue ...

I added two additional commits to the example. Now it should be OK.

Thanks, I tried your code. I can see that TLS can now be customized and the port can be removed.

com_example_gh0_tls_handshake_time_seconds_count{remote_address="127.0.0.1",status="SUCCESS",} 29.0

However, I noticed that connections_active was not under "com.example.gh0*" but under reactor_netty_http_server_connections_active; apparently the metric name cannot be given in the override, but that should not matter when com.example.gh0 --> reactor.netty.http.server.

And I cannot see the *connections_total counters; their implementation was in MicrometerChannelMetricsRecorder but is now not called. If I copy them, they become visible too, under the name reactor_netty_http_server_connections_total.

But they count incorrectly, and the values go more and more negative:

com_example_gh0_connections_total{local_address="127.0.0.1:8081",uri="http",} -2.0
com_example_gh0_connections_total{local_address="0:0:0:0:0:0:0:1:8081",uri="http",} -2.0

reactor_netty_http_server_connections_active{local_address="127.0.0.1:8081",uri="http",} 1.0
reactor_netty_http_server_connections_active{local_address="0:0:0:0:0:0:0:1:8081",uri="http",} 1.0

And they still report a cumulative count across all local_address values due to the shared LongAdder, but you are looking at that separately in #2954.

@jtorkkel Please take a look at the latest commit from the example app repository

However, I noticed that connections_active was not under "com.example.gh0*" but under reactor_netty_http_server_connections_active; apparently the metric name cannot be given in the override, but that should not matter when com.example.gh0 --> reactor.netty.http.server.

You can give the Gauge whatever name you want.

The last comments seem to be questions that would be better suited to Gitter or Stack Overflow. As mentioned in README, we prefer to use GitHub issues only for bugs and enhancements. Feel free to update this issue with a link to the re-posted question (so that other people can find it) or add some more details if you feel there is an issue with Reactor Netty functionality.

Thanks,

I still see that the total metric is not working correctly; luckily I was able to repro it.
The issue seems to happen with the standard metrics too:

	@Override
	public void customize(NettyReactiveWebServerFactory factory) {
		factory.addServerCustomizers(server -> server.metrics(true, uri -> uri));
	}
reactor_netty_http_server_connections_total{local_address="127.0.0.1:8081",uri="http",} -22.0
reactor_netty_http_server_connections_total{local_address="0:0:0:0:0:0:0:1:8081",uri="http",} -22.0
{
  "name": "reactor.netty.http.server.connections.total",
  "measurements": [
    { "statistic": "VALUE", "value": -44 }
  ],
  "availableTags": [
    { "tag": "uri", "values": [ "http" ] },
    { "tag": "local.address", "values": [ "127.0.0.1:8081", "0:0:0:0:0:0:0:1:8081" ] }
  ]
}

I enabled all Netty logs (logging.level.reactor.netty=TRACE) and noticed that something tried to open a non-TLS connection to the TLS port. It turned out I had a browser window open that tried to reach the Prometheus endpoint without TLS.
The same can be triggered by calling an actuator endpoint without TLS.

I added logging to recordTlsHandshakeTime, recordServerConnectionOpened, recordServerConnectionClosed, recordServerConnectionActive and recordServerConnectionInactive.

When TLS fails, it can be seen that Opened is not called at all, but Closed is called.
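That asymmetry is enough to drive the total negative: each failed handshake records a Closed without a matching Opened. A minimal illustration (class and method names hypothetical):

```java
import java.util.concurrent.atomic.LongAdder;

// Illustration: a LongAdder-backed "connections total" value drifts negative
// when Closed fires for connections whose Opened was never recorded.
class TotalConnectionsGauge {
    final LongAdder total = new LongAdder();

    void onServerConnectionOpened() { total.increment(); }
    void onServerConnectionClosed() { total.decrement(); }
}
```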

2023-11-02T11:44:53.628+02:00 DEBUG 11588 --- [ctor-http-nio-5] reactor.netty.tcp.SslProvider            : [abd66895, L:/[0:0:0:0:0:0:0:1]:8081 - R:/[0:0:0:0:0:0:0:1]:49537] SSL enabled using engine SSLEngine[hostname=null, port=-1, Session(1698918293627|SSL_NULL_WITH_NULL_NULL)]
2023-11-02T11:44:53.628+02:00 DEBUG 11588 --- [ctor-http-nio-5] r.n.http.server.HttpServerOperations     : [abd66895, L:/[0:0:0:0:0:0:0:1]:8081 - R:/[0:0:0:0:0:0:0:1]:49537] New http connection, requesting read
2023-11-02T11:44:53.629+02:00 DEBUG 11588 --- [ctor-http-nio-5] r.netty.transport.TransportConfig        : [abd66895, L:/[0:0:0:0:0:0:0:1]:8081 - R:/[0:0:0:0:0:0:0:1]:49537] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (reactor.left.channelMetricsHandler = reactor.netty.channel.ChannelMetricsHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.left.httpMetricsHandler = reactor.netty.http.server.HttpServerMetricsHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2023-11-02T11:44:53.630+02:00  INFO 11588 --- [ctor-http-nio-5] ustomMicrometerHttpServerMetricsRecorder : TLSStart /[0:0:0:0:0:0:0:1]:49537 ERROR

2023-11-02T11:44:53.631+02:00  INFO 11588 --- [ctor-http-nio-5] ustomMicrometerHttpServerMetricsRecorder : TLSEnd /[0:0:0:0:0:0:0:1]:49537 ERROR 1

2023-11-02T11:44:53.631+02:00 DEBUG 11588 --- [ctor-http-nio-5] r.netty.transport.ServerTransport        : [abd66895, L:/[0:0:0:0:0:0:0:1]:8081 - R:/[0:0:0:0:0:0:0:1]:49537] onUncaughtException(SimpleConnection{channel=[id: 0xabd66895, L:/[0:0:0:0:0:0:0:1]:8081 - R:/[0:0:0:0:0:0:0:1]:49537]})

io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f6d782f73746174757320485454502f312e310d0a486f73743a206c6f63616c686f73743a383038310d0a436f6e6e656374696f6e3a206b6565702d616c6976650d0a43616368652d436f6e74726f6c3a206d61782d6167653d300d0a7365632d63682d75613a20224368726f6d69756d223b763d22313138222c2022476f6f676c65204368726f6d65223b763d22313138222c20224e6f743d413f4272616e64223b763d223939220d0a7365632d63682d75612d6d6f62696c653a203f300d0a7365632d63682d75612d706c6174666f726d3a202257696e646f7773220d0a557067726164652d496e7365637572652d52657175657374733a20310d0a557365722d4167656e743a204d6f7a696c6c612f352e30202857696e646f7773204e542031302e303b2057696e36343b2078363429204170706c655765624b69742f3533372e333620284b48544d4c2c206c696b65204765636b6f29204368726f6d652f3131382e302e302e30205361666172692f3533372e33360d0a4163636570743a20746578742f68746d6c2c6170706c69636174696f6e2f7868746d6c2b786d6c2c6170706c69636174696f6e2f786d6c3b713d302e392c696d6167652f617669662c696d6167652f776562702c696d6167652f61706e672c2a2f2a3b713d302e382c6170706c69636174696f6e2f7369676e65642d65786368616e67653b763d62333b713d302e370d0a5365632d46657463682d536974653a206e6f6e650d0a5365632d46657463682d4d6f64653a206e617669676174650d0a5365632d46657463682d557365723a203f310d0a5365632d46657463682d446573743a20646f63756d656e740d0a4163636570742d456e636f64696e673a20677a69702c206465666c6174652c2062720d0a4163636570742d4c616e67756167653a20656e2d55532c656e3b713d302e390d0a436f6f6b69653a2067726166616e615f73657373696f6e3d66333437316264363761663334313061306133353961616139336237356338643b2067726166616e615f73657373696f6e5f6578706972793d313639383736333036370d0a0d0a
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1313) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1383) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]

2023-11-02T11:44:53.632+02:00 DEBUG 11588 --- [ctor-http-nio-5] r.netty.transport.ServerTransport        : [abd66895, L:/[0:0:0:0:0:0:0:1]:8081 ! R:/[0:0:0:0:0:0:0:1]:49537] onUncaughtException(SimpleConnection{channel=[id: 0xabd66895, L:/[0:0:0:0:0:0:0:1]:8081 ! R:/[0:0:0:0:0:0:0:1]:49537]})

io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f6d782f73746174757320485454502f312e310d0a486f73743a206c6f63616c686f73743a383038310d0a436f6e6e656374696f6e3a206b6565702d616c6976650d0a43616368652d436f6e74726f6c3a206d61782d6167653d300d0a7365632d63682d75613a20224368726f6d69756d223b763d22313138222c2022476f6f676c65204368726f6d65223b763d22313138222c20224e6f743d413f4272616e64223b763d223939220d0a7365632d63682d75612d6d6f62696c653a203f300d0a7365632d63682d75612d706c6174666f726d3a202257696e646f7773220d0a557067726164652d496e7365637572652d52657175657374733a20310d0a557365722d4167656e743a204d6f7a696c6c612f352e30202857696e646f7773204e542031302e303b2057696e36343b2078363429204170706c655765624b69742f3533372e333620284b48544d4c2c206c696b65204765636b6f29204368726f6d652f3131382e302e302e30205361666172692f3533372e33360d0a4163636570743a20746578742f68746d6c2c6170706c69636174696f6e2f7868746d6c2b786d6c2c6170706c69636174696f6e2f786d6c3b713d302e392c696d6167652f617669662c696d6167652f776562702c696d6167652f61706e672c2a2f2a3b713d302e382c6170706c69636174696f6e2f7369676e65642d65786368616e67653b763d62333b713d302e370d0a5365632d46657463682d536974653a206e6f6e650d0a5365632d46657463682d4d6f64653a206e617669676174650d0a5365632d46657463682d557365723a203f310d0a5365632d46657463682d446573743a20646f63756d656e740d0a4163636570742d456e636f64696e673a20677a69702c206465666c6174652c2062720d0a4163636570742d4c616e67756167653a20656e2d55532c656e3b713d302e390d0a436f6f6b69653a2067726166616e615f73657373696f6e3d66333437316264363761663334313061306133353961616139336237356338643b2067726166616e615f73657373696f6e5f6578706972793d313639383736333036370d0a0d0a
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f6d782f73746174757320485454502f312e310d0a486f73743a206c6f63616c686f73743a383038310d0a436f6e6e656374696f6e3a206b6565702d616c6976650d0a43616368652d436f6e74726f6c3a206d61782d6167653d300d0a7365632d63682d75613a20224368726f6d69756d223b763d22313138222c2022476f6f676c65204368726f6d65223b763d22313138222c20224e6f743d413f4272616e64223b763d223939220d0a7365632d63682d75612d6d6f62696c653a203f300d0a7365632d63682d75612d706c6174666f726d3a202257696e646f7773220d0a557067726164652d496e7365637572652d52657175657374733a20310d0a557365722d4167656e743a204d6f7a696c6c612f352e30202857696e646f7773204e542031302e303b2057696e36343b2078363429204170706c655765624b69742f3533372e333620284b48544d4c2c206c696b65204765636b6f29204368726f6d652f3131382e302e302e30205361666172692f3533372e33360d0a4163636570743a20746578742f68746d6c2c6170706c69636174696f6e2f7868746d6c2b786d6c2c6170706c69636174696f6e2f786d6c3b713d302e392c696d6167652f617669662c696d6167652f776562702c696d6167652f61706e672c2a2f2a3b713d302e382c6170706c69636174696f6e2f7369676e65642d65786368616e67653b763d62333b713d302e370d0a5365632d46657463682d536974653a206e6f6e650d0a5365632d46657463682d4d6f64653a206e617669676174650d0a5365632d46657463682d557365723a203f310d0a5365632d46657463682d446573743a20646f63756d656e740d0a4163636570742d456e636f64696e673a20677a69702c206465666c6174652c2062720d0a4163636570742d4c616e67756167653a20656e2d55532c656e3b713d302e390d0a436f6f6b69653a2067726166616e615f73657373696f6e3d66333437316264363761663334313061306133353961616139336237356338643b2067726166616e615f73657373696f6e5f6578706972793d313639383736333036370d0a0d0a
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1313) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1383) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
	... 17 common frames omitted

2023-11-02T11:44:53.633+02:00  INFO 11588 --- [ctor-http-nio-5] ustomMicrometerHttpServerMetricsRecorder : TotalClosedBefore -1 /[0:0:0:0:0:0:0:1]:8081

2023-11-02T11:44:53.633+02:00  INFO 11588 --- [ctor-http-nio-5] ustomMicrometerHttpServerMetricsRecorder : TotalClosedAfter -2 /[0:0:0:0:0:0:0:1]:8081

In the success case, it can be seen that Opened is called only after a successful TLS handshake.
Maybe Opened should already be called before TLS.

In the success case, active is also increased only after TLS; maybe it, too, should be called "before" TLS.

2023-11-02T11:44:34.402+02:00 DEBUG 11588 --- [ctor-http-nio-3] reactor.netty.tcp.SslProvider            : [a0d40952, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] SSL enabled using engine SSLEngine[hostname=null, port=-1, Session(1698918274399|SSL_NULL_WITH_NULL_NULL)]
2023-11-02T11:44:34.425+02:00 DEBUG 11588 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [a0d40952, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] New http connection, requesting read
2023-11-02T11:44:34.427+02:00 DEBUG 11588 --- [ctor-http-nio-3] r.netty.transport.TransportConfig        : [a0d40952, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (reactor.left.channelMetricsHandler = reactor.netty.channel.ChannelMetricsHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.left.httpMetricsHandler = reactor.netty.http.server.HttpServerMetricsHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2023-11-02T11:44:34.496+02:00  INFO 11588 --- [ctor-http-nio-3] ustomMicrometerHttpServerMetricsRecorder : TLSStart /127.0.0.1:48799 SUCCESS

2023-11-02T11:44:34.503+02:00  INFO 11588 --- [ctor-http-nio-3] ustomMicrometerHttpServerMetricsRecorder : TLSEnd /127.0.0.1:48799 SUCCESS 1

2023-11-02T11:44:34.504+02:00  INFO 11588 --- [ctor-http-nio-3] ustomMicrometerHttpServerMetricsRecorder : TotalOpenedBefore 0 /127.0.0.1:8081

2023-11-02T11:44:34.504+02:00  INFO 11588 --- [ctor-http-nio-3] ustomMicrometerHttpServerMetricsRecorder : TotalOpenedAfter 1 /127.0.0.1:8081

2023-11-02T11:44:34.509+02:00 DEBUG 11588 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [a0d40952, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] Increasing pending responses, now 1
2023-11-02T11:44:34.519+02:00  INFO 11588 --- [ctor-http-nio-3] ustomMicrometerHttpServerMetricsRecorder : ActiveBefore 0 /127.0.0.1:8081

2023-11-02T11:44:34.519+02:00  INFO 11588 --- [ctor-http-nio-3] ustomMicrometerHttpServerMetricsRecorder : ActiveAfter 1 /127.0.0.1:8081

2023-11-02T11:44:34.520+02:00 DEBUG 11588 --- [ctor-http-nio-3] reactor.netty.http.server.HttpServer     : [a0d40952-1, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] Handler is being applied: org.springframework.http.server.reactive.ReactorHttpHandlerAdapter@72ddbede
2023-11-02T11:44:34.565+02:00  INFO 11588 --- [ctor-http-nio-3] reactor.Mono.Just.1                      : | onSubscribe([Synchronous Fuseable] Operators.ScalarSubscription)
2023-11-02T11:44:34.566+02:00  INFO 11588 --- [ctor-http-nio-3] reactor.Mono.Just.1                      : | request(unbounded)
2023-11-02T11:44:34.566+02:00  INFO 11588 --- [ctor-http-nio-3] reactor.Mono.Just.1                      : | onNext(monoCancel)
2023-11-02T11:44:34.570+02:00  INFO 11588 --- [ctor-http-nio-3] reactor.Mono.Just.1                      : | onComplete()
2023-11-02T11:44:34.576+02:00 DEBUG 11588 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [a0d40952-1, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] Decreasing pending responses, now 0
2023-11-02T11:44:34.587+02:00  INFO 11588 --- [ctor-http-nio-3] ustomMicrometerHttpServerMetricsRecorder : InactiveBefore 1 /127.0.0.1:8081

2023-11-02T11:44:34.587+02:00  INFO 11588 --- [ctor-http-nio-3] ustomMicrometerHttpServerMetricsRecorder : InActiveAfter 0 /127.0.0.1:8081

2023-11-02T11:44:34.587+02:00 DEBUG 11588 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [a0d40952-1, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] Last HTTP packet was sent, terminating the channel
2023-11-02T11:44:34.588+02:00 DEBUG 11588 --- [ctor-http-nio-3] r.netty.channel.ChannelOperations        : [a0d40952-1, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] [HttpServer] Channel inbound receiver cancelled (operation cancelled).
2023-11-02T11:44:34.590+02:00 DEBUG 11588 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [a0d40952-1, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] Last HTTP response frame
2023-11-02T11:44:34.597+02:00 DEBUG 11588 --- [ctor-http-nio-3] reactor.netty.channel.FluxReceive        : [a0d40952-1, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] [terminated=false, cancelled=true, pending=0, error=null]: dropping frame LAST_CONTENT(decodeResult: success, content: PooledSlicedByteBuf(ridx: 0, widx: 1, cap: 1/1, unwrapped: PooledUnsafeDirectByteBuf(ridx: 380, widx: 380, cap: 418)))
2023-11-02T11:44:34.598+02:00 DEBUG 11588 --- [ctor-http-nio-3] reactor.netty.ReactorNetty               : [a0d40952-1, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] Non Removed handler: reactor.left.readTimeoutHandler, context: null, pipeline: DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.channelMetricsHandler = reactor.netty.channel.ChannelMetricsHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.left.httpMetricsHandler = reactor.netty.http.server.HttpServerMetricsHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2023-11-02T11:44:34.598+02:00 TRACE 11588 --- [ctor-http-nio-3] r.netty.channel.ChannelOperations        : [a0d40952, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] Disposing ChannelOperation from a channel

java.lang.Exception: ChannelOperation terminal stack
	at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:490) ~[reactor-netty-core-1.1.12.jar:1.1.12]
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:566) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]

2023-11-02T11:44:34.602+02:00 DEBUG 11588 --- [ctor-http-nio-3] r.n.channel.ChannelOperationsHandler     : [a0d40952, L:/127.0.0.1:8081 - R:/127.0.0.1:48799] Received a TLS close_notify, closing the channel now.
2023-11-02T11:44:34.604+02:00  INFO 11588 --- [ctor-http-nio-3] ustomMicrometerHttpServerMetricsRecorder : TotalClosedBefore 1 /127.0.0.1:8081

2023-11-02T11:44:34.605+02:00  INFO 11588 --- [ctor-http-nio-3] ustomMicrometerHttpServerMetricsRecorder : TotalClosedAfter 0 /127.0.0.1:8081

@jtorkkel Let's continue with this new issue in #2956. We prefer to handle separate problems in separate Github issues.