etcd-io / etcd

Distributed reliable key-value store for the most critical data of a distributed system

Home Page: https://etcd.io

Blackhole failpoint in the proxy does not block all updates

siyuanfoundation opened this issue · comments

Bug report criteria

What happened?

When mocking a network partition in an e2e test with proxy.BlackholeTx() and proxy.BlackholeRx(), the partitioned follower node can still receive all the write updates happening during the blackhole period.

When the partitioned node is the original leader, however, new write updates are not applied to it.

This bug calls into question the reliability of existing tests that depend on this failpoint.

What did you expect to happen?

The blackhole failpoint should drop all packets sent to the partitioned node, so the node should not receive any write updates happening during the blackhole period.

How can we reproduce it (as minimally and precisely as possible)?

#17736

cd tests/e2e
go test -run TestBlackholeByMockingPartitionFollower -v

Anything else we need to know?

No response

Etcd version (please run commands below)

$ etcd --version
etcd Version: 3.6.0-alpha.0
Git SHA: 733aa6bd8
Go Version: go1.22.0
Go OS/Arch: linux/amd64

$ etcdctl version
etcdctl version: 3.6.0-alpha.0
API version: 3.6

Etcd configuration (command line flags or environment variables)

paste your configuration here

Etcd debug information (please run commands below, feel free to obfuscate the IP address or FQDN in the output)

$ etcdctl member list -w table
# paste output here

$ etcdctl --endpoints=<member list> endpoint status -w table
# paste output here

Relevant log output

No response

Can I attempt this issue :) @siyuanfoundation

Of course :) Thank you for volunteering!

Thank you! :)

(Can you assign the issue to me ^_^?)

Please note that we don't yet know why blackholing works only for the leader and not for the follower. I suspect that either the blackhole proxy setup is wrong, the blackhole is not doing its job, or there is some traffic going around the proxy.

To root-cause the issue, someone needs to use network analysis tools like tcpdump or Wireshark to find out how traffic is still reaching the follower.

@siyuanfoundation @serathius I think I managed to figure out why the blackhole is leaking traffic, and did a PoC, in this commit, that passes the e2e test @siyuanfoundation provided.

The main idea is that Raft nodes have two ways of communicating: stream and pipeline. From what I see, the pipeline is indeed leaking traffic.

Let's break the two communication mechanisms down in detail.

Stream

Stream is covered by the proxy implementation since we have different port values for --listen-peer-urls (where, for example, node A is actually listening for traffic) and --initial-advertise-peer-urls (where nodes B and C think node A is listening for traffic).

A proxy is used to forward traffic in between, so a node can see all the traffic sent by the others. Since the stream is initiated from the outside node towards it (long-polling), traffic from the outside is guaranteed to pass through the proxy. Thus, when we blackhole traffic, we can rewrite the data and set it to nil (and data of length 0 amounts to a dropped packet).
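
For illustration, here is a minimal sketch of that idea in terms of the ModifyTx/ModifyRx callbacks of the pkg/proxy Server interface (an assumption about the API surface; the actual BlackholeTx/BlackholeRx implementation may differ):

package e2e

import "go.etcd.io/etcd/pkg/v3/proxy"

// blackholePeer sketches how blackholing reduces to modify callbacks:
// returning nil yields a zero-length payload, i.e. the packet is dropped.
func blackholePeer(srv proxy.Server) {
	srv.ModifyTx(func(data []byte) []byte { return nil }) // drop all forwarded transmit traffic
	srv.ModifyRx(func(data []byte) []byte { return nil }) // drop all forwarded receive traffic
}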

Pipeline

For pipeline, let's look at the following architecture drawing (sorry for my elementary-level drawing skill).
[architecture drawing: node A's outgoing pipeline connections (red arrow) go straight to nodes B and C, bypassing the proxies]

Because the pipeline is initiated from the inside (node A) towards the external nodes (B and C), it bypasses the proxy, as the red arrow shows when node A creates a pipeline and talks to nodes B and C. I believe this is the case because --listen-peer-urls is what node A (which initiates the traffic) uses; the proxy is therefore not involved at all on node A's side, and a bypass happens. To be more concrete, node A will use port 20001 as its starting point (as set in --listen-peer-urls) and try to reach node B via port 20008 (since this is what is set with --initial-advertise-peer-urls).

Since I am still very new to the codebase, I am not sure if my understanding is correct, but the PoC seems to work and it is based on the above reasoning.

If you think my understanding and explanation make sense, please let me know how you would like to proceed with the fixes, as this PoC commit is clearly a hack (the blackhole flag is passed via a file ... since I just wanted something that could work in a short time ^_^)


Sidenote: on my machine, I sometimes run into one of the following two failures.

But if we make the following two timing changes, I haven't seen either of the two errors in 5 runs each (go test -timeout 120s -run ^TestBlackholeByMockingPartitionLeader$ go.etcd.io/etcd/tests/v3/e2e -v -count=5, and likewise with TestBlackholeByMockingPartitionFollower):

  • Add a 5s delay after unblackholing (to give the catch-up time to complete)
  • Shorten the wait for the open connection to expire to 5s

context deadline exceeded

Not sure what this is about yet.

    blackhole_test.go:83: Writing 20 keys to the cluster (more than SnapshotCount entries to trigger at least a snapshot.)
    blackhole_test.go:114: 
                Error Trace:    /Users/taatsch9/go/src/siyuan-etcd/tests/e2e/blackhole_test.go:114
                                                        /Users/taatsch9/go/src/siyuan-etcd/tests/e2e/blackhole_test.go:84
                                                        /Users/taatsch9/go/src/siyuan-etcd/tests/e2e/blackhole_test.go:34
                Error:          Received unexpected error:
                                [/Users/taatsch9/go/src/siyuan-etcd/bin/etcdctl --endpoints=http://localhost:20005 put key-0 value-0] match not found.  Set EXPECT_DEBUG for more info Errs: [unexpected exit code [1] after running [/Users/taatsch9/go/src/siyuan-etcd/bin/etcdctl --endpoints=http://localhost:20005 put key-0 value-0]], last lines:
                                {"level":"warn","ts":"2024-04-12T03:03:33.025153+0200","logger":"etcd-client","caller":"v3@v3.6.0-alpha.0/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x140001ee960/localhost:20005","method":"/etcdserverpb.KV/Put","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
                                Error: context deadline exceeded
                                 (expected "OK", got []). Try EXPECT_DEBUG=TRUE

revision mismatch after unblackholing

I think that without waiting for some time, the unblackholed node has only just come back to the network, and thus the catch-up is not yet done.

    blackhole_test.go:91: Unblackholing traffic from and to member "TestBlackholeByMockingPartitionFollower-test-2"
    cluster.go:1045: members agree on a leader, members:map[12416079282240904009:1 13770228943176794332:2 16914881897345358027:0] , leader:12416079282240904009
    blackhole_test.go:132: 
                Error Trace:    /Users/taatsch9/go/src/etcd/tests/e2e/blackhole_test.go:132
                                                        /Users/taatsch9/go/src/etcd/tests/e2e/blackhole_test.go:104
                                                        /Users/taatsch9/go/src/etcd/tests/e2e/blackhole_test.go:38
                Error:          Not equal: 
                                expected: 21
                                actual  : 1
                Test:           TestBlackholeByMockingPartitionFollower
                Messages:       revision mismatch

Because the pipeline is initiated from the inside (node A) towards the external nodes (B and C), it bypasses the proxy, as the red arrow shows when node A creates a pipeline and talks to nodes B and C.

@henrybear327 YES. The node dials out to the other members and reads messages. The target follower is reaching out to the other members' proxies.

p.msgAppV2Reader.start()

The member sets up an HTTP server handler and a stream message writer.

func (h *streamHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {

The leader can still forward heartbeats and messages to the isolated follower. However, the outgoing connection from the isolated follower to the leader has been dropped, so both ReadIndex requests and proposal forwarding from the isolated follower will be dropped.

Maybe we can introduce a new filter to drop any traffic from the isolated follower (like an L7 firewall 😂).

Awesome findings @henrybear327! It is very well explained too! I think your findings are correct, as shown by the code @fuweid pointed to.
As for the sidenote failures, do they occur after applying your fix?

Hey @siyuanfoundation, it was before applying the two timing changes that I would run into those issues!

I think we can still follow the current design, based on L4.

However, we need to change the modifyTx and modifyRx functions to support filtering packets based on /proc/net/tcp(6) (TCP tuple and inode).

func modifyTx(srcConn, dstConn net.Conn, data []byte) []byte {}
func modifyRx(srcConn, dstConn net.Conn, data []byte) []byte {}

// proxy pkg
type Server interface {
     ...
     TxFilter(func(srcConn, dstConn net.Conn, data []byte) []byte)
     RxFilter(func(srcConn, dstConn net.Conn, data []byte) []byte)
}

For the customized filter, we use /proc/$isolated_member_pid/net/tcp(6) to get the target TCP tuple's inode, and then walk through /proc/$isolated_member_pid/fd/ to ensure that the target TCP tuple is held by the isolated member. If so, we can just drop the packets.

https://www.kernel.org/doc/Documentation/networking/proc_net_tcp.txt

It will first list all listening TCP sockets, and next list all established
TCP connections. A typical entry of /proc/net/tcp would look like this (split 
up into 3 parts because of the length of the line):

   46: 010310AC:9C4C 030310AC:1770 01 
   |      |      |      |      |   |--> connection state
   |      |      |      |      |------> remote TCP port number
   |      |      |      |-------------> remote IPv4 address
   |      |      |--------------------> local TCP port number
   |      |---------------------------> local IPv4 address
   |----------------------------------> number of entry

   00000150:00000000 01:00000019 00000000  
      |        |     |     |       |--> number of unrecovered RTO timeouts
      |        |     |     |----------> number of jiffies until timer expires
      |        |     |----------------> timer_active (see below)
      |        |----------------------> receive-queue
      |-------------------------------> transmit-queue

   1000        0 54165785 4 cd1e6040 25 4 27 3 -1
    |          |    |     |    |     |  | |  | |--> slow start size threshold, 
    |          |    |     |    |     |  | |  |      or -1 if the threshold
    |          |    |     |    |     |  | |  |      is >= 0xFFFF
    |          |    |     |    |     |  | |  |----> sending congestion window
    |          |    |     |    |     |  | |-------> (ack.quick<<1)|ack.pingpong
    |          |    |     |    |     |  |---------> Predicted tick of soft clock
    |          |    |     |    |     |              (delayed ACK control data)
    |          |    |     |    |     |------------> retransmit timeout
    |          |    |     |    |------------------> location of socket in memory
    |          |    |     |-----------------------> socket reference count
    |          |    |-----------------------------> inode
    |          |----------------------------------> unanswered 0-window probes
    |---------------------------------------------> uid

timer_active:
  0  no timer is pending
  1  retransmit-timer is pending
  2  another timer (e.g. delayed ack or keepalive) is pending
  3  this is a socket in TIME_WAIT state. Not all fields will contain 
     data (or even exist)
  4  zero window probe timer is pending

For example, if we want to block the proxy -> isolated_member traffic, we can use a customized modifiedRx:

var isolatedMemberPid = x

func modifiedRx(srcConn, dstConn net.Conn, data []byte) []byte {
	if ensureTupleFromPid(isolatedMemberPid, srcConn.RemoteAddr(), dstConn.RemoteAddr()) {
		return nil
	}
	return data
}

It works on my local machine. Since the e2e tests run on localhost, we can use the inode to locate the target process.
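
For concreteness, here is a rough sketch of what ensureTupleFromPid could look like. Only the helper's name comes from the snippet above; everything else is an illustrative assumption (Linux, IPv4, and the /proc/net/tcp layout quoted earlier):

package blackhole

import (
	"fmt"
	"net"
	"os"
	"strings"
)

// hexAddr renders a TCP address the way /proc/net/tcp does: little-endian
// IPv4 in hex, a colon, then the port in hex, e.g. 127.0.0.1:20001 -> "0100007F:4E21".
func hexAddr(addr net.Addr) string {
	tcp, ok := addr.(*net.TCPAddr)
	if !ok {
		return ""
	}
	ip := tcp.IP.To4()
	if ip == nil {
		return ""
	}
	return fmt.Sprintf("%02X%02X%02X%02X:%04X", ip[3], ip[2], ip[1], ip[0], tcp.Port)
}

// ensureTupleFromPid reports whether the (a, b) tuple belongs to a socket held
// by pid: find the tuple's inode in /proc/$pid/net/tcp, then walk /proc/$pid/fd
// to check that one of the process's file descriptors is that socket.
func ensureTupleFromPid(pid int, a, b net.Addr) bool {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/net/tcp", pid))
	if err != nil {
		return false
	}
	ha, hb := hexAddr(a), hexAddr(b)
	var inode string
	for _, line := range strings.Split(string(data), "\n")[1:] { // [1:] skips the header row
		f := strings.Fields(line)
		// f[1] is local_address, f[2] is rem_address, f[9] is the inode.
		if len(f) >= 10 && ((f[1] == ha && f[2] == hb) || (f[1] == hb && f[2] == ha)) {
			inode = f[9]
			break
		}
	}
	if inode == "" {
		return false
	}
	fds, err := os.ReadDir(fmt.Sprintf("/proc/%d/fd", pid))
	if err != nil {
		return false
	}
	for _, fd := range fds {
		link, _ := os.Readlink(fmt.Sprintf("/proc/%d/fd/%s", pid, fd.Name()))
		if link == "socket:["+inode+"]" {
			return true
		}
	}
	return false
}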

Yesterday, I synced with @henrybear327 offline about using ss or lsof.
I think reading /proc/$pid/net/tcp would be acceptable because then we don't depend on any command-line tools.

ping @serathius @ahrtr @aojea to seek review on this approach.

The L4-level blocking sounds interesting! :) The main benefit would be that we can block the incoming and outgoing traffic for a specific pid without touching the etcd server process's internals! :)

The PoC that I have uses a different approach (but it requires some internal failpoint injection): the main idea is that, since channels are used to pass around the messages the node receives and sends to/from the stream/pipeline, we can use failpoints to drop the messages coming in and out of those channels before they actually get sent or processed by the node.

This approach works, but there is a small window, when a message is already in the process of being sent, that will not be covered by this failpoint channel blocking. That shouldn't be a big problem, though, since message sizes during e2e tests shouldn't be big.
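
As a hedged illustration of this idea (not the actual PoC code; the function and failpoint names below are made up), a gofail-style failpoint in front of an outbound channel could look like this:

package transport

import "go.etcd.io/raft/v3/raftpb"

// enqueue is illustrative: when the (hypothetical) dropOutboundPeerMessage
// failpoint is enabled, the message is dropped before it reaches the
// outbound channel; otherwise it is forwarded as usual.
func enqueue(outc chan<- raftpb.Message, m raftpb.Message) {
	// gofail: var dropOutboundPeerMessage struct{}
	// return
	outc <- m
}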


Based on @fuweid's approach, I have another idea, for L7-level blocking: https://github.com/henrybear327/etcd/commits/experiment/round_tripper/

Both pipeline and stream use RoundTrip() under the hood, so I think we can also use a custom RoundTrip() and trigger it on demand via a failpoint. In this way, we can intercept all outgoing traffic initiated from a specific node and drop it.

We would also need to work on the handler, too, to drop incoming new connections.

I think it's not as clean as the L4 blocking, but it is code-level blocking that leverages the entry points of the networking in/out functions, so we don't need external tools.
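
A minimal sketch of the custom-RoundTripper idea (the type and the drop toggle are assumptions for illustration, not the code in the linked branch):

package transport

import (
	"fmt"
	"net/http"
)

// blackholeTransport wraps the peer transport's RoundTripper and fails
// every outgoing request while the blackhole toggle is on.
type blackholeTransport struct {
	base http.RoundTripper
	drop func() bool // e.g. flipped by a failpoint during the test
}

func (t *blackholeTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	if t.drop() {
		return nil, fmt.Errorf("blackholed: dropping request to %s", req.URL)
	}
	return t.base.RoundTrip(req)
}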

The leader can still forward heartbeats and messages to the isolated follower. However, the outgoing connection from the isolated follower to the leader has been dropped, so both ReadIndex requests and proposal forwarding from the isolated follower will be dropped.

So, this is not a total network partition, just some of the flows get partitioned?

L7 or L4 filters

Before going into the implementations: it seems to me that this should happen at a higher level. If nodes are able to detect inconsistent network setups, they should be able to announce it or refuse to do any operations, no?

The node dials out to its peers using the StreamReader and the pipeline, so the proxy is not in effect for that dialed-out traffic. So yes, the current proxy only partially partitions a node's traffic from the network.

May I ask for more details about your second point? I don't quite get it :) Thank you!

May I ask for more details about your second point? I don't quite get it :) Thank you!

I'm not deep into the etcd internals, but I expect the nodes to keep state about their neighbours and make decisions based on this state ... in this case it seems that this state is "partial"; is it not possible for the nodes to detect this and decide to stop operations?

So , this is not totally network partition, is just some of the flows get partitioned?

Hi @aojea, yes. Each member has an L4 proxy that forwards peer traffic to itself, as described by the following flow.

Current member ID: A
Other member ID: B,C

The connections built by B and C look like this, and A will push messages to B and C.
The connections are hijacked in the HTTP handler.

B -----> A-Proxy -----> A
           ^ 
           |
           C 

A also wants B and C to push messages to itself, so A will build connections like:

A -----> B-Proxy ----> B
|
+--------> C-Proxy ---> C

However, currently the `BlackholeTx` and `BlackholeRx` failpoints only block traffic between `X-Proxy <---> X`.
So, if A is a follower and B is the leader, then since the traffic between `B-Proxy <---> B` is working, B can still push messages to A.

I was thinking that the X-Proxy <---> X tunnel could detect where the traffic comes from by using the remote address and inode.
During e2e testing, all the traffic is on localhost, so we can use the inode to locate that process.