jgroups-extras / jgroups-raft

Implementation of the RAFT consensus protocol in JGroups

Home Page: https://jgroups-extras.github.io/jgroups-raft/

The election process may elect a leader who has left the group

yfei-z opened this issue

The election process only happens on the coordinator, and it runs in a separate thread. Assume a cluster with 3 nodes A, B and C. A is the coordinator and C has the longer log, so C is elected leader. Before the election result is published, C leaves the group and the view is updated in another thread; the election result is then still applied to all nodes.
See my comments in the source code below.

	protected void runVotingProcess() {
		long new_term=raft.createNewTerm();
		raft.votedFor(null);
		/*
		Before this point, B and C have joined the group and the view has been updated.
		 */
		votes.reset(view.getMembersRaw());
		num_voting_rounds++;
		long start=System.currentTimeMillis();
		sendVoteRequest(new_term);

		// wait for responses from all members or for vote_timeout ms, whichever happens first:
		votes.waitForAllResponses(vote_timeout);
		long time=System.currentTimeMillis()-start;

		int majority=raft.majority();
		if(votes.numberOfValidResponses() >= majority) {
			Address leader=determineLeader();
			log.info("%s: collected votes from %s in %d ms (majority=%d) -> leader is %s (new_term=%d)",
					local_addr, votes.getValidResults(), time, majority, leader, new_term);
			/*
			At this point, C has left the group and the view has been updated.
			 */
			sendLeaderElectedMessage(leader, new_term); // send to all - self
			raft.setLeaderAndTerm(leader, new_term);
			stopVotingThread();
		}
		else
			log.trace("%s: collected votes from %s in %d ms (majority=%d); starting another voting round",
					local_addr, votes.getValidResults(), time, majority);
	}

Thanks for reporting, @yfei-z. I added some more coverage for view changes during the election, but this scenario might still be an issue. I'll run a few tests to confirm it and open a PR with a fix if necessary.

There is another situation: if the coordinator itself leaves the group before it sends the elected message out, then the cluster will have no leader for this term, and the new coordinator won't start the election process for the next term.

I think a view change during the election process can cause a lot of problems. Another one: if 2 of 3 nodes are participating in the election and, before the coordinator sets itself as leader, the view changes to 1 of 3 (only the coordinator is online), then the cluster is working without a majority of nodes. Although stopVotingThread waits at most 100 ms before setting the leader to null in the leader-lost case, I think this is still possible.

How about letting the election thread handle view change events serially?
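A minimal sketch of that idea, assuming a dedicated single-threaded executor owned by the election logic; the class and method names (SerialElectionHandler, onViewChange, startVotingRound) are illustrative and not the actual jgroups-raft API:

	import java.util.concurrent.ExecutorService;
	import java.util.concurrent.Executors;

	import org.jgroups.View;

	// Illustrative only: in jgroups-raft the election logic lives in the ELECTION
	// protocol; this class just demonstrates the serialization idea.
	class SerialElectionHandler {
		// A single thread means view-change handling and voting rounds can never interleave.
		private final ExecutorService electionExecutor=Executors.newSingleThreadExecutor();
		private View currentView; // only touched from electionExecutor

		// Called when a new view is installed (assumption about the integration point)
		void onViewChange(View newView) {
			electionExecutor.execute(() -> {
				currentView=newView;
				// react to membership changes (leader lost, coordinator change, ...)
				// strictly ordered with respect to any queued voting round
			});
		}

		// Called on the coordinator to start a voting round (assumption)
		void startVotingRound() {
			electionExecutor.execute(() -> {
				View viewAtStart=currentView;
				// count votes and publish the result against viewAtStart; a concurrent
				// view change is queued behind this task instead of racing with it
			});
		}
	}

One caveat: runVotingProcess blocks for up to vote_timeout in votes.waitForAllResponses, so running it on the same thread would delay view processing by up to that timeout while a round is in flight.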

I created some tests for all the examples you mentioned, and they all resulted in a liveness problem, not a safety issue, even the 1/3 and 2/3 split. I believe we don't need to go all in on handling the changes serially. I'll give some more context and possible solutions.

For the first scenario, we can synchronize the voting thread's stop/start; start/stop already uses the intrinsic lock. Maybe we'll need to add a synchronized block, before sending the message and stopping, to check the thread's interrupt state. Another option: if the view result is leader_lost, we can stop and start the voting thread.
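A rough sketch of the first option, reusing the names from the runVotingProcess snippet above; it assumes startVotingThread()/stopVotingThread() synchronize on the same monitor and that stopVotingThread() interrupts the voting thread, so this is an illustration of the idea rather than the actual fix:

		if(votes.numberOfValidResponses() >= majority) {
			Address leader=determineLeader();
			synchronized(this) {
				// A view change that stopped this voting round (e.g. leader_lost)
				// interrupts the thread; don't publish a result computed from the stale view
				if(Thread.currentThread().isInterrupted())
					return;
				sendLeaderElectedMessage(leader, new_term); // send to all - self
				raft.setLeaderAndTerm(leader, new_term);
				stopVotingThread();
			}
		}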

The other two scenarios you mentioned happen because ELECTION does not consider who the previous coordinator was. This causes the network-partition and coordinator-leaving cases to result in a no_change. In a network partition, you don't end up with two working partitions; instead, you end up with two leaderless partitions that don't take commands, even though one has a majority. ELECTION2, on the other hand, does take the previous coordinator into account, but it has an additional pre-voting phase before the election.
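If someone wants to try the pre-vote based ELECTION2, my understanding is that it works as a drop-in replacement for ELECTION in the protocol stack. The snippet below is only a sketch of how to verify which election protocol a channel ended up with; the config file name is a placeholder, and I'm assuming ELECTION2 lives in the org.jgroups.protocols.raft package like ELECTION:

	import org.jgroups.JChannel;
	import org.jgroups.protocols.raft.ELECTION2;

	public class ElectionProtocolCheck {
		public static void main(String[] args) throws Exception {
			// "raft-election2.xml" is a placeholder for a stack definition in which
			// the election protocol entry was swapped from ELECTION to ELECTION2
			try(JChannel ch=new JChannel("raft-election2.xml")) {
				boolean preVote=ch.getProtocolStack().findProtocol(ELECTION2.class) != null;
				System.out.println("pre-vote based election in stack: " + preVote);
			}
		}
	}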

I have the tests in place, so I'll look for something simpler to solve it, and also take the chance to see whether I can think of other problems.

I'll release a new version this week.

1.0.13.Final is available in Maven Central.