Issues with lost executor handling
thinkharderdev opened this issue
Is your feature request related to a problem or challenge? Please describe what you are trying to do.
While working on the cluster state refactor, I observed some problems with how lost executors are handled in our system. I think there are a couple of issues with the current approach:
- In multi-scheduler deployments, an executor registers itself with a particular scheduler and sends its heartbeats to that scheduler.
- Any scheduler can schedule tasks on the executor.
- However, on a `SIGTERM` the executor will call `executor_stopped` only on the scheduler it registered with in step 1.

This doesn't work particularly well with the curated task architecture, since the scheduler from step 1 may not be the owner of the jobs the executor is running when it receives the `SIGTERM`. At best this may cause the lost executor handling to not work correctly (or to be delayed, since the owning scheduler has to wait for a timeout). At worst it could corrupt the job state and cause unpredictable errors in the scheduler.
More broadly, resetting tasks immediately when an executor receives a `SIGTERM` is a bit limiting. The executor may have anywhere from 30s to 2 minutes (or maybe more) to clean up and finish any existing work after receiving a `SIGTERM`, and ideally we could take advantage of that window to minimize disruption.
Describe the solution you'd like
We have an existing mechanism to broadcast executor state to all schedulers using the executor heartbeat. Heartbeats also go to a particular scheduler, but they can be broadcast fairly easily through `ClusterState`. So I would propose the following:
- Remove the `executor_stopped` rpc.
- Instead of sending the `executor_stopped` rpc on `SIGTERM`, the executor can just send a heartbeat where its status changes from `Active` to `Dead` (or maybe something more descriptive like `ShuttingDown`).
- When receiving that heartbeat, the scheduler can put that executor in a "quarantine" state, which means:
  a. Stop scheduling new tasks on it
  b. Wait for some (configurable) interval before resetting tasks

This can ensure that we handle job updates on the appropriate scheduler and that we can attempt to finish outstanding work before potentially re-computing tasks/stages.
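The quarantine behavior described above could be sketched roughly as below. This is a minimal illustration, not the actual Ballista API: the names `ExecutorStatus`, `QuarantinedExecutor`, `on_heartbeat`, and the grace-period handling are all hypothetical.

```rust
use std::time::{Duration, Instant};

// Hypothetical executor status carried in the heartbeat.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ExecutorStatus {
    Active,
    ShuttingDown, // broadcast via heartbeat instead of an `executor_stopped` rpc
    Dead,
}

// Scheduler-side record for an executor that announced it is shutting down.
pub struct QuarantinedExecutor {
    quarantined_at: Instant,
    grace_period: Duration, // configurable wait before resetting tasks
}

impl QuarantinedExecutor {
    /// (a) New tasks are never scheduled on a quarantined executor.
    pub fn schedulable(&self) -> bool {
        false
    }

    /// (b) Tasks are only reset once the grace period has elapsed,
    /// giving the executor a chance to finish outstanding work.
    pub fn should_reset_tasks(&self, now: Instant) -> bool {
        now.duration_since(self.quarantined_at) >= self.grace_period
    }
}

/// Scheduler reaction to a heartbeat: quarantine on `ShuttingDown`/`Dead`.
pub fn on_heartbeat(
    status: ExecutorStatus,
    grace_period: Duration,
) -> Option<QuarantinedExecutor> {
    match status {
        ExecutorStatus::ShuttingDown | ExecutorStatus::Dead => Some(QuarantinedExecutor {
            quarantined_at: Instant::now(),
            grace_period,
        }),
        ExecutorStatus::Active => None,
    }
}
```

The key property is that the owning scheduler observes the state change through the broadcast heartbeat rather than a point-to-point rpc, so job updates happen on the right scheduler.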
Describe alternatives you've considered
The executor could maybe track all schedulers for which it has active tasks and send an `executor_stopped` rpc to each of them, but the rpc itself seems somewhat duplicative of the existing heartbeat mechanism.
Additional context
I will take a look.
@yahoNanJing
I remember in the past we had some discussions and came up with a solution for this.
@yahoNanJing @mingmwang I prototyped something on our fork here coralogix@9887f77
The basic gist is:
- Add a new executor status `Fenced` indicating the executor is shutting down.
- When the executor begins shutdown, it immediately sends a heartbeat with status `Fenced`.
- Schedulers only consider executors with `Active` status as alive.
- The executor still sends the `executor_stopped` rpc immediately when it begins shutdown.
- But when the scheduler receives that rpc, it waits a configurable amount of time (default 30s) before removing the executor.
If this seems sensible I can work on upstreaming it.