os: StartProcess ETXTBSY race on Unix systems
rsc opened this issue
Modern Unix systems appear to have a fundamental design flaw in the interaction between multithreaded programs, fork+exec, and the prohibition on executing a program if that program is open for writing.
Below is a simple multithreaded C program. It creates 20 threads all doing the same thing: write an exit 0 shell script to /var/tmp/fork-exec-N (for different N), and then fork and exec that script. Repeat ad infinitum. Note that the shell script fds are opened O_CLOEXEC, so that an fd being written by one thread does not leak into the fork+exec's shell script of a different thread.
On my Linux workstation, this program produces a never-ending stream of ETXTBSY errors. The problem is that O_CLOEXEC is not enough. The fd being written by one thread can leak into the forked child of a second thread, and it stays there until that child calls exec. If the first thread closes the fd and calls exec before the second thread's child does exec, then the first thread's exec will get ETXTBSY, because somewhere in the system (specifically, in the child of the second thread), there is an fd still open for writing the first thread's shell script, and according to modern Unix rules, one must not exec a program if there exists any fd anywhere open for writing that program.
Five years ago this bit us because cmd/go installed cmd/cgo (that is, copied the binary from a temporary location to somewhere/bin/cgo) and then executed it. To fix this we put a sleep+retry loop around the fork+exec of cgo when it gets ETXTBSY. Now (as of last week or so) we don't ever install cmd/cgo and execute it in the same cmd/go process, so that specific race is gone, although as I write this cmd/go still has the sleep+retry loop, which I intend to remove.
Last week this bit us again because cmd/go updated a build stamp in the binary, closed it, and executed it. The resulting flaky ETXTBSY failures were reported as #22220. A pending CL fixes this by not updating the build stamp in temporary binaries, which are the main ones we execute. There's still one case where we write+execute a program, which is `go test -cpuprofile x.prof pkg`. The -cpuprofile flag (and a few others) causes cmd/go to leave the pkg.test binary in the current directory for debugging purposes but also run the test. Luckily running the test is currently the final thing cmd/go does, and it waits for any other fork+exec'ed programs to finish before fork+exec'ing the test. So the race cannot happen in this case.
In general this race is going to happen every time anyone writes a program that both writes and executes a program. It's easy to imagine other build systems running into this, but also programs that do things like unzip a zip file and then run a program inside it - think a program supervisor or mini container runtime. As soon as there are multiple threads doing fork+exec at the same time, and one of them is doing fork+exec of a program that was previously open for write in the same process, you have a mysterious flaky problem.
It seems like maybe Go should take care of this, if possible. We've now hit it twice in cmd/go, five years apart, and at least this past time it took the better part of a day to figure out. (I don't remember how long it took five years ago, in part because I don't remember anything about discovering it five years ago. I also don't want to rediscover all this five years from now.)
There are a few hacks we could use:
- In os.StartProcess, if we see ETXTBSY, sleep 100ms and try again, maybe a few times, up to say 1 second of sleeping. In general we don't know how long to sleep.
- Arrange with a locking mechanism that close must never complete during a fork+exec sequence. The end of the fork+exec sequence needs to be the point where we know the close-on-exec fds have been closed. Unfortunately there is no portable way to identify that point.
  - If the exec fails and the child tells us and exits, we can wait for the exit. That's easy.
  - If the exec succeeds, we find out because the exec closes the child's end of the status pipe, and we get EOF.
  - If we know that an OS does close-on-exec work in increasing fd order, then we could also track the maximum fd we've opened and move the status pipe above that. Then seeing the status pipe close would mean all other fds are closed too.
  - If the OS had a "close all fds above x", we could use that. (I don't know of any that do, but it sure would help.)
  - It may not be OK to block all closes on a wedged fork+exec (in general an exec'ed program may be loaded from some slow network server).
- Note that vfork(2) is not a solution. Vfork is defined as the parent does not continue executing until the child is no longer using the parent's memory image. In the case of a successful exec, at least on Linux, vfork releases the memory image before doing any of the close-on-exec work, so the parent continues running before the child has closed the fds we care about.
None of these seem great. The ETXTBSY sleep, up to 1 second, might be the best option. It would certainly reduce the flake rate and in many cases would probably make it undetectable. It would not help exec of very slow-to-load programs, but that's not the common case.
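For concreteness, here is roughly what the first hack could look like as a wrapper around `os.StartProcess` (a sketch only; the function name and durations are mine, and as noted above there is no principled choice of sleep):

```go
package retryexec

import (
	"errors"
	"os"
	"syscall"
	"time"
)

// startProcessRetry retries fork+exec when it fails with ETXTBSY,
// sleeping with a doubling backoff and giving up once the backoff
// exceeds half a second (roughly 600ms of total sleeping).
func startProcessRetry(name string, argv []string, attr *os.ProcAttr) (*os.Process, error) {
	for sleep := 10 * time.Millisecond; ; sleep *= 2 {
		p, err := os.StartProcess(name, argv, attr)
		if !errors.Is(err, syscall.ETXTBSY) || sleep > 500*time.Millisecond {
			return p, err
		}
		time.Sleep(sleep)
	}
}
```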
I wondered how Java deals with this, and the answer seems to be that Java doesn't deal with this. https://bugs.openjdk.java.net/browse/JDK-8068370 was filed in 2014 and is still open.
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <errno.h>
#include <stdint.h>

void* runner(void*);

int
main(void)
{
	int i;
	pthread_t pid[20];

	/* main acts as runner 0; start 19 more threads doing the same thing */
	for(i=1; i<20; i++)
		pthread_create(&pid[i], 0, runner, (void*)(uintptr_t)i);
	runner(0);
	return 0;
}

char script[] = "#!/bin/sh\nexit 0\n";

void*
runner(void *v)
{
	int i, fd, pid, status;
	char buf[100], *argv[2];

	i = (int)(uintptr_t)v;
	snprintf(buf, sizeof buf, "/var/tmp/fork-exec-%d", i);
	argv[0] = buf;
	argv[1] = 0;
	for(;;) {
		/* write the script; O_CLOEXEC keeps this fd out of our own exec'ed children */
		fd = open(buf, O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0777);
		if(fd < 0) {
			perror("open");
			exit(2);
		}
		write(fd, script, strlen(script));
		close(fd);

		/* fork+exec the script; ETXTBSY here means some other thread's
		   forked child still holds a copy of the fd we just closed */
		pid = fork();
		if(pid < 0) {
			perror("fork");
			exit(2);
		}
		if(pid == 0) {
			execve(buf, argv, 0);
			exit(errno);	/* report exec failure via the exit status */
		}
		if(waitpid(pid, &status, 0) < 0) {
			perror("waitpid");
			exit(2);
		}
		if(!WIFEXITED(status)) {
			perror("waitpid not exited");
			exit(2);
		}
		status = WEXITSTATUS(status);
		if(status != 0)
			fprintf(stderr, "exec: %d %s\n", status, strerror(status));
	}
	return 0;
}
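For reference, the program needs only pthreads, so compile with `cc -pthread`. Each nonzero status printed is the child's errno from the failed execve, so on Linux ETXTBSY shows up as lines of `exec: 26 Text file busy`.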
Change https://golang.org/cl/71570 mentions this issue: cmd/go: skip updateBuildID on binaries we will run
Change https://golang.org/cl/71571 mentions this issue: cmd/go: delete ETXTBSY hack that is no longer needed
Userspace workarounds seem flawed or less than ideal. This is a kernel problem, like O_CLOEXEC. Perhaps lobby for an O_CLOFORK that's similar but close-on-fork instead. The writer would open, write, close, fork, exec, so it wouldn't make use of it, but any other thread that forks wouldn't carry the FD with it, so the writer's close would succeed in nailing the sole, final reference to the "open file description", as POSIX calls it.
`O_CLOFORK` is a good idea. Does anybody want to suggest that to the Linux kernel maintainers? I expect that if someone can get it into Linux it will flow through to the other kernels.
I'm going to repeat a hack I described elsewhere that I believe would work for pure Go programs.
- record the highest file descriptor returned by `syscall.Open`, `syscall.Socket`, `syscall.Dup`, etc.
- add a new `RWMutex` in syscall: `forkMutex`
- during `syscall.Close`, acquire a read lock on `forkMutex`
- in `syscall.forkAndExecInChild`, acquire a write lock on `forkMutex`, and
- open a pipe in the parent (as we already do if `UidMappings` is set), and
- in the child, loop through the descriptors up to the highest one, closing each one that is marked close-on-exec, then close the pipe to the parent
- in the parent, when the pipe is closed, release the `forkMutex` lock

The effect of this should be that when `syscall.Close` returns, we know for sure that there is no forked child that has an open copy of the descriptor.
The disadvantages are that all forks are serialized, and that all forks waste time closing descriptors that will shortly be closed anyhow. Also, of course, forks temporarily block closes, but that is unlikely to be significant.
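For concreteness, a sketch of the child-side half of this scheme on Linux. The names `closeInheritedFDs`, `highestFD`, and `statusPipeW` are hypothetical; the real code would live inside the syscall package's fork/exec path:

```go
package forkhack

import "syscall"

// closeInheritedFDs sketches the child-side loop (run between fork and
// exec). highestFD would be maintained by syscall.Open/Socket/Dup et al;
// statusPipeW is the write end of the status pipe to the parent. Only
// raw syscalls appear, because the child of a multithreaded fork must
// restrict itself to async-signal-safe operations.
func closeInheritedFDs(highestFD, statusPipeW int) {
	for fd := 0; fd <= highestFD; fd++ {
		if fd == statusPipeW {
			continue
		}
		flags, _, errno := syscall.RawSyscall(syscall.SYS_FCNTL, uintptr(fd), syscall.F_GETFD, 0)
		if errno == 0 && flags&syscall.FD_CLOEXEC != 0 {
			syscall.RawSyscall(syscall.SYS_CLOSE, uintptr(fd), 0, 0)
		}
	}
	// EOF on the status pipe then tells the parent that every
	// close-on-exec fd is gone, so it can release forkMutex and let
	// pending Close calls proceed.
	syscall.RawSyscall(syscall.SYS_CLOSE, uintptr(statusPipeW), 0, 0)
}
```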
> `O_CLOFORK` is a good idea. Does anybody want to suggest that to the Linux kernel maintainers?
I'm happy to have a go, but I'm a nobody on that list. I was assuming folks here might have the ear of a Google kernel developer or two in that area that would vet the idea and suggest it to the list if worthy. :-)
> during `syscall.Close`, acquire a read lock on `forkMutex`

And `syscall.Dup2` and `Dup3`, as they may cause `newfd` to close.
Do `syscall.Open` et al also synchronise with `forkMutex` somehow? I'm wondering if they can be creating more FDs, either above or below the highwater mark, whilst `forkAndExecInChild` is looping, closing close-on-exec ones.
Is there a place to file a feature request against the Linux kernel? I know nothing about the kernel development process. I hear it uses git.
Agree about `Dup2` and `Dup3`.

As far as I can see it doesn't matter if `syscall.Open` and friends create a new FD while the child is looping, because the child won't see the new descriptor anyhow.
@ianlancetaylor thanks, yes, the explicit closes would solve the problem with slow execs, which would be nice. That might make this actually palatable. You also don't even need the extra pipe if you use vfork in this approach.
I agree with @RalphCorderoy that there's a race between the "maintain the max" and "fork", in that Open might create a new fd, then fork runs in a different thread before Open can update the max. But since fds are created lowest-available, it should suffice for the child to assume that max is, say, 10 larger than it is.
Also note that this need not be an RWMutex (and for that matter the current syscall.ForkLock need not be an RWMutex either). It just needs to be an "either-or" mutex. An RWMutex allows N readers or 1 writer. The mutex we need would allow N of type A or N of type B, just never a mix. If we built that (not difficult, I don't think), then programs that never fork would not serialize any of their closes, and programs that fork a lot but don't close things would not serialize any of their forks.
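A minimal sketch of such an either-or mutex (my construction, not anything in the standard library), built from a mutex and a condition variable. It is deliberately unfair, so a continuous stream on one side can starve the other; a production version would need to address that:

```go
package eithermutex

import "sync"

// sideMutex admits any number of concurrent holders on side A (say,
// close) or side B (say, fork+exec), but never a mix of the two.
type sideMutex struct {
	mu   sync.Mutex
	cond *sync.Cond
	n    int // > 0: count of A-holders; < 0: -count of B-holders; 0: free
}

func newSideMutex() *sideMutex {
	m := new(sideMutex)
	m.cond = sync.NewCond(&m.mu)
	return m
}

func (m *sideMutex) LockA() {
	m.mu.Lock()
	for m.n < 0 { // wait out side B
		m.cond.Wait()
	}
	m.n++
	m.mu.Unlock()
}

func (m *sideMutex) UnlockA() {
	m.mu.Lock()
	m.n--
	if m.n == 0 {
		m.cond.Broadcast() // side B may proceed
	}
	m.mu.Unlock()
}

func (m *sideMutex) LockB() {
	m.mu.Lock()
	for m.n > 0 { // wait out side A
		m.cond.Wait()
	}
	m.n--
	m.mu.Unlock()
}

func (m *sideMutex) UnlockB() {
	m.mu.Lock()
	m.n++
	if m.n == 0 {
		m.cond.Broadcast() // side A may proceed
	}
	m.mu.Unlock()
}
```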
O_CLOFORK would require having fcntl F_SETFL/F_GETFL support for that bit too, and it would complicate fork a little more than it already is. An alternative that would be equally fine for us would be a "close all fd's above" or "tell me the maximum fd of my process" syscall. I don't know if a new bit or a new syscall is more likely.
I should maybe also note that macOS fixes this problem by putting #if 0 around the ETXTBSY check in the kernel implementation of exec. That would be a third option for Linux although probably less likely than the other two.
I've emailed linux-kernel@vger.kernel.org. Will reference an archive once it appears.
If they're unpersuaded, there are also the POSIX folks at The Open Group; they have a bug tracker.
linux-kernel mailing-list archive of post: https://marc.info/?l=linux-kernel&m=150834137201488
What's the plan here for Go 1.10?
@RalphCorderoy, looks like you never got a reply, eh?
Looks like Solaris, macOS, and OpenBSD have `O_CLOFORK` already. Hopefully it will catch on further.
I'm currently running into this (I think?) on Ubuntu, using Go 1.13.5, calling `ioutil.WriteFile` to write a binary, immediately followed by `exec.Command`. Is there a suggestion for the best way to detect this in user space? Stat the file until you don't get ETXTBSY?
A colleague pointed me to this bug in the context of a wider discussion about `O_CLOFORK`. When each fork is expected to proceed to exec (as is the case here), it is possible to solve the problem via open file description locks in 4 extra syscalls, without requiring any cooperation between threads.
The high-level algorithm for writing a file for execution is as follows:
1. open an fd with `O_WRONLY | O_CLOEXEC`
2. write into fd
3. place an open file description lock on the fd
4. close the fd
5. open a new fd with `O_RDONLY | O_CLOEXEC` (same path as step 1)
6. place an open file description lock on it
7. close the fd
If an fd opened in step 1 leaked to another process as a result of a concurrent thread issuing a `fork()`, we wait for it to be closed at step 6. An fd opened at step 5 may also leak, but won't cause ETXTBSY as it is open read-only.
The diff to the program shown in the opening comment would be just:
@@ -41,6 +44,20 @@ runner(void *v)
 			exit(2);
 		}
 		write(fd, script, strlen(script));
+		if (flock(fd, LOCK_EX) < 0) {
+			perror("flock");
+			exit(2);
+		}
+		close(fd);
+		fd = open(buf, O_RDONLY|O_CLOEXEC, 0777);
+		if(fd < 0) {
+			perror("open (readonly)");
+			exit(2);
+		}
+		if (flock(fd, LOCK_SH) < 0) {
+			perror("flock (readonly)");
+			exit(2);
+		}
 		close(fd);
 		pid = fork();
 		if(pid < 0) {
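For Go callers, the same dance might look like this (my translation of the diff, under the same assumptions). The key property is that `flock` locks attach to the open file description, which is what makes the shared-lock step block until every leaked copy of the writable descriptor is gone:

```go
package writefence

import (
	"os"
	"syscall"
)

// writeThenFence sketches the seven-step algorithm above. os.OpenFile
// opens with O_CLOEXEC on Unix, matching the algorithm's first
// assumption.
func writeThenFence(path string, contents []byte) error {
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o777)
	if err != nil {
		return err
	}
	if _, err := f.Write(contents); err != nil {
		f.Close()
		return err
	}
	// The exclusive lock lives on the open file description, so any
	// copy of this fd leaked into a forked child keeps holding it.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	// Reopen read-only and take a shared lock: this blocks until every
	// leaked writable descriptor (and its exclusive lock) is gone.
	g, err := os.Open(path)
	if err != nil {
		return err
	}
	if err := syscall.Flock(int(g.Fd()), syscall.LOCK_SH); err != nil {
		g.Close()
		return err
	}
	return g.Close()
}
```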
@amonakov Thanks for the comment. That is an interesting suggestion.
I guess that to make this work automatically in Go we would have to detect when an executable file is opened with write access. Unfortunately this would seem to require an extra `fstat` system call for every file opened for write access. That is not so great. Perhaps we could restrict it to only calls that use `O_CREATE`, as that is likely the most common case that causes problems.
But then there seems to be a race condition. The `fork` can happen at any time. If the `fork` happens after we call `open` but before we call `flock`, then it seems that the same problem can occur. In the problematic case the `fork` doesn't know anything about the file that we are writing. The problem is that the file is held open by the child process. Using the `flock` technique makes this much less likely to be a problem, but I don't think it completely eliminates the problem.
> ... make this work automatically in Go ...

I don't think that would work: permission bits could be changed independently after `close()`. In any case, my solution has two assumptions: that the file was opened with `O_CLOEXEC`, and that long-lived forks do not appear. For that reason I'd say it's not appropriate to roll it up into some standard function. It could live as a separate `close`-like function where the purpose and requirements could be clearly documented.
> But then there seems to be a race condition. The fork can happen at any time. If the fork happens after we call open but before we call flock, then it seems that the same problem can occur.

No, a forked child shares the open file description with the parent, so a later `flock` in the parent still affects it.
@amonakov Thanks.
For what it's worth, all files opened using the Go standard library have `O_CLOEXEC` set. And Go doesn't support long-lived forks, as `fork` doesn't work well with multi-threaded programs, and all Go programs are multi-threaded. So I don't think those are issues.
That said, personally I would not want to add a new API to close an executable file. That seems awkward and hard to understand. I'd much rather persuade kernels to support `O_CLOFORK`. Of course any particular program can use your technique.
Relatively recent (sad) thread re: linux O_CLOFORK: https://lore.kernel.org/lkml/20200525081626.GA16796@amd/T/
Thanks to reading the linux-kernel thread @spoerri mentions above, I see POSIX has added FD_CLOFORK and O_CLOFORK: https://www.austingroupbugs.net/view.php?id=1318
I see that some of the Linux kernel developers are pretty skeptical about the need for this; is anybody reading this issue able to point them to the problem described here? It's not a common problem but it's not at all specific to Go. Thanks.
Hi Ian, Yes, I had a go yesterday by telling the linux-kernel list about this Go issue and the Java one to show it wasn't system(3) specific and has a wide long-standing impact. Matthew Wilcox, who implies he's a Googler, has replied so far:
> The problem is that people advocating for O_CLOFORK understand its value, but not its cost. Other google employees have a system which has literally millions of file descriptors in a single process. Having to maintain this extra state per-fd is a cost they don't want to pay (and have been quite vocal about earlier in this thread).
Perhaps the first thing is to get agreement it's the kernel's issue to fix and then move on to an implementation with a cost they find acceptable. At the moment, the kernel seems to be faulty but fast.
Edited to add link: https://lore.kernel.org/lkml/20200525081626.GA16796@amd/T/#m5b8b20ea6e4ac1eb3bc5353c150ff97b8053b727
I observed what I believe is an equivalent race for pipes when investigating #36107.
The race can cause a `Write` call to a pipe to spuriously succeed (instead of returning `EPIPE`) after the read side of the pipe has been closed.
The mechanism is the same:
1. The parent creates the pipe and (with `syscall.ForkLock` held) sets it to be `CLOEXEC`.
2. The parent process calls `os.StartProcess`, which forks a child process. (The child process inherits a copy of the file descriptors for the pipe.)
3. Before the child process has reached its `exec`, the parent process closes the read side of the pipe. (However, the FD for that side remains open in the child process.)
4. The parent process calls `Write` on the write side, causing the kernel to buffer the write (since the pipe FD is still open in the child). The `Write` succeeds.
5. Finally, the child process reaches its `exec` call, closing its copy of the read FD and dropping the buffered bytes.
This race can be observed by running `os_test.TestEPIPE` concurrently with a call to `os.StartProcess`.
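A minimal sketch of that interleaving (my example, not from the thread; whether the `Write` succeeds depends on losing or winning the race with the child's `exec`):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	r, w, err := os.Pipe() // both ends are CLOEXEC, but fork still copies them
	if err != nil {
		panic(err)
	}
	cmd := exec.Command("sleep", "1")
	if err := cmd.Start(); err != nil { // child holds copies of r and w until its exec runs
		panic(err)
	}
	r.Close() // the parent's read side is gone...
	// ...yet this Write may be buffered and succeed instead of
	// returning EPIPE, because the child's inherited copy of the
	// read fd keeps the pipe alive until the exec.
	_, werr := w.Write([]byte("x"))
	fmt.Println("write error:", werr)
	cmd.Wait()
}
```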
Change https://go.dev/cl/458015 mentions this issue: os: clean up tests
Change https://go.dev/cl/458016 mentions this issue: os/exec: retry ETXTBSY errors in TestFindExecutableVsNoexec
> If the OS had a "close all fds above x", we could use that. (I don't know of any that do, but it sure would help.)

Since this was written, Linux and FreeBSD added a `close_range` syscall (about fall 2020 - Linux kernel 5.9, FreeBSD 12.2, should be identical semantics). The reason for `close_range` instead of `closefrom` (which also exists in a few OSes) is that if you want to have a child process inherit, say, fds 0, 1, 2, and 1000, you can do two syscalls instead of 998.

Assuming that the set of open fds is sparse, you can also do a pretty good job of this on many OSes, including older Linux versions, by using `/dev/fd`, `/proc/self/fd`, or equivalent. From a quick Google, https://github.com/cptpcrd/close_fds seems to be a pretty thorough attempt at finding the best OS-specific way.
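A hedged sketch of that two-syscall pattern (the helper and constant are mine; Go's frozen `syscall` package has no wrapper, so this invokes the raw syscall directly):

```go
package closerange

import "syscall"

// close_range(2) has carried syscall number 436 on all Linux
// architectures since it landed in kernel 5.9.
const sysCloseRange = 436

func closeRange(first, last uint32) error {
	if _, _, errno := syscall.Syscall(sysCloseRange, uintptr(first), uintptr(last), 0); errno != 0 {
		return errno
	}
	return nil
}

// keepFDs closes every fd except 0, 1, 2, and 1000 in two syscalls,
// per the example above, instead of 998 individual close calls.
func keepFDs() {
	closeRange(3, 999)
	closeRange(1001, ^uint32(0))
}
```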
Also, just to make sure I understand correctly - the idea here is that you would have some sort of process-wide two-sided lock where calls to `close` require a critical section around the `close` call alone, and calls to `fork` require a critical section on the other side from before `fork` until unwanted file descriptors are closed. (It's not exactly an rwlock because you can have multiple `close`s at once or multiple `fork`s at once, you just can't have one of each. Also note the lock has to work properly across `fork`.) And the reason a "close all fds above x" operation would help is that it would allow you to reduce the second critical section to be [`fork`, `close_range`] instead of [`fork`, `exec` triggering close-on-exec], which is helpful because `exec` could potentially be slow because of the filesystem, and also because if you're waiting for a self-pipe to close, it might get closed before other FDs are closed. Do I have all of that right?
By the way, there is one mildly exciting complication here, which is that closing the FD can also potentially be slow because of the filesystem. At least on Linux, closing a file descriptor calls the filesystem's flush operation on the underlying file description - even if there are other file descriptors open to that file description which will eventually be closed too! So it's possible that if thread A opens a file on a slow filesystem, thread B forks a child for exec, and thread A then closes the file, both thread A and the child will be slowed down. This isn't any worse than the status quo, to be clear, where the close happens on exec. But it is a really good argument in favor of `O_CLOFORK`, and that it should be implemented in the form where the FD is never copied to the new process's FD table in the first place, instead of actually being closed on fork.
(Another related subtlety - `close` can return errors! The file might fail to write for I/O reasons, or the filesystem might check quota only when the write is flushed. This can cause actual data loss if you assume it succeeded. On Linux, `close` returns the error from the filesystem's flush operation and I believe always gets rid of the FD, and I guess it's up to the filesystem whether a second flush on another FD to the same open file description also returns the same error, or if it's lost state at that point. And `close_range` and of course close-on-exec do not report errors back to userspace the way `close` does. So that's another really good argument for `O_CLOFORK` implemented as never-inherit, to ensure that errors from `close` successfully get to the right `close` call.)
> Note that vfork(2) is not a solution. Vfork is defined as the parent does not continue executing until the child is no longer using the parent's memory image. In the case of a successful exec, at least on Linux, vfork releases the memory image before doing any of the close-on-exec work, so the parent continues running before the child has closed the fds we care about.

The bigger problem is that `vfork` (at least on Linux and FreeBSD) only suspends the calling thread, and other threads continue to run, so it doesn't solve the problem at all. (If it weren't for that, I think `vfork` + `close_range` would avoid this problem, since the parent is still suspended during `close_range`.)
@geofft Thanks for pointing out `close_range`. Unfortunately I'm not sure how to use it in a fully reliable way, as Go programs can of course turn off the close-on-exec flag for a file descriptor. I think that what we want in the absence of `O_CLOFORK` is an efficient way to close all descriptors marked close-on-exec, while leaving the non-close-on-exec descriptors alone.
Change https://go.dev/cl/522015 mentions this issue: cmd/go: retry ETXTBSY errors when running test binaries
Change https://go.dev/cl/522176 mentions this issue: [release-branch.go1.21] cmd/go: retry ETXTBSY errors when running test binaries
Change https://go.dev/cl/560415 mentions this issue: cmd/go: avoid copying a binary to be exec'd in TestScript/gotoolchain_path
I've worked around this issue in a piece of code that writes a file to be executed shortly after, by locking `syscall.ForkLock` for reading while the file is open.

If my understanding is correct, the runtime always locks `syscall.ForkLock` for writing before forking, so locking it for reading before opening the file prevents "leaking" its file descriptor to forked processes, as the fork cannot happen while `syscall.ForkLock` is held for reading.

This is of course not a solution to the general problem, but it should be a viable workaround for code that can assume a file it is writing will only be executed later.
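Concretely, the workaround might look like this (my sketch, with the same caveat that `ForkLock`'s exclusive use around fork+exec is an implementation detail of the runtime):

```go
package writelocked

import (
	"os"
	"syscall"
)

// writeExecutable holds syscall.ForkLock for reading while the
// soon-to-be-executed file is open for writing, so that - assuming the
// runtime keeps taking ForkLock exclusively around fork+exec - no fork
// can duplicate the writable fd before it is closed.
func writeExecutable(path string, contents []byte) error {
	syscall.ForkLock.RLock()
	defer syscall.ForkLock.RUnlock() // no fork can happen before this

	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o777)
	if err != nil {
		return err
	}
	if _, err := f.Write(contents); err != nil {
		f.Close()
		return err
	}
	return f.Close()
}
```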
FYI, Linux 6.11 (just released today) has dropped the ETXTBSY "feature" entirely. The commit making this change links to this issue, and also has an argument that the security purpose of this feature didn't actually work. See also this 2021 LWN discussion of the issue and previous changes to reduce the cases that trigger ETXTBSY.
It'll be months/years before the distros that real-world users are actually using get to 6.11, but it's good to know that at least on Linux this issue will completely go away at some point.
Maybe we can also use this to argue to Oracle, Apple, and the BSDs that they should make the equivalent change.