golang / go

The Go programming language

Home Page: https://go.dev

cmd/go: 'go run' should run executables in module mode outside a module

eliasnaur opened this issue

#40276 implements go install path@version for installing a Go binary outside a module. I propose that the same support be added to go run, with equivalent behavior. That is,

$ go run gioui.org/cmd/gogio@d5bdf0756a5a

should build and run the gioui.org/cmd/gogio program at version d5bdf0756a5a.

Why isn't go install enough for my uses? Consider a README describing how to build and use an auxiliary Go program:

To build the XYZ Android app you need to use the gogio tool:

$ export PATH=$PATH:$GOPATH/bin
$ go install gioui.org/cmd/gogio@d5bdf0756a5a
$ gogio -target android example.com/cmd/xyz

The README has several issues:

  1. go install'ing the binary is reproducible, but running it isn't. For example, the user may already have an old gogio in their PATH and fail to run the go install command as instructed. Or they may run it, but later install a different version of gogio.
  2. go install pollutes the user's GOPATH/bin, and their PATH if it includes GOPATH/bin.
  3. If GOPATH is not set, the README has to contain hardcoded paths (~/go/bin).

In contrast, with go run path@version support, the README is reduced to just:

To build the XYZ Android app you need to use the gogio tool:

$ go run gioui.org/cmd/gogio@d5bdf0756a5a -target android example.com/cmd/xyz

cc @bcmills @jayconrod @ianthehat given previous discussion on this

Just to consider as an alternative:

example.com$ GOBIN=$(pwd) go install gioui.org/cmd/gogio@d5bdf0756a5a

example.com$ ./gogio
gogio: specify a package

That has its own problems (namely, $(pwd) is not portable), but it makes clear that for subsequent invocations the user should re-invoke the compiled binary rather than re-resolving and recompiling from upstream.

If we allowed go build to accept the @version syntax, then it could be made portable:

$ go build -o ./gogio.exe gioui.org/cmd/gogio@d5bdf0756a5a
$ ./gogio.exe

The go install approach is not portable because of the varying executable filename, and I think adding this support to go build would be a mistake; I would rather see us add a -o flag to go install if that is where we are going.
One of the use cases I find more interesting to talk about is writing reproducible generate lines:

  //go:generate go run golang.org/x/tools/cmd/stringer@v1.2.3 -type=Pill

What if you have several such invocations of the same tool, each with a duplicate @version modifier?

I omitted the go:generate use-case from this proposal because I think it's better to have such dependencies recorded in the go.mod file by using the idiom of _-importing the tool in a tools.go file.
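
For reference, the tools.go idiom mentioned here usually looks something like the minimal sketch below; the stringer import is only an example of a tool one might pin this way.

    //go:build tools

    // Package tools records tool dependencies via blank imports so that
    // 'go mod tidy' keeps them in the module's go.mod file.
    package tools

    import (
            _ "golang.org/x/tools/cmd/stringer"
    )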

Why would duplicate @version modifiers be a problem?

The _ import is bad because it causes the tool to modify the version selection of your main application, and also to pull things into your dependency graph that your binary does not actually depend on. It also causes the tools to affect each other, rather than being run with the versions the author has tested with. In general it is an acceptable hack while we don't have a better answer, but not a long term acceptable solution in my opinion.

cc @matloob as well.

Personally I'm in favor of go run supporting this with the same semantics and restrictions as #40276. This came up a few times in the discussion of #40276. Let's set aside go build.

The only technical barrier is that we'd need to cache linked binaries with a different eviction policy than compiled packages. Currently, we don't cache linked binaries at all.

Other than that, it's just a question of CLI design and impact to the ecosystem.

Why would duplicate @version modifiers be a problem?

I was referring to having to update all versions if you want a newer version of the tool. Maybe that's not too bad.

The _ import is bad because it causes the tool to modify the version selection of your main application, and also to pull things into your dependency graph that your binary does not actually depend on. It also causes the tools to affect each other, rather than being run with the versions the author has tested with. In general it is an acceptable hack while we don't have a better answer, but not a long term acceptable solution in my opinion.

Good points. In an ideal world, go:generate dependencies should be recorded in go.mod, but the downsides you point out seem to outweigh the advantages.

I think if there was a reasonably common pattern of //go:generate go run package@version args... we could easily write tooling to maintain those lines separately if it turns out to be needed.
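
To make that concrete, here is a rough sketch of what such tooling could look like (purely hypothetical, not an existing or proposed tool): it scans one Go file for go run directives of that shape and reports the pinned versions, which are exactly the lines a version-bumping tool would rewrite.

    // listgen is a hypothetical sketch: it lists every
    // '//go:generate go run pkg@version ...' directive in a Go file,
    // i.e. the lines a version-maintenance tool would need to update.
    package main

    import (
            "bufio"
            "fmt"
            "log"
            "os"
            "regexp"
    )

    var directive = regexp.MustCompile(`^//go:generate go run (\S+)@(\S+)`)

    func main() {
            if len(os.Args) != 2 {
                    log.Fatal("usage: listgen file.go")
            }
            f, err := os.Open(os.Args[1])
            if err != nil {
                    log.Fatal(err)
            }
            defer f.Close()

            sc := bufio.NewScanner(f)
            for line := 1; sc.Scan(); line++ {
                    if m := directive.FindStringSubmatch(sc.Text()); m != nil {
                            fmt.Printf("%s:%d: %s pinned at %s\n", os.Args[1], line, m[1], m[2])
                    }
            }
            if err := sc.Err(); err != nil {
                    log.Fatal(err)
            }
    }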

In general it is an acceptable hack while we don't have a better answer, but not a long term acceptable solution in my opinion.

The argument about dependencies being varied to versions not tested by the author might equally apply to any third party library you are using, so that doesn't sway the argument for me. The fact that, under such a scheme, we would have multiple sites at which to maintain tool versions is a real problem, however, because use of these tools is by no means limited to go:generate directives (scripts, for example).

There is a significant difference between a library you want to include in your code that shares dependencies with other libraries in your graph, and a binary you want to run exactly as the author intended it to be run. The module story has been very focused on the former (for good reason), and the existing approaches have not left people happy with the results for the latter, which is one of the reasons we have talked about these kinds of changes.

I am mostly uninterested in scripts or makefiles because I think they already have all the tools they need; the PATH and install hacks are good enough for those cases. It might not be beautiful and may need some extra lines, but I don't find that a big deal.

The argument about dependencies being varied to versions not tested by the author might equally apply to any third party library you are using, so that doesn't sway the argument for me. The fact that, under such a scheme, we would have multiple sites at which to maintain tool versions is a real problem, however, because use of these tools is by no means limited to go:generate directives (scripts, for example).

I also see a significant difference between depending on libraries and installing a released and versioned binary.

Depending on a library means to me taking ownership of how the library fits into your dep graph and sufficiently testing your code to be confident in the potentially unique dep graph.

I never want to modify a released binary's deps based on independent local code I'm developing. If I need to modify the deps of a released binary, I'm either in the process of forking or contributing upstream to the project directly. I want control over the deps of code I am currently authoring only. If I am building a main package outside of my current project, then I want a universally reproducible artifact as much as possible.

This may be a duplicate of #33518

The upside of using _ imports for tools is extra dependabot notifications when it's time to update your linter or code generation tools.

I'm afraid //go:generate go run package@version … will open the door for extra inconsistencies and too much flexibility: I don't really like the idea of using multiple versions of the same tool in a single project/module, or the extra headache of keeping these versions in sync. So, running the tool version defined in go.mod by default is probably a better way to go.

Also, the whole story about gobin, go install, and now go run for tools has one big downside: not all tools are written in Go, and it would be better to have a more general (non-Go-specific) way to express tool dependencies and run the required tool version. But this is probably off-topic here.

We discussed this for a bit in the golang-tools session this week.

Personally, I'm in favor of this proposal, and it sounded like most others on the call were as well. The main point was that go run pkg@version should build the same binary that go install pkg@version would build. The only difference would be that go run executes the binary instead of writing it to GOBIN. There would be no differences in the semantics used to resolve versions or in the restrictions on replace directives.

Before we move forward though, I think two things need to be resolved:

  1. We need to have firm agreement on what will be done with replace for go install pkg@version and go run pkg@version. This was discussed at length in #40276. In 1.16, go install will report an error if any replace directive is present in the go.mod file of the module providing the named packages. This eliminates ambiguity, and it leaves the door open for other behaviors. Our experience in 1.16 will inform what we do later on: keep the new behavior, ignore replace directives, or apply some replace directives but not others.
  2. We'll need to change the build cache eviction algorithm. Binaries linked with go run should be cached for a little while so repeated go run commands don't need to re-link.

Our experience in 1.16 will inform what we do later on: keep the new behavior, ignore replace directives, or apply some replace directives but not others.

I agree with pretty much everything you said, but I should also say that go run pkg@version with the current "no replaces" semantics would still be very useful with a lot of modules. So even if we can't figure out how to advance what to do with replace directives, I still think it's worth teaching go run this new behavior in 1.17.

We have to date resisted caching executables specifically to avoid turning go run into some kind of binary management system. It's a little unfortunate to be trending that way.

Jay asked me to elaborate on my previous comment in person, but I thought I would do so here too.

Generally speaking, walled gardens are less powerful than open platforms. Plan 9 was a walled garden - it couldn't run many Unix programs that needed various system calls. Inferno was a walled garden - it couldn't even run non-Limbo programs. Even WSL is a walled garden of sorts: the Linux programs you run inside it can't easily invoke the Windows programs outside it. When you run VMware, your VM is a bit of a walled garden, the same way. In each of these cases, there's a good reason for the wall - things are simpler inside in some way - but the cost is isolation and a loss of interoperability.

Go aims to interoperate well with the surrounding operating system, explicitly not making its own walled garden. This is why, for example, when we added io/fs, we did it with an explicit FS interface that you have to use to access virtual files. An obvious extension would be to let you say things like Mount(zipfile, "/myzip") and then have os.Open("/myzip/file") open a file inside the zip file. That's all well and good inside the Go process memory, but then what happens when you try to run exec.Command("grep", "thing", "/myzip/file")? Grep can't find the file. But it worked with os.Open?! Now there's a wall there, and grep is outside the wall. Operating systems already provide a file system. If Go replaces the concept of "the file system as defined by the OS" with "the file system as extended by Go", then that makes a wall. And it's true for anything, not just the file system.
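
To make the explicit-FS point concrete, here is a minimal sketch (the archive name and entry are invented): a file inside a zip archive is reachable only through an fs.FS value inside the Go process, never through an OS path that an exec'd grep could see.

    package main

    import (
            "archive/zip"
            "fmt"
            "io/fs"
            "log"
    )

    func main() {
            // *zip.ReadCloser implements fs.FS (since Go 1.16).
            zr, err := zip.OpenReader("myzip.zip")
            if err != nil {
                    log.Fatal(err)
            }
            defer zr.Close()

            // The entry is visible only through this fs.FS value inside
            // the Go process; an exec'd program like grep has no path to it.
            data, err := fs.ReadFile(zr, "file")
            if err != nil {
                    log.Fatal(err)
            }
            fmt.Printf("%s", data)
    }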

Operating systems already also have a concept of which programs are installed and can be run. There's $HOME/bin, $PATH, apt-get, and so on. "go install" plays nicely with that world by writing Go executables to $HOME/bin, where they can be run by any program, not just Go programs.

Consider special-casing "go generate" so that you can list a Go program there as the thing to run:

//go:generate golang.org/x/tools/cmd/stringer ...

(This has been proposed in the past.) If we made this work, it would look like a Unix command line but would actually be "the command line as extended by Go". You can do that, but you can't do:

//go:generate time golang.org/x/tools/cmd/stringer ...

or replace time with strace, or whatever else. It's another wall. We've declined that proposal in the past: the operating system should be in charge of providing programs available to run, and Go should use that definition directly, not extend it.


With that context, go run program@version seems to me to be creeping up close to the line of creating a wall. It doesn't quite cross the line, but it essentially replaces the standard operating system mechanisms of $PATH and $HOME/bin, apt-get, homebrew, and so on, with this alternate command distribution mechanism. And that mechanism only works for Go programs. You can't put a Rust program there. (In that sense, it does actually cross the line.)

The counter-argument is that at least "go run program@version" is a real Unix command, so that //go:generate go run stringer@version is not breaking the "execute a command" rule. And of course that we are the Go program so why shouldn't we make it easier to run Go programs than (say) Rust programs?


Running executables like this makes Go start to supplant apt-get, homebrew, etc. Of course, the counter-argument is that "go install" is already doing that a little, so "go run" is just "go install + exec" and we've already burned all those bridges.


All this is to say that this is a pretty borderline decision. I'm not inclined to refuse it if there is a strong consensus to do it, but I want us to go in with our eyes open about implications.

We should also understand whether we are intending to only apply to go run p@v, or if we're going to accept go test p@v, go build -o myexe p@v, and so on.

I don't oppose those arguments, but at the same time I think the go:generate problem needs a solution in the Go toolchain itself. Otherwise it's a chicken-and-egg situation.

so "go run" is just "go install + exec" and we've already burned all those bridges.

I hadn't thought of it this way, but I agree. go install already competes with the system's way to install programs. And, personally, I think that's fine. Quite often when I use go install it's because I want to install a different/newer version than what's available on my system, for example, and I don't think there's anything wrong with that.

if we're going to accept go test p@v, go build -o myexe p@v, and so on

My personal opinion is "no", and we could always reconsider in the future if someone has a compelling use case.

With that context, go run program@version seems to me to be creeping up close to the line of creating a wall. It doesn't quite cross the line, but it essentially replaces the standard operating system mechanisms

I think we're missing a particular distinction here. program is too coarse, because it does not specify a version. This causes problems for go:generate, for example. package-manager install -version=... program && program is never going to be portable. go run program@version args... is limited to Go software, but thanks to its module and build philosophy, it's quite powerful.

... is limited to Go software, but thanks to its module and build philosophy, it's quite powerful.

Absolutely, I just want us to go in knowing that we're putting up a bit of a wall around that power.

Operating systems already also have a concept of which programs are installed and can be run. There's $HOME/bin, $PATH, apt-get, and so on. "go install" plays nicely with that world by writing Go executables to $HOME/bin, where they can be run by any program, not just Go programs.

Running executables like this makes Go start to supplant apt-get, homebrew, etc. Of course, the counter-argument is that "go install" is already doing that a little, so "go run" is just "go install + exec" and we've already burned all those bridges.

"go run p@v" is not just "go install + exec p@v": "go run" doesn't tamper with the user's global configuration by writing to $HOME/go/bin. And that's exactly what I want: "run this program at that reproducible version, without installing it". I'd love for my operating system to provide this functionality, but it doesn't, let alone portably across OS'es.

Also, apt-get, homebrew etc. can't run programs, only install them. "go install p@v" is replacing software installers, not the proposed "go run p@v".

I suppose I don't understand your argument, or perhaps why it applies to this proposal. If you want to argue that we're replacing OS software installers, that blame seems to me entirely caused by "go install p@v", and not "go run p@v". Or maybe even the "go get" command itself?

OK, so let me try to restate the proposal. The idea is that you run

go run path@version

and it fetches path@version, builds, and runs it. And of course that's cached so that the second time it only links+runs or maybe even just runs.

What happens if you are inside a module? Does go.mod get consulted? Or does this form always bypass go.mod?
I guess the same as go install path@version?
But it occurs to me I don't know what the answer is there either.

/cc @bcmills @jayconrod @matloob

What happens if you are inside a module? Does go.mod get consulted? Or does this form always bypass go.mod?
I guess the same as go install path@version?

This form would always bypass (and never update) the current module's go.mod and go.sum, same as go install path@version.

What happens if you are inside a module? Does go.mod get consulted? Or does this form always bypass go.mod?
I guess the same as go install path@version?

Right, same semantics as go install path@version in #40276. It would run in module mode, ignoring go.mod and go.sum in the current directory if present. There would be no main module, and it would be an error for the module providing the package named by path to have go.mod directives that would cause it to be treated differently if it were the main module (e.g., replace).
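
To illustrate those semantics (the module path and version here are only examples, not part of the proposal):

$ cd myproject        # myproject has its own go.mod and go.sum
$ go run golang.org/x/tools/cmd/stringer@v0.1.0 -type=Pill

The tool is resolved and built from its own module's go.mod at the requested version; myproject's go.mod and go.sum are neither consulted nor modified.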

Based on the discussion above, this proposal seems like a likely accept.
— rsc for the proposal review group

No change in consensus, so accepted. 🎉
This issue now tracks the work of implementing the proposal.
— rsc for the proposal review group

Change https://golang.org/cl/310074 mentions this issue: cmd/go: support 'go run cmd@version'

Change https://golang.org/cl/310410 mentions this issue: cmd/go: move 'go install cmd@version' code into internal/load

Change https://golang.org/cl/310829 mentions this issue: cmd/go: fix mod_install_pkg_version

Change https://golang.org/cl/314050 mentions this issue: cmd/go/internal/load: treat packages with errors as potentially main packages

Change https://golang.org/cl/317300 mentions this issue: cmd/go: include packages with InvalidGoFiles when filtering main packages

Shouldn't this be added to the release notes for Go 1.17?

@DmitriyMV, good catch. I've filed #46687 for that, and I'm going to close out this issue because I think the implementation proper is complete.