cmd/go: offer a consistent "global install" command
mvdan opened this issue · comments
As of 1.12, one can run the following from outside a Go module:
GO111MODULE=on go get foo.com/cmd/bar
The same mechanism can be used to download a specific version instead of @latest, such as @v1.2.3.
We can even emulate very similar behavior in Go 1.11, with a one-liner like:
cd $(mktemp -d); go mod init tmp; go get foo.com/cmd/bar
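A slightly more careful variant of the same workaround, wrapped in a shell function so it also cleans up after itself (the helper name goget_global is invented for this sketch):

```shell
#!/bin/sh
# Sketch of the Go 1.11 workaround above as a reusable helper.
# The name "goget_global" is invented for this example.
goget_global() {
    pkg="$1"
    tmp="$(mktemp -d)" || return 1
    # Build in a throwaway module so the caller's go.mod is untouched.
    (
        cd "$tmp" &&
        go mod init tmp >/dev/null 2>&1 &&
        GO111MODULE=on go get "$pkg"
    )
    status=$?
    rm -rf "$tmp"    # remove the temporary module
    return $status
}
```

For example, goget_global foo.com/cmd/bar installs the binary into $GOPATH/bin without adding anything to a nearby go.mod or go.sum.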
1.13 will likely make GO111MODULE=on the default, so it's likely that project READMEs will be able to just tell users to run go get foo.com/cmd/bar on Go 1.13 and later.
However, this has a problem - if the user runs the command from within a Go module, the command will add clutter to the module's go.mod and go.sum files. The binary will also be installed as usual, so the user might not even notice the unintended consequences until much later.
This is a fairly common point of confusion among Go developers, particularly those new to modules. Now that most Go projects are modules, the chances of one running cmd/go commands within a module are quite high.
What we need is a "global install" command, which will ignore the current module. For example, imagine a go get -global, to ensure backwards compatibility. Or perhaps even repurposing go install to always mean global installs, since go get can be used for non-global ones.
I think we should make a decision and implement it before the 1.13 release, so that READMEs can finally drop the problematic go get -u foo.com/cmd/bar line. We can almost do it now, minus this confusing edge case with $PWD.
CC @jayconrod
cc @ianthehat given the recent golang-tools related conversations too.
I think we should repurpose go install for this when GO111MODULE=on. Since the build cache is required now, it doesn't seem like there's much use for go install anymore, especially when the target is a main package not in the build list.
I agree that repurposing go install for this would be ideal in the long run, as right now go get and go install seem to overlap a bit too much. But we'd need to document and announce the new behavior of go install well, as this would be a breaking change for some users.
I think it is fairly easy to make a case for go install module/binary@version doing exactly what the user would expect (not modifying your local go.mod, and reliably building that specific version no matter what the local directory/state is).
I don't think it would even be a breaking change if that is all we did (at the moment, specifying a version would not be valid).
The harder things are at the edges: things like adding a way of doing that work and then running the resulting binary (hard to do well with go run, and I am not sure we should even try), or whether the behaviour of go install when not given a version should be changed to avoid any confusion.
From what I have observed, as well as from my personal usage, this would be very useful regardless of how it is spelled (go install, go get -global, ...).
If it is implemented, I think it would be important to use the remote go.mod if one exists, including respecting any replace and exclude directives in that remote go.mod. In practice, authors will need to use replace or exclude in some cases, even though hopefully it will not be the common case. One of the main goals of modules is 100% reproducible builds, and that seems especially important when publishing a binary.
At this point, personally I would vote for go install.
Regarding this:
I think it is fairly easy to make a case for go install module/binary@version ...
The harder things are at the edges, things like ... whether the behaviour of go install when not given a version should be changed to avoid any confusion.
If go install module/binary@version takes on the behavior suggested by this proposal, then I think go install module/binary would need to be equivalent to go install module/binary@latest (including for consistency with go get foo vs. go get foo@latest behavior).
A related question -- should go get some/cmd and go get some/cmd@v1.2.3 stop installing binaries? That would be a more radical change and would reduce overlapping behavior. One could argue that might be desirable if starting from scratch, but given the number of install instructions out there that say go get some/cmd or go get -u some/cmd, I suspect it would be preferable to let go get keep installing binaries.
I also support go install being for global binary installs to GOPATH, and go get only for use inside a module path.
Modules bring some breaking tooling changes, and it seems like now is the time to make a few more; the cost is worth it since this makes the tooling much easier to understand.
I support this suggestion.
One question: would this re-purposed go install command respect replace and exclude directives in the installed module? I've argued before that it should, and I still think that's the case.
To take one example, it wouldn't be possible to build a correctly working executable for github.com/juju/juju/cmd/juju without respecting replace directives.
I think the results should be identical to as if you had checked out the module at the supplied version to a temporary directory, changed to that directory, and done go install ., so yes, fully obeying the go.mod with replace and excludes applied, not treating it as a dependency.
Something that we might want to be careful about is go install commands which are given local packages, and not remote ones. For example, I presume that go install . or go install ./cmd/foo should still use the current module.
Yes, this is one of the reasons why I am dubious about making any changes to non versioned forms, and thus don't agree that an implied @latest is the right choice.
On the other hand, I am slightly worried that the radical change of behavior adding a version causes might be confusing, but I think it's probably ok.
@mvdan I suspect I am missing something obvious… But could the rule be to use the go.mod file (including respecting replace and exclude) based on wherever the "package main" is coming from? In other words, use the remote go.mod file if installing a remote command, and use the local go.mod file of the module you are in currently if doing something like go install .?
Trying to distill the discussion so far into a proposal:
We would change the behavior of go install [modules]. [modules] is one or more module patterns, as currently interpreted by go get. go get itself would not change. Each module pattern has an optional version query (@latest if no query is specified). Each module pattern will be handled separately.
We change behavior if:
- go install is run in module mode, AND
- The module pattern being installed is NOT any of the following:
  - Empty (equivalent to .)
  - A local path without a version (e.g., ./cmd/foo, ./...)
  - A metapackage (all, cmd, std)
  - Anything matching packages in the standard library.
If the conditions above are true, go install will:
- Download the module at the specified version.
  - Patterns like @latest or @v1 will be interpreted the same way that go get interprets them.
- Build packages and executables matching the pattern, treating the downloaded module as the main module. replace and exclude directives will be observed. vendor directories will be used if they would be used normally, e.g., if -mod=vendor is set. After #30240 is implemented, vendoring would be used by default (as if the main module were the installed module).
- Copy packages and executables to their target locations, i.e., the path indicated by go list -f '{{.Target}}'. This will normally be $(go env GOPATH)/bin or $(go env GOPATH)/pkg.
go install will not modify the go.mod or go.sum files in the module where go install is invoked. Version constraints in the module where go install is invoked will be ignored.
Examples:
- These commands will not change behavior:
  - Any variant of go get.
  - go install
  - go install cmd
  - go install ./cmd/foo
  - go install ./...
  - go install main.go
- These commands will change when run in module mode:
  - go install golang.org/x/tools/packages
  - go install golang.org/x/tools/cmd/goimports
  - go install golang.org/x/tools/cmd/goimports@v1.2.3
  - go install golang.org/x/tools/cmd/...
  - go install golang.org/x/tools/cmd/...@v1.2.3
  - go install ./cmd/foo@latest
- An error message would be printed if go install is run in GOPATH mode with an argument that includes an @ character, same as go get.
I think a key question is whether go install with a non-local path and no version should behave differently than a non-local path with an explicit version. For example, should go install golang.org/x/tools/cmd/goimports work differently than go install golang.org/x/tools/cmd/goimports@latest?
I'd argue the form with no version should be consistent with other module-related commands. For a new user, it would be strange to accidentally run the former command instead of the latter, then see it take your build constraints into account and possibly modify your go.mod file. This would be a change from current behavior. If we require explicit versions, we'd only be adding new behavior.
I think go install remote.org/pkg should behave just as if @latest had been given. That's more consistent, and like you say, less confusing to users.
@rogpeppe @ianthehat The trouble with applying replace directives is that it isn't feasible to apply them consistently. We could apply replacements that specify other modules, but not filesystem paths: the replacement module must have its own go.mod file, and as such will not be found within the same module source tree in the module cache.
That means, for example, that we wouldn't be able to fetch the module from a proxy.
The same consideration holds for vendor directories: if we implement #30240, then the go.mod files in the vendor directory will cause essentially all of the source files in that tree to be pruned out of the module cache.
We could, at least in theory, apply the subset of replacements that specify module paths rather than filesystem paths. However, that would produce a third variation on the build configuration: one that doesn't necessarily match the clean-checkout configuration (because it doesn't apply filesystem replacements), but doesn't necessarily match the clean-external-build configuration either (because it does apply non-filesystem replacements).
As another alternative, we could apply module replacements and emit an error if the module has any filesystem replacements. That would produce the same result as a build within the module, but at the cost of rejecting some otherwise-well-formed modules.
I don't think that the benefit of enabling temporary fixes and workarounds offsets the complexity of adding either of those modes of operation. Users have enough trouble understanding module-mode behavior as it is. It seems much simpler to require that modules build successfully using unmodified dependencies, even if that results in a bit of extra work for the maintainers of large modules to upstream their modifications or maintain a complete fork.
As I said before, the rule should be the tool is built exactly as if you had fetched it from a proxy, extracted it to a temporary directory, changed to the directory, and typed go install. I think possibly we should also specify the readonly flag, but that's a separate discussion.
This will apply replace directives in a fully consistent way, with no special rules of any kind required.
More importantly, it will attempt to build exactly the same binary that you would build if you checked out the repository, which is a consistency that is far easier to explain.
It may well fail to work if someone has checked in a bad go.mod, but I am fine with that; people should not be checking in go.mod files with replace directives that are not self-contained anyway, and if they do, then go install module/binary@version will stop working for them.
I have also said that I strongly disagree with #30240, and I agree this is one more way it will cause problems if we do implement it, but it merely causes a case that does not currently work to still not work, it's not really an argument for not doing this.
"Repurposing" go install seems like a non-starter to me. It has a defined meaning, and that meaning is not "pretend we're not in the current module". We can't break that. Similarly, go get is by design the only command that accepts @version. If go install adds it, everything has to add it (go build, go test, go vet, etc.), and that way lies incoherence.
Half-applying replace directives also seems like a non-starter to me. It would be a new half-way mode that has similar coherence problems.
"Repurposing" go install seems like a non-starter to me. It has a defined meaning
That's fair enough, but aren't go install and go get practically the same? Now that install also fetches source code, that is.
This is where the idea of slightly changing the meaning of one of them comes from, to keep both commands useful. We could always add a flag like go get -global, but I find that a bit clunky, since installing a Go program on the system is a fairly common need.
@bcmills and I talked a bit about various "auto-detect" kind of ways to shove this into the go command and didn't come up with anything palatable. I could see adding a short flag to go get to support this, like maybe go get -b for binary or bare, but the first question to answer is what "this" means.
- Does it mean "run as if in a directory outside any module?"
- Does it mean "run using the upgrades implied by the current module's go.mod but just don't write any changes back down to go.mod?"
- Does it mean something else?
@mvdan Whether they are practically the same doesn't really matter. What matters is that they have established semantics that can't just be redefined.
Does it mean "run as if in a directory outside any module?"
This is what I'd imagine.
What matters is that they have established semantics that can't just be redefined.
Wouldn't the same have applied to go install or go build suddenly fetching code in module mode? That was hidden behind GO111MODULE=on, but still, it seems to me like it changed the established semantics somewhat. Similar to what's been suggested here, in my mind.
Regarding the concern about filesystem paths in replace directives in a remote go.mod, I think it could be reasonable to reject those with an error if someone does go get -b (or perhaps alternatively, reject any filesystem-based replace directives in a remote go.mod that go outside the module).
If the result is an error, does that avoid the concern with a new half-way mode?
Some alternatives for what go get -b could mean:
- go get -b could mean "run as if you cloned the repo and ran go install from within the repo."
- Alternatively, go get itself when run outside of a module could be redefined in 1.13 to respect replace and exclude directives in a remote go.mod if one exists. This was considered for 1.12 (e.g., one variation suggested by @bcmills in #24250 (comment)) but was not implemented for 1.12. If that was implemented for 1.13, then go get -b could mean "run as if in a directory outside any module" (which implies respecting replace and exclude directives in a remote go.mod, given that is what go get would do when outside of a module).
I wouldn't be surprised if one or both of those formulations are not precise enough, but wanted to send some grist towards the mill.
Something also worth mentioning is that the proposal was never about obeying replace directives. I think it would be easier for everyone to focus on the issue that is the current directory where one runs go install.
Once we have some form of a "global install" command, we can open a separate issue about what to do with replace directives.
After speaking with @bcmills yesterday and reading through the comments here, I think we should ignore replace directives entirely. To recap: file system replace directives will almost always point outside the module, probably somewhere within the repository. GOPROXY will be the common case in the future, so we won't actually download anything outside a module most of the time. Consequently, file system replace directives usually won't work. If we only observed module replace directives (ignoring or rejecting file system directives with an error), this would introduce a new build mode. Builds could fail in this mode but not when replace directives are completely ignored or observed. We shouldn't ask authors to test in this additional mode.
Also, I agree with @rsc's point about go install semantics. This would be too big of a change.
So maybe we can reach consensus on the following:
- The new behavior would be with go get -b, only with modules enabled. go get -b would print an error in GOPATH mode.
- go get -b <pkg>@<version> would behave as if you ran go get <pkg>@<version> from an empty directory with no go.mod or other root in any parent directory. It would ignore requirements, excludes, and replacements from the current module (if there is a current module).
Semantics aside, go get -b doesn't seem right. It's cryptic without being mnemonic.
It's not obvious what "b" stands for. That makes it hard to intuit what it means the first time you see it. That, in turn, makes it hard to remember later on, especially if it's been a while since you've used it.
Of course, if you use it enough, you'll memorize it, but not everyone is going to need to use this often enough to memorize it. And, if you have to consult the docs, a clearer name is going to stand out, so you can jump directly to the relevant section instead of having to scan until you find it.
While this may not be an uncommon thing to do, I don't believe it is something that is so common that eliminating a few keystrokes will add up to any meaningful savings for the average user.
Something like go get -global seems preferable, given that repurposing go install is out.
So the command we want is one that given a versioned import path, installs the binary built exactly at that version.
The primary use case is installing tools used during development, which is why it is important that it works even within a module and does not modify that module in any way.
Assuming we agree on that part, looking at the help of the two commands:
usage: go install [-i] [build flags] [packages]
Install compiles and installs the packages named by the import paths.
usage: go get [-d] [-m] [-u] [-v] [-insecure] [build flags] [packages]
Get resolves and adds dependencies to the current development module
and then builds and installs them.
I think it is very clear that it matches exactly the help of go install and totally does not match the help of go get, so I would argue that it does not repurpose go install. This is specifically why I am arguing we should not change any existing working go install command: no inferring of latest or anything; we just add some way for go install to allow versioned imports. I would be fine with adding a special flag to go install if we need to make the use case clearer than just adding a version to the import path, but I really think it fits naturally within the scope of go install.
I think the first thing to do, though, is to reach agreement about the operations we need and what their semantics should be; once we are sure of that, we can discuss the command line that implements those.
Change https://golang.org/cl/169517 mentions this issue: [dev.boringcrypto] misc/boring: add go1.12.1b4 and update build scripts
@ianthehat I think you are on the right track. For example, here is a Makefile for a project which I am working on:
https://github.com/libopenstorage/stork/blob/master/Makefile#L41
In that Makefile, we build a binary called stork. However, there are also rules in that Makefile which install tools such as golint, gosimple, and errcheck.
What I need is the following:
- I want GO111MODULE=on to be unconditionally set all the time
- I need a command that I can run inside this directory where go.mod and go.sum exist, and I need the command to be able to install golint, gosimple, and errcheck
- When doing 2., I do not want to update go.mod or go.sum
When I submitted a patch to libopenstorage/stork to convert it to go modules, I did this:
https://github.com/libopenstorage/stork/pull/296/files#diff-b67911656ef5d18c4ae36cb6741b7965R54
lint:
(cd /tmp && GO111MODULE=off go get -v golang.org/x/lint/golint)
I don't know if this approach is correct, but it was the only way I could figure out how to get this to work with go 1.11 and go 1.12
After reading:
- #25624 (comment)
- #25922 (comment)
- https://github.com/golang/go/wiki/Modules#how-can-i-track-tool-dependencies-for-a-module
It looks like the recommended approach is to:
- Create a file tools.go which exists in the same directory as go.mod
- In tools.go, put something like this (note that the blank line after the build constraint is required):

// +build tools

package tools

import (
	_ "golang.org/x/tools/cmd/stringer"
)

- In your Makefile, put something like this:

go install golang.org/x/tools/cmd/stringer

So the dependency on the tool is specified in tools.go.
So the dependency on the tool is specified in tools.go.
That is exactly what we don't want to do. Please read the first proposal text in this thread.
However, this has a problem - if the user runs the command from within a Go module, the command will add clutter to the module's go.mod and go.sum files. The binary will also be installed as usual, so the user might not even notice the unintended consequences until much later.
That is exactly what we don't want to do.
This issue is indeed about a "global install", but I think there is a more general issue that perhaps needs to be created to capture all of the situations we need to cover. Reason being, @ianthehat has suggested on Slack that it might be that all "tooling" cases actually collapse down to a single tool/command, and so capturing the wider use cases/problems might be of some benefit.
I am not proposing to repurpose this issue, rather I'm just going to use this comment to record the gist of the conversation on Slack.
Quoting @ianthehat
The concrete use cases people have suggested so far are:
- go code generators (often from go generate lines) which must be versioned at possibly per invocation granularity
- code analysis tools (the kinds of things we are trying to merge into gopls, an also gopls itself of course) which may need to be versioned per project
- System wide installs of binaries, for the kinds of instructions people put on their wiki
I’ll add a use case that might not be well-covered so far: go-fuzz. (I’m trying to write installation instructions using modules right now and having a hard time.)
Here’s what go-fuzz needs right now:
- Install two binaries, both from the same module (was: “use go get/install twice”)
- Make sure cmd/go (really go/packages) can find support packages inside that module, which are used by go-fuzz-build (was: “go get puts go-fuzz in GOPATH, where it can be found easily”)
This issue as discussed would help with the former, but I don't know what to do about the latter. The obvious options don't seem to work very well:
- Manually add a require in the user go.mod, even though strictly speaking there is no dependency present
- Somehow have go-fuzz-build memorize its path on disk when installed (but how? and how to get cmd/go to use it?)
- Use codegen to embed the support package into go-fuzz-build and use go/packages overlays (ugh, plus IIUC this doesn’t work because you can only overlay files, not entire packages)
I’m happy to change how go-fuzz operates, but at the moment I don’t see how without a GOPATH install. Hopefully I’m missing something. :)
cc @dvyukov and xref dvyukov/go-fuzz#234 (comment)
Hi @josharian, is the scenario you are asking about:
A. You are looking to move away entirely from creating a temporary gopath directory and setting GOPATH as part of the go-fuzz-build process?
vs. maybe:
B. You want to continue setting GOPATH to point to a temporary gopath directory created by go-fuzz-build, but you want to use go.mod to pick the versions of external dependencies of the go-fuzz and go-fuzz-build binaries, including golang.org/x/tools/go/packages? And then the related question is how go-fuzz-build finds the right versions of support packages like github.com/dvyukov/go-fuzz/go-fuzz-defs to copy into that temporary gopath directory?
vs. maybe something else?
"A" is a more complicated question. It also might be a non-trivial amount of work to support users who want to fuzz non-module aware code if there is no GOPATH used to build in that scenario.
"B" is a variation of "how can an installed binary find portions of code from its own module", I think? Focusing on that as the question for remainder of this comment:
I think you outlined some options above. Another option might be: if go-fuzz-build knows its own version, go-fuzz-build could do something like the following when executed:
- Create a temporary module (e.g., something like cd $(mktemp -d) && go mod init tempmod; this is to avoid dirtying the user's go.mod)
- From within the temporary module, do go get -m github.com/dvyukov/go-fuzz@<version> (or manually add the equivalent require to the temporary go.mod), then do go list -f '{{.Dir}}' -m github.com/dvyukov/go-fuzz to get the on-disk location of go-fuzz in the module cache in GOPATH/pkg/mod, or alternatively use go/packages at that point.
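The two steps above can be sketched as a shell helper (the function name module_dir is invented here, and go get -m mirrors the Go 1.12-era flag used in this discussion):

```shell
#!/bin/sh
# Sketch: resolve a module@version to its directory in the module cache
# (GOPATH/pkg/mod) without touching the user's go.mod. The helper name
# "module_dir" is invented for this example.
module_dir() {
    mod="$1"; ver="$2"
    tmp="$(mktemp -d)" || return 1
    (
        cd "$tmp" &&
        go mod init tempmod >/dev/null 2>&1 &&
        go get -m "$mod@$ver" >/dev/null 2>&1 &&
        go list -f '{{.Dir}}' -m "$mod"    # prints the on-disk location
    )
    status=$?
    rm -rf "$tmp"    # discard the temporary module
    return $status
}
```

go-fuzz-build would do the equivalent from Go code, substituting its own version for the version query.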
The slightly tricky bit there is having the binary know its own version. With modules, debug.BuildInfo returns module version info at runtime, which you might think would give you exactly what you want. However, while dependencies have useful versions reported like v1.2.3, the main module in Go 1.12 has its own version reported by debug.BuildInfo as the less useful (devel). Making that more useful is tracked in #29814. In advance of #29814, you could use one of the older pre-modules techniques for having a binary know its own version, or a module-specific workaround could be creating nested modules (e.g., create a separate go.mod in github.com/dvyukov/go-fuzz/go-fuzz-defs/go.mod) and add require github.com/dvyukov/go-fuzz/go-fuzz-defs to the top-level go.mod for go-fuzz. Nested modules are more targeted at power users and have some significant subtleties, but that would be one way to allow the version of github.com/dvyukov/go-fuzz/go-fuzz-defs to be returned properly by debug.BuildInfo in Go 1.12.
If that approach was taken, versions could continue to be based on commit SHAs (in the form of pseudo-versions) given go-fuzz is not yet tagging releases.
Finally, in the most common case someone will likely have the right version of github.com/dvyukov/go-fuzz in their module cache in GOPATH/pkg/mod from when they installed go-fuzz, but if you want to force the go get -m github.com/dvyukov/go-fuzz@<version> to fail if it requires network access, you could disable network access in go-fuzz-build by setting GOPROXY.
In any event, I didn't test anything here as part of this write-up, so maybe there is a mistake here, but that is one probably-possible-but-somewhat-awkward approach. I wouldn't be surprised if there is a better alternative not yet mentioned, or maybe one of the options you outlined above would be better.
@thepudds "how can an installed binary find portions of code from its own module" is a good summary of the question I had. Thanks for outlining another option. I am still left with the feeling that all of the options are too complicated by half.
Just wanted to add a problem with the current go get mechanism for system-wide installations:
> go get golang.org/x/tools/cmd/gopls
go: finding golang.org/x/tools/cmd/gopls latest
go: finding golang.org/x/tools/cmd latest
go: finding golang.org/x/tools latest
Works fine.
Initialise a git-repository:
git init
And try to fetch it again:
> go get golang.org/x/tools/cmd/gopls
go: cannot determine module path for source directory /tmp/tmp (outside GOPATH, no import comments)
I would expect it to work nonetheless, as I'm not in a Go project in any way (it's just an empty git repo).
Having a global install command would make this clear and easy.
@tommyknows, that failure mode is closely related to #29433, and should be fixed at head. (Please file a separate issue if it is not.)
I feel like repurposing go install is silly. We already have a "download and use" command... it's go get.
Could we make go get do the Right Thing™ when it is targeting a package main? You can't ever import a package main, so it's nonsensical to make it a dependency of the current module. For 7+ years, when anyone said go get <some binary> they meant "download and install the binary in $GOPATH/bin" .... we could retain that behavior. And then we wouldn't be asking every single command author to change their build instructions from "go get ...." to something else.
Also, then there wouldn't be the weirdness of having some other command that uses the version format, as brought up by Russ.
Could we make go get do the Right Thing™ when it is targeting a package main?
There is arguably no one obvious "Right Thing" to do when the arguments to get are main packages from different modules, or a main package from one module and a non-main package from another, or a module path that happens to also be a main package in a module that provides other (non-internal) libraries.
I'm surprised I didn't read anything about dev dependencies here. I think the way npm handles these questions makes sense, and versioned tools used for the project (i.e. a protobuf compiler) are necessary in some cases.
What we need is:
- a way to globally install binaries (I think we can ditch packages?) from within a Go module
- a way to locally install dev tools (could include test dependencies) for use within the module
- a fix to allow installing a specific version while outside of a module path
Or in other words:
- a -g/--global flag for go install/go get
- go get --dev and a require-dev section in go.mod
- go install/go get defaulting to global outside module paths
I'm also really not sure about the difference between go install and go get at this point. Intuitively, get would only download & build packages, while install would also move binaries to $GOBIN.
Generally, $GOBIN should be modified within each module to include the project binary cache with higher precedence, so you can use the versioned tools.
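That per-module $GOBIN idea might be set up like this (a sketch; the ./bin location is an arbitrary choice, not an established convention):

```shell
#!/bin/sh
# Sketch of a per-project tool directory: "go install" binaries land in
# ./bin, which shadows globally installed copies on PATH. The ./bin
# location is an arbitrary choice for this example.
setup_project_gobin() {
    GOBIN="$PWD/bin"
    PATH="$GOBIN:$PATH"
    export GOBIN PATH
    mkdir -p "$GOBIN"
}
```

After calling setup_project_gobin in a module root, go install golang.org/x/tools/cmd/stringer would drop the binary into ./bin, and a plain stringer invocation would find the project-local, versioned copy first.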
I'm surprised I didn't read anything about dev dependencies here.
Please see https://github.com/golang/go/wiki/Modules#how-can-i-track-tool-dependencies-for-a-module.
As far as I understand (see #33696 for related discussion), the linked approach does not work, especially when using modules:
then one currently recommended approach is to add a tools.go file to your module that includes import statements for the tools of interest (such as import _ "golang.org/x/tools/cmd/stringer"),
While that installs the dependency, it does not offer a canonical way to actually invoke the tool. This especially includes invoking it from inside the go build process (e.g. go:generate).
Same here. Global installs are a substitute for being able to vendor specific versions of the tools. And the tools.go workaround with build tags, and executing them with go run <package>, has quite a bad code smell.
As part of #34506, I'm proposing we add a -g flag (global mode) which would cause the go command to behave as if it were outside a module. If a go.mod file is present, its requirements would be ignored, and it would not be modified. That has come up a couple times in this thread (also as -b).
I'm also suggesting -modfile and -sumfile flags, which could be used to manage local dependencies and tool dependencies separately.
Please take a look.
I'm also suggesting -modfile and -sumfile flags, which could be used to manage local dependencies and tool dependencies separately.
Do we need a `-sumfile` flag? If we assume go.mod and go.sum are always next to each other, then `-modfile` should be enough as far as I can see.
@marwan-at-work A couple people have also commented on that in #34506. I think you're probably right: `-modfile` may be enough.
Change https://golang.org/cl/203279 mentions this issue: cmd/go: add go get -g flag to install tools in global mode
Is there a recommended way to install tools without affecting `go.mod` / `go.sum`? For example, `golang.org/x/lint/golint` is a tool that users may wish to install while working on a module, without affecting the module itself.
Yes, please see - #34506. This is coming in 1.14. Use the -modfile argument to specify an alternative go.mod file.
@agnivade which alternative go.mod file? Will users have to create a dummy go.mod file somewhere in their system and keep pointing to it? That seems like bad UX, if I understand it correctly.
I'm also curious why the `-g` CL was abandoned. Will it be reintroduced? It seems like the friendliest solution from a user's perspective.
If you want to install the latest version of a tool without recording a version anywhere, a quick command is this:
(cd; GO111MODULE=on go get example.com/tool)
For that command, there is no main module, so if the tool can't build without `replace` or `exclude` directives, you need to clone its repository and run `go install ./tool` from its module root directory. `github.com/myitcv/gobin` is a tool that automates that workflow.
(All this assumes you want to build in module mode; everything works as before in GOPATH mode.)
@marwan-at-work `-g` is on hold because we haven't reached a consensus on whether the tool's module should be treated as the main module, and whether `replace` and `exclude` directives should be honored.
@jayconrod Personally, I think that by using `-g` you're implying it's unrelated to the module directory you happen to be in; at least that's what I would expect. I can't think of any package managers which respect local deps in that scenario.
@tj Yes, we agree on this. `-g` would ignore the `go.mod` in the current directory and would force `go get` to run in module mode.
The question is whether the module that provides the executable being installed should be treated as the main module. Currently, it is not, which means `replace` and `exclude` directives are not applied. Some executables won't build without these directives (for example, gopls). But we can't safely apply `replace` directives that point to directories, which means we'd have to reject a lot of modules that use them anyway (for example, gopls).
One other advantage to not treating the executable's module as the main module: version information is baked into the executable. So `go version -m somebinary` will tell you the versions of all the modules used to build it, including its own module.
> One other advantage to not treating the executable's module as the main module: version information is baked into the executable. So `go version -m somebinary` will tell you the versions of all the modules used to build it, including its own module.
Is there some reason that the Go tool couldn't include the version in the resulting binary even if it was treated as the main module for dependency-resolution purposes, given that it knows the version (something it usually doesn't)?
Another thought: if you don't treat it as the main module, then the result isn't the same as if you'd built the module at that version from within itself, so the version would potentially be misleading.
> Some executables won't build without these directives (for example, gopls)
As I've mentioned before, I don't think `gopls` is an example here; indeed, I think it's a counter-example for this particular issue. A directory `replace` directive is, by definition, a local-only directive that would be better served by a solution to the problem described in #26640.
The debate we've previously had is about whether `-g` should apply non-directory `replace` directives:

- On one side (me, @rogpeppe and others), that `-g` should apply non-directory `replace` directives
- On the other side, that `replace` directives should be applied either all-or-none

The status quo is that no `replace` directives are applied, directory or non-directory.
One suggestion that @jayconrod made some time ago was that it could/should be an error for a `go.mod` file to contain a directory `replace` directive (this is, in effect, an argument in support of a proper solution to #26640), hence allowing the all-or-none approach to "work".
> One other advantage to not treating the executable's module as the main module: version information is baked into the executable.

I think that's an orthogonal issue, because `-g` could just as easily create a temporary module that applies the non-directory `replace` directives of the target module and then runs the install. Incidentally, that's exactly what `gobin` does.
To further complicate the discussion of directory replace directives, one case in which a directory replace directive probably could safely be applied is when the directory is located inside the same module. This would be important for the kind of case considered in #34867, in which the agreed-upon workaround was a directory replacement.
@josharian, a `replace` directive inside the same repository will not work if the module was fetched from a module proxy. (The module cache stores individual modules, not entire repositories.)
OK, I've edited my comment to s/repository/module/. I believe that the point stands.
It is not possible today to point a `replace` directive at a directory within the same module. The target of a `replace` directive must contain a `go.mod` file, but a directory containing a `go.mod` file defines a module boundary (and is pruned out of the parent-directory module).
@bcmills I see. Which is why I needed to write a script instead of being able to just put the files in the right place. OK, I'll hide my comments.
This thread has been dormant for a while, and I would like to revive it for 1.16, since the merge window will open in about five weeks.
Like @rsc said last year, we should agree on the behavior before we bikeshed about what command (or external tool) it would live under. So, instead of trying to summarise the entire issue, I'll try to lay out my understanding of our current agreements and disagreements about the behavior.
1. We seem to agree that we should treat the downloaded module as a main module. The opposite would lead to a simpler solution, but also a far less useful one; replace directives in end user programs are useful and used widely.
2. #30515 (comment) mentions that we lose version information by treating the module as a main module, but #29814 should fix that. It is also an orthogonal issue, which also affects the `git clone + cd + go install` method.
3. We seem to agree that applying some, but not all, of the replace directives is a bad idea. We should either apply all of them, or none of them. Otherwise we are adding a third "build mode" that would make Go modules more complex for everyone.
4. Since we want to treat the module as the main module (see point 1), we have to choose "apply all replace directives" over "apply none at all".

5A) Solution to "what do we do with directory replace directives?" by @ianthehat: extract the module download zip, and apply them as usual. The build must succeed applying all directives, or fail.

5B) Solution to "what do we do with directory replace directives?" by @myitcv, @thepudds, and others: simply error if any directory replace directives exist. The build must succeed applying all directives, or fail.
Is my understanding correct? If anyone disagrees with the points above, please leave a reply and I will update this comment. We can use this summary to get up to speed on the current blockers and discuss the issue in tomorrow's golang-tools call.
@mvdan Good summary! I mostly agree with it all. In regards to point 3, and I'm not sure if anyone else agrees, I think there is a middle ground option available:
The install command respects certain types of valid replace directives, and simply fails if the main module has a replace directive it can't support. Then we don't create a third build mode but still build as the author of that version intended for most modules out there. I'd much prefer this to ignoring all replace directives.
edit: I skimmed over 5A and 5B which covers what I was saying. Thanks mvdan.
@peebs both 5A and 5B are specific ways to implement your general idea of "support certain types of replace directives and fail if any others are found", as far as I can tell. The difference is that 5A tries to support directory replace directives on a best-effort basis (remember that this must all work with GOPROXY), while 5B just doesn't support them.
I think @ianthehat said it best:

> the rule should be the tool is built exactly as if you had fetched it from a proxy, extracted it to a temporary directory, changed to the directory, and typed go install.
Let's just.... do that. With all the ways it works and doesn't work. That's what people will expect. That makes it easy to explain, and makes it easy to reason about.
Call it whatever you want, doesn't matter. Just give us a way to do a one line download and install, please.
@mvdan Thanks for restarting this discussion. I'd definitely like to get consensus here so we can get this into 1.16. Looking forward to discussing on the tools call tomorrow.
On point 1, this is still a point of contention. I believe @rsc and @bcmills prefer not having any main module (and consequently ignoring `replace` directives). I'm coming around to that viewpoint as well. There are a few reasons for this, but the most important to me is that we don't want to encourage reliance on `replace` in modules intended to be consumed by other modules, since it adds complexity and makes the ecosystem hard to scale.
Just as an example of where this can cause issues, suppose someone wants to track tool dependencies in their project's `go.mod` or in a separate `tools.mod` file, referenced with `-modfile`. For that developer, the tool can't rely on `replace`. If it does, the developer must copy the tool's `replace` directives into their own project, which is not a pleasant experience (some Kubernetes projects are doing this). If it were easier to install tools globally while respecting `replace`, I think developers would lean on that more heavily, and managing dependencies on tools and specific versions would get harder.
So I think we need to make a decision on that before proceeding. To inform that decision, I'd like to understand how many tools out there currently rely on `replace` directives and why. If we find there are a lot of tools that won't build without them, and for reasons that are very difficult to work around, I think that would be a strong argument for supporting `replace`.
Assuming we do support main modules and `replace`, I agree with all the other points. Additionally, #37475 is an approved proposal to stamp binaries with VCS information, so my comment about losing version information is even less relevant.
Thanks @jayconrod - I wasn't aware that some opinions had been moving towards not having a main module. I definitely want to discuss that tomorrow :)
I'm starting to think that, besides "try to obey all replace directives", and "ignore replace directives", we have a third option - "error if the downloaded module has any replace directives". This will work for simpler modules, and in my opinion has several advantages over "ignore all replace directives":
- If the build works, it's exactly the same as what developers get when they run `go install` inside the cloned repo. No possibility of subtle differences or bugs due to any missing replaces, because there aren't two "build modes" with and without replace directives.
- If upstream uses any replace directives, the user gets a clear and intuitive error, and they can nudge upstream to attempt to remove them.
- In the future, we could change the behavior for modules that do have replace directives, since that case simply errors in the first design. In my opinion, this is the best of both worlds, as it opens the door to "try to obey all replace directives" in the future if we want to add it.
@mvdan, also recall that the module cache strips the contents of `vendor` directories (and perhaps the directories themselves?). So "built exactly as if you had fetched it from a proxy, extracted it to a temporary directory, changed to the directory, and typed go install" is still not quite the same as "built exactly as if you had cloned it from upstream, changed to the directory, and typed `go install`", and I suspect the similarity between those two definitions would be confusing.
Briefly summarising a conversation with @jayconrod offline. I'm also coming around to the position of "apply no replace directives" but with one major caveat.
One of the reasons that people use `replace` is that it's the easiest way of incorporating a fix that is being upstreamed. For example:
- I'm working on tool `t1`
- `t1` uses module `m1`
- we establish that `m1` has a bug that is affecting `t1`
- we fork (GitHub fork) `m1`, to `m1p`, make the change, create a PR to upstream
- we also add a `replace` directive in `t1` to `m1p`
- we then release a new version of `t1`
To not use `replace` directives in this scenario, we need to create a hard fork of `m1`. But:
a) creating a hard fork is harder than it should be
b) creating a hard fork and upstreaming the change is harder still
c) ensuring the hard fork and the upstream change are in sync is harder still
So if we are going to explore this "no replace directives" route further (and I agree there is real merit in so doing) then I think we have to answer the question of how to make this workflow easier.
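To illustrate the workflow above, `t1`'s released go.mod would end up carrying something like the following (module paths and versions are invented for the example):

```
module example.com/t1

go 1.14

require example.com/m1 v1.4.0

// Temporary, until the upstream PR for the bug fix is merged:
replace example.com/m1 => example.com/me/m1p v1.4.1-0.20200501120000-abcdef012345
```

Under a "no replace directives" rule, a global install of this released `t1` would either error out or silently build against the unfixed `m1`, which is the trade-off being weighed here.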
This workflow affects a significantly smaller number of people (the tools maintainers) - so it feels wrong to craft a solution that potentially compromises the rest of the ecosystem around that.
I've wanted a good way to globally manage binaries for a while, and ended up writing a tool for my own use so I can easily upgrade things, commit to my dotfiles repo, and reproduce the contents of my `$GOBIN`, to an extent.
Clearly functionality to this level is out of scope, but one decision I made was to respect replacements so long as they aren't file paths (and therefore may not properly resolve). I've found this to be a good middleground, with some caveats.
Effectively, the tool does the following:
- Create a `go.mod` for the tool binary (generated from its package name) and a `tools.go` file to reference it.
- `go get` the binary at a specified version, like `go get github.com/go-delve/delve/cmd/dlv@latest`.
- Use `go list` to grab the module's `go.mod` path, and parse it with `x/mod/modfile`.
- Copy all replacements into the temporary module, so they are respected as they would have been if the tool had been cloned and built.
- Apply some fixups (needed to fix some tools like `gopls`).
- `go mod tidy` (so it can be committed).
- Install the binary while in the new module, like `go install github.com/go-delve/delve/cmd/dlv`. (An extra feature is to use specific tags when building.)
I find that this helps to handle cases where tools are expecting that their replacements are being applied and work. Mostly.
Unfortunately, this isn't perfect; a big offender is `gopls`, which has a relative replacement for `x/tools` pointing to its parent directory. This is a case where the replacement does work if you just clone and build. I end up needing a fixup step to ensure that the `gopls` and `x/tools` versions match, because `gopls`'s `go.mod` is often in a broken state due to changes in `x/tools`'s internal packages.
I know that it'd be much simpler to not handle replacements at all for tools, but I'd worry about getting broken or unintended functionality that the tool author would not normally experience when working and building from their own project, especially if this global install method becomes the canonical way to install binaries in a module-aware way.
I don't understand why, if a module author takes the time to add a replace directive and we want to build their main module, we would ever do anything other than respect it or error out. So I agree with mvdan's points here.
If we need a stick against using replaces, this feels like a weird place for it.
@peebs - in your experience why do module authors add replace directives? Are there use cases beyond the scenario described in #30515 (comment) that we aren't considering here?
> Are there use cases beyond the scenario described in #30515 (comment) that we aren't considering here?
One more case is when the tool is in a multi-module repo. The replace directive is often used with a relative path to ensure the main module is built with the latest / same commit version of the surrounding modules.
@rhcarvalho that just won't work with `GOPROXY`, though, because a module's zip archive does not include any other Go modules, even if they all live under the same VCS repository.
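For reference, the multi-module-repo pattern being described looks roughly like this (the layout and paths are hypothetical): a tool module nested inside a repo whose root is also a module, with a relative replace pointing at the parent:

```
module example.com/repo/tool

go 1.14

require example.com/repo v0.0.0-00010101000000-000000000000

// Resolves after `git clone`, but the proxy zip for
// example.com/repo/tool does not contain the parent directory.
replace example.com/repo => ../
```

This is the same shape of directory replacement discussed earlier in the thread: it only resolves in a full VCS checkout, never in a module zip served by a proxy.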
There are also `replace` directive cases where a fix/change cannot be upstreamed, or doesn't make sense upstream. A few examples off the top of my head:
- grpc contains an import of `golang.org/x/net/trace`, which in turn imports quite a lot of code, increasing the binary size. The only option to remove it was to fork it and remove the specific code. I'm not saying that "trace" isn't a sensible choice there, just that sometimes you want to make changes that do not make sense upstream.
- sqlite contains build tags for including json support. To avoid making people use build tags to compile your program, the easiest change would be to change the code such that "sqlite_json" is enabled by default.
- bbolt for quite some time had issues with incorrect uses of unsafe, where the fixing pull requests weren't merged for several months.
@egonelbre that's very useful, thanks. For each of the points you raised, would it be correct to say that using a hard fork would be possible, but not easy? Specifically, what would be hard would be incorporating upstream changes into the hard fork. And furthermore, if tooling etc. were improved to make that easier, could the hard fork approach be just as convenient as a `replace`?
For `grpc`, a hard fork would be really difficult due to all the tooling and other libraries around it, not just the grpc repository itself. In some sense, it would mean hard-forking multiple repositories and tools, not just one repository. In other words, maintaining patches for code generators, multiple repositories, and other tools, and ensuring everyone uses those patched tools.
For `sqlite` it would be possible, and probably could be (relatively easily) automated, as long as you are using sqlite directly and not in conjunction with some ORM-like library that imports the original package.
For `bbolt`, a hard fork would fragment the user base even further, so waiting for the fix to eventually be merged is probably better than maintaining a hard fork.
To summarize: as long as you are using the target package directly, a hard fork is doable. If you use external packages that integrate with the original package, then the hard fork would mean forking those as well. These might include tools and code generators that cannot be easily/automatically modified.
@egonelbre, for the `grpc` case, would it make sense to (instead of forking) send an upstream patch to implement a `+build` constraint that drops the problematic dependency? Then you could set a build tag to prune the dependency from your binary, instead of needing to maintain local patches.
For `sqlite`, perhaps it would make sense to invert the sense of the build tags? If programs are slightly larger with the tags, and some programs are missing necessary functionality without them, then it seems like the default behavior should be "slightly larger" instead of "missing necessary functionality". In other words: this still seems like a usability issue to fix upstream.
For `bbolt`... if you have a dependency that isn't merging fixes for critical issues (such as memory-corruption bugs) in a timely manner, it may be wise to reconsider use of that dependency.
At any rate, that seems like a clear case where making it easier to use `replace` to paper over the problem produces a better local state at the cost of a worse global state: it would substantially reduce the incentive to get the fixes merged upstream at all, and if you're worried about fragmenting the user base, "~everyone carrying the same local patches" does not seem any better than "some projects moving over to a fork".
@bcmills potentially, yes; there has also been an issue open since Sep 2017 about the tracing (grpc/grpc-go#1510). I agree that it probably could be eventually solved somehow. Of course, it's possible that the upstream maintainers do not agree with the change, since an extra tag is more maintenance for them. So a build tag could be used; however, this gets into the same issue as with sqlite: you need to force people to use the build tag.
With sqlite, json is an opt-in extension, both in the original library and in Go. There are 6 extensions available for sqlite (specifically in mattn/go-sqlite3), but there are more of them at https://sqlite.org/src/file/ext/misc. Should all of the 54 extensions be included there? If you need to reduce the binary size, then you would need to somehow force people to use all the negative tags.
Yes, I do agree with the `bbolt` thing. However, it's often unclear at the beginning whether there will be a minor or a large delay in getting things merged.
A couple points for emphasis and clarification:
We want to make sure developers can track dependencies on modules that provide tools. This might not make sense for all tools, but, for example, it's especially useful to track versions of code generators and static analyzers. Tool modules required in this way cannot rely on `replace` directives, since they are not the main module.
Of course, not all tools make sense as dependencies. For example, `gopls` and `goimports` probably do not. This issue is about installing tools without reading or writing the dependencies of the module in the current directory (if there is one). Even though tools may be built and installed with a different command (say, `go get -g`), for the sake of simplicity and comprehensibility, we'd prefer the build to work the same as `go get`.
More importantly, we don't want to encourage tools that could be tracked as dependencies to rely on special behavior of `go get -g`, namely respecting `replace`. That solves problems for tool authors, but it can create problems for downstream users.
The main use case for `replace` is making changes to upstream dependencies that won't accept those changes in a timely fashion. Experiences with specific upstream modules like those listed above are helpful. I'm also collecting some data on how often this is necessary.
I'll point out, though, that modules providing libraries also need to submit changes to dependencies, and their `replace` directives cannot apply to downstream dependents. Hard forking may be a solution, but it's also one we should be very careful about: you can fork a dependency, but you can't get other dependencies to use that fork without forking them, too.
Hi all, very happy to see renewed discussion and thinking here. A few quick comments.
First, I think part of the reason it was tough to come to consensus on the related discussion in #34506 was because there was (as far as I followed) a fair amount of discussion around increasing the number of modes.
Second, for this issue it would be good to see if there can be consensus around:
1. There is "go get behavior when operating in the context of a local module" (which is well defined at this point), and there is "go get behavior when operating outside of the context of a local module" (the exact behavior of which is up for debate).
2. Define the singular behavior for go get when outside of the context of a local module.
3. Make it easy (somehow) to specify that you want that singular behavior from 2, even if the current working directory happens to be inside a directory with a go.mod file.
In other words, it might be easier to get consensus and move forward on this issue if it is considered an anti-goal for this particular issue to increase the number of modes.
Regarding respecting `replace` -- modules are by design less expressive than other modern package managers for other languages, but a key counter-balance to that (as far as I understood the rationale) was that `replace` directives give complete control to the top-level module. For example, from the official proposal:

> exclusions and replacements only apply when found in the top-level module, not when the module is a dependency in a larger build. A module author is therefore in complete control of that module's build when it is the main program being built
and from the vgo blog series:

> Minimal version selection is very simple. It achieves simplicity by eliminating all flexibility about what the answer must be: the build list is exactly the versions specified in the requirements. A real system needs more flexibility, for example the ability to exclude certain module versions or replace others.
My personal take is that it is very important to support `replace` directives with `go get -g foo` (or however it is spelled). There is no other top-level module in play at that point. It seems very reasonable to error out if a directory-based replacement is present, but otherwise respect `replace` directives in the remote module.
Finally, FWIW, I think the longest discussion on whether or not to respect `replace` and `exclude` when outside the context of a local module is in #31173; that thread also discusses some of the real-world examples provided by @rogpeppe.
@jayconrod - thanks for the additional context.
> This might not make sense for all tools, but for example, it's especially useful to track versions of code generators and static analyzers.
This also touches on the point that not all module authors (tool or otherwise) want tool dependencies in the main `go.mod` file, because of a) `go.mod` bloat, and b) the fact that tools' dependencies interact unnecessarily with the other module dependencies, as well as with other tool dependencies' dependencies. Our experiment with `gobin` led us to this "conclusion": myitcv/gobin#81, namely that each tool dependency should be tracked independently of the main `go.mod` file and independently of other tools, at a minimum using a semver version (assuming that resolves to a "complete" module), or using a separate nested `go.mod` file otherwise (e.g. the tool is not yet a module, or the tool's `go.mod` file is incomplete).
Ensuring tools have "complete" `go.mod` files is an orthogonal problem, but one definitely worth addressing (using `gorelease`?).
This then leads into a discussion about how to install/run tool dependencies in a project. In the world of `gobin`:
- a module-local tool dependency install is spelled `gobin -m $mainpkg`
- running a module-local tool dependency, e.g. as a `go:generate` directive, is spelled `gobin -m -run $mainpkg`
- a global tool install is spelled `gobin $mainpkg[@version]`
- running a global tool is spelled `gobin -run $mainpkg[@version]`
Historically, and indeed still currently, `go run` has not been a fast enough solution because of the time taken by the linking step.
Perhaps `go run` is the ultimate solution, perhaps not. I'm simply advocating that we should consider how to `go run` (or however it should be spelt) in both global and module-dependency tool contexts.
I note, however, that everything I've just written is orthogonal to the question of whether `replace` directives should be applied or not. So I only note these points for additional context once that conclusion is drawn, not to distract from the current discussion (on today's call).
> @peebs - in your experience why do module authors add replace directives? Are there use cases beyond the scenario described in #30515 (comment) that we aren't considering here?
My experience comes more from the time of godep, glide, and dep, but I think it applies here. I haven't worked on large open-source corporate projects since modules. Often, the conflict is that one of my dependencies, let's say an indirect one from a big project like k8s, introduces a breaking change/bug. Then say I have another large project that also imports that same indirect dependency. The release engineer needs to cut a release upgrading the k8s dependency, but the other large project sharing the indirect dep depends on the old version. The release engineer needs some manual control over this build to resolve the conflict, and it's likely to be a temporary workaround that will probably take a release or two to get rid of while upstream is petitioned.
Now, in the era of godep, glide, and dep I'd have even more problems on my hands with constraints and bugs, and some temporary forking might be involved. I love the simplicity of MVS, and I realize that in a perfect world SIV would solve this problem. However, I don't think getting perfect SIV compliance is possible in the presence of v0 versions and accidental breaking bugs, and I thought that was the idea behind replace. As thepudds mentioned above:
> Regarding respecting replace -- modules are by design less expressive than other modern package managers for other languages, but a key counter-balance to that (as far I understood the rationale) was that replace directives give complete control to the top-level module. For example, from the official proposal:
I'll just reiterate: having the global install command simply error in the presence of replace directives is, I think, a reasonable solution moving forward. If the author is getting wild with replace directives, I don't think it's unfair to ask people to git clone to build that release. As mvdan stated, this also leaves the door open to eventually adding support for handling some replace directives. However, building a main module while ignoring its replace directives would, I think, be a mistake. Best case, the build ends up failing anyway; worst case, someone builds a release of a tool that isn't the same as the intended release and contains bugs. Soon engineers will argue about whether it's ever OK to use the global install tool, and there will be usage confusion and caveats.
One other comment is that a strength of the Go ecosystem is that "binaries" are currently most often distributed as code (and that code now has auditable and globally viewable cryptographic checksums, which means I can have confidence that the code I am seeing for version X is the same code that you see for version X).
If the decision here ends up being to not support any `replace` directives for `go get -g foo` (or whatever spelling), I think that nudges more projects towards publishing binaries, including larger / more complex projects, e.g. via the popular goreleaser, or the newer https://gobinaries.com (by TJ Holowaychuk), or otherwise just publishing binary artifacts on GitHub or elsewhere.
It is likely "a nudge" on overall ecosystem behavior rather than "a dramatic shove", but even nudges can add up over time. Personally, I am happier downloading code than binaries, and it is probably worth considering whether there are implications here of nudging the ecosystem away from that, even if it is a light nudge.
The tools call is just about to start for those people interested in continuing this conversation: https://meet.google.com/xuq-tcoc-dkp
I haven't thought deeply about the repo tools issue in regards to modules, nor do I have much experience trying solutions here, but why not have a small script in a repo that simply contains a few lines of global install commands:
```sh
#!/bin/bash
go get -g github.com/tools/x@2.2.2
go get -g github.com/tools/y@1.1.1
```
> why not have a small script in a repo that simply contains a few lines of global install commands
Because this requires me to mess around with `PATH` (on a per-project basis), which undermines one of the greatest benefits of the `go run`-like workflow.
We continued the discussion in the monthly tools call. The recording and notes will be posted on the wiki soon, but I'll try to recap the discussion for now (please let me know if I've gotten anything wrong here).
We are leaning toward not having a main module and not supporting replace directives, for the reasons described above, primarily that they can cause problems for module authors tracking tool dependencies.
I don't think we've nailed down the exact behavior. We may make it an error for any replace directives to be present in modules providing packages named on the command line, as @thepudds suggested above (edit: I misunderstood the comment above; in fact, @thepudds suggests we respect non-directory replace directives and error out on directory replace directives). This gives us room to relax constraints and support some replacements in the future. It may break some modules like gopls; I'm working to collect data on the impact of this. An alternative behavior is to ignore replace directives entirely; this is what go get does outside a module, so if we're confident this is the behavior we want, we should do this.
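To illustrate the distinction being discussed, here is a sketch of a go.mod containing both kinds of replace directive (module paths and versions are invented for the example):

```
module example.com/mytool

require (
	foo.com/dep v1.2.0
	foo.com/otherdep v0.5.0
)

// Module-path replacement: points at another published module, so a
// global install could in principle resolve and verify it.
replace foo.com/dep => bar.com/dep-fork v1.2.1

// Directory replacement: points at a local filesystem path that only
// exists on the author's machine, so a global install would have to
// either error out or ignore it.
replace foo.com/otherdep => ../otherdep
```

Under the behavior @thepudds suggests, the first replace would be respected and the second would be an error.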
All this being said, we should aim toward making replace less necessary, and less frequently used.
- We don't have clear recommendations on how replace should be used.
  - The module FAQ has some guidance in "When should I use the replace directive?". We should expand this with something more focused, like a blog post or something linked from https://golang.org/doc.
  - In an ideal world, no replace directives should be present in a go.mod file at a tagged release version for a module that may be depended on by other modules. Obviously that's not always possible, but this makes things simple for our users, and we should strive for this.
  - Perhaps gorelease and other vetting tools should be more opinionated. It should be easier for authors to build and test their modules without replace.
- We should provide more guidance on managing tool dependencies.
  - "How can I track tool dependencies for a module?" on the wiki recommends tools.go. We should explain other approaches using -modfile, or submodules.
- We should improve ergonomics around tool dependencies.
  - For example, go run could cache binaries so installing tools is less necessary.
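For reference, the tools.go approach recommended on the wiki looks roughly like this (the imported tool is just an example; any tool your module depends on would be listed the same way):

```go
// +build tools

// Package tools records build/dev tool dependencies in go.mod.
// The "tools" build constraint above keeps this file out of
// normal builds, so the blank imports only pin versions.
package tools

import (
	_ "golang.org/x/tools/cmd/stringer" // example tool dependency
)
```

With this file in place, the tool's module and version appear in go.mod, and go build/go install of the tool from inside the module uses those pinned versions.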
There are a lot of ideas here. Let's reserve some time in the tools call next month to discuss and brainstorm on these and turn them into issues we can act on.
I think tracking tool dependencies is a really bad reason to make ignoring replace directives the default.
The fact that the tool author is removed from the equation about how their tool is built seems like a total non-starter. As a tool author, I would never want to have to field issues because someone tried to go get (or whatever we call it) my tool and then it didn't work (or worse, had subtle bugs!) because my replace directives were ignored.
This seems like it's prioritizing something which is way out of scope of this issue. Just because you can (sort of) use Go modules to track tool dependencies doesn't mean it's a good idea. It's also not the main reason people have used go get in the past. go get was about... getting and installing a tool you need on the local system. That's all.
If someone wants to track tool dependencies, it's trivial to keep a list of go get commands and run them at build time through a makefile or similar. You don't need to build it into the repo's dependencies by default. In fact, I don't think I would actually want that. There's a big difference between tools needed for development and tools required to just build the code.
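A minimal sketch of that makefile-driven approach might look like the following (tool paths and versions are invented, and the -g flag is the one proposed in this issue, so this is illustrative only):

```bash
#!/bin/bash
# tools.sh: install the dev tools this repo expects, globally,
# without touching the repo's go.mod. Invoke from a makefile
# target such as `make tools`.
set -euo pipefail

go get -g golang.org/x/tools/cmd/goimports@v0.1.0
go get -g github.com/kisielk/errcheck@v1.2.0
```

This keeps development-tool versions visible and reproducible in the repo while leaving go.mod to describe only what the code itself needs to build.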
I have thought about this a bunch, because we do this at work, and wrote up a blog post about it : https://npf.io/2019/05/retooling-retool/
Please don't conflate these two needs. And please don't make the 95% case of "just go get a tool" suffer because some people want to track external tool dependencies in their go.mod.
I was under the impression, listening to the meeting, that erroring out instead of ignoring replaces was the solution that kept everyone happy, though I don't think we explicitly went around asking that question. I think silently ignoring replace directives is a really bad idea that leads to two build modes for a binary being in common circulation. This is currently not a problem with replace directives, but it will be if we introduce this as a normal way to build external binaries.
If replace use really needs to be discouraged so much, why is it even an option? Assuming someone won't use a replace in a release doesn't make any sense to me. The pressure of getting a release out in time is exactly what might force someone to use a replace, because they had to upgrade one of their dependencies for the release.
@jayconrod - thanks for the summary, looks good to me (with one addition that I mention below). For anyone who wasn't able to join the discussion on the tools call, I encourage you to watch/listen again: https://youtu.be/J7MOh2t0qIs?t=2469
In an ideal world, no replace directives should be present in a go.mod file at a tagged release version for a module that may be depended on by other modules. Obviously that's not always possible, but this makes things simple for our users, and we should strive for this.
I think we need to emphasise (and then fix) the fact that there is currently a documentation and tooling gap when it comes to advice for tool authors. This is essentially picked up in #30515 (comment), #30515 (comment) and the subsequent exchange with @bcmills. i.e. as the tool author you find yourself in a situation where you would, today, reach for replace. What should you do? Are there any tools to help? As discussed, we might not have the most polished answers to these questions today, and the tooling might well be less than ideal. Nonetheless, I think we should flesh out this answer in its current form as part of our ongoing discussion, in parallel to improving the answer through tooling etc.
@natefinch I don't think it's fair to say we're prioritizing tracked tool dependencies over globally installed tools. We're trying to ensure that both use cases are possible. If tool authors widely use replace, then tracked tool dependencies will be less feasible. We're trying to balance the needs of tool authors and tool users.
I don't think asking users to run go get -g in a bash script or makefile is an adequate solution. I'm sure that works in many cases, but let's not force people to do that. It bypasses the module system and takes power away from tool users who track dependencies. They should still be able to apply their own replace directives to their tool dependencies if they need to.