rexray / rexray

REX-Ray is a container storage orchestration engine enabling persistence for cloud native workloads

Home Page: http://rexray.io

rexray/ebs plugin is not working with EC2 metadata version 2

rixwan-sharif opened this issue

Summary

We're configuring our EC2 instances to use instance metadata version 2 (IMDSv2) only, for security reasons. After we enabled IMDSv2-only mode on the EC2 instances, the rexray/ebs plugin stopped working on them. Does the rexray/ebs plugin support EC2 instances that allow metadata version 2 only?
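For context, this is roughly what IMDSv2-only enforcement looks like from inside an instance; a minimal sketch, where the instance ID and token TTL are placeholder values. A client that never fetches a session token (which appears to be the case for the older aws-sdk-go releases vendored by rexray) gets rejected:

```bash
# Enforcing IMDSv2 only on an instance (what this issue describes):
# aws ec2 modify-instance-metadata-options --instance-id i-0123456789abcdef0 \
#   --http-tokens required --http-endpoint enabled

# IMDSv1-style request (no token) -- returns 401 once tokens are required:
curl -s -o /dev/null -w "%{http_code}\n" \
  http://169.254.169.254/latest/meta-data/instance-id

# IMDSv2 flow: fetch a session token first, then send it with each request:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id
```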

Bug Reports

rexray/ebs plugin is not working with EC2 metadata version 2

Actual Behavior

Error response from daemon: dial unix /run/docker/plugins/185d790355bc4f0515af8925fa4aba54196de9f57d5a17686a975d0fc2f36856/rexray.sock: connect: no such file or directory
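In case it helps anyone hitting the same symptom: the missing rexray.sock generally means the plugin process exited during initialization rather than Docker itself being broken. A rough way to confirm that (assuming the stock rexray/ebs plugin name and a systemd-based host):

```bash
# List installed plugins and whether Docker considers them enabled
docker plugin ls

# Inspect the plugin's state and its configured settings
docker plugin inspect rexray/ebs

# Managed-plugin stdout/stderr ends up in the Docker daemon logs; look there
# for the underlying metadata/credentials error
journalctl -u docker.service --no-pager | grep -i rexray | tail -n 50
```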

This will require upgrading to a newer version of aws-sdk-go.

Looks like support for IMDSv2 was added in aws/aws-sdk-go#2958 as of Release v1.25.38 (2019-11-19)

Changelog: https://github.com/aws/aws-sdk-go/blob/main/CHANGELOG.md

Another good option might be aws-sdk-go v1.27.2: there is an ec2metadata fix/improvement in that version, and it seems to be the last version with ec2metadata fixes that doesn't require a dependency update to gopkg.in/yaml.v2.
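For reference, a sketch of what that version bump looks like with rexray's existing dep-based setup (assuming dep itself works on your machine, which, as noted below, can be a problem):

```bash
# dep expects the project checked out under GOPATH
cd ~/go/src/github.com/rexray/rexray

# Edit the aws-sdk-go constraint in Gopkg.toml (e.g. to "=1.25.38" or "=1.27.2"),
# then re-solve and re-vendor the dependency tree
dep ensure -v
```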

More generally, the dependencies of rexray are quite outdated; it still uses the dep tool rather than go mod. I wasn't able to get dep to work as packaged in Ubuntu 20.04, nor in an 18.04 VM, and it's been entirely removed from Ubuntu 22.04.

In other words, rexray is looking like an unmaintained repo. There are a lot of forks out there; I wonder if anybody has been working on freshening up the build setup.

Got this working, created a PR. #1372

That's great @nbryant42 . Thank you for taking a look into this. Hoping to have this PR merged soon.

I have no control over that, and it looks like the maintainers of this project have been ignoring PRs for several years now.

You can build my fix from source and publish the plugin to a repo that you control. Alternatively, I have published it to a public ECR repo (listing: https://gallery.ecr.aws/j1l5j1d1/rexray-ebs), so you can set your plugin name to public.ecr.aws/j1l5j1d1/rexray-ebs.

Can't guarantee that repo will stick around forever--that's somewhat out of my control--but I have no intention of deleting it.

If you're doing this on ECS, that will also require updating all task definitions to reference the new plugin name.

Got it, thank you @nbryant42. Would you please be able to add the steps here to build this plugin, so that we can build it and publish it to our own repo?

  • First, familiarize yourself with https://github.com/rexray/rexray/blob/master/.docs/dev-guide/build-reference.md
  • Decide what the plugin name needs to be. The format is the same as for Docker images: hostname/reponame/imagename:tag; the :tag can be omitted if you just want latest. The hostname depends on which registry you'll be pushing to and can be omitted if you use Docker Hub.
  • Decide what branch you prefer to build. Sorta depends on what changes you think are appropriate. There are 3 branches in my fork:
    • alt - just this PR
    • dep-updates - everything in alt, plus some updated dependency versions to get govulncheck to quiet down
    • collected-prs - everything in dep-updates, plus the fix from #1345 plus a few minor Makefile cleanups. I might add a fix or two here going forward. Thinking about updating to Alpine 3.16.
  • Ensure make, docker, and git (at least) are installed on your system; you might need a few other things. (The remaining steps are collected into a single script after this list.)
  • mkdir -p ~/go/src/github.com/rexray
  • cd ~/go/src/github.com/rexray
  • git clone https://github.com/nbryant42/rexray.git
  • cd rexray
  • git checkout branch_name_you_want
  • DRIVER=ebs make
  • DRIVER=ebs make DOCKER_PLUGIN_NAME=plugin_name_you_chose build-docker-plugin
  • docker login to your repo; the precise details depend on what repo you are using
  • docker plugin push plugin_name_you_chose
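Putting those steps together, the whole build-and-push might look something like the script below. This is just a sketch: the branch name and the registry/plugin name are placeholders you'd replace with your own choices.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Placeholders -- substitute your own values
BRANCH=collected-prs                               # alt, dep-updates, or collected-prs
PLUGIN_NAME=myregistry.example.com/myteam/rexray-ebs:latest

# dep-era layout: the source has to live under GOPATH
mkdir -p ~/go/src/github.com/rexray
cd ~/go/src/github.com/rexray
git clone https://github.com/nbryant42/rexray.git
cd rexray
git checkout "$BRANCH"

# Build the ebs driver, then package the managed Docker plugin and push it
DRIVER=ebs make
DRIVER=ebs make DOCKER_PLUGIN_NAME="$PLUGIN_NAME" build-docker-plugin
docker login myregistry.example.com                # adjust for your registry
docker plugin push "$PLUGIN_NAME"
```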

@nbryant42 Thank you so much for the details to build the plugin.

@nbryant42 Hi, thank you for sharing this. I ran into a problem when setting up an ECS task with the volume driver public.ecr.aws/j1l5j1d1/rexray-ebs: ECS keeps saying it was unable to place a task because no container instance met all of its requirements, which might be due to a wrong driver setting. I am not sure whether I should set the volume driver this way; could you give me some suggestions? Thank you!

@wjwelsie,

If the plugin successfully initializes before the ECS agent starts up, ECS will track the plugin name as a capability of the ECS container instance. ECS task definitions that require a docker plugin will then require the matching capability.

This means the plugin must be installed up front (typically in a cloud-init user-data script, as per https://aws.amazon.com/blogs/compute/amazon-ecs-and-docker-volume-drivers-amazon-ebs/); you can't just reference it from a task definition. The plugin name in the task definition must also match the plugin name that you installed.

So, if you had already installed the mainline version of rexray and you want to use a build from another repo, such as the one I've published, you most likely need to change the plugin name in your user-data script and also update the plugin name in the volume settings of your ECS task definitions to match.
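For completeness, a minimal user-data sketch along those lines; the region, cluster name, and plugin source are placeholders, and the exact agent-restart step depends on your AMI:

```bash
#!/bin/bash
# Install the plugin before the ECS agent registers the instance, so ECS
# records it as a container-instance capability.
docker plugin install public.ecr.aws/j1l5j1d1/rexray-ebs \
    EBS_REGION=us-east-1 REXRAY_PREEMPT=true \
    --grant-all-permissions

# Point the agent at your cluster (ECS-optimized AMI); restart it if it was
# already running when the plugin was installed.
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config
systemctl restart ecs || true
```

The driver name in each task definition's volume configuration then has to be that same plugin string.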

@nbryant42 Thank you for the tips! It turned out that I hadn't launched a new EC2 instance after updating the user-data script; it's working now. We'll try to set up our own repo following your answers :D

+1, we've been using the rexray/ebs plugin, but this is now blocking our SOC2 compliance because of the incompatibility with IMDSv2

This is an unmaintained project; no PRs have been merged in years. Read the build instructions above to build the patch yourself, or use my repo if you're in a hurry.

Oh didn't realise, we'll probably migrate off of it in that case, thanks for the ping

@thomashlvt to be clear, there are a few of us still using this in production systems and I'm doing my best to keep this on life support (dependency updates etc) in my fork. But I'm just one guy with help from whoever in the community is able to help. So the next time AWS breaks things, there are no guarantees.