turnerlabs / fargate

Deploy serverless containers to the cloud from your command line

Support for multiple containers

jritsema opened this issue

In an effort to support Fargate tasks with multiple containers, there are a number of use cases that we'll need to address. I will try to lay out a proposal in this issue. Feedback welcome.

1. add/update/delete containers in a task definition

First, some quick background. Our philosophy has been to use different tools to separate infrastructure and application concerns. We tend to use Terraform (or CloudFormation) for provisioning the initial infrastructure and this tool for managing ongoing application concerns on top, like docker images, environment variables, and secrets. When it comes to creating an app, we've relied on fargate-create and terraform templates to initialize the ECS task definition with a placeholder container. This tool then takes over, using docker and docker-compose as simpler abstractions on top of task definitions: it revises them over time with app concerns (images, envvars, secrets) and deploys them to ECS services in a single command. Using docker abstractions also makes for a more seamless transition between running an app locally and deploying the same configuration to the cloud.

When it comes to configuring multiple containers in a task definition, unfortunately, docker abstractions aren't sufficient. Task definitions are a superset of what docker/docker-compose can express, with constructs like container dependency graphs with conditions and EFS configuration. This means I think we should allow the user to edit a task definition directly (which is where our abstraction will leak). Rather than requiring a user to shift over to a completely different tool, we could introduce a command that exports the service's current task definition to a json file, which can then be edited and re-deployed directly.

fargate service describe -o json > task-definition.json
fargate service deploy -f task-definition.json

This could come in handy in a number of scenarios, giving the user full control when they need to change something that isn't available in the docker abstractions. For example, as previously mentioned, they could leverage container dependencies to control exactly how their containers relate to and depend on each other for startup and shutdown. Again, this construct isn't available in docker and is best expressed in task definition json.
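For reference, a container dependency in an exported task definition might look something like this (a minimal sketch; the container names and images are illustrative, borrowed from the info example later in this issue, but dependsOn/condition is the actual ECS construct):

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "1234567890.dkr.ecr.us-east-1.amazonaws.com/app:1.0",
      "essential": true,
      "dependsOn": [
        {
          "containerName": "secrets-sidecar",
          "condition": "SUCCESS"
        }
      ]
    },
    {
      "name": "secrets-sidecar",
      "image": "quay.io/turner/secretsmanager-sidecar:0.1.0",
      "essential": false
    }
  ]
}
```

Here the app container won't start until the sidecar has exited successfully (which is why the sidecar is marked non-essential), a startup ordering that docker-compose can't express.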

2. deploy multiple app containers (i.e., CI/CD)

Once all the containers have been added/updated/deleted in a task definition and deployed to a service (fargate service deploy -f task-definition.json), the service deploy command should then be able to deploy the app-level concerns (images, envvars, and secrets) to one or many containers in a service. For deploying changes to a single container, we could add a --container flag specifying the name of the container in the task definition to deploy to.

fargate service deploy --image <image> --container app

For deploying to multiple containers in a single command, we could deploy a docker-compose.yml file where the service names would map to container names in the task definition. For backwards compatibility, if there's only one container in the compose file, it could deploy to the first container in the task definition without requiring the name mapping.

fargate service deploy -f docker-compose.yml
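As a sketch of the name mapping (the container names are borrowed from the service info example below; the images and envvars are illustrative):

```yaml
# docker-compose.yml -- each service name maps to a container
# of the same name in the task definition
version: "3.7"
services:
  app:                  # updates the "app" container
    image: 1234567890.dkr.ecr.us-east-1.amazonaws.com/app:1.1
    environment:
      PORT: "8080"
  secrets-sidecar:      # updates the "secrets-sidecar" container
    image: quay.io/turner/secretsmanager-sidecar:0.1.0
```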

3. set/unset envvars for multiple containers

For setting and unsetting environment variables, we could add a --container flag similar to service deploy --container.

fargate service env set -e KEY=VALUE --container app

If the flag is not specified, the target container is the first one in the list (as it is today, for backwards compatibility).

4. show info for multiple containers

The service info command is already sort of a task-level view of a service. We could add a new service containers command to show information about multiple containers. Something like:

fargate service containers

CONTAINER:   app
IMAGE:       1234567890.dkr.ecr.us-east-1.amazonaws.com/app:1.0
ENVIRONMENT VARIABLES:
  HEALTHCHECK=/health
  PORT=8080
  LOG_LEVEL=debug
  SECRET=/var/secret/my-secr

CONTAINER:   secrets-sidecar
IMAGE:       quay.io/turner/secretsmanager-sidecar:0.1.0
ENVIRONMENT VARIABLES:
  SECRET_ID=my-secret
  SECRET_FILE=/var/secret/my-secret

5. show logs for multiple containers

If all containers are set up to log to the same log group, we can use the existing logs command and grep for the specific container we're interested in.

fargate service logs | grep "ecs/app"

If we wanted to make this easier, I suppose we could add a --container flag to do the filtering.

fargate service logs --container app

How does this sound? Thoughts?

I agree that adding an optional --container to those commands would be very helpful.

I think (ideally) we'd like a version of the multi-container deploy that doesn't require a docker-compose.yml. I realize that, philosophically, this is not how this tool was designed, and the CLI for a multi-container update probably won't be very clean, but in terms of how we'd actually use it, that's what we need.

If it helps explain my thinking, we wrote scripts that do the update as follows:

register_task --family <task_family> --containers main=<image1> sidecar1=<image2> ... sidecarN=<imageN>
update_service --cluster <cluster> --service <service> --family <task_family>

So to mimic your example using the proposal above, it would probably look like this:

fargate service deploy --container main --image <image1>
fargate service deploy --container sidecar1 --image <image2>
fargate service deploy --container sidecarN --image <imageN>

I realize this would register 3 different task definitions; however, only the last one would get rolled out by ECS. If you wanted an atomic update, you could use a compose file and use envvars for the images:

version: "3.7"
services:
  main:
    image: $image1
  sidecar1:
    image: $image2
  sidecar2:
    image: $imagen

image1=<image1> image2=<image2> imagen=<imagen> fargate service deploy -f deploy.yml

You could also deploy envvars and secrets to the service atomically as well.

Would that work for you?

We'd definitely want it to be an atomic update.

Would it be possible to optionally pipe in the contents of the deploy.yml? Something like:

fargate service deploy -- <<EOF
version: "3.7"
services:
  main:
    image: <image1>
  sidecar1:
    image: <image2>
  sidecar2:
    image: <imageN>
EOF