serverless / serverless

⚡ Serverless Framework – Use AWS Lambda and other managed cloud services to build apps that auto-scale, cost nothing when idle, and boast radically low maintenance.

Home Page: https://serverless.com

Access environment variables in code

pmuens opened this issue · comments

Back in Serverless v0 we could use process.env to access previously passed-in environment variables inside our code (useful e.g. for namespacing database tables, etc.).
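A minimal sketch of that pattern (SERVERLESS_STAGE was one of the variables exposed back then; the table name is illustrative):

const stage = process.env.SERVERLESS_STAGE; // set at deploy time
const tableName = `todos-${stage}`; // e.g. todos-dev, todos-prod

module.exports.handler = (event, context, cb) => {
  cb(null, { table: tableName });
};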

Big +1 to this. We (SC5) have always been heavy users of environment variables and sls meta sync. All our projects are using lots of them to configure stage-specific settings. Some variables (API keys etc) can't be stored in Git, so the previous separation into multiple files under _meta/variables was very useful for .gitignoring certain files.

I'm surprised that there's been even an alpha release without this feature. This is critical functionality.

Am I right that in alpha 2 we are not able to access environment variables at all?

Can we get this in the beta too? 🙏

What solution are you using to set different variables in different environments before we have this feature? I'm very interested, even if it's a dirty hack...

My solution: I put all the v1 code inside a sls-v0.4 project with everything I need (env, alias, versions...). The changes are minimal and it will be very easy to migrate once v1 is more mature.

Use case

Usage of CloudFormation stack output variables

Prerequisite

We have access to variables within a Lambda function

Example

Create a custom S3 bucket resource through CloudFormation and reference the generated bucket name within your Lambda function.

With v0.5
It was easier since the framework deploys the resources first; any stack output ends up in a _meta/variables/s-variables-STAGE-REGION.json file. You would then deploy your functions (without CloudFormation) separately with Serverless, and the variables would already be present.

With v1.0
This would only be possible with two CloudFormation deploys, since the functions are part of the CloudFormation template, or with a single sls function deploy foo. But in both cases you could end up with empty/incorrect variables within your function.

"Solution"

It's a bit ugly and it always requires an API Gateway (APIG) to be present. But we could map all the stack outputs through APIG into the Lambda event body.

@nicka thanks for your proposal!

I don't 100% get the need for an APIG.

Our current deployment strategy regarding functions consists of two parts. After the initial stack setup (if the stack is not yet present) we zip the function code and upload it to S3. Next we update the stack with the compiled CloudFormation function definitions.

Couldn't we add the env variables we want to access (e.g. extracted from a serverless.env.yml file) to the Outputs section of the CloudFormation template and then access them in the Lambda function?

@pmuens Our current deployment strategy regarding functions consists of two parts. After the initial stack setup (if the stack is not yet present) we zip the function code and upload it to S3. Next we update the stack with the compiled CloudFormation function definitions.

This works for the initial deploy; for updates it's a lot harder, if not impossible.

Imagine the following: the stack is present, resources are present, and Lambdas are deployed. You then rename/replace an existing S3 bucket CF resource.

The following will happen: CloudFormation creates the new S3 bucket and removes the old one. The S3 bucket output is updated by the end of the deploy. BUT during this period your Lambdas are running with a reference to the old, removed S3 bucket (downtime). A second CF deploy is needed to fix this.

ATM it's impossible to get rid of this in-between state; during the UPDATE deploy we can't pass the updated S3 bucket name to the Lambdas. If the CF Lambda resource had a property called EnvironmentVariables we could supply them inline (maybe we should send a feature request to AWS haha).

As this is not the case, we could do something similar with APIG event mapping to Lambda. Although this is not a perfect solution, I'm confident it would work and CF would nicely update all the resources in order without "downtime".

Hope my explanation is clear 😂

@pmuens I was throttled by AWS for making too many requests to read the outputs of CloudFormation in the past, and that was just with a few Lambdas running at the same time. I don't think reading the outputs is a good idea.

@ajagnanan definitely not every time your Lambda function runs.
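A sketch of reading the Outputs once per container (rather than once per invocation), assuming a known stack name; all names here are illustrative:

const AWS = require('aws-sdk');
const cf = new AWS.CloudFormation();

let outputsPromise; // cached for the lifetime of the container

function getOutputs() {
  if (!outputsPromise) {
    outputsPromise = cf.describeStacks({ StackName: 'my-service-dev' }).promise()
      .then((res) => {
        // turn the Outputs array into a simple key/value map
        const outputs = {};
        res.Stacks[0].Outputs.forEach((o) => { outputs[o.OutputKey] = o.OutputValue; });
        return outputs;
      });
  }
  return outputsPromise;
}

module.exports.handler = (event, context, cb) => {
  getOutputs()
    .then((outputs) => cb(null, { bucket: outputs.BucketName })) // BucketName is illustrative
    .catch(cb);
};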

Let me suggest the obvious just as food for thought:

Since all providers have some version of an "S3" service and sls already uses it to store the code drops for the lambdas, perhaps an sls deploy can package up some "environment variable" section of the service config, put it onto "S3" as a json file, and provide a simple provider specific line again to read and load those environment variables, e.g.

sls deploy reads the appropriate section of serverless.env.yaml, creates a json file, uploads it to s3 as ${serviceName}-${stage}-environment.json (or something similar).

In (nodejs) function code, something like the following can be injected (as it used to be in v0.5) at the very beginning of handler.js before zipping and deployment happen:

(function () {
  // 1. configure an instance of AWS.S3 (bucket/key are known at deploy time; names here are illustrative)
  const s3 = new (require('aws-sdk')).S3();
  // 2. load the json file from S3
  s3.getObject({ Bucket: 'my-bucket', Key: 'my-service-dev-environment.json' }, (err, data) => {
    // 3. set the top-level json properties on process.env
    //    (OR simply set process.env.SERVERLESS_ENV to the entire json tree)
    if (!err) Object.assign(process.env, JSON.parse(data.Body.toString()));
  });
})();

This is easy to inspect, easy to understand and only incurs a slight S3 loading cost once per function instance "warm-up".

I think that during a deploy right now the old service zip files are removed, which I am guessing (needs to be validated) causes Lambda to pick up the new zip files in new lambda containers after each deployment happens. In-progress lambdas while the deploy is happening are sort of a separate problem that @nicka mentions above around how to do rolling deployments.

This is how I load my functions:

src
|__ foo.js
config.json
env.js
index.js
serverless.yml

index.js

const config = require('./config.json'); // You can copy prod / dev specific configs on deploy

require('./env')(config);

// Set up env vars before requiring functions
const foo = require('./src/foo');

module.exports.foo = foo.handler;

env.js

module.exports = function(config, secure) {
  Object.keys(config).forEach((key) => {
    const value = config[key];
    process.env[key] = value;

    // mask values in the log output when the `secure` flag is set
    console.log(`Env: ${key}=${secure ? 'secure' : value}`);
  });
};

serverless.yml

service: sls-project
provider:
    name: aws
    runtime: nodejs4.3
functions:
    foo:
        handler: index.foo # point at the index module rather than straight at foo.js

As it is, the config file can be set at build time. You could retrieve it from S3, auto-generate it, commit it to git, or whatever. Env variables are also packaged with the function. This could be a benefit or a drawback depending on your use case.

You could also easily change this to retrieve a config file from S3 during initialisation rather than build.

I'm not sure this is a problem Serverless needs to solve, as it's easily solved without the framework. If Serverless were to try to solve it, they would have to add a wrapper to each function (I think v0.5 did this). While that would work for Node, does it also work for other languages?

Good one @johncmckim. I guess you have an npm run deploy script which does this for you.

@mt-sergio my deploy script looks like this

deploy.sh

#!/bin/bash
set -e

AWS_REGION=${AWS_REGION:-ap-southeast-2}
BRANCH=${TRAVIS_BRANCH:-$(git rev-parse --abbrev-ref HEAD)}

if [[ $BRANCH == 'master' ]]; then
  STAGE="prod"
elif [[ $BRANCH == 'develop' ]]; then
  STAGE="dev"
fi

if [ -z ${STAGE+x} ]; then
  echo "Not deploying changes";
  exit 0;
fi

echo "Deploying from branch $BRANCH to stage $STAGE"

cp "./config/$STAGE.json" config.json

npm prune --production

sls deploy --stage $STAGE --region $AWS_REGION

I could add ./deploy.sh to npm scripts, but I usually just call it. In case you're interested, you can see my code. It's just a demo project to help me learn.

I've got a pull request in review (#1850) for adding the stage parameter to the API Gateway body mapping template for the Lambda event. After updating the body mapping template, I'm loading configs similar to this:

in my handler.js

module.exports.foo = (event, context, cb) => {
    setEnvVars(event);
    require('./lambda_functions/foo')(event, context, cb);
};

function setEnvVars(event) {
    // examples of setting env vars from event.stage
    process.env.NODE_ENV = event.stage;
    process.env.stage = event.stage;
    // process.env values are strings, so serialize the config object
    process.env.config = JSON.stringify(require(`./config/${event.stage}.json`));
}

Could a Serverless plugin be developed to wrap the function on build per the strategy defined by @johncmckim?

@patrickbrandt I'm thinking about developing a plugin which would allow you to map CloudFormation Ref and Fn::GetAtt values to Lambda via APIG stage variables. My only concern is the actual need for APIG, as this would only partially solve the issue. IMHO Lambda should have this same variables support.

An intermediate build step could inject environment variables into the handlers deployed to Lambda. For example:

I define handlers in handler.js and reference them in serverless.yml:

handler.js (screenshot)
serverless.yml (screenshot)

Serverless creates a new handler file for deployment using key/values from serverless.env.yml.

Given the following vars in serverless.env.yml:
(screenshot)
Generate a handler-deploy.js file with the following module:
(screenshot)

The Lambda function definition uses the handler-deploy.ping module.
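A rough sketch of what such a generated wrapper could look like (the ping handler comes from the example above; the variable name and value are illustrative):

// handler-deploy.js (generated at build time from serverless.env.yml)
process.env.PING_VERSION = '1.0.0'; // value injected from serverless.env.yml

// delegate to the original, untouched handler module
const handler = require('./handler');

module.exports.ping = handler.ping;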

Let me know your thoughts.

@patrickbrandt If I'm not mistaken, I think this is sort of what's happening in Serverless v0.5. For static variables (defined prior to the deploy) this should definitely work! But it would need to work for multiple languages. Again, getting CF stack related resources and info into the functions during a deploy is a lot more complex.

My opinion is that a lot of the methods described above are viable options (like @johncmckim demonstrates), and that while it isn't much more complex to roll your own solution than it is to write Lambdas in the first place, there should be a sls plugin (official or community-driven) to handle this very common use case in a standardized way. Solving this category of problem for the dev is part of the value proposition of a framework.

As @nicka points out, it is common to want to access (at least while we're talking about AWS) ARNs for resources generated during the CF deploy, which requires the right order of execution in terms of build, deploy, and config to do reliably well. Leaning on APIG to hold these variables might not be the simplest way to get this functionality cross-provider, though.

Happy to help work on this as well if it's going to be a community plugin. @pmuens is this high on the internal priority list?

@ianserlin I really like the plugin idea! We've just introduced the community plugin repository (https://github.com/serverless/community-plugins) and are looking for contributions there!

FWIW, if anyone (like me) is trying to figure out a simple way of using Heroku-like environment variables in their Lambda functions accessed from API Gateway, one option is using APIG stage variables.

You can set these up via the API Gateway console:

(screenshot: stage variables in the API Gateway console)

And they can then be accessed in the event.stageVariables object from within your lambda function:

(screenshot: reading event.stageVariables in a Lambda function)
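Roughly, the access looks like this (the variable name is illustrative):

module.exports.handler = (event, context, cb) => {
  // stage variables set in the API Gateway console arrive on the event
  const dbTable = event.stageVariables.DB_TABLE;
  cb(null, { statusCode: 200, body: `table: ${dbTable}` });
};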

I'm very new to all this (having just started with AWS lambda and serverless today) so apologies if I'm teaching the proverbial granny to suck eggs... and I also realise this ties the env vars to the http event source only. But it works nicely for my use case!

I like the way that Travis deals with environment vars. So the idea of having a plugin that can add encrypted vars to your serverless.yml and commit them to the repo safely would be pretty nice. I guess the trick is tying them to properties of the service/function so they remain secure?

It would be much nicer if AWS had the same env system that API Gateway has for the Lambda functions themselves! Anyone know the best person to ask? :D

For accessing the ARNs of created resources, I think it should be possible to add DependsOn: [ResX, ...] to the AWS::Lambda::Function resource, so you could ensure it is created after all your resources/roles/etc.?

The problem with relying on APIG is that not all functions use them. For instance an authorizer function.

Amazon has built-in AWS Console management of environment variables (or other user configuration data) in pretty much all of their platform services (EC2, ECS, OpsWorks, Beanstalk). So it's a really strange thing that only Lambda is missing it.

OTOH some other badly needed features are also missing in recent services (like ACM certificates in API Gateway), which makes you wonder why development of this basic stuff is so slow. I say this as a big advocate and heavy user of AWS.

I'm currently working on a project where I need some way of passing created CF resources (in my case a bucket name) to the Lambda functions. I have everything working now (automatic creation of another stage, with the passed variables); they are showing up as stage variables in APIG and are passed to the Lambda functions.

I'll see if I can create a plugin (or PR) for this. But I think it might be better to have this natively supported by Serverless?

Could you use a Lambda-backed custom resource to get the outputs from CloudFormation and update the zip file containing the code with a config.json file? Then you could have the functions DependsOn this resource to ensure it has completed before Lambda tries to load the function files.

Just referencing the Lambda forum thread on this subject: https://forums.aws.amazon.com/thread.jspa?messageID=686261.
Here's a summary of what others do:

  • static file via var props = require('./props.json')[context.alias];
  • dynamoDB table with config
  • S3 bucket
  • dedicated Lambda function
  • KMS

I think it's pretty easy to roll this out on our own, but it should be documented. For now (migrating from sls 0.5) I'm going to use a static file, since it will be easy to refactor in the future.
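A minimal sketch of that static-file option (the file layout and the fallback stage are assumptions):

// props.json: { "dev": { "TABLE": "todos-dev" }, "prod": { "TABLE": "todos-prod" } }
const allProps = require('./props.json');

module.exports.handler = (event, context, cb) => {
  // context.alias as in the list above; fall back to 'dev' when no alias is set
  const props = allProps[context.alias || 'dev'];
  cb(null, { table: props.TABLE });
};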

For those on this thread, I was upgrading to SLS 1.0 RC1 and had to have something to make a few environment variables available (service name, stage, etc, at minimum). I wrote a very simple plugin that allows you to define variables in your serverless.yml file that will be written to a .env file in your deployment bundle so that you can use dotenv to load them. This may not address the needs of those who need things like CloudFormation references, but it addressed my simpler needs.

https://www.npmjs.com/package/serverless-plugin-write-env-vars
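Loading the generated .env file then looks roughly like this (the variable name is illustrative):

// handler.js: load the .env file the plugin wrote into the deployment bundle
require('dotenv').config();

module.exports.handler = (event, context, cb) => {
  cb(null, { stage: process.env.SERVERLESS_STAGE });
};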

Hope that helps someone until SLS officially supports it!

It would be great to have env vars both on a per-service basis (like the Write Env Vars plugin does) and on a per-handler basis, where handler-level vars take priority over and can override service-level ones.

Something like this would be amazing:

(screenshot: service- and handler-level env vars)
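For illustration, a hypothetical serverless.yml shape for that (none of this syntax existed at the time):

service: my-service
provider:
    name: aws
    environment: # service-level vars
        TABLE_PREFIX: base
functions:
    foo:
        handler: handler.foo
        environment: # handler-level vars override service-level ones
            TABLE_PREFIX: foo-specific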

Just looking for a way to do something like this in V1 as opposed to V0.5

in s-function.json:

"environment": {
  "SERVERLESS_PROJECT": "${project}",
  "SERVERLESS_STAGE": "${stage}"
},

in handler.js:

const stage = process.env.SERVERLESS_STAGE;
const project = process.env.SERVERLESS_PROJECT;
const table = project + '-' + stage + '-todos';

I am doing this in v1 with webpack and the DefinePlugin. It has been working great so far. Here is how I am doing this. https://gist.github.com/andymac4182/b25c5ffc5e23c1e367e5fde7558758d0

I am using the serverless-webpack plugin to integrate webpack and serverless.
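The gist has the full setup; the core of the approach is roughly this (the stage variable name is an assumption):

// webpack.config.js: inline env vars at build time via DefinePlugin
const webpack = require('webpack');

module.exports = {
  entry: './handler.js',
  target: 'node',
  output: { libraryTarget: 'commonjs', path: '.webpack', filename: 'handler.js' },
  plugins: [
    new webpack.DefinePlugin({
      // every process.env.SERVERLESS_STAGE reference becomes this literal
      'process.env.SERVERLESS_STAGE': JSON.stringify(process.env.SERVERLESS_STAGE || 'dev'),
    }),
  ],
};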

@jeffski I'm using the serverless-plugin-write-env-vars plugin from @jthomerson, which works fine for now. I'm working on creating a plugin for setting the API Gateway stage variables, so you can just use event.stageVariables.foo in API Gateway Lambda functions.

@svdgraaf - thank you, that worked. I had a try with your plugin but ran into a couple of issues, which I have added.

I'm using a Babel plugin to access env vars in my project (https://github.com/jch254/serverless-es6-dynamodb-webapi).

Check out https://babeljs.io/docs/plugins/transform-inline-environment-variables for more info. This is really handy in React projects too.
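That transform simply replaces process.env references with literals at build time, e.g.:

// before the transform
console.log(process.env.NODE_ENV);
// after the transform (built with NODE_ENV=production)
console.log('production');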

A good example of this is separating environments that use the same code. For example, I have two queues in SQS and I'd like to use queue-name-dev and queue-name-prod, but every time I change the stage in serverless.yml I need to change the handler file too.

@wmarra Check out #2673 it might help

Closing this one as #2673 will discuss this in detail!