This repository deploys a new VPC into AWS containing various NGINX and Open Source software. But before you can use it you need to set a few things up. After cloning you will need to:
We use submodules to load in some playbooks and demo applications. Git may not have fetched them automatically for you, so to make sure you have them and that they stay up to date, run:
$ git submodule update --init
$ git config --global submodule.recurse true
Next you need to change to the build folder and copy the example inventory and vars.yaml files from the example folder.
$ cp example/* .
You’ll also need a secrets folder to store keys and other sensitive information. Create a folder somewhere outside of the source tree and then sym-link it into the build folder. Eg:
$ sudo mkdir -p /var/lib/ansible/secrets
$ sudo chown <user>:<group> /var/lib/ansible/secrets
$ sudo ln -s /var/lib/ansible/secrets secrets
Then review and edit the vars.yaml, and create secrets/aws_config.yaml using the instructions below.
I recommend using the most recent version of ansible. I’m using 2.9. You can install ansible and the required modules below using pip, or your package manager (if it has an up-to-date ansible).
$ pip install ansible
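If your package manager only ships an older ansible, you could instead pin pip to the 2.9 series used here; the exact pin below is only an illustration:

$ pip install 'ansible==2.9.*'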
You’ll need to have the following python modules available on your system to run the build playbook:
- boto
- boto3
- botocore
- nose
- tornado
- passlib (required on macos for setting jumphost passwords)
So depending on how you installed ansible, an apt-get, yum or pip may be in order:
$ pip install boto boto3 botocore nose tornado
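On macOS you would also need passlib (used for setting the jumphost passwords); installing it with pip is one option:

$ pip install passlib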
The configuration for the workshop deployment is stored in build/vars.yaml
The stack built by the build playbook will use the below variables for creating your VPC and its hosts. Please change them to identify yourself, and so as not to clobber other deployments.
---
ws:
  stack_name: workshop
  uk_se_name: mbodd
  prefix: ws
  stacks: 1
  save_certbot_configs: yes
  jumphosts: yes
  aws_access_key: "{{ secrets.aws_access_key }}"
  aws_secret_key: "{{ secrets.aws_secret_key }}"
  aws_sts_token: "{{ secrets.aws_sts_token }}"
  aws_account_id: "{{ secrets.aws_account_id }}"
  aws_region: "eu-west-1"
  route53:
    zone: "{{ secrets.r53_zone }}"
    zone_id: "{{ secrets.r53_zone_id }}"
The stack_name and uk_se_name need to be unique to your deployment. They are used to name your VPC and add your deployment to an S3 bucket policy. You should also set the prefix to your initials or something unique to you, as the prefix will make up part of your host names in Route53.
The prefix is used for each machine in your stack, and the stacks number indicates how many stacks to deploy. If you are throwing something up for testing, then leave stacks at 1.
The above settings would create:
- A VPC called mbodd_workshop
- A private Subnet called ws01_private
- An Internet Gateway called mbodd_workshop_gw
- A Route Table called mbodd_workshop_route
- Virtual Machines called ws01-gateway, ws01-nginx1, ws01-nginx2…
- etc.
The machines created are given names like ws01-gateway, where ws01 is made up from the prefix and stack number. Stacks would be numbered 01, 02, 03, etc.
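For instance, a hypothetical deployment with prefix: ws and stacks: 2 would produce two parallel sets of machines:

ws01-gateway, ws01-nginx1, ws01-nginx2, ...
ws02-gateway, ws02-nginx1, ws02-nginx2, ...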
The gateway is always needed, and by default is the only public machine. The playbook will upload the workdir folder and its contents to the home directory /home/ubuntu on the gateway.
The aws_region should obviously be set to the region in which you want to deploy the workshop.
If the save_certbot_configs setting is yes|true then the undeploy script will tar up the letsencrypt certificates from the workshop instances and store them in your local secrets folder during shutdown.
If jumphosts is set to yes then each workshop will also get a basic linux jumphost accessible via RDP. The jumphost is using Amazon Linux and the user will be ec2-user@jump{{stack}}. See also the jumphost section in the ec2 vars to set the machine type.
The vars.yaml continues with an S3 section…
s3:
  bucket: "{{ secrets.s3_bucket }}"
  folder: "app_centric_automation"
  files:
    controller: "controller-installer-3.2.tar.gz"
    controller_local: "/home/mark/controller-installer-3.2.tar.gz"
    controller_version: 3.2
S3 contains the NGINX Controller installer package, which will be downloaded during deployment.
Tip: If you are deploying into a region for the first time, you will want to create a local S3 Bucket. See the S3 Prerequisites section below.
The bucket will be given a suffix of -{{ ws.aws_region }} and is expected to be an S3 bucket holding your downloads.
The files should all be placed in the named folder inside the bucket. The files section contains the files needed by the workshop. Currently the only file stored in S3 is the controller installer package.
Tip: The controller values set here will override values set in the workdir env files during deployment.
The vars.yaml continues with an ec2 section…
ec2:
  cidr_block: "10.10"   # first 2 octets of subnet cidr... this will be a /16
  jumphost:
    type: t3.medium
    public: yes
  machines:
    gateway:
      type: t3.small
      public: yes
    cicd1:
      type: t3.xlarge
      disk_name: /dev/sda1
      disk_size: 40
      public: no
    ctrl1:
      type: t3.xlarge
      disk_name: /dev/sda1
      disk_size: 80
      public: no
    nginx1:
      type: t3.small
      public: no
    nginx2:
      type: t3.small
      public: no
The cidr_block is the first two octets of a /16 subnet which will make up the VPC Subnet. For other network addressing see subnets below.
The machines dictionary contains all the machines which should be deployed into the VPC. Any machines with public set to yes will be multi-homed, having a NIC on the public subnet and on the stack private subnet. Each stack gets its own private subnet, but they share the public one. Machines without a public interface will only be reachable via the gateway machine.
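For example, to add a third private NGINX instance you could append an entry like the following to the machines dictionary (nginx3 is a hypothetical name, following the same schema as the entries above):

    nginx3:
      type: t3.small
      public: no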
There are two playbooks in /home/ubuntu/ansible/playbooks/nginx_workshop_gw which will install and configure NGINX on the gateway, to act as a reverse proxy for access to all other services.
When the stack is deployed a wildcard DNS record will be added to Route53 to point at the public IP address of the gateway machine. Eg: ws01.ukws.nginxlab.net
The final variables section configures the subnets.
subnets:
  public:
    third: 0
    bits: 21
  private:
    third: 10
    bits: 24
The public network is shared by all stacks, and will be given a /21 network block. The private networks will be dedicated to each stack, with the stack number incrementing the third octet. Ie stack #1 will have third == 11, stack #2 will have third == 12.
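Putting the example cidr_block and subnets values above together, the addressing works out roughly as follows:

VPC:              10.10.0.0/16
Public subnet:    10.10.0.0/21   (shared by all stacks)
Stack 1 private:  10.10.11.0/24
Stack 2 private:  10.10.12.0/24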
As the VMs are being deployed, their private IPs will be stored in secrets/hosts.<prefix><stack>
This repo keeps all secrets outside of the repository and sym-links them into the build folder. The symlink points to /var/lib/ansible/secrets by default.
You should create a file called aws_config.yaml inside your secrets folder, containing:
---
secrets:
  aws_access_key: <YOUR_ACCESS_KEY>
  aws_secret_key: <YOUR_SECRET_KEY>
  aws_sts_token: <YOUR_STS_TOKEN>
  aws_account_id: <AWS ACCOUNT ID>
  aws_user_role: <The Role for SSO access tokens>
  r53_zone: <ROUTE_53_ZONE_FOR_PUBLIC_HOSTS>
  r53_zone_id: <THE_ZONE_ID_FOR_ABOVE>
  s3_bucket: <NAME_OF_S3_BUCKET>
...
Ansible will also check for a controller licence file and NGINX repo keys inside the secrets folder: license.txt, nginx-repo.crt and nginx-repo.key.
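A populated secrets folder might therefore look something like this (the letsencrypt archive is optional, as described below):

secrets/
  aws_config.yaml
  license.txt
  nginx-repo.crt
  nginx-repo.key
  letsencrypt-ws01.tar.gz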
The workshop playbooks can generate letsencrypt keys for the public domain names, so it’s a good idea to tar them up for hostnames you’ll use again and drop them in your secrets folder.
Tip: The undeploy script will do this for you if save_certbot_configs is enabled.
Ansible will check for letsencrypt-<prefix><stack#>.tar.gz, and deploy it into /etc/letsencrypt
$ tar ztvf letsencrypt-ws01.tar.gz | head -4
drwxr-xr-x root/root         0 2020-03-02 13:06 letsencrypt/
-rw-r--r-- root/root        64 2020-03-02 12:14 letsencrypt/.updated-options-ssl-nginx-conf-digest.txt
-rw-r--r-- root/root       424 2020-03-02 12:14 letsencrypt/ssl-dhparams.pem
drwx------ root/root         0 2020-03-02 13:06 letsencrypt/renewal/
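If you need to build one of these archives by hand from an existing gateway (the undeploy script does this for you when save_certbot_configs is enabled), something along these lines on that host would produce the layout shown above:

$ sudo tar czf letsencrypt-ws01.tar.gz -C /etc letsencrypt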
If you use SSO then you will need to provide an STS session token along with your temporary access keys.
If this is the first time you’ve used the aws cli, then you will need to create a folder in your home directory ~/.aws and create a ~/.aws/config file with the following content:
[default]
sso_start_url = https://<your-org-login-uri>/start#/
sso_region = us-east-1
sso_account_id = <account-id>
sso_role_name = <role-name>
region = us-east-1
output = yaml
Once you have the AWS CLI installed and configured, you can generate the temp keys with…
$ aws sso get-role-credentials --account-id <account id> --role-name <role> --access-token <cli access token>
Or use my little script
$ ./bin/get_aws_token.sh
You should see a file named <uuid>.json in your aws folder: ~/.aws/sso/cache/<uuid>.json. That file will contain your current access token in use by the CLI.
Fill out the aws secrets using the values returned by the AWS CLI. Alternatively, log in using your browser and retrieve temporary keys from the portal.
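As a rough guide, the role credentials returned by the CLI map into secrets/aws_config.yaml like this (the values shown are placeholders):

secrets:
  aws_access_key: <accessKeyId from the CLI output>
  aws_secret_key: <secretAccessKey from the CLI output>
  aws_sts_token: <sessionToken from the CLI output>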
To deploy the stack to AWS, enter the build folder and execute ansible-playbook deploy_aws.yaml.
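For example:

$ cd build
$ ansible-playbook deploy_aws.yaml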
Once complete you will have access to <prefix><stack#>.<r53_zone> via SSH, HTTP, and HTTPS.
You should be able to log in as the user ubuntu using the ssh private key stored in secrets/user.pem. Eg:
$ ssh -i secrets/user.pem ubuntu@ws01.ukws.nginxlab.net
All stack information is recorded in secrets/workshops.txt; you’ll have an entry for each workshop and each jumphost. Eg:
$ cat secrets/workshops.txt
https://ws01.ukws.somedomain.com/tasks <user> <password>
rdp://jump01.ukws.somedomain.com <user> <password>
The https line is to access the documentation for the course, which is protected with basic-auth and the user/password. The rdp line is the uri for the jumphost with the RDP user/password. You can also SSH to the jumphost using the same ssh key as the workshop gateway, but the username is ec2-user.
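For example, reusing the hostnames from the workshops.txt sample above:

$ ssh -i secrets/user.pem ec2-user@jump01.ukws.somedomain.com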
The deploy_aws.yaml playbook sets up the VPC, and then includes deploy_workshop.yaml to handle setting up the workshop (ie the gateway instance). The gateway is set up to provide DNS for the other machines, and also act as a default gateway.
If jumphosts are enabled then the playbook deploy_jumphosts.yaml is run to set up the jumphosts too.
The deployment will attempt to download a copy of the NGINX Controller installer package from S3. However this will only succeed if the S3 Bucket is in the same region as your deployment. UK London will work :-)
You may need to create a new S3 bucket for your region and upload the controller package.
There is a playbook called deploy_s3_bucket.yaml which will create a new S3 bucket for your region.
You will need to set the ws.aws_region to the AWS region you want to deploy to, and set an appropriate bucket name in ws.s3.bucket. The resulting bucket will be named {{ws.s3.bucket}}-{{ws.aws_region}}.
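For example, with hypothetical values such as these in vars.yaml:

ws:
  aws_region: "eu-west-2"
  s3:
    bucket: "my-downloads"

the playbook would create a bucket named my-downloads-eu-west-2.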
When ready, run the playbook from the build folder.
$ ansible-playbook deploy_s3_bucket.yaml
The playbook will create a new S3 bucket, with a default policy which allows access from your AWS users, but no public access. It will also upload the controller image specified in ws.s3.files.controller*
Tip: You can also run the deploy_s3_bucket playbook when you want to upload a new version of controller.
Once the stack is running, you’re ready to follow the tasks in the workshop itself.
See workdir/docs and/or workdoc/html. Workshop Docs link: workdir/docs/index
Or see below for minimal setup instructions…
The first thing to do is to set up the other nodes to use the gateway for internet access and DNS.
$ cd ~/ansible
$ ansible-playbook node_setup_playbook.yaml
Once that is done, you will want to enable the gateway to act as a reverse proxy.
$ ansible-galaxy install nginxinc.nginx
$ cd ~/ansible
$ ansible-playbook playbooks/nginx_workshop_gw/install.yaml
$ ansible-playbook playbooks/nginx_workshop_gw/setup.yaml
Other playbooks included are:
- cicd: Deploy Jenkins and gitea servers on the cicd1 instance. Once deployed they will be accessible at https://git.<prefix><stack>.<domain> and https://jenkins.<prefix><stack>.<domain>
- controller: Deploy an NGINX controller, license it, and register nginx instances. Once deployed it will be accessible at https://ctrl.<prefix><stack>.<domain>
- apps: Deploy an API Gateway configuration via the controller
There’s also a hidden script which deploys everything found here: ~/.please_dont_run_this_script.sh