This repository provides a working role for deploying a single new "webworker" to a given stack for the commcare-hq application.
To test the role, a Vagrantfile has been added that provides the required servers for a multi-machine deployment, similar to the US production stack described in Dimagi devops needs.
Begin by checking out the source for this repository:
$ git clone https://github.com/dimagi/commcarehq-ansible
Then install the git hooks:
$ ./git-hooks/install.sh
Now you can change directories into the new clone and set up submodules:
$ cd commcarehq-ansible
$ git submodule init
$ git submodule update
There is one file that is omitted from the commcarehq-ansible repository: DimagiKeyStore. You will need it to complete a full stack deployment. Obtain a copy of it, and place it in the ansible/roles/keystore/files/ directory.
Ensure you have installed Vagrant and VirtualBox.
Then start vagrant:
$ vagrant up
If you run into issues starting vagrant, see the troubleshooting section at the bottom.
The ./scripts/reset-vms command can be run at any time, optionally with a subset of the VM names, to reset the VMs to their initial state and provision them with your SSH key. Run ./scripts/reset-vms without arguments for usage info.
Once vagrant is up, you may ssh into the control server and run a full deployment:
$ vagrant ssh control
...
$ ansible-playbook -i inventories/development -e '@vars/dev/dev_private.yml' -e '@vars/dev/dev_public.yml' deploy_stack.yml
This will build a database server, a proxy server and a single web worker, hooked into both appropriately.
Once the preliminary deployment is complete, a new web worker may be added simply by editing ansible/inventories/development and adding the second web worker's IP address. Also uncomment the section of the Vagrantfile that refers to 'app2':
[webworkers]
192.168.33.15
192.168.33.18
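As a sanity check, you can confirm which hosts ended up in the [webworkers] group of an INI inventory using standard tools. This is a sketch: the snippet above is written to a local file named inventory purely for illustration.

```shell
# Recreate the inventory snippet above (illustrative file name).
cat > inventory <<'EOF'
[webworkers]
192.168.33.15
192.168.33.18
EOF

# Print only the hosts in the [webworkers] group: take lines from the group
# header up to the next header, then keep the IP-looking lines.
sed -n '/^\[webworkers\]/,/^\[/p' inventory | grep -E '^[0-9]'
```

Running this prints the two web worker IPs, confirming both are in the group.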
Running the full multi-machine setup locally can be very resource-intensive. To set up a monolith instead, which is just one machine, use this setup:
cp Vagrantfile-monolith Vagrantfile
vagrant up
The one other change needed is to point to the proper inventory: instead of using ansible/inventories/development, use ansible/inventories/monolith:
$ vagrant ssh control
...
$ ansible-playbook -i inventories/monolith -e '@vars/dev/dev_private.yml' -e '@vars/dev/dev_public.yml' deploy_stack.yml
In order to have this setup send email without crashing (which it needs to do during a deploy, for example), run a debug SMTP server daemon on control:
$ vagrant ssh control
...
$ python -m smtpd -n -c DebuggingServer 0.0.0.0:1025
Troubleshooting
If vagrant up fails:
- Start VirtualBox
$ VirtualBox
- Or on a Mac,
$ sudo /Library/StartupItems/VirtualBox/VirtualBox restart
- Attempt to start the VM
- If the error message is:
VT-x needs to be enabled in BIOS
For the Lenovo T440s:
- Restart the machine, press Enter during startup
- Navigate to Security -> Virtualization
- Turn both settings on
- Update localsettings:
ansible-playbook -i inventories/development -e '@vars/dev/dev_private.yml' -e '@vars/dev/dev_public.yml' deploy_localsettings.yml --tags=localsettings
- Skip the common setup, including apt installs and updating the commcarehq code:
ansible-playbook -i inventories/development -e '@vars/dev/dev_private.yml' -e '@vars/dev/dev_public.yml' deploy_stack.yml --skip-tags=common
Tags available:
- apache2
- aptcache
- common
- deploy
- git
- keystore
- ksplice
- localsettings
- newrelic
- slow
Note: to generate this list automatically, you can run something like
ENV=production && ansible-playbook -u root -i ../../commcare-hq/fab/inventory/india deploy_stack.yml -e "@vars/$ENV/${ENV}_vault.yml" -e "@vars/$ENV/${ENV}_public.yml" --tags= | sed 's/ERROR: tag(s) not found in playbook: . possible values: //g' | sed 's/,/\
/g' | xargs -I% echo - %
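The core of that pipeline is simply splitting ansible's comma-separated "possible values" tag list into one bullet per line. A minimal sketch of the same transformation, using an illustrative tag list:

```shell
# Illustrative comma-separated tag list, like ansible's "possible values" output.
tags="apache2,aptcache,common,deploy"

# Split on commas, then prefix each tag with "- " to form a bullet list.
echo "$tags" | tr ',' '\n' | sed 's/^/- /'
```

This prints one `- tag` line per entry, matching the list above.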
This must be done as the root user. Run ansible-deploy-control to get the proper command.
Ansible forwards SSH requests through your local machine to authenticate with remote servers. This way authentication originates from your machine and your credentials, and the ansible machine doesn't need its own auth to communicate with other servers managed with ansible.
SSH ForwardAgent can be enabled by passing the -A flag on the command line:
$ ssh -A control.internal-va.commcarehq.org
You can also enable it automatically for an alias in your ssh config (note that you then must use the alias, $ ssh control, for the settings to take effect):
# ~/.ssh/config
Host control
Hostname control.internal-va.commcarehq.org
ForwardAgent yes
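To check which settings ssh will actually apply for a host, OpenSSH can print its resolved client configuration with ssh -G. A sketch (the -F /dev/null flag ignores your personal config so the effect of a single option is visible in isolation):

```shell
# Print the effective ForwardAgent setting for a host without connecting.
# -G resolves and prints the configuration; -F /dev/null skips ~/.ssh/config.
ssh -G -F /dev/null -o ForwardAgent=yes control.internal-va.commcarehq.org \
    | grep '^forwardagent'
```

With your real config in place, drop -F /dev/null and run the same command against your control alias to confirm the ForwardAgent yes line took effect.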
Be careful not to enable ForwardAgent for untrusted hosts.
You cannot use SSH agent forwarding with mosh, so you cannot use mosh for ansible.
You must ssh in as your dev user, with SSH ForwardAgent enabled (see above).
git clone git@github.com:dimagi/commcarehq-ansible
. commcarehq-ansible/control/init.sh
update-code
# optional: make subsequent logins a bit more convenient
echo '[ -t 1 ] && source ~/init-ansible' >> ~/.profile
On subsequent logins, if the optional step above was not done:
. init-ansible
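The [ -t 1 ] guard in the .profile line above sources init-ansible only when stdout is a terminal, so non-interactive logins (scp, rsync, cron) are unaffected. A quick sketch of the behavior:

```shell
# [ -t 1 ] succeeds only when file descriptor 1 (stdout) is a terminal.
# In a pipeline or command substitution stdout is not a tty, so the guard
# fails and the guarded command is skipped.
if [ -t 1 ]; then
    echo "interactive: would source ~/init-ansible"
else
    echo "non-interactive: skipped"
fi
```

Run it directly in a terminal and it reports interactive; pipe it through cat and it reports non-interactive.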
Set up the vault password files as described below in Managing secrets with Vault.
Add a record for your user to dev_users.present in ansible/vars/dev/dev_public.yml and your SSH public key to ansible/vars/dev/users/{username}.pub.
Log in with vagrant ssh control
ansible-playbook -u root -i inventories/development -e @vars/dev/dev_private.yml \
-e @vars/dev/dev_public.yml --diff deploy_control.yml
Log in as your user: `vagrant ssh control -- -l $USER -A`
ln -s /vagrant ~/commcarehq-ansible
. commcarehq-ansible/control/init.sh
echo '[ -t 1 ] && source ~/init-ansible' >> ~/.profile
# run ansible
ansible-playbook -u ansible --ask-sudo-pass -i inventories/development \
-e @vars/dev/dev_private.yml -e @vars/dev/dev_public.yml \
--diff deploy_stack.yml --tags=users,ssh # or whatever
ansible-playbook -u vagrant -i inventories/development -e @vars/dev/dev_private.yml \
-e @vars/dev/dev_public.yml --diff deploy_stack.yml --tags=users,ssh # or whatever
IMPORTANT: Install the git hooks to help ensure you never commit secrets into the repo: ./git-hooks/install.sh
All the secret variables and private data required for the different environments are included in this repository as encrypted files (${ENV}_vault.yml).
To edit these files you need to provide the vault password when prompted (the keys are stored in CommCare Keepass).
To use these files with ansible-playbook, include the --ask-vault-pass param. (This is included for your convenience in the ap and aps aliases.)
You can use Vault's built-in editing capability as follows:
ENV=production ansible-vault edit ansible/vars/$ENV/${ENV}_vault.yml
This will decrypt the file for editing and re-encrypt it after. Note that even if no changes are made to the file the encrypted contents will have changed.
If you just want to view the contents of the file you can use this command:
ENV=production ansible-vault view ansible/vars/$ENV/${ENV}_vault.yml
CAUTION: Make sure that you re-encrypt any files with the correct key before committing them.
The following command can be used to encrypt and decrypt files:
ENV=production && ansible-vault [encrypt|decrypt] filename
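Since the git hooks exist precisely to keep unencrypted secrets out of the repo, a simple way to verify that a vault file is still encrypted is to look for the $ANSIBLE_VAULT header that ansible-vault writes on the first line. A sketch; the helper name and file paths here are illustrative:

```shell
# A vault-encrypted file always begins with an "$ANSIBLE_VAULT;..." header line.
is_vault_encrypted() {
    head -n 1 "$1" | grep -q '^\$ANSIBLE_VAULT'
}

# Demonstration with two illustrative temp files:
printf '$ANSIBLE_VAULT;1.1;AES256\n6164666263...\n' > /tmp/encrypted.yml
printf 'secret_key: hunter2\n' > /tmp/plaintext.yml

is_vault_encrypted /tmp/encrypted.yml && echo "encrypted.yml: safe to commit"
is_vault_encrypted /tmp/plaintext.yml || echo "plaintext.yml: NOT encrypted"
```

A check like this is the kind of guard the pre-commit hook can apply to every ${ENV}_vault.yml before a commit goes through.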
For more info on Vault, see the Ansible Documentation.
It is also possible to run tasks on the vagrant machines from your local machine:
ansible-playbook -u vagrant -i inventories/development --private-key=~/.vagrant.d/insecure_private_key -e '@vars/dev/dev_private.yml' -e '@vars/dev/dev_public.yml' deploy_stack.yml