/etc/consul/config.json not produced as valid json after upgrading from ansible 2.10 to ansible 2.12
MartinAhrer opened this issue
After upgrading from Ansible 2.10 to 2.12, a playbook using version v2.6.1 of the consul role no longer produces a valid /etc/consul/config.json.
All double quotes (") are suddenly escaped into HTML entities, resulting in unparseable JSON.
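To make the failure concrete, here is a minimal stand-alone sketch (not the role's actual template code, and using Python's `html.escape` purely for illustration) of why HTML-escaped quotes make the file unparseable:

```python
import html
import json

cfg = '{"dns": "127.0.0.1"}'   # what the template should produce
escaped = html.escape(cfg)     # what HTML-style escaping turns it into
print(escaped)                 # {&quot;dns&quot;: &quot;127.0.0.1&quot;}

json.loads(cfg)                # parses fine
try:
    json.loads(escaped)
except json.JSONDecodeError:
    print("escaped output is not valid JSON")
```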
My setup was working with:

```
ansible 2.10.3
  config file = None
  configured module search path = ['/Users/martin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/2.10.3_1/libexec/lib/python3.9/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.9.0 (default, Dec 3 2020, 16:09:02) [Clang 12.0.0 (clang-1200.0.32.27)]
```
After updating Ansible I started experiencing the malfunction:
```
ansible [core 2.12.1]
  config file = None
  configured module search path = ['/Users/martin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/5.1.0/libexec/lib/python3.10/site-packages/ansible
  ansible collection location = /Users/martin/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.10.1 (main, Dec 6 2021, 23:20:29) [Clang 13.0.0 (clang-1300.0.29.3)]
  jinja version = 3.0.3
  libyaml = True
```
My playbook contains some variable values with double curly braces that are prefixed with `!unsafe`, so the placeholders are ignored by Ansible and passed through verbatim to the Consul configuration.
```yaml
# TODO for compatibility 169.254.1.1 bindings are kept. They have to be removed as migration is done.
- name: Assemble consul cluster
  become: true
  hosts: consul_nodes
  roles:
    - role: consul
  vars:
    consul_version: "1.11.1"
    consul_install_upgrade: true
    consul_group_name: "consul_nodes"
    consul_addresses:
      dns: !unsafe '169.254.1.1 127.0.0.1 {{ GetPrivateIP }} {{ GetInterfaceIP \"docker0\" }}'
      http: !unsafe '169.254.1.1 {{ GetPrivateIP }} {{ GetInterfaceIP \"docker0\" }}'
      https: !unsafe '127.0.0.1 {{ GetPrivateIP }}'
      grpc: "127.0.0.1"
    consul_client_address: "169.254.1.1"
    consul_node_role: server
    consul_bootstrap_expect_value: 3
    consul_bootstrap_expect: true
```
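For reference, with those `!unsafe` values the addresses block in config.json should come out with the Consul runtime templates left intact, roughly like this (a sketch of the expected shape, not the role's exact output):

```json
"addresses": {
  "dns": "169.254.1.1 127.0.0.1 {{ GetPrivateIP }} {{ GetInterfaceIP \"docker0\" }}",
  "grpc": "127.0.0.1",
  "http": "169.254.1.1 {{ GetPrivateIP }} {{ GetInterfaceIP \"docker0\" }}",
  "https": "127.0.0.1 {{ GetPrivateIP }}"
}
```

The `{{ ... }}` placeholders here are Consul's own go-sockaddr templates, evaluated by the Consul agent at startup, which is why Ansible must not try to template or escape them.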
My initial playbook didn't use `!unsafe`, and it broke the Consul config in the same way. After adding `!unsafe` the configuration was valid.
So I was wondering whether the effect is related to that feature, or whether this is just a coincidence. Are those unsafe values passed through properly, or am I using this the wrong way?
For what it's worth, I'm experiencing the same problem when trying to bring up the Vagrant cluster defined in examples, without making any changes to the settings. Ansible details below:
```
$ ansible --version
ansible [core 2.12.1]
  config file = /home/manoj/dev/consul/roles/brianshumate.consul/examples/ansible.cfg
  configured module search path = ['/home/manoj/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
  ansible collection location = /home/manoj/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.9.9 (main, Nov 16 2021, 03:08:02) [GCC 9.3.0]
  jinja version = 3.0.3
  libyaml = False
```
I can confirm exactly the same behaviour (I didn't mention the newline effects in my initial report).
This is apparently a known (and already fixed) bug.
I applied the changes from PR 448 locally, and the config.json file now gets generated as expected:
```
root@consul1:/etc/consul# cat config.json
{
  "addresses": {
    "dns": "127.0.0.1",
    "grpc": "127.0.0.1",
    "http": "127.0.0.1",
    "https": "127.0.0.1"
  },
  "advertise_addr": "10.1.42.210",
  "advertise_addr_wan": "10.1.42.210",
  "bind_addr": "10.1.42.210",
  "bootstrap": true,
  "client_addr": "127.0.0.1",
  "data_dir": "/var/consul",
  "datacenter": "dc1",
  "disable_update_check": false,
  "domain": "consul",
  "enable_local_script_checks": false,
  "enable_script_checks": false,
  "encrypt": "y65Z9ZWVyxTx6zYFPObH9r3+VHki+Sk6XOFpmpd1vJ4=",
  "log_file": "/var/log/consul/consul.log",
  "log_level": "DEBUG",
  "log_rotate_bytes": 0,
  "log_rotate_duration": "24h",
  "log_rotate_max_files": 0,
  "node_name": "consul1",
  "performance": {
    "leave_drain_time": "5s",
    "raft_multiplier": 1,
    "rpc_hold_timeout": "7s"
  },
  "ports": {
    "dns": 8600,
    "grpc": -1,
    "http": 8500,
    "https": -1,
    "serf_lan": 8301,
    "serf_wan": 8302,
    "server": 8300
  },
  "raft_protocol": 3,
  "retry_interval": "30s",
  "retry_interval_wan": "30s",
  "retry_join": [
    "10.1.42.210",
    "10.1.42.220",
    "10.1.42.230"
  ],
  "retry_max": 0,
  "retry_max_wan": 0,
  "server": true,
  "translate_wan_addrs": false,
  "ui": true
}
```
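One cheap guard against this regression is to validate the rendered file as plain JSON before (re)starting Consul. A hedged sketch (in practice you would point it at /etc/consul/config.json; a temp file is used here so the snippet is self-contained):

```shell
# Sanity-check a rendered config with Python's stock json module.
cfg=$(mktemp)
printf '%s' '{"server": true, "bootstrap": true}' > "$cfg"
if python3 -m json.tool "$cfg" > /dev/null; then
    echo "config is valid JSON"
else
    echo "config is NOT valid JSON"
fi
rm -f "$cfg"
```

Note that this only checks syntax; `consul validate` additionally checks the semantics, if the consul binary is available on the host.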
Consul is not starting, though. Ansible provisioning fails during the "Check Consul HTTP API (via TCP socket)" step. Not sure if this is related to #455.
I also observed that the /etc/consul.d directory does not exist; there is a directory named /etc/consul/consul.d, however.
```
root@consul1:/etc/consul# ls -l
total 8
-rw-r--r-- 1 consul bin 1355 Dec 29 09:37 config.json
drwx------ 2 consul bin 4096 Dec 29 09:37 consul.d
```
Any news on when this will be released or tagged?