canonical / cloud-init

Official upstream for the cloud-init: cloud instance initialization

Home Page: https://cloud-init.io/


FreeBSD only adding one IPv4 address to an interface, missing public IP

hhartzer opened this issue

With FreeBSD 14.0, cloud-init 24.4.1, and DigitalOcean, cloud-init will configure vtnet0 with a private 10.0.0.0/8 address, but not the public IP address. DigitalOcean uses both on vtnet0. I suspect the public address is added first, then the configuration is overridden by the second.

Is there an easy workaround for something like this? It should be possible to put both IP addresses on one line.
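For context, FreeBSD's rc.conf supports multiple IPv4 addresses on one interface via `_alias` entries; a minimal sketch (the addresses here are documentation placeholders, not from a real droplet) would be:

```
ifconfig_vtnet0="inet 203.0.113.10 netmask 255.255.240.0"
ifconfig_vtnet0_alias0="inet 10.0.0.5 netmask 255.255.0.0"
```

With that form, the alias entry adds a second address to the interface instead of replacing the primary one.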

Thank you!

As a possibly related side question, is this using the ConfigDrive or the DigitalOcean datasource?

The DigitalOcean datasource is deprecated, and I think it doesn't extract all available information from the DO metadata server.

There was a comment from a DO staffer regarding public/private interfaces in that discussion: #4130

I'm honestly not sure. That's a great question. I did vanilla cloud-init with no configuration.

That's a good thread. Thanks for linking to it. Sounds like there are a few possible elements at play here.

For another bit of data: I installed the same image with IPv6 disabled, and it came up just fine with IPv4. I should've checked /etc/rc.conf, though, so I'll launch another instance to see what it looks like.

Ok, looks like custom images on DigitalOcean actually are told to use DHCP. The primary interface doesn't have a private 10.0.0.0/8 IPv4 address either.

> I'm honestly not sure. That's a great question. I did vanilla cloud-init with no configuration.

You can see in /var/log/cloud-init.log and also in /run/cloud-init/combined-cloud-config.json which datasource was used.
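To make that concrete, a small sketch of the log check (the sample line is copied from the log excerpt later in this thread; on a real instance you would point `LOG` at /var/log/cloud-init.log instead of a temp file):

```shell
# Sketch: find which datasource cloud-init loaded.
# On a real system: LOG=/var/log/cloud-init.log
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
stages.py[INFO]: Loaded datasource DataSourceConfigDrive - DataSourceConfigDrive [net,ver=2][source=/dev/iso9660/config-2]
EOF
result=$(grep 'Loaded datasource' "$LOG")
echo "$result"
rm -f "$LOG"
```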

> looks like custom images on Digital Ocean actually are told to use DHCP.

Ah, I'd forgotten DO have "strange" restrictions on custom images: https://docs.digitalocean.com/products/images/custom-images/details/limits/

> Unlike stock images provided by DigitalOcean, Droplets created from custom images use DHCP to obtain an IP address from the DigitalOcean platform. The custom image’s network configuration doesn’t require any additional setup to use DHCP.

Also:

> You cannot use [IPv6](https://docs.digitalocean.com/products/networking/ipv6/) with Droplets created from custom images.

I wonder whether the above is "enforced" by vendor-data or ConfigDrive/metadata server config.

Sorry, I'm misunderstanding the problem here. cloud-init seems to be told to use DHCP on the primary interface in DigitalOcean droplets, and it obtains a private IP address, but the DHCP server doesn't also provide the public IP address assigned to the instance/droplet? The instance is still accessible via the known public IP address from outside the instance, right? I'm not sure how this represents a problem/bug for cloud-init if it isn't provided with that network configuration information.

Please attach /var/log/cloud-init.log and/or /var/run/cloud-init/network-config.json or /run/cloud-init/network-config.json, which should show us specifically what the datasource provides as network config info and will help aid in triage of this issue.

Thanks for any clarification here to steer me down the right path.

If it's a custom image, the server will be configured to use DHCP and things appear to work normally.

If it's a snapshot, it's the same behavior as a distribution image from DigitalOcean: the public IPv4 address won't show up.

I'll provide more debugging information here. Thank you!

Sorry for the wait! Here's some more information.

I could not find a network-config.json, so here's what I have.

rc.conf.txt

I think this shows the issue, if you compare it with the rc.conf.

```
2024-02-07 20:59:28,952 - bsd.py[INFO]: Configuring interface vtnet0
2024-02-07 20:59:28,952 - bsd.py[DEBUG]: Configuring dev vtnet0 with 2604:a880:4:1d0::6fa:2000 / 64
2024-02-07 20:59:28,953 - bsd.py[DEBUG]: Configuring dev vtnet0 with 64.227.98.23 / 255.255.240.0
2024-02-07 20:59:28,953 - bsd.py[DEBUG]: Configuring dev vtnet0 with 10.48.0.64 / 255.255.0.0
2024-02-07 20:59:28,953 - bsd.py[INFO]: Configuring interface vtnet1
2024-02-07 20:59:28,953 - bsd.py[DEBUG]: Configuring dev vtnet1 with 10.124.0.61 / 255.255.240.0
```

@hhartzer from the cloud-init.log it is able to use either the ConfigDrive or DigitalOcean datasources; there is an ISO provided for ConfigDrive:

```
__init__.py[DEBUG]: Detected platform: DataSourceConfigDrive [net,ver=None][source=None]. Checking for active instance data
DataSourceConfigDrive.py[DEBUG]: devices=['/dev/iso9660/config-2'] dslist=['ConfigDrive', 'DigitalOcean', 'None']
```

The initial mount attempt fails for some reason. @igalic, any idea why?

The 2nd mount attempt succeeded and various files were read:

```
subp.py[DEBUG]: Running command ['mount', '-o', 'ro', '-t', 'cd9660', '/dev/iso9660/config-2', '/run/cloud-init/tmp/tmpbjmfzrz3'] with allowed return codes [0] (shell=False, capture=True)
openstack.py[DEBUG]: Selected version '2012-08-10' from ['2012-08-10', '2015-10-16', 'content', 'latest']
util.py[DEBUG]: Reading from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/2012-08-10/meta_data.json (quiet=False)
util.py[DEBUG]: Read 952 bytes from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/2012-08-10/meta_data.json
util.py[DEBUG]: Reading from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/2012-08-10/vendor_data.json (quiet=False)
util.py[DEBUG]: Read 18925 bytes from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/2012-08-10/vendor_data.json
util.py[DEBUG]: Reading from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/2012-08-10/network_data.json (quiet=False)
util.py[DEBUG]: Read 1861 bytes from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/2012-08-10/network_data.json
util.py[DEBUG]: Reading from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/content/0000 (quiet=False)
2024-02-07 20:59:28,753 - util.py[DEBUG]: Read 877 bytes from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/content/0000
2024-02-07 20:59:28,753 - util.py[DEBUG]: Reading from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/content/000u (quiet=False)
util.py[DEBUG]: Read 515 bytes from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/content/000u
util.py[DEBUG]: Reading from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/content/000r (quiet=False)
util.py[DEBUG]: Read 46 bytes from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/content/000r
util.py[DEBUG]: Reading from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/content/0000 (quiet=False)
util.py[DEBUG]: Read 877 bytes from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/content/0000
```

Specifically the file relating to network configuration:

```
Read 1861 bytes from /run/cloud-init/tmp/tmpbjmfzrz3/openstack/2012-08-10/network_data.json
```

I don't know why /etc/network/interfaces and /etc/udev/rules.d/70-persistent-net.rules were written, as these relate to Linux configuration:

```
DataSourceConfigDrive.py[DEBUG]: Writing 3 injected files
util.py[DEBUG]: Writing to /etc/network/interfaces - wb: [660] 877 bytes
util.py[DEBUG]: Writing to /etc/udev/rules.d/70-persistent-net.rules - wb: [660] 515 bytes
```

So cloud-init appears happy with the ConfigDrive information it obtained:

```
util.py[DEBUG]: Writing to /run/cloud-init/cloud-id-configdrive - wb: [644] 12 bytes
util.py[DEBUG]: Creating symbolic link from '/run/cloud-init/cloud-id' => '/run/cloud-init/cloud-id-configdrive'
atomic_helper.py[DEBUG]: Atomically writing to file /run/cloud-init/instance-data-sensitive.json (via temporary file /run/cloud-init/tmpeam0h24e) - w: [600] 10800 bytes/chars
atomic_helper.py[DEBUG]: Atomically writing to file /run/cloud-init/instance-data.json (via temporary file /run/cloud-init/tmp78qvwbky) - w: [644] 4617 bytes/chars
handlers.py[DEBUG]: finish: init-local/search-ConfigDrive: SUCCESS: found local data from DataSourceConfigDrive
stages.py[INFO]: Loaded datasource DataSourceConfigDrive - DataSourceConfigDrive [net,ver=2][source=/dev/iso9660/config-2]
```

I'm guessing you're running FreeBSD on a machine that was originally provisioned as Debian, and either the user-data or the vendor-data reflects this:

```
cc_set_hostname.py[DEBUG]: Setting the hostname to debian-12-x64 (debian-12-x64)
```

This is the network config it appears to have retrieved from ConfigDrive:

```
networking.py[DEBUG]: net: all expected physical devices present
stages.py[DEBUG]: applying net config names for {'version': 1, 'config': [{'mtu': 1500, 'type': 'physical', 'accept-ra': False, 'subnets': [{'type': 'static6', 'routes': [{'gateway': '2604:a880:4:1d0::1', 'netmask': '::', 'network': '::'}], 'address': '2604:a880:4:1d0::6fa:2000/64', 'ipv6': True}, {'netmask': '255.255.240.0', 'type': 'static', 'routes': [{'gateway': '64.227.96.1', 'netmask': '0.0.0.0', 'network': '0.0.0.0'}], 'address': '64.227.98.23', 'ipv4': True}, {'netmask': '255.255.0.0', 'type': 'static', 'address': '10.48.0.64', 'ipv4': True}], 'mac_address': 'ee:7b:f4:7d:86:05', 'name': 'vtnet0'}, {'mtu': 1500, 'type': 'physical', 'subnets': [{'netmask': '255.255.240.0', 'type': 'static', 'address': '10.124.0.61', 'ipv4': True}], 'mac_address': '86:6f:10:87:c8:56', 'name': 'vtnet1'}, {'address': '67.207.67.2', 'type': 'nameserver'}, {'address': '67.207.67.3', 'type': 'nameserver'}]}
stages.py[INFO]: Applying network configuration from ds bringup=False: {'version': 1, 'config': [{'mtu': 1500, 'type': 'physical', 'accept-ra': False, 'subnets': [{'type': 'static6', 'routes': [{'gateway': '2604:a880:4:1d0::1', 'netmask': '::', 'network': '::'}], 'address': '2604:a880:4:1d0::6fa:2000/64', 'ipv6': True}, {'netmask': '255.255.240.0', 'type': 'static', 'routes': [{'gateway': '64.227.96.1', 'netmask': '0.0.0.0', 'network': '0.0.0.0'}], 'address': '64.227.98.23', 'ipv4': True}, {'netmask': '255.255.0.0', 'type': 'static', 'address': '10.48.0.64', 'ipv4': True}], 'mac_address': 'ee:7b:f4:7d:86:05', 'name': 'vtnet0'}, {'mtu': 1500, 'type': 'physical', 'subnets': [{'netmask': '255.255.240.0', 'type': 'static', 'address': '10.124.0.61', 'ipv4': True}], 'mac_address': '86:6f:10:87:c8:56', 'name': 'vtnet1'}, {'address': '67.207.67.2', 'type': 'nameserver'}, {'address': '67.207.67.3', 'type': 'nameserver'}]}
```

Is the problem perhaps that the FreeBSD-specific network code in cloud-init is not setting up rc.conf correctly, i.e. it doesn't write both IPv4 addresses to rc.conf?

I think that's exactly right. It writes one IP address, then another, so the second overrides the first. I think it should write both into the same rc.conf clause.

But yes, this was a Debian server, put into rescue mode, with the FreeBSD image written over top of it. :-)

@blackboxsw would it be possible to remove the incomplete tag? Is there anything else that I can provide here?

I may also be able to submit a pull request for this, but I'm not familiar with the codebase.
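For anyone looking at a fix, here's a hypothetical sketch, not cloud-init's actual bsd.py code (`render_rc_conf_ipv4` is a made-up helper), of how multiple IPv4 subnets on one interface could be rendered as rc.conf alias entries rather than overwriting `ifconfig_<dev>` each time. The addresses come from the log excerpt above:

```python
# Hypothetical sketch (not cloud-init's real renderer): emit rc.conf
# entries so every IPv4 address for a device survives, using FreeBSD's
# ifconfig_<dev>_aliasN convention for addresses after the first.
def render_rc_conf_ipv4(dev, subnets):
    """subnets: list of dicts with 'address' and 'netmask' keys."""
    lines = []
    for i, s in enumerate(subnets):
        if i == 0:
            key = f"ifconfig_{dev}"
        else:
            key = f"ifconfig_{dev}_alias{i - 1}"
        lines.append(f'{key}="inet {s["address"]} netmask {s["netmask"]}"')
    return lines

# Addresses taken from the bsd.py log excerpt in this thread:
subnets = [
    {"address": "64.227.98.23", "netmask": "255.255.240.0"},
    {"address": "10.48.0.64", "netmask": "255.255.0.0"},
]
for line in render_rc_conf_ipv4("vtnet0", subnets):
    print(line)
```

This prints one `ifconfig_vtnet0` line for the public address and an `ifconfig_vtnet0_alias0` line for the private one, instead of two conflicting `ifconfig_vtnet0` assignments.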

Thank you!

I'd guess this would be something for @igalic to look at.

This is something that Hetzner also used to do in their vendor-data, in that they would hardcode the OS, and you'd have to override it in the user-data.

You'll have to repeat the `system_info:` section of the config in your user-data, just enough of it to override the interfering parts from their vendor-data.
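For illustration, a minimal sketch of such a user-data override, assuming the `distro` key is the interfering part (other `system_info` keys from the vendor-data may also need repeating):

```yaml
#cloud-config
system_info:
  distro: freebsd
```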

Is that something that would resolve the multiple IPv4 address issue? I feel like that would still persist.

The debian-12-x64 hostname is just a byproduct of launching a Debian 12 server to overwrite the disk with FreeBSD while in rescue mode, since DigitalOcean no longer offers FreeBSD images natively.

@igalic ah, I hadn't noticed that the cloud-init detected distro was switching from freebsd to debian during execution.

@hhartzer perhaps adding the following to a new file in the /etc/cloud/cloud.cfg.d/ directory might be a workaround:

```yaml
vendor_data:
  enabled: false
vendor_data2:
  enabled: false
```

That should cause cloud-init to ignore any vendor-data files.
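A quick sketch of dropping that snippet into place (the 99-disable-vendor-data.cfg filename is my own choice; the real target directory is /etc/cloud/cloud.cfg.d/, but a temp dir is used here so the sketch runs anywhere without root):

```shell
# Sketch: write a cloud.cfg.d snippet that disables vendor-data.
# On a real system: CFG_DIR=/etc/cloud/cloud.cfg.d
CFG_DIR=$(mktemp -d)
cat > "$CFG_DIR/99-disable-vendor-data.cfg" <<'EOF'
vendor_data:
  enabled: false
vendor_data2:
  enabled: false
EOF
cat "$CFG_DIR/99-disable-vendor-data.cfg"
```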

BTW, could you pastebin (or equivalent) the contents of any vendor-data files DigitalOcean provides?

I can try that! Would that disable SSH key provisioning, however?

Interesting how the distribution changed...

@hhartzer

> I can try that! Would that disable SSH key provisioning, however?

That depends on how you are doing SSH key provisioning, i.e. is it specified in your user-data or vendor-data?

> Interesting how the distribution changed...

It could be due to the vendor-data contents, that's why I asked if you could post its contents.

How can I grab the vendor data? I believe in this case it's probably coming from the vendor data as I'm not manually supplying any user data.

Either via their API, or from cloud-init's cache in /var/lib/cloud/instance/.

Thank you!

Using /var/lib/cloud/instance/, user data is blank. The only thing I can see about an SSH key is in obj.pkl.

> Using /var/lib/cloud/instance/, user data is blank. The only thing I can see about an SSH key is in obj.pkl.

And the vendor-data.txt / vendor-data2.txt contents?

vendor-data.txt

vendor-data2.txt is blank.

> vendor-data.txt

If you look at its contents you can see that it is setting the "distro" to debian, which is obviously confusing cloud-init.

I did suggest a while ago that you create a file in /etc/cloud/cloud.cfg.d/ with the contents:

```yaml
vendor_data:
  enabled: false
```

so that the vendor data is ignored.