slackhq / nebula

A scalable overlay networking tool with a focus on performance, simplicity and security

🐛 BUG: How do I get the client's virtual IP after the relay?

futurecad opened this issue · comments

What version of nebula are you using? (nebula -version)

1.8.2

What operating system are you using?

Linux, macOS

Describe the Bug

After relaying, what the intranet server sees is the IP of the relay server. How can I get the client's actual virtual IP?

Logs from affected hosts


Config files from affected hosts


Hi @futurecad - Can you share a bit more about your use case? Where are you seeing the IP of the relay server, and what are you using it for?

The relay node forwards traffic to an internal web management platform. The audit policy cannot record the visitor's real IP, but the audit log needs it.

@futurecad Packets coming out of the Nebula interface should contain the correct Nebula source IP of the original host (not the relay.) I suspect you are looking at the packets coming in on the physical link (i.e. the computer's NIC) which will have a source IP set to the underlay (i.e. WAN or LAN) IP of the relay. It is not possible to change the source IP of these packets and maintain connectivity.

If I'm misunderstanding your goal, please let me know and I'll reopen the ticket.

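One hypothetical way to verify this on the destination host is to capture on both interfaces and compare the source addresses (interface names and the listen port are assumptions; adjust to your setup):

```shell
# Packets on the Nebula tun device should show the original sender's
# Nebula IP as the source, even when the traffic passed through a relay.
tcpdump -ni nebula1 icmp

# Packets on the physical NIC show the relay's underlay IP as the UDP peer,
# since the relay is the host actually delivering the encrypted traffic.
tcpdump -ni eth0 udp port 4242
```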
Client Mac (Nebula IP: 10.1.1.3, intranet IP: 192.168.3.151)
The Nebula lighthouse / forwarding node has a public IP (Nebula IP: 10.1.1.1, intranet IP: 192.168.27.199)
This machine's certificate is configured to forward the 192.168.2.0/24 through 192.168.99.0/24 network segments.
iptables is configured as follows:
iptables -A FORWARD -i nebula -j ACCEPT
iptables -A FORWARD -o nebula -j ACCEPT
iptables -t nat -A POSTROUTING -o ens32 -j MASQUERADE
When I access the web service at 192.168.24.60 through the client Mac, the recorded IP is the intranet IP of my forwarding node: 192.168.27.199.
I understand this is caused by the iptables NAT rule. Is there a way for the intranet web service at 192.168.24.60 to see the Nebula IP (10.1.1.3) of my client Mac?

Hi @futurecad - In your initial post you mentioned relays which is the term for a Nebula concept where a relay node forwards end-to-end encrypted traffic between two other Nebula nodes that otherwise cannot communicate (e.g. due to a problem with a NAT gateway in between them.) When using relays, the source IP address, as seen by the destination, will match the node that initiated traffic - not the relay.

However, if I now understand your problem correctly, you are using unsafe_routes to forward traffic from a Nebula node, through another Nebula node, to a destination which does not run Nebula. Am I understanding correctly?

Lighthouse / Router: 10.1.1.1 / 192.168.27.199
Client: 10.1.1.3 / 192.168.3.151
Destination: 192.168.24.60

The masquerade rule you have installed on your router is the reason your packets arrive at the destination with a source IP set to the router's address. The rule tells the system to rewrite the source so that return traffic is sent back to the router, which can then forward it back on to the Nebula network.

To avoid modifying the source address in the packets that arrive at the destination, you can remove the masquerade rule, but you will instead need to add an entry to the routing table on all the hosts you are exposing with your unsafe_routes entry. Something like ip route add 10.1.1.0/24 via 192.168.3.151 assuming your Nebula network IP range is 10.1.1.0/24 should suffice, and will instruct the hosts to send traffic back through the router.
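A minimal sketch of the two steps (addresses taken from this thread; note that the via address must be the LAN IP of the router node itself, stated elsewhere in the thread as 192.168.27.199, not a client's intranet IP):

```shell
# On the router (10.1.1.1 / 192.168.27.199): remove the masquerade rule
# so packets keep the client's Nebula source IP (e.g. 10.1.1.3).
iptables -t nat -D POSTROUTING -o ens32 -j MASQUERADE

# Forwarding between the Nebula interface and the LAN stays allowed.
iptables -A FORWARD -i nebula -j ACCEPT
iptables -A FORWARD -o nebula -j ACCEPT

# On each destination host exposed via unsafe_routes (e.g. 192.168.24.60):
# send return traffic for the Nebula range back through the router's LAN IP.
ip route add 10.1.1.0/24 via 192.168.27.199
```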

That being said, Nebula works best when you install it on all of your hosts, so that you can leverage end-to-end encryption and fine-grained access control using the built-in firewall. :)

I'm using unsafe_routes to forward traffic from one Nebula node to a destination that doesn't run Nebula.

Configuration of my client:

pki:
  ca: /Users/cly/Downloads/VPN/nebula/my-mac/ca.crt
  cert: /Users/cly/Downloads/VPN/nebula/my-mac/clyds-mac.crt
  key: /Users/cly/Downloads/VPN/nebula/my-mac/clyds-mac.key

static_host_map:
  "10.1.1.1": ["192.168.27.199:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "10.1.1.1"  

listen:
  host: "[::]"
  port: 0

punchy:
  punch: true
relay:
  relays:
    - 10.1.1.1
  am_relay: false
  use_relays: true

tun:
  disabled: false
  dev: nebula1
  drop_local_broadcast: true
  drop_multicast: true
  tx_queue: 500
  mtu: 1300

  routes:

  unsafe_routes:
    - route: 172.16.2.0/24
      via: 10.1.1.1
    - route: 172.16.3.0/24
      via: 10.1.1.1
    - route: 172.16.4.0/24
      via: 10.1.1.1
    - route: 192.168.2.0/24
      via: 10.1.1.1   
    - route: 192.168.3.0/24
      via: 10.1.1.1 
    - route: 192.168.4.0/24
      via: 10.1.1.1
    - route: 192.168.21.0/24
      via: 10.1.1.1
    - route: 192.168.22.0/24
      via: 10.1.1.1
    - route: 192.168.23.0/24
      via: 10.1.1.1   
    - route: 192.168.24.0/24
      via: 10.1.1.1   
    - route: 192.168.25.0/24
      via: 10.1.1.1
    - route: 192.168.26.0/24
      via: 10.1.1.1
    - route: 192.168.27.0/24
      via: 10.1.1.1
    - route: 192.168.60.0/24
      via: 10.1.1.1   
    - route: 192.168.96.0/24
      via: 10.1.1.1     
    - route: 192.168.97.0/24
      via: 10.1.1.1     
    - route: 192.168.98.0/24
      via: 10.1.1.1       
    - route: 192.168.63.0/24
      via: 10.1.1.1         
   
logging:
  level: info
  format: text

firewall:
  outbound_action: drop
  inbound_action: drop

  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: any
      host: any

My lighthouse node and relay node configuration:

static_host_map:
  "10.1.1.1": ["192.168.27.199:4242"]

lighthouse:
  am_lighthouse: true
  interval: 60
  hosts:
    - "10.1.1.1"

listen:
  host: "[::]"
  port: 4242

punchy:
  punch: true

relay:
  relays:
    - 10.1.1.1
  am_relay: true
  use_relays: true

tun:
  disabled: false
  dev: sdp
  drop_local_broadcast: true
  drop_multicast: true
  tx_queue: 500
  mtu: 1350

  unsafe_routes:


logging:
  level: info
  format: text

firewall:
  outbound_action: drop
  inbound_action: drop

  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: any
      host: any

Subnets configured by my relay node:

Ips: [
			10.1.1.1/24
		]
		Subnets: [
			172.16.2.0/24
			172.16.3.0/24
			172.16.4.0/24
			192.168.2.0/24
			192.168.3.0/24
			192.168.4.0/24
			192.168.19.0/24
			192.168.20.0/24
			192.168.21.0/24
			192.168.22.0/24
			192.168.23.0/24
			192.168.24.0/24
			192.168.25.0/24
			192.168.26.0/24
			192.168.27.0/24
			192.168.60.0/24
			192.168.61.0/24
			192.168.62.0/24
			192.168.63.0/24
			192.168.96.0/24
			192.168.97.0/24
			192.168.98.0/24
		]

I'm not sure what I should change so that the 192.168.24.64 service can log the actual virtual IP of my client Mac (10.1.1.3) when I access it.
The machine I'm accessing is a host without Nebula installed.

Hi @futurecad - I gave instructions on how to achieve what you're looking for in my last reply. Is there anything I can help you with at this point?

To avoid modifying the source address in the packets that arrive at the destination, you can remove the masquerade rule, but you will instead need to add an entry to the routing table on all the hosts you are exposing with your unsafe_routes entry. Something like ip route add 10.1.1.0/24 via 192.168.3.151 assuming your Nebula network IP range is 10.1.1.0/24 should suffice, and will instruct the hosts to send traffic back through the router.

ip route add 10.1.1.0/24 via 192.168.3.151
Based on the client and lighthouse/relay configurations I posted above, where should this be configured?
When I add it to the configuration file on my client Mac, it reports:
ERRO[0000] Failed to get a tun/tap device error="entry 1.route in tun.unsafe_routes is contained within the network attached to the certificate; route: 10.1.1.0/24, network: 10.1.1.100/24"

Removing the masquerade rule will cause the packets to arrive at the destination with the source IP set to the Nebula IP of the client accessing it. But to enable return traffic...

Every host that doesn't run Nebula - so all of the ones in the following ranges that you're configuring as routes:

172.16.2.0/24
172.16.3.0/24
172.16.4.0/24
192.168.2.0/24
192.168.3.0/24
192.168.4.0/24
192.168.19.0/24
192.168.20.0/24
192.168.21.0/24
192.168.22.0/24
192.168.23.0/24
192.168.24.0/24
192.168.25.0/24
192.168.26.0/24
192.168.27.0/24
192.168.60.0/24
192.168.61.0/24
192.168.62.0/24
192.168.63.0/24
192.168.96.0/24
192.168.97.0/24
192.168.98.0/24

will need to have an appropriate route manually added (e.g. ip route add 10.1.1.0/24 via 192.168.3.151) so that they know to send packets back through the router node (what you're calling a relay.)

 ERRO[0000] Failed to get a tun/tap device error="entry 1.route in tun.unsafe_routes is contained within the network attached to the certificate; route: 10.1.1.0/24, network: 10.1.1.100/24"

This error is saying you have an unsafe_routes entry in your config that falls inside the Nebula network's own address space.
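For illustration, a minimal sketch (using the addresses from the error message, not your full config) of an unsafe_routes entry Nebula rejects versus one it accepts:

```yaml
tun:
  unsafe_routes:
    # Rejected: 10.1.1.0/24 contains the certificate's own network
    # (10.1.1.100/24), so Nebula refuses to start. The "ip route add"
    # command belongs in the OS routing table of the non-Nebula hosts,
    # not here.
    # - route: 10.1.1.0/24
    #   via: 10.1.1.1

    # Accepted: routes must point at networks outside the Nebula range.
    - route: 192.168.24.0/24
      via: 10.1.1.1
```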

OK, thank you.
This method may be affected by F5 and LVS load-balancing devices.
I'd also like to ask: does Nebula currently have a clustering solution?

On the internal machines that don't have Nebula installed, I added:
ip route add 10.1.0.0/16 via 192.168.98.216
192.168.98.216 is my actual machine's IP.
And I deleted the iptables NAT rule. Now I can't access 192.168.98.216.

192.168.98.216 is the IP of the router? You previously stated it was 192.168.27.199. The via needs to be the router, so that it can pass traffic back...
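A minimal sketch of the correction, assuming the route is being added on an internal host and the router's LAN IP is 192.168.27.199 as stated earlier in the thread:

```shell
# On each internal host that does not run Nebula:
# the "via" must be the router that forwards back into the Nebula network,
# not the host's own address.
ip route del 10.1.0.0/16 via 192.168.98.216   # remove the incorrect entry
ip route add 10.1.0.0/16 via 192.168.27.199   # router's LAN IP
```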