pali / igmpproxy

IGMP multicast routing daemon

Under what conditions does igmpproxy configuration change the source IP address?

sandersaares opened this issue

I am seeing traffic where the source IP address is somehow rewritten to match the IP address of the igmpproxy machine rather than that of the upstream source it originated from. While this is not exactly a problem (I actually prefer it this way), the behavior does not appear to be deterministic, and I would like to understand it better.

I have the following setup with a few Ubuntu VMs:

listener (VM) <-> igmpproxy (VM) <-> sender (Docker container)

The listener and the igmpproxy VM are connected to each other via a 192.x.x.x network (.100 is the listener and .2 is igmpproxy).
The Docker container and igmpproxy are connected via a 172.x.x.x network (.2 is the sender).

igmpproxy is configured with the 172 network as upstream and the 192 network as downstream. No altnets are specified. The listener is configured not to filter out 172 traffic (rp_filter is disabled via sysctl).
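
For context, a minimal igmpproxy.conf matching this description would look something like the sketch below. The interface names docker0 and eth0 are taken from the route dumps further down; the ratelimit/threshold values are assumptions copied from the example config shipped with igmpproxy.

# Sketch only: the upstream/downstream roles match the description
# above, the remaining values are assumptions.
phyint docker0 upstream  ratelimit 0  threshold 1
        # no altnet lines, as stated above

phyint eth0 downstream  ratelimit 0  threshold 1

The rp_filter change on the listener is presumably something along these lines (enp0s3 is just a placeholder for the listener's interface name):

sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.enp0s3.rp_filter=0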

The sender and listener just run iperf in UDP multicast mode, nothing fancy.
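
Concretely, the commands are along these lines, assuming iperf 2; the group address 239.1.2.3 comes from the logs below, and the exact options are an assumption:

# On the listener: join 239.1.2.3 and receive UDP.
iperf -s -u -B 239.1.2.3 -i 1

# On the sender: note -T, which raises the multicast TTL above the
# default of 1 so the traffic survives being forwarded by igmpproxy.
iperf -c 239.1.2.3 -u -T 4 -t 60 -b 1M -i 1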

Here is what I observe:

  1. I start listener.
  2. I start sender.
  3. On the listener, I observe the expected multicast traffic with a 172.x.x.x source IP address.
  4. I observe igmpproxy logs as follows:
RECV V2 member report   from 192.168.130.100 to 239.1.2.3
Inserted route table entry for 239.1.2.3 on VIF #0
joinMcGroup: 239.1.2.3 on docker0
RECV Membership query   from 192.168.130.2   to 224.0.0.1
The IGMP message was local multicast. Ignoring.
The IGMP message was local multicast. Ignoring.
RECV V2 member report   from 192.168.130.100 to 239.1.2.3
Updated route entry for 239.1.2.3 on VIF #0
RECV V2 member report   from 192.168.130.2   to 224.0.0.22
The IGMP message was from myself. Ignoring.
RECV V2 member report   from 192.168.130.2   to 224.0.0.2
The IGMP message was from myself. Ignoring.
Adding MFC: 172.17.0.2 -> 239.1.2.3, InpVIf: 1

With routes:

(172.17.0.2, 239.1.2.3)          Iif: docker0    Oifs: eth0
(169.254.79.119, 239.255.255.250) Iif: unresolved
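
As an aside, these dumps match the output format of iproute2's ip mroute show, so the kernel's multicast forwarding cache can be watched live while reproducing, for example:

watch -n 1 ip mroute show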

Okay, now let's reset and do it all over again with the first two steps reversed.

  1. I start sender.
  2. I start listener.
  3. On the listener, I observe the expected multicast traffic with a 192.x.x.x source IP address.
  4. I observe igmpproxy logs as follows:
Inserted route table entry for 239.1.2.3 on VIF #-1
RECV V2 member report   from 192.168.130.2   to 224.0.0.22
The IGMP message was from myself. Ignoring.
RECV V2 member report   from 192.168.130.2   to 224.0.0.2
The IGMP message was from myself. Ignoring.
RECV V2 member report   from 192.168.130.100 to 239.1.2.3
Updated route entry for 239.1.2.3 on VIF #0
Adding MFC: 172.17.0.2 -> 239.1.2.3, InpVIf: 1
joinMcGroup: 239.1.2.3 on docker0
RECV V2 member report   from 192.168.130.100 to 239.1.2.3
Updated route entry for 239.1.2.3 on VIF #0
Adding MFC: 172.17.0.2 -> 239.1.2.3, InpVIf: 1

With routes:

(172.17.0.2, 239.1.2.3)          Iif: docker0    Oifs: eth0
(169.254.79.119, 239.255.255.250) Iif: unresolved

I do not understand what this behavior means. Is it part of igmpproxy's standard feature set? What exactly is happening here? I quite like the IP address rewriting, as it avoids a bunch of routing complexity for me, and I would like it to always happen if possible, regardless of the order in which the systems are started.