Linux Kernel not passing through multicast UDP packets

Solution 1:

In our case, the problem was solved by a sysctl parameter, though a different one from Maciej's.

Please note that I do not speak for the OP (buecking); I came across this post because my problem shared the same basic symptom (no multicast traffic reaching userland).

We have an application that reads data sent to four multicast addresses, and a unique port per multicast address, from an appliance that is (usually) connected directly to an interface on the receiving server.

We were attempting to deploy this software at a customer site when it mysteriously failed for no discernible reason. Attempts at debugging the software meant inspecting every system call; ultimately they all told us the same thing:

Our software asks for data, and the OS never provides any.

The multicast packet counter incremented, tcpdump showed the traffic reaching the box/specific interface, yet we couldn't do anything with it. SELinux was disabled, iptables was running but had no rules in any of the tables.
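
When you are in this state, it can help to confirm that the kernel has actually joined the group on the interface in question; counters and tcpdump only prove the packets arrive, not that any socket holds a membership. A couple of read-only checks (the interface name here is a placeholder):

```shell
# Multicast group memberships the kernel currently holds on this interface:
ip maddr show dev eth0

# The same data from the IGMP side (group addresses are shown in hex):
cat /proc/net/igmp

# UDP error counters can hint at where packets are being dropped:
netstat -su
```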

Stumped, we were.

Poking around at random, we started thinking about the kernel parameters that sysctl handles, but none of the documented ones seemed particularly relevant, and those that did relate to multicast traffic were already enabled. Oh, and ifconfig did list "MULTICAST" in the flags line (up, broadcast, running, multicast). Out of curiosity we looked at /etc/sysctl.conf. Lo and behold, this customer's base image had a couple of extra lines added at the bottom.

In our case, the customer had set net.ipv4.conf.all.rp_filter = 1. rp_filter is the reverse path filter, which (as I understand it) rejects all traffic whose source address could not plausibly have reached this box via the interface it arrived on, the thought being that the source IP is being spoofed.

Well, this server was on a 192.168.1/24 subnet and the appliance's source IP address for the multicast traffic was somewhere in the 10.* network. Thus, the filter was preventing the server from doing anything meaningful with the traffic.

A couple of tweaks approved by the customer (net.ipv4.conf.eth0.rp_filter = 1 and net.ipv4.conf.eth1.rp_filter = 0) and we were running happily.
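
In sysctl.conf form, the change looked roughly like this (a sketch; the interface names are from our setup). Note that the kernel applies the stricter of net.ipv4.conf.all and the per-interface value, so the all key must not be left at 1:

```shell
# /etc/sysctl.conf (excerpt)
# Strict reverse-path filtering stays on for the ordinary NIC...
net.ipv4.conf.eth0.rp_filter = 1
# ...but is disabled globally and on the interface facing the appliance,
# whose multicast source address (10.*) is foreign to the local subnet.
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.eth1.rp_filter = 0
```

Apply with sysctl -p (or reboot).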

Solution 2:

TL;DR: also make sure your multicast isn't coming in on a VLAN. tcpdump -e will help determine whether it is.

In all fairness, somebody ought to build a page with a checklist of things that can prevent multicast from reaching userland. I struggled with this for a couple of days, and naturally nothing I could find on the web helped.

Not only could I see the packets in tcpdump, I could actually receive other multicast packets, from other producers, just on a different interface. The command I ended up using to test whether I could receive multicast was:

$ GRP=224.x.x.x # set me to the group
$ PORT=yyyy # set me to the receiving port
$ IFACE=mmmm # set me to the name or IP address of the interface
$ strace -f socat -  UDP4-DATAGRAM:$GRP:$PORT,ip-add-membership=$GRP:$IFACE,bind=$PORT,multicast-loop=0

The reason for strace here is that I couldn't actually make socat print the packets to stdout, but in the strace output you can clearly see whether socat is receiving data on the bound socket (otherwise it goes quiet after a couple of initial select calls).

  • rp_filter sysctl - doesn't apply; the systems are on the same IP network (I set them to 0 all the same; 1 seems to be the default now, at least on Ubuntu).
  • firewalls/etc - the receiving system is firewall free (I don't think packets would show up in tcpdump if they were firewalled, but I guess it's possible if the firewall is funny)
  • IP/Multicast routing and multiple interfaces - I explicitly joined the group on the correct interface
  • Wacky network hardware - this was my last resort, but swapping a laptop for an Intel NUC didn't help. This is about where I started chewing my elbows and contemplating posting this to SE.
  • The problem in my case was the use of VLANs by the specialized hardware producing those multicast packets. To see if this is your issue, make sure to include the -e flag to tcpdump and check for VLAN tags. You will need to configure an interface for the correct VLAN before userland can get those packets. The giveaway for me was that the multicast producers not only wouldn't ping, they wouldn't even get into the ARP cache, even though I could clearly see ARP replies.
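
To make the VLAN check concrete, this is the kind of capture I mean (the interface name is a placeholder, and the exact phrasing varies by tcpdump version):

```shell
# -e prints the link-level header; an 802.1Q tag shows up as "vlan <id>"
tcpdump -e -n -i eth0 ether multicast
# Tagged frames look roughly like:
#   ... > 01:00:5e:01:01:01, ethertype 802.1Q (0x8100), ... vlan 100, p 0, ethertype IPv4, ...
```

If a vlan field appears there, no ordinary socket on the untagged interface will ever see those packets.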

To get it running with a VLAN, this link might be helpful for configuring multicast routing. (Sadly I'm new to this, so reputation does not allow me to add an answer; hence this edit.)

Here is what I did (use sudo if needed):

ip link add link eth0 name eth0_100 type vlan id 100
ip addr add <vlan-ip>/<prefix> brd + dev eth0_100
ip link set dev eth0_100 up
ip maddr add 01:00:5e:01:01:01 dev eth0_100
route add -net <multicast-net> netmask <netmask> dev eth0_100

This way an additional interface is created for the VLAN traffic, with VLAN id 100. The VLAN IP might be unnecessary. Then a multicast address is configured for the new interface (01:00:5e:01:01:01 is the link-layer address of the multicast group in question), and all incoming multicast traffic is routed via eth0_100. I also did all the possible steps from the answers above (check iptables, rp_filter etc).
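
The link-layer address above comes from the standard IPv4-to-Ethernet multicast mapping: the fixed prefix 01:00:5e followed by the low 23 bits of the group address (so 32 different groups share each MAC). A small POSIX shell sketch of the computation (the function name is my own):

```shell
# Map an IPv4 multicast group to its Ethernet multicast MAC address:
# fixed prefix 01:00:5e, then the low 23 bits of the IP.
ip_to_mcast_mac() {
  old_ifs=$IFS
  IFS=.
  set -- $1              # split the dotted quad into $1..$4
  IFS=$old_ifs
  # drop the top bit of the second octet: only 23 IP bits survive
  printf '01:00:5e:%02x:%02x:%02x\n' $(($2 & 127)) "$3" "$4"
}

ip_to_mcast_mac 224.1.1.1     # 01:00:5e:01:01:01 (as used above)
ip_to_mcast_mac 239.255.0.1   # 01:00:5e:7f:00:01
```

This also explains why, for example, 224.1.1.1 and 225.1.1.1 collide on the same ip maddr entry.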

Solution 3:

You might want to try and look at these settings:


echo "0" > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts

sed -i -e 's|^net.ipv4.icmp_echo_ignore_broadcasts =.*|net.ipv4.icmp_echo_ignore_broadcasts = 0|g' /etc/sysctl.conf

These have been used to enable multicasting in RHEL.

You might want to make sure that your firewall is allowing the multicast traffic; again, with RHEL I've enabled the following:

# allow anything in on multicast addresses
-A INPUT -p igmp -d 224.0.0.0/4 -j ACCEPT
# needed for multicast ping responses
-A INPUT -p icmp --icmp-type 0 -j ACCEPT