How to stop ICMPv6 neighbor advertisement storm - Ubuntu 16.04

I have two virtual machines; each VM has two interfaces (enp0s3 and enp0s8), and each interface belongs to a different subnet.
On each VM I have created an OVS bridge br0, and on br0 I have created a VXLAN port whose remote IP points at enp0s3 on the other VM.
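For reference, the setup described above corresponds to something like the following sketch (192.168.56.11 stands in for the other VM's enp0s3 address):
ovs-vsctl add-br br0
ovs-vsctl add-port br0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.168.56.11
ovs-vsctl add-port br0 enp0s8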
The problem is that when I connect enp0s8 to br0, I get an ICMPv6 neighbor advertisement storm on enp0s3; when I delete the enp0s8 port from br0, the broadcast immediately stops.
How can I stop this excessive ICMPv6 neighbor advertisement broadcast? Any insight or troubleshooting tips would be greatly appreciated!
Thanks!

A loop is being created. One way to overcome this problem is to enable STP (Spanning Tree Protocol), which dynamically removes loops from the network:
ovs-vsctl set bridge br0 stp_enable=true
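To confirm the setting took effect, a minimal sketch (assuming the bridge is named br0 as above):
ovs-vsctl get bridge br0 stp_enable    # prints "true" once STP is enabled
ovs-appctl stp/show br0                # shows per-port STP state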

Related

What's the difference between cbr0 and vxlan in Kubernetes?

We have a K8s cluster environment with 1 master node and 2 worker nodes, all running Linux, and we are using flannel.
An example is given below:
Master (CentOS 7) - 192.168.10.1
Worker Node-1 (CentOS 7) - 192.168.10.2
Worker Node-2 (CentOS 7) - 192.168.10.3
Worker Node-3 (Windows ) - 192.168.10.4
Now we have to add a Windows node (e.g. 192.168.10.4) to the existing cluster at 192.168.10.1.
According to this link, it appears that we have to update the cni-conf.json section of flannel from cbr0 to vxlan0; to my understanding, this is done so the cluster can communicate with Windows nodes.
My question: will this change (from cbr0 to vxlan0) break the existing communication between Linux nodes?
Let's start with definitions.
cbr0 is Kubernetes' own bridge, created to differentiate it from the docker0 bridge used by Docker.
VXLAN stands for Virtual Extensible LAN; it's an overlay network, which means it encapsulates a packet inside another packet.
A more precise definition:
VXLAN is an encapsulation protocol that provides data center connectivity, using tunneling to stretch Layer 2 connections over an underlying Layer 3 network.
The VXLAN tunneling protocol, which encapsulates Layer 2 Ethernet frames in Layer 3 UDP packets, enables you to create virtualized Layer 2 subnets, or segments, that span physical Layer 3 networks. Each Layer 2 subnet is uniquely identified by a VXLAN network identifier (VNI) that segments traffic.
Answer
No, it won't break communication between Linux nodes. It is simply another way for the nodes to communicate with each other using the flannel CNI. I also tested this on my two-node Linux cluster and everything worked fine.
The main difference is in how flannel handles the packets. The change is visible via netstat or Wireshark, while for the Pods nothing changes, because packets are decapsulated before they reach the Pods.
Note! I recommend testing this change on a small dev/test cluster first, as there may be some additional setup required for firewalld (the usual rule before making any changes in production).
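For reference, a minimal sketch of inspecting the flannel configuration (the ConfigMap name assumes the stock kube-flannel.yml manifest, and the VNI/Port values come from the Windows-interop documentation; both may differ in your setup):
kubectl -n kube-system get configmap kube-flannel-cfg -o yaml
# In cni-conf.json the network name changes from "cbr0" to "vxlan0",
# and net-conf.json gets the Windows-compatible vxlan backend, e.g.:
#   "Backend": { "Type": "vxlan", "VNI": 4096, "Port": 4789 }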
Useful links:
Flannel - recommended backends for VXLAN
Kubernetes Journey — Up and running out of the cloud — flannel
How Kubernetes Networking Works – Under the Hood

RaspberryPi MQTT Broker access via Wifi and Ethernet without interference

I would like to run an MQTT broker (Mosquitto) on a Pi2.
The Pi is connected to two networks, Ethernet and WiFi. Neither network is administrated by me.
There are two independent DHCP servers, one in each network.
How can I make the broker available in both networks without interfering with the network infrastructure?
Dumb question?
Cheers
By default mosquitto will bind to the 0.0.0.0 address; this is a special address that represents all IP addresses of the host machine. There is no need to run 2 separate brokers, one will work just fine.
This means that the broker will be accessible from both networks. The only problem is that if the Pi is getting addresses from DHCP on both interfaces, then you will need to know which IP addresses have been assigned in order to access the broker from each network.
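A quick way to verify this on the Pi (a sketch, assuming the default MQTT port 1883):
sudo ss -tlnp | grep 1883    # expect 0.0.0.0:1883, i.e. listening on all interfaces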
I suggest you look up a program called Avahi, which can be used to provide an mDNS service, allowing you to refer to the Pi by a .local domain name from both networks.
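For example (a sketch, assuming Raspbian/Debian package names and the default hostname raspberrypi):
sudo apt-get install avahi-daemon
# then, from a client on either network:
mosquitto_sub -h raspberrypi.local -t 'test/#' -v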

Need help to understand Overlay Network formed by Flannel

I am very new to the flannel overlay network with Kubernetes. We want to know how packets are transmitted between containers on different hosts using the flannel overlay network. The reference link below contains a diagram of packet transmission between containers on different hosts; can anyone explain how this happens? Reference link: https://github.com/coreos/flannel
NB: I didn't write flannel, so I'm not the perfect person to answer...
As far as I understand it, by default flannel uses UDP packet encapsulation to deliver packets between nodes in the network.
So if a compute node at 1.2.3.4 is hosting a subnet with a CIDR like 10.244.1.0/24, then all packets for that CIDR are encapsulated in UDP and sent to 1.2.3.4, where they are decapsulated and placed onto the bridge for the subnet.
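One way to observe this on a node (a sketch; the interface name eth0 is a placeholder, and the UDP port depends on the backend — the legacy udp backend uses 8285, the vxlan backend 8472):
cat /run/flannel/subnet.env           # the CIDR assigned to this node
sudo tcpdump -ni eth0 udp port 8285   # watch encapsulated traffic leave the node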
Hope that helps!
--brendan

OpenMq clustering not supported for loopback addresses

If I start up a single instance of the broker on a loopback address I get the following:
[05/Sep/2014:16:45:11 BST] WARNING [B3236]: Bad bind address of portmapper service for cluster, please change imq.portmapper.hostname: Loopback IP address is not allowed in broker address localhost[localhost/127.0.0.1] for cluster
[05/Sep/2014:16:45:11 BST] WARNING [B1137]: Cluster initialization failed. Disabling the cluster service.
I have a setup (actually the Azure Compute Emulator) which allows multiple VMs/processes to be started up with their own unique IP addresses of the form 127.X.X.X, which are actually loopback addresses as far as java.net.InetAddress is concerned. Therefore, despite the fact that I am successfully using these addresses for socket-to-socket communication between those VMs/processes, I cannot use them to run an OpenMQ cluster.
As a workaround I have set up the brokers to bind to a SINGLE non-loopback address and use different ports, and that works. So it's not the case that you can't cluster on one IP address.
Why was loopback disallowed?
If it is theoretically possible, is there a setting to enable it for clustering?
According to Amy Kang on the Oracle OpenMQ users mailing list, this is by design, since clustering is intended to span multiple servers. You can, however, bind several brokers to one non-loopback address and use different ports.
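A sketch of that workaround (192.168.1.10 is a placeholder for the non-loopback address; the -cluster and -D options are as documented for imqbrokerd, but verify them against your OpenMQ version):
imqbrokerd -name broker1 -port 7676 -Dimq.portmapper.hostname=192.168.1.10 -cluster 192.168.1.10:7676,192.168.1.10:7677
imqbrokerd -name broker2 -port 7677 -Dimq.portmapper.hostname=192.168.1.10 -cluster 192.168.1.10:7676,192.168.1.10:7677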

Communication between two Cassandra nodes

Assume two Cassandra nodes running on hosts A and B respectively. Which TCP and/or UDP ports need to be open between hosts A and B for Cassandra to operate properly?
That depends on how you have configured storage-conf.xml on your two nodes.
Hint: take a look at <StoragePort>7000</StoragePort> in storage-conf.xml.
(TCP port 7000 is the standard/default port used by Cassandra for internal communication, i.e. address to bind to and tell other nodes to connect to).
The UDP port (7001 by default) was previously used for gossip; it was removed in 0.6.0.
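So on each node, allowing the peer to reach TCP 7000 is enough for inter-node traffic; a sketch with iptables (10.0.0.2 stands in for the other host's address, and 7000 assumes the default StoragePort):
iptables -A INPUT -p tcp -s 10.0.0.2 --dport 7000 -j ACCEPT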