I am very new to the flannel overlay network with Kubernetes. We want to know how packets are transmitted between containers on different hosts using the flannel overlay network. The reference link below contains a diagram of packet transmission between containers on different hosts; can anyone explain how this happens? Reference link: https://github.com/coreos/flannel
NB: I didn't write flannel, so I'm not the perfect person to answer...
As far as I understand it, by default flannel uses UDP packet encapsulation to deliver packets between nodes in the network.
So if a compute node at 1.2.3.4 is hosting a subnet with a CIDR like 10.244.1.0/24, then all packets for that CIDR are encapsulated in UDP and sent to 1.2.3.4, where they are decapsulated and placed onto the bridge for the subnet.
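To make that concrete, a minimal UDP-backend setup looks roughly like the following. This is only an illustrative sketch: the CIDRs, the etcd prefix and the interface names depend on your configuration and flannel version.

# Overlay network config stored in etcd under flannel's default prefix
etcdctl set /coreos.com/network/config '{ "Network": "10.244.0.0/16", "Backend": { "Type": "udp" } }'

# On each node, flanneld leases a per-node subnet and records it here
cat /run/flannel/subnet.env

# The node's routing table then steers cross-node overlay traffic into the flannel TUN
# device, where flanneld wraps it in UDP (port 8285 by default) addressed to the node
# that owns the destination subnet; local pod traffic stays on the local bridge:
ip route
# 10.244.0.0/16 dev flannel0   <- other nodes' subnets, via UDP encapsulation
# 10.244.1.0/24 dev docker0    <- this node's own pod subnet

So a packet from a container on node A destined for 10.244.1.x enters flannel0 on node A, is wrapped in a UDP datagram addressed to 1.2.3.4, and is unwrapped by flanneld on node B and handed to the bridge for 10.244.1.0/24.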
Hope that helps!
--brendan
I am trying to connect to an external SCTP server from a pod inside a K8s cluster. The external SCTP server only allows connections from a configured IP address AND a specific source port.
From what I can understand, K8s performs SNAT during connection establishment and replaces the source IP address with the K8s node IP address, and also the source port with a random port. So the SCTP server sees the random source port and therefore rejects the connection.
The K8s cluster is using the Calico plugin, and I have tried the "disable NAT for target CIDR range" option as explained here, by installing an IP pool. But it didn't work: I can see via tcpdump on the server that the source port is still random, and I'm not sure whether the IP pool gets picked up.
So my question is: is there a way to preserve the source port? Am I on the right track trying to disable NAT, i.e. will it work considering the pod IPs are internal?
Note: I am not sure if the problem/solution is related to it, but kube-proxy is in iptables mode.
Note: There is actually an identical question here, and the accepted answer suggesting hostNetwork: true works for me as well. But I can't use hostNetwork, so I wanted to post this as a new question. Also, the "Calico way" of disabling NAT towards specific targets seemed promising, and I'm hoping the Calico folks can help.
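For reference, the IP pool I applied to try to disable NAT towards the server looked roughly like this (the pool name and CIDR are placeholders for my actual values):

cat <<EOF | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: no-nat-to-sctp-server
spec:
  cidr: 198.51.100.0/24   # range covering the external SCTP server (placeholder)
  disabled: true          # never assign pod IPs from this pool
  natOutgoing: false      # do not SNAT traffic leaving towards this range
EOF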
Thanks
External firewall logs show a blocked connection from <node IP>:<high port>.
The current cluster uses Calico networking.
How do I detect which pod is trying to connect?
This would usually be pretty hard to work out; you would have to check the NAT table on the node where the packets exited to the public internet.
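For example, on the node whose IP appears in the firewall log you can search the connection-tracking table for the SNAT entry, then map the original pod IP back to a pod (the addresses and port are placeholders):

# List tracked connections on that node and look for the blocked destination
# (or grep for the translated source port from the firewall log instead);
# the matching entry shows the original pod source IP alongside the post-SNAT node IP/port
conntrack -L | grep <destination IP>

# Map the pod IP from that entry back to a pod
kubectl get pods -A -o wide | grep <pod IP>

This only works while the connection (or its conntrack entry) still exists, so it is easiest to catch while the pod is actively retrying.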
I’m wondering if anyone has been able to get Kubernetes running properly over the WireGuard VPN.
I created a 2-node cluster on 2 VMs linked by WireGuard. The master node with the full control plane works fine and can accept worker nodes over the WireGuard interface. I set the node IP for kubelet to the WireGuard IP and also set the iface argument for flannel to use the WireGuard interface instead of the default. This seems to work well so far.
The problem arises when I try to join the worker node into the cluster via the join command.
Note that I also edited kubelet's node IP to be the WireGuard IP on the worker node.
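For reference, the changes look roughly like this (10.0.0.4 is my WireGuard address; the kubelet drop-in path and the flannel DaemonSet name/namespace vary by distro and flannel version):

# kubelet: advertise the WireGuard address as the node IP
# (e.g. in /etc/default/kubelet or /etc/sysconfig/kubelet)
KUBELET_EXTRA_ARGS=--node-ip=10.0.0.4

# flannel: add --iface=wg0 to the flanneld container args in its DaemonSet
kubectl -n kube-system edit daemonset kube-flannel-ds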
On join, all traffic to the node is dropped by the "Kubernetes firewall". By the Kubernetes firewall I mean that if you check iptables after issuing the join command on the worker node, you will see a KUBE-FIREWALL chain which drops all marked packets. The firewall is standard, as it's the same on the master, but I presume the piece I'm missing is what to do to get traffic flowing on the worker node after joining the master node.
I’m unable to even ping google.com or communicate with the master over the WireGuard tunnel. Pods can't be scheduled either. I have manually deleted the KUBE-FIREWALL rule as a test, which then allows pods to be scheduled and regular traffic to flow on the worker node, but kubelet will quickly recreate the rule after around a minute.
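For reference, the chain I'm talking about can be listed like this (a stock kubelet maintains it and drops packets carrying its 0x8000 firewall mark):

# On the worker node, after the join
iptables -S KUBE-FIREWALL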
I’m thinking a route needs to be created before the join or something along those lines.
Has anyone tried this before? I would really appreciate any suggestions.
After getting some help I figured out that the issue was WireGuard related. Specifically, running wg-quick as a service apparently creates an ip rule that routes ALL outgoing traffic via the wg0 interface, except WireGuard's own encrypted traffic. This causes issues when trying to connect a worker to a cluster, so simply creating and starting the wg0 interface manually with something like the below works:
# Create and configure the WireGuard interface by hand instead of via wg-quick
ip link add dev wg0 type wireguard
ip addr add 10.0.0.4/24 dev wg0
# Load the peers/keys from the existing config without wg-quick's extra routing policy rules
wg addconf wg0 /etc/wireguard/wg0.conf
ip link set wg0 up
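If you want to see the difference, compare the policy routing rules with wg-quick versus the manual setup above (illustrative; rule priorities and table numbers vary):

# With wg-quick up wg0, an extra policy rule diverts almost all traffic to wg0's own routing table
ip rule show
ip route show table all | grep -w wg0

# With the manual commands above, only the connected 10.0.0.0/24 route for wg0 is added
# and everything else keeps using the main routing table
ip route show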
I have two virtual machines. On each VM I have two interfaces (enp0s3, enp0s8), and each interface belongs to a different subnet.
On each VM I have created an OVS bridge br0, and on br0, I have created a VXLAN port with a remote IP pointing at enp0s3 on the other VM.
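Roughly, the setup on each VM looks like this (the remote address is a placeholder for the other VM's enp0s3 IP):

ovs-vsctl add-br br0
ovs-vsctl add-port br0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=<other VM enp0s3 IP>
ovs-vsctl add-port br0 enp0s8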
The problem is that when I connect enp0s8 to br0, I get an ICMPv6 neighbor advertisement storm on enp0s3, and when I delete the enp0s8 port on br0 the broadcast immediately stops.
How can I stop the excessive ICMPv6 neighbor advertisement broadcasts? Any insight or troubleshooting tips would be greatly appreciated!
Thanks!
A loop is getting created. One way to overcome this problem is to enable STP (Spanning Tree Protocol), which dynamically removes loops from the network:
ovs-vsctl set bridge br0 stp_enable=true
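You can check that it took effect with something like:

# Confirm STP is now enabled on the bridge
ovs-vsctl get bridge br0 stp_enable
ovs-vsctl list bridge br0

Once STP converges, one of the ports in the loop should move to the blocking state and the advertisement storm should stop.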
All,
I want to know whether it is possible to use only one network card to configure iSCSI multipath for the backend iSCSI storage. For example, I have a NIC eth0 with the IP address 192.168.10.100, and I create a virtual NIC eth0:1 with the IP address 192.168.11.100. The two IPs correspond to the IP addresses of the two controllers of the iSCSI storage. Or must one use two separate physical NICs for iSCSI multipath? I tried the above settings but found that only one path is available for any volume attached to the server. I can ping both IPs of the controllers (192.168.10.10 and 192.168.11.10) without problem.
Cheers,
Doan
To use one network card for multipathing, you need the two ports on that card to be used, with each one on a different subnet, i.e. using a different switch. It's still not great to have just one NIC card do this, since that's a single point of failure. For maximum robustness, each path should be as independent of the other as possible.
So I believe the answer is that it is possible but not recommended.
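If you do try the single-card setup anyway, the usual approach with open-iscsi is to bind one iSCSI interface to each local IP explicitly instead of relying on the eth0:1 alias; a rough sketch using the addresses from the question (the iface names are arbitrary):

# Create two iSCSI ifaces, one bound to each local IP on the card
iscsiadm -m iface -I iface10 -o new
iscsiadm -m iface -I iface10 -o update -n iface.ipaddress -v 192.168.10.100
iscsiadm -m iface -I iface11 -o new
iscsiadm -m iface -I iface11 -o update -n iface.ipaddress -v 192.168.11.100

# Discover and log in to each storage controller through its own iface
iscsiadm -m discovery -t st -p 192.168.10.10 -I iface10
iscsiadm -m discovery -t st -p 192.168.11.10 -I iface11
iscsiadm -m node -L all

# Check whether multipathd now sees two paths per LUN
multipath -ll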