External firewall logs show a blocked connection from <node IP>:<big port>.
The current cluster uses Calico networking.
How do I detect which pod was trying to connect?
This is usually pretty hard to work out: you would have to check the NAT (conntrack) table on the node where the packets exited to the public internet.
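A rough way to do that lookup on the node is sketched below. It assumes conntrack-tools is installed and the flow is still in the conntrack table; the port number and pod IP are placeholders for the values from your firewall log.

# find the tracked connection whose translated source port matches the firewall log
sudo conntrack -L -p tcp 2>/dev/null | grep 40123
# the original tuple of the matching entry shows the pod IP; map it back to a pod
kubectl get pods --all-namespaces -o wide | grep 10.42.1.17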
I am trying to connect to an external SCTP server from a pod inside a K8s cluster. The external SCTP server only allows connections from a configured IP address AND a specific source port.
From what I can understand, K8s performs SNAT during connection establishment and replaces the source IP address with the K8s node IP address, and also the source port with a random port. So the SCTP server sees the random source port and therefore rejects the connection.
The K8s cluster is using the Calico plugin and I have tried the "disable NAT for target CIDR range" option as explained here, by installing an IPPool. But it didn't work: I can see via tcpdump on the server that the source port is still random, and I am not sure whether the IPPool gets picked up at all.
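For reference, the "disable NAT for target CIDR range" approach from the Calico docs involves an IPPool roughly like the one below: the pool covers the external destination range and is disabled for IPAM, so Calico skips outgoing NAT for traffic to it (the name and CIDR are examples).

calicoctl apply -f - <<'EOF'
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: no-nat-to-sctp-server
spec:
  cidr: 198.51.100.0/24   # the external SCTP server's range (example)
  disabled: true          # do not allocate pod IPs from this pool
EOF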
So my question is: Is there a way to preserve the source port? Am I on the right track trying to disable NAT, i.e. will it work considering the pod IPs are internal?
Note: I am not sure if the problem/solution is related to it, but kube-proxy is in iptables mode.
Note: There is actually an identical question here, and the accepted answer suggesting hostNetwork: true works for me as well. But I can't use hostNetwork, so I wanted to post this as a new question. Also, the "calico way" of disabling NAT towards specific targets seemed promising, and I am hoping the Calico folks can help.
Thanks
A question I have trouble finding an answer to is this:
When a K8s pod connects to an external service over the Internet, what IP address does that external service see the pod traffic coming from?
I would like to know the answer in two distinct cases:
there is a site-to-site VPN between the K8s cluster and the remote service
there is no such VPN, the access is over the public Internet.
Let me also add the assumption that the K8s cluster is running on AWS (not with EKS; it is customer-managed).
Thanks for answering.
When the traffic leaves the pod and goes out, it usually undergoes NAT on the K8s node, so in most cases the traffic will arrive with the node's IP address as the source. You can manipulate this process by (re)configuring the ip-masq-agent, which can allow you not to NAT this traffic, but then it is up to you to make sure the traffic can be routed on the Internet, for example by using a cloud-native NAT solution (Cloud NAT in the case of GCP, NAT Gateway in AWS).
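As a sketch, assuming the standard ip-masq-agent DaemonSet runs in kube-system and reads its configuration from a ConfigMap named ip-masq-agent, a config like the one below tells it not to masquerade traffic to the listed ranges (the CIDRs here are just examples, e.g. a site-to-site VPN range):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.0.0.0/8
      - 172.16.0.0/12
    masqLinkLocal: false
    resyncInterval: 60s
EOF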
I’m wondering if anyone has been able to get Kubernetes running properly over the Wireguard VPN.
I created a 2-node cluster on 2 VMs linked by WireGuard. The master node with the full control plane works fine and can accept worker nodes over the WireGuard interface. I set the node IP for kubelet to the WireGuard IP and also set the iface argument for flannel to use the WireGuard interface instead of the default, roughly as shown below. This seems to work well so far.
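For anyone trying the same thing, those two settings look roughly like this; the address, file path, and DaemonSet name are assumptions that depend on how kubelet and flannel were installed:

# kubelet: pin the node IP to the WireGuard address (kubeadm installs often read
# extra flags from /etc/default/kubelet or /var/lib/kubelet/kubeadm-flags.env)
echo 'KUBELET_EXTRA_ARGS=--node-ip=10.0.0.3' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet

# flannel: add --iface=wg0 to the kube-flannel container args in its DaemonSet,
# e.g. kubectl -n kube-system edit daemonset kube-flannel-ds, then under args:
#   - --ip-masq
#   - --kube-subnet-mgr
#   - --iface=wg0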
The problem arises when I try to join the worker node into the cluster via the join command.
Note that I also edited the node ip of kubelet to be the wireguard ip on the worker node.
On join, all traffic to the node is dropped by the "Kubernetes firewall". By the Kubernetes firewall I mean that if you check iptables after issuing the join command on the worker node, you will see a KUBE-FIREWALL chain which drops all marked packets. The firewall is standard, as it's the same on the master, but I presume the piece I'm missing is what to do to get traffic flowing on the worker node after joining it to the master node.
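To see the chain in question on the worker, something like the following works; in the versions I have looked at, the rule drops packets marked 0x8000/0x8000:

# list the rules in the chain that drops marked packets
sudo iptables -S KUBE-FIREWALL
# same, with packet counters, to confirm it is actually what is dropping traffic
sudo iptables -L KUBE-FIREWALL -n -v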
I’m unable to even ping google.com or communicate with the master over the Wireguard tunnel. Pods can’t be scheduled either. I have manually deleted the KUBE-FIREWALL rule as a test which then allows pods to be scheduled and regular traffic to flow on the worker node but Kubelet will quickly recreate the rule after around a minute.
I’m thinking a route needs to be created before the join or something along those lines.
Has anyone tried this before would really appreciate any suggestions for this.
After getting some help I figured out that the issue was WireGuard-related. Specifically, running wg-quick as a service apparently creates an ip rule that routes ALL outgoing traffic via the wg0 interface, except the WireGuard background secured channel. This causes issues when trying to connect a worker to a cluster, so simply creating and starting the wg0 interface manually with something like the below will work:
# create the WireGuard interface without wg-quick's catch-all routing policy
ip link add dev wg0 type wireguard
# assign the tunnel address to the interface
ip addr add 10.0.0.4/24 dev wg0
# load the peer and key configuration from the existing config file
wg addconf wg0 /etc/wireguard/wg0.conf
ip link set wg0 up
I built a kubernetes cluster, using flannel overlay network. The problem is one of the service ip isn't always accessible.
I tested from within the cluster by telnetting to the service IP and port, which ended in a connection timeout. Checking with netstat, the connection was always in the "SYN_SENT" state; it seemed that the peer didn't accept the connection.
But if I telnet to the pod ip and port that backed the service directly, the connection could be made successfully.
It only happened to one of the services; the other services are OK.
And if I scaled the backing pods to a larger number, like 2, then some of the requests to the service IP succeeded. It seemed that the service wasn't able to connect to one of the backing pods.
Which component may be the cause of such a problem? My service configuration, kube-proxy, or flannel?
Check the discussion here: https://github.com/kubernetes/kubernetes/issues/38802
It's required to set the sysctl net.bridge.bridge-nf-call-iptables=1 on the nodes.
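On each node, something along these lines does it (the sysctl.d file name is just an example):

# the br_netfilter module must be loaded for this sysctl to exist
sudo modprobe br_netfilter
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
# persist across reboots
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-k8s-bridge.conf
sudo sysctl --system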
I need to get the real client IP from the request in my application logic, but I actually get 10.2.100.1 every time in my test environment. Is there any way to do this?
This is the same question as GCE + K8S - Accessing referral IP address and How to read client IP addresses from HTTP requests behind Kubernetes services?.
The answer, copied from them, is that this isn't yet possible in the released versions of Kubernetes.
Services go through kube-proxy, which answers the client connection and proxies through to the backend (your web server). The address that you'd see would be the IP of whichever kube-proxy the connection went through.
Work is being actively done on a solution that uses iptables as the proxy, which will cause your server to see the real client IP.
Try to get the IP of the Service that is associated with those pods.
One very roundabout way right now is to set up an HTTP liveness probe and watch the IP it originates from. Just be sure to also respond to it appropriately or it'll assume your pod is down.
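A minimal sketch of that, assuming an nginx container where / already returns 200 so the probe passes (image, names, and period are examples); the kubelet's probe requests will then show up in the container's access log with the source IP they originate from:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probe-source-ip-test
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
EOF

# then watch which IP the probe requests originate from
kubectl logs -f probe-source-ip-test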