I have a cluster with 4 nodes (3 raspi, 1 NUC) and have set up several different workloads.
The cluster itself worked perfectly fine, so I doubt that it is a general problem with the configuration.
After a reboot of all nodes the cluster came back up well and all pods are running without issues.
Unfortunately, pods that are running on one of my nodes (NUC) are not reachable via ingress anymore.
If I access them through kube-proxy, I can see that the pods themselves run fine and the HTTP services behave as expected.
I upgraded the NUC node from Ubuntu 20.10 to 21.04, which may be related to the issue, but this is not confirmed.
When the same pods are scheduled to the other nodes everything works as expected.
For pods on the NUC node, I see the following in the ingress-controller logs:
2021/08/09 09:17:28 [error] 1497#1497: *1027899 upstream timed out (110: Operation timed out) while connecting to upstream, client: 10.244.1.1, server: gitea.fritz.box, request: "GET / HTTP/2.0", upstream: "http://10.244.3.50:3000/", host: "gitea.fritz.box"
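For anyone debugging the same thing, it helps to hit that upstream directly from the ingress controller pod and from the node itself to separate the app from the cluster network (controller namespace/pod name assumed; pod IP and port taken from the log above):

kubectl -n ingress-nginx get pods -o wide
# assumes the controller image ships curl, which the stock ingress-nginx image does
kubectl -n ingress-nginx exec -it <ingress-controller-pod> -- curl -m 5 -v http://10.244.3.50:3000/
# the same request from a shell on the NUC node itself, to rule out the pod
curl -m 5 -v http://10.244.3.50:3000/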
I can only assume that the problem is related to the cluster internal network and have compared iptables rules and the like, but have not found differences that seem relevant.
The NUC node is running Ubuntu 21.04 with kube v1.21.1, the raspis run Ubuntu 20.04.2 LTS. The master node still runs v1.21.1, while the two worker nodes already run v1.22.0, which works fine.
I have found a thread that points out an incompatibility between metallb and nftables (https://github.com/metallb/metallb/issues/451), and though it's a bit older, I already switched to the legacy xtables backend as suggested (update-alternatives --set iptables /usr/sbin/iptables-legacy ...), without success.
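(For completeness, the usual legacy switch covers the other xtables tools as well; the exact set below is assumed, and arptables/ebtables may not be installed on a minimal node:)

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy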
Currently I'm running out of ideas on where to look.
Can anyone suggest possible issues?
Thanks in advance!
Updating flannel from 13.1-rc2 to 14.0 seems to have done the trick.
Maybe some of the iptables rules were screwed up and got recreated, or maybe 14.0 is necessary to work with 21.04? Who knows...
I'm back up running fine and happy :)
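In case someone wants to do the same, the flannel upgrade itself should just be re-applying the newer manifest; the URL below is assumed from the flannel repo layout at the time, so verify it against the v0.14.0 release:

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.14.0/Documentation/kube-flannel.yml
# watch the DaemonSet pods roll over on all nodes
kubectl -n kube-system get pods -l app=flannel -w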
I'm trying to investigate an issue with random 'Connection reset by peer' errors and long (up to 2 minutes) PDO connection initializations, but I'm failing to find a solution.
Similar issue: https://kubernetes.io/blog/2019/03/29/kube-proxy-subtleties-debugging-an-intermittent-connection-reset/, but that was supposedly fixed in the version of Kubernetes I'm running.
GKE config details:
GKE is running version 1.20.12-gke.1500, with a NAT network configuration and a router. The cluster has 2 nodes, and the router has 2 static IPs assigned, with dynamic port allocation and a range of 32728-65536 ports per VM.
On the Kubernetes side:
deployments: docker image with local nginx, php-fpm, and google sql proxy
services: LoadBalancer to expose the deployment
To replicate the issue, I created a simple script that connects to the database in a loop and runs a simple count query (a rough sketch follows below). I ruled out issues with the database server by testing the script on a standalone GCE VM, where I didn't get any errors. When I run the script on any of the application pods in the cluster, I get random 'Connection reset by peer' errors. I have tested the script both through the Google SQL proxy service and with the direct database IP, with the same random connection issues.
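The script is nothing more than a loop around a count query; a minimal sketch of it (hypothetical credentials and table name, assuming a MySQL-compatible Cloud SQL instance behind the proxy):

#!/usr/bin/env bash
# DB_HOST/DB_USER/DB_PASS/DB_NAME and the table are placeholders, not the real values
DB_HOST=127.0.0.1   # Cloud SQL proxy; swap in the instance IP for the direct test
for i in $(seq 1 1000); do
  if ! mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" \
       -e 'SELECT COUNT(*) FROM some_table;' > /dev/null 2>&1; then
    echo "attempt $i failed: connection reset or timeout"
  fi
  sleep 0.2
done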
Any help would be appreciated.
Update
On https://cloud.google.com/kubernetes-engine/docs/release-notes I can see that a fix has been released that potentially addresses what I'm seeing: "The following GKE versions fix a known issue in which random TCP connection resets might happen for GKE nodes that use Container-Optimized OS with Docker (cos). To fix the issue, upgrade your nodes to any of these versions:"
I'm updating nodes this evening so I hope that will solve the issue.
Update
The update of the nodes solved the random connection resets.
Updating the cluster and nodes to version 1.20.15-gke.3400 using the Google Cloud console resolved the issue.
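For reference, the same upgrade can also be done from the CLI instead of the console (cluster and node-pool names are placeholders):

gcloud container clusters upgrade my-cluster --cluster-version 1.20.15-gke.3400 --master
gcloud container clusters upgrade my-cluster --node-pool default-pool --cluster-version 1.20.15-gke.3400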
Looking for a piece of advice on troubleshooting an issue with Rancher + Calico on bare metal Ubuntu 20.04.
Here is the issue.
We have a few Rancher (2.5.7) clusters built on top of Ubuntu 20.04, running on KVM (Proxmox) VMs.
All clusters have a similar setup and use Calico as the CNI. Everything works like a charm.
The other day we decided to add a bare metal Ubuntu 20.04 node to one of the clusters.
And everything worked pretty well - Rancher shows the new node as healthy and k8s schedules pods there - however,
it turned out that pods on that node can't access the service network (10.43.0.0/16). Specifically, they can't access DNS at 10.43.0.10.
If I do "nc 10.43.0.10 53" on VM Ubuntu host - it connects to DNS pod through service network with no issues. If I'm trying to do the same on a bere metal - connection hangs.
Ubuntu set up is exactly the same for VM and BM. All VMs and BMs are on the same vlan. For the sake of expetiment we configured only one NIC on BM with no fancy stuff like bonding.
calicoctl shows all the BGP peers Established.
I tried to create a fresh cluster and reproduced the same problem: a cluster built of VMs works with no issues and each VM (and the pods there) can connect to the service network; once I add a BM node, the BM has issues connecting to the service network.
My guess is that the issue is somewhere in iptables, but I'm not sure how to troubleshoot why iptables would be different on BM and on VM.
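One way to compare them is to diff normalized rule dumps from a healthy VM node and the BM node (file names are just examples):

# run on a healthy VM node and on the BM node; the sed zeroes out the chain counters
sudo iptables-save | sed 's/\[[0-9]*:[0-9]*\]/[0:0]/' | sort > /tmp/$(hostname).rules
# copy both files to one machine and compare
diff /tmp/vm-node.rules /tmp/bm-node.rules
# also worth checking that both nodes use the same iptables backend
update-alternatives --display iptables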
Will greatly appreciate any piece of advice.
After a few hours of debugging we figured out that the issue was with TCP offloading.
On the VMs the virtual NIC does not support offloading, so everything worked fine.
On the BMs we had to run
sudo ethtool -K <interface> tx off rx off
to disable offloading and that fixed the issue.
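Note that ethtool settings do not survive a reboot; one way to make the change persistent is a small systemd unit like the sketch below (unit name and interface eno1 are examples):

sudo tee /etc/systemd/system/disable-nic-offload.service >/dev/null <<'EOF'
[Unit]
Description=Disable NIC offloading for Calico service traffic
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -K eno1 tx off rx off

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now disable-nic-offload.service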
I discovered a strange behavior with K8s networking that can break some application designs completely.
I have two pods and one Service
Pod 1 is a stupid Reverse Proxy (I don't know the implementation)
Pod 2 is a Webserver
The mentioned Service belongs to pod 2, the webserver
After the initial start of my stack I discovered that Pod 1 - the Reverse Proxy - is not able to reach the webserver on the first attempt for some reason, although ping works fine and so does curl.
Now I tried wget mywebserver inside of Pod 1 - Reverse Proxy and got back the following:
wget mywebserver
--2020-11-16 20:07:37-- http://mywebserver/
Resolving mywebserver (mywebserver)... 10.244.0.34, 10.244.0.152, 10.244.1.125, ...
Connecting to mywebserver (mywebserver)|10.244.0.34|:80... failed: Connection refused.
Connecting to mywebserver (mywebserver)|10.244.0.152|:80... failed: Connection refused.
Connecting to mywebserver (mywebserver)|10.244.1.125|:80... failed: Connection refused.
Connecting to mywebserver (mywebserver)|10.244.2.177|:80... connected.
Where 10.244.2.177 is the Pod IP of the Webserver.
The problem, it seems to me, is that the Reverse Proxy does not retry the forwarding attempt: it tries only once, fails just like the first attempts in the wget output above, and the request gets dropped because the backend is not reachable, due to fancy K8s iptables stuff it seems...
If I configure the reverse proxy not to use the Service DNS name for load balancing and instead use the Pod IP (10.244.2.177) directly, everything works fine and as expected.
I already tried this with a variety of CNI providers like Flannel, Calico, Canal, Weave and also Cilium (kube-proxy is not used with Cilium), but all of them failed, and all of them do fancy routing that nobody clearly understands out of the box. So my question is: how can I make K8s routing work immediately at this point? I have already reimplemented my whole stack on docker-swarm just to see if it works, and it does, flawlessly! So this issue seems to have something to do with the K8s routing scheme.
Just to exclude misconfiguration on my side, I also tried this with different ready-to-use K8s solutions like managed K8s from DigitalOcean and self-hosted RKE. All show the same behavior.
Does somebody maybe have an idea what the problem might be and how to fix this behavior of K8s?
It might also be very useful to know what actually happens during the wget request, as this remains a mystery to me.
Many thanks in advance!
It turned out that I had several misconfigurations in my K8s deployment.
I first removed clusterIP: None, as that is what leads to the behavior wget shows above in my question: with a headless Service, DNS returns the pod IPs directly instead of a single stable cluster IP. Besides that, I had set app: and tier: wrong in my deployment. Anyway, now everything is working fine and wget gets a proper connection; a sketch of what the corrected Service looks like is below.
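For reference, a minimal sketch of the corrected Service (name, labels and port are placeholders, not my real manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mywebserver
spec:
  # no "clusterIP: None" here - that would make the Service headless and hand out raw pod IPs via DNS
  selector:
    app: mywebserver
    tier: backend
  ports:
    - port: 80
      targetPort: 80
EOF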
Thanks again
I'm having problems with the BGP peers of my Kubernetes cluster. My cluster is built with 3 master nodes and 2 worker nodes on premise, running Ubuntu 18.04. etcd is configured externally but runs on 2 of the master nodes.
It was initially initialized with the pod CIDR 172.16.0.0/16, but recently changed to 192.168.220.0/24.
I've installed the latest version of Calico.
Everything seems to be working OK when I configure a replicaset of 1 for each of the services. When I run multiple pods, I sometimes have connection problems with my configured services, and sometimes not.
After some research I discovered that the problem could be a misconfiguration of Calico. When I run calicoctl node status I see this.
On the Calico side I discovered it has something to do with BGP peering, but I can't figure out what went wrong. The nodes have IP connectivity. A telnet session to TCP port 179 is also successful. What can I check next? Any help would be appreciated.
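For concreteness, these are the checks mentioned above, plus one more that seems relevant given the pod CIDR change (node name and peer IP are placeholders):

# BGP status as Calico sees it on this node
sudo calicoctl node status
# IP connectivity and the BGP port towards a peer node
nc -vz <peer-node-ip> 179
# given the CIDR change, verify the IPPool matches 192.168.220.0/24 and no stale 172.16.0.0/16 pool is left
calicoctl get ippool -o wide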
After using k8s on GKE for a couple of months, I've decided to install my own cluster. Now I have two Ubuntu VMs; one of them is the kube-master and the second one is a node. The cluster runs as I expect and I can see the nodes (the kube-master is also a node) when I run kubectl get nodes. I've launched one pod on each VM, but I'm experiencing an issue: both pods have the same IP. Can anybody help me resolve this issue? I'm using flannel as the network plugin at the moment.
Thanks in advance
Update
I've found the solution, thanks to the Kubernetes group on Slack. I hadn't installed the CNI plugins, so the kubelet didn't know the subnet status. I installed the plugins following the guide, created a configuration file based on it, and restarted the kubelet service; finally I saw the cluster working as I expected.
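For anyone hitting the same thing, the missing steps on each node were roughly the following; the plugin version, download URL and flannel config are assumptions, not the exact guide I followed:

# versions and paths assumed; adjust to whatever your guide specifies
sudo mkdir -p /opt/cni/bin /etc/cni/net.d
curl -L https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz | sudo tar -xz -C /opt/cni/bin
# minimal flannel CNI config so the kubelet can delegate pod networking to flannel
sudo tee /etc/cni/net.d/10-flannel.conflist >/dev/null <<'EOF'
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
sudo systemctl restart kubelet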