I've created a new k8s cluster on a couple of VMware nodes, following this guide:
https://computingforgeeks.com/deploy-kubernetes-cluster-on-ubuntu-with-kubeadm/
After the installation the cluster is up and running, and both the master and the node are READY.
I then created a couple of Pods, but discovered that none of them can reach the Internet.
In order to debug, I created the dnsutils Pod (from the Kubernetes docs) and
logged into it with "kubectl exec -it pod/dnsutils -- bash".
If I do a "ping 8.8.8.8" I get no response at all.
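For reference, the session looked roughly like this (the dnsutils manifest is the one from the Kubernetes DNS-debugging docs; 8.8.8.8 is just a well-known public IP to test against):

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -it pod/dnsutils -- bash
# inside the pod:
ping -c 3 8.8.8.8              # times out, no replies
nslookup kubernetes.default    # worth checking in-cluster DNS too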
So is it possible that a standard kubernetes installation ends up with a cluster unable to reach the Internet?
What fix is needed so that the Pods can "see" the Internet?
Any help would be much appreciated.
Thanks
\sergio
I've been encountering a few major issues with my K8s cluster:
Windows pods running on my Windows nodes are unable to communicate (internally within the cluster) with my Linux pods or services, BUT my Linux pods are able to communicate with my Windows pods.
Windows pods are also unable to resolve DNS names for any service within K8s.
Windows pods are unable to resolve domain names for anything on the internet, though after opening a shell in a Windows pod and manually adding 8.8.8.8 as a DNS server, it starts working.
Windows pods are unable to communicate with other pods using the FQDNs or ClusterIPs of services created for Windows pods.
Things I've verified (see the commands sketched after this list):
CoreDNS is running.
I see no warnings or errors in the logs or the k8s descriptions of either the CoreDNS or Calico pods.
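Roughly the checks I ran (these label selectors are the defaults for CoreDNS and Calico; exact pod names will differ per cluster):

kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system logs -l k8s-app=kube-dns
kubectl -n kube-system get pods -l k8s-app=calico-node
kubectl -n kube-system describe pods -l k8s-app=calico-node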
I suspect that the issue might be with Calico, but I'm quite lost on where to even begin troubleshooting. Googling turned up a similar issue, but I've been unsuccessful with it so far.
I've also tried running the following command, but no luck:
kubectl -n kube-system rollout restart deployment coredns
Any ideas on what I can try or take a look at?
I am trying to install a Kubernetes cluster with one master node and two worker nodes.
I acquired 3 VMs for this purpose, running Ubuntu 21.10. On the master node, I installed kubeadm:1.21.4, kubectl:1.21.4, kubelet:1.21.4 and docker-ce:20.4.
I followed this guide to install the cluster. The only difference was in my init command, where I did not pass --control-plane-endpoint. I used Calico CNI v3.19.1 and Docker as the CRI runtime.
After I installed the cluster, I deployed a minio pod and exposed it as a NodePort service.
The pod got deployed on the worker node (10.72.12.52); my master node IP is 10.72.12.51.
For the first two hours, I was able to access the login page via all three IPs (10.72.12.51:30981, 10.72.12.52:30981, 10.72.13.53:30981). After two hours, however, I lost access to the service via 10.72.12.51:30981 and 10.72.13.53:30981. Now I am only able to access the service from the node it is running on (10.72.12.52).
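The deployment itself was straightforward; roughly the following (image, command, and port are illustrative, not my exact manifest):

kubectl create deployment minio --image=minio/minio -- minio server /data
kubectl expose deployment minio --type=NodePort --port=9000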
I have disabled the firewall and added a calico.conf file inside /etc/NetworkManager/conf.d with the following content:
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico
What am I missing in the setup that might cause this issue?
This is a community wiki answer posted for better visibility. Feel free to expand it.
As mentioned by @AbhinavSharma, the problem was solved by switching from Calico to the Flannel CNI.
More information regarding Flannel itself can be found here.
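For completeness, a typical Flannel setup on a kubeadm cluster looks roughly like this (Flannel's manifest assumes the 10.244.0.0/16 pod CIDR by default):

kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml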
I am running an on-premise VMware cluster with HA master nodes and two worker nodes. Both worker nodes are "Ready", and the cluster has a NodePort service running a web server. I am able to access the web page through one worker node directly, but cannot access it through the other worker node in the cluster. I have also applied "iptables -P FORWARD ACCEPT", a fix that has worked in the past, but it no longer seems to be working. Does anyone have any ideas that could fix this issue?
It seems to have worked after reapplying "iptables -P FORWARD ACCEPT" and restarting both docker and the kubelet.
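In other words, something along these lines on each worker node (note that iptables -P FORWARD ACCEPT does not survive a reboot unless the rules are saved):

sudo iptables -P FORWARD ACCEPT
sudo systemctl restart docker
sudo systemctl restart kubelet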
After using k8s on GKE for a couple of months, I've decided to install my own cluster. Now I have two Ubuntu VMs: one of them is the kube-master and the second one is a node. The cluster runs as I expect, and I can see both nodes (the kube-master is also a node) when I run kubectl get nodes. I've launched one pod on each VM, but I'm experiencing an issue: both pods have the same IP. Can anybody help me resolve this? I'm using Flannel as the network plugin atm.
Thanks in advance
Update
I've found the solution, thanks to the Kubernetes group on Slack. I hadn't installed the CNI plugins, so the kubelet didn't know the subnetwork status. I installed the plugins using this guide and made a configuration file following it. After restarting the kubelet service, I finally saw the cluster working as I expected.
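For anyone hitting the same problem, installing the standard CNI plugins and restarting the kubelet looks roughly like this (the version and paths are illustrative; pick the current release for your architecture):

sudo mkdir -p /opt/cni/bin
curl -L https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz | sudo tar -C /opt/cni/bin -xz
sudo systemctl restart kubelet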
We deployed a containerized app by pulling a public Docker image from Docker Hub and were able to get a pod running on a server at 172.30.105.44. Hitting this IP from a REST client, or curling/pinging it, gives no response. Can someone please guide us on where we are going wrong?
Firstly, find out the IP of your node by executing the command
kubectl get nodes -o wide
Then get the information about your service by executing the command kubectl describe service <service-name>
Make a note of the NodePort field in the output.
To access your service that is already running, hit the endpoint nodeIP:NodePort.
You can now access your service successfully!
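As a concrete sketch (my-app is a hypothetical service name; substitute your own, and use the NodePort that kubectl actually reports):

kubectl get nodes -o wide           # note a node's INTERNAL-IP
kubectl describe service my-app     # note the NodePort field, e.g. 31234
curl http://<node-ip>:31234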
I am not sure where you have deployed (AWS, GKE, bare metal), but you should make sure you have the following:
https://kubernetes.io/docs/user-guide/ingress/
https://kubernetes.io/docs/user-guide/services/
Ingress will work out of the box on GKE, but with an AWS installation, you may need to make sure you have nginx-ingress pods running.
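For reference, a minimal Ingress manifest looks roughly like this (my-service and its port are hypothetical; an ingress controller such as nginx-ingress must be running for it to take effect):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80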