Unable to access external network from k8s Pods - kubernetes

I am running into a weird issue while deploying a cluster on Kubernetes.
Some background details:
I have deployed a Kubernetes cluster using kubeadm 1.10.5 on my on-prem hardware with the Weave plugin.
Master : Centos 7.3
Node1 : Centos 7.3
Node2 : Centos 7.3
Node3 : Centos 7.3
Node4 : RHEL 7.2, kernel `3.10.0-327.el7.x86_64`
Node5 : RHEL 7.3, kernel `3.10.0-514.el7.x86_64`
Now everything works fine except on Node4.
On Node4 I have successfully deployed all the kube-system pods and my application pods, but any pod launched on Node4 is unable to reach or ping any external IP address.
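A minimal way to narrow this down is to compare a pod pinned to Node4 against the host itself (the pod name below is a placeholder, and the commands assume Weave's default IP masquerading):

kubectl run nettest --image=busybox --restart=Never --overrides='{"spec":{"nodeName":"node4"}}' -- sleep 3600
kubectl exec nettest -- ping -c 3 8.8.8.8
# on node4 itself: confirm Weave installed NAT rules for pod traffic leaving the node
sudo iptables -t nat -L POSTROUTING -n | grep -i weave
# and ask the Weave router for its own view of the mesh
kubectl exec -n kube-system <weave-net-pod-on-node4> -c weave -- /home/weave/weave --local status

If the host can reach 8.8.8.8 but the pod cannot, the masquerade rules or the Weave router on that node are the first suspects.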

How do I set the GCP firewall rules for CoreDNS in Kubernetes?

I tested nslookup inside a container created by Kubernetes on GCP, but it couldn't communicate with the DNS server. So I checked the CoreDNS config.
<Service>
coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 8d
<Pod>
coredns-74c9d4d795-4g54h 1/1 Running 5 8d
All was fine. So I checked GCP's firewall rules. When I allow all ingress packets, it works. But why?
First, I checked the kube-proxy port. I allowed only that, but the server was disconnected again.
Second, I checked all the kube-* ports. I allowed them all, but the server was still disconnected.
What is the least set of ports I need to allow for pods to reach the DNS server? Is there any configuration for this?
[Environment]
kubespray : v2.11
Container Runtime : Docker
CoreDNS : v1.6.0
Platform : GCP
OS : Ubuntu 18.04
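For the environment above, note that kubespray v2.11 defaults to Calico, so a DNS query from a pod to the CoreDNS ClusterIP is DNATed to a pod IP and then travels between nodes inside an IP-in-IP tunnel; the firewall therefore has to admit the encapsulation protocol between nodes, not just port 53. A sketch with gcloud (the network name, tags, and ranges are assumptions; 10.233.64.0/18 is kubespray's default pod subnet):

gcloud compute firewall-rules create allow-coredns --network=my-k8s-net --allow=tcp:53,udp:53 --source-ranges=10.233.64.0/18
gcloud compute firewall-rules create allow-calico-ipip --network=my-k8s-net --allow=ipip --source-tags=k8s-node --target-tags=k8s-node

With the tunnel admitted between the nodes, the DNS traffic itself only needs 53/tcp and 53/udp.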

Resolving Minikube metallb imagepullbackoff

I am moving from Docker Desktop to Minikube and have been having some trouble getting MetalLB to work properly. I am starting Minikube on macOS Monterey.
I've started a Minikube profile using the command below:
minikube start -p myprofile --cpus=4 --memory='32g' --disk-size='100000mb'
--driver=hyperkit --kubernetes-version=v1.21.8 --addons=metallb
When I check the pods for MetalLB, they are in an ImagePullBackOff status. The pods are trying to pull images docker.io/metallb/controller:v0.9.6 and docker.io/metallb/speaker:v0.9.6 respectively.
NAME READY STATUS RESTARTS AGE
controller-5fd6788656-jvj4m 0/1 ImagePullBackOff 0 26m
speaker-ctdmw 0/1 ImagePullBackOff 0 37m
After running eval $(minikube -p myprofile docker-env) and manually pulling with docker pull docker.io/metallb/speaker:v0.9.6, I get the error:
Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on <ip-address>:53: read udp <ip-address>:49978-><ip-address>:53: i/o timeout
I'm not certain if it's useful, but after SSHing into the Minikube node, I've also verified that ping google.com does not return a result.
When starting my Minikube profile, I had the following output:
😄 [myprofile] minikube v1.28.0 on Darwin 12.3.1
🆕 Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
✨ Using the hyperkit driver based on existing profile
👍 Starting control plane node myprofile in cluster myprofile
🔄 Restarting existing hyperkit VM for "myprofile" ...
❗ This VM is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.21.8 on Docker 20.10.20 ...
🔎 Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    ▪ Using image metallb/speaker:v0.9.6
    ▪ Using image metallb/controller:v0.9.6
🌟 Enabled addons: storage-provisioner, metallb, default-storageclass
❗ /usr/local/bin/kubectl is version 1.25.4, which may have incompatibilities with Kubernetes 1.21.8.
    ▪ Want kubectl v1.21.8? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "myprofile" cluster and "default" namespace by default

Setup different Internal IP for worker nodes

I want to set up a Kubernetes cluster locally with 1 master node and 2 worker nodes. I have managed to do that, but I am not able to access pods or see the logs of a specific pod because the internal IP address is the same for all nodes.
vagrant@k8s-head:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-head Ready master 5m53s v1.15.0 10.0.2.15 <none> Ubuntu 16.04.6 LTS 4.4.0-151-generic docker://18.6.2
k8s-node-1 Ready <none> 4m7s v1.15.0 10.0.2.15 <none> Ubuntu 16.04.6 LTS 4.4.0-151-generic docker://18.6.2
k8s-node-2 Ready <none> 2m28s v1.15.0 10.0.2.15 <none> Ubuntu 16.04.6 LTS 4.4.0-151-generic docker://18.6.2
In order to resolve this problem I found out that the following things should be done:
- add KUBELET_EXTRA_ARGS=--node-ip=<IP_ADDRESS> to the /etc/default/kubelet file
- restart kubelet by running: sudo systemctl daemon-reload && sudo systemctl restart kubelet
The problem is that the /etc/default/kubelet file is missing at this location, so I am not able to add this additional parameter. I tried creating the file manually, but it does not seem to take effect: after I restart kubelet, the IP address is still the same.
Has anyone faced this issue with the missing /etc/default/kubelet file, or is there another, easier way to set up different internal IP addresses?
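For reference, the two steps above amount to something like this on each node (a sketch; 192.168.50.11 is a hypothetical host-only address, and on some distros the file is /etc/sysconfig/kubelet instead):

echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.50.11' | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl get nodes -o wide   # INTERNAL-IP should now show the host-only address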
It is normal to have the same IP on every node of a Kubernetes cluster running in VirtualBox. The reason is that this is a NAT network, not intended for communication between virtual machines; the 10.0.2.15 address is NATed when accessing the outside world.
Every node therefore has the same IP on the NAT network but different IPs on the other networks that VirtualBox creates alongside it.
In order to access the Pods you can use a NodePort and the host-only network (see the sketch below).
See a full example and download the code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube). It is a tutorial that explains how to launch a Kubernetes cluster using Ansible playbooks, Vagrant and VirtualBox.
It uses Calico for networking and it includes another tutorial for installing Istio if you need a micro service mesh.
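As a concrete illustration of the NodePort approach mentioned above, a minimal sketch (the deployment name my-app, port 80, and the host-only address 192.168.50.11 are all hypothetical):

kubectl expose deployment my-app --type=NodePort --port=80
kubectl get svc my-app   # note the port assigned from the 30000-32767 range
curl http://192.168.50.11:<assigned-node-port>

Because the host-only network gives every VM a distinct, host-reachable address, the NodePort is reachable on any node's host-only IP regardless of the shared NAT address.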

Failed to create pod sandbox kubernetes error

I have an Ubuntu 16.04 machine which is acting as the Kubernetes master. I have installed Kubernetes v1.13.1 and am using Weave for networking. I have 2 Raspberry Pi devices running the same version of Kubernetes. I created a cluster and joined the Raspberry Pis to the Ubuntu kube master. I started a deployment and everything looked to be working fine.
When I checked the logs of the container, I found out that it was not able to connect to the internet. I tried pinging but got no results. When I ran the command to describe the pod, I got the following:
Warning FailedCreatePodSandBox 42m (x3 over 42m) kubelet, node02 (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dea99f80488031b84b7b1f934343e54d877adf931071401651628505d52f55f9" network for pod "deployment-cnfc5": NetworkPlugin cni failed to set up pod "deployment-cnfc5_matrix-device" network: unable to allocate IP address: Post http://127.0.0.1:6784/ip/dea99f80488031b84b7b1f934343e54d877adf931071401651628505d52f55f9: dial tcp 127.0.0.1:6784: connect: connection refused
I have checked the directory /etc/cni/net.d and it contains 10-weave.conflist on both master and worker node. I have also checked the directory /opt/cni/bin and found below on master node:
bridge flannel ipvlan macvlan ptp tuning weave-ipam weave-plugin-2.5.1
dhcp host-local loopback portmap sample vlan weave-net
and on worker, I got below:
bridge flannel ipvlan macvlan ptp tuning weave-ipam weave-plugin-2.5.0
dhcp host-local loopback portmap sample vlan weave-net weave-plugin-2.5.1
Can anyone please let me know what I can do to resolve this issue? Thanks.
I initialized the kube master using the commands below:
sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=192.168.0.142
and installed weave using:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
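A connection refused on 127.0.0.1:6784 means the CNI plugin cannot reach the Weave router's local HTTP API, which normally implies the weave-net pod is not running (or is crash-looping) on that node. A sketch of where to look (the pod name is a placeholder):

kubectl get pods -n kube-system -o wide | grep weave   # is weave-net Running on node02?
kubectl logs -n kube-system <weave-net-pod-on-node02> -c weave
# the plugin version skew visible in /opt/cni/bin (2.5.0 vs 2.5.1 on the worker) is
# also worth cleaning up before restarting the weave-net pod there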

Kubernetes on Oracle Linux

Can somebody give me a starting point for installing Kubernetes on the Oracle Linux platform? I would like to start comparing the orchestration of Docker containers.
Regards
Walter
You can use kubeadm for the cluster setup. I haven't worked with Oracle Linux, but as long as it supports RPM and yum, you can install the Kubernetes software.
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
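On an RPM-based distribution like Oracle Linux, the steps from that guide look roughly like this (a sketch based on the upstream kubeadm docs of the time; the repo URL and package versions are assumptions that change over time):

# add the upstream Kubernetes yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo yum install -y kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
sudo kubeadm init   # then install a CNI plugin and join the workers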
You can use this guide from Bitnami to prepare a Kubernetes cluster in Oracle Cloud: https://docs.bitnami.com/kubernetes/how-to/set-kubernetes-cluster-oracle/
The guide is meant for the Ubuntu image, but just as @sfgroups said, using kubeadm the process should be similar.
I have established a kubernetes cluster on Oracle Linux 7.4 using Oracle VirtualBox and vagrant.
Github repo can be found https://github.com/bjarteb/ol-kubeadm
You need an Oracle account to follow along (it's free).
Oracle® Container Services for use with Kubernetes
vagrant up && vagrant ssh m - and you are ready for k8s!
[vagrant@m ~]$ kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
m Ready master 1h v1.8.4+2.0.1.el7 <none> Oracle Linux Server 7.4 4.1.12-112.14.13.el7uek.x86_64 docker://17.12.0-ol
w1 Ready <none> 1h v1.8.4+2.0.1.el7 <none> Oracle Linux Server 7.4 4.1.12-112.14.13.el7uek.x86_64 docker://17.12.0-ol
w2 Ready <none> 57m v1.8.4+2.0.1.el7 <none> Oracle Linux Server 7.4 4.1.12-112.14.13.el7uek.x86_64 docker://17.12.0-ol