Resolving Minikube MetalLB ImagePullBackOff - kubernetes

I am moving from Docker Desktop to Minikube and have been having some trouble getting MetalLB to work properly. I am starting Minikube on macOS Monterey.
I've started a Minikube profile using the command below:
minikube start -p myprofile --cpus=4 --memory='32g' --disk-size='100000mb' \
  --driver=hyperkit --kubernetes-version=v1.21.8 --addons=metallb
When I check the pods for MetalLB, they are in an ImagePullBackOff status. The pods are trying to pull images docker.io/metallb/controller:v0.9.6 and docker.io/metallb/speaker:v0.9.6 respectively.
NAME                          READY   STATUS             RESTARTS   AGE
controller-5fd6788656-jvj4m   0/1     ImagePullBackOff   0          26m
speaker-ctdmw                 0/1     ImagePullBackOff   0          37m
After running eval $(minikube -p myprofile docker-env) and manually pulling with docker pull docker.io/metallb/speaker:v0.9.6, I get the error:
Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on <ip-address>:53: read udp <ip-address>:49978-><ip-address>:53: i/o timeout
I'm not certain if it's useful, but after SSHing into the Minikube node, I've also verified that ping google.com does not return a result.
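Before anything else, it can help to separate "no network at all" from "DNS-only" failures inside the VM. A quick hedged check from the host (the lookup targets below are just examples):
# Each command runs inside the minikube VM
minikube -p myprofile ssh -- cat /etc/resolv.conf          # which nameserver is the VM using?
minikube -p myprofile ssh -- nslookup registry-1.docker.io # name resolution
minikube -p myprofile ssh -- ping -c 3 8.8.8.8             # raw connectivity; if this works, the problem is the resolver only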
When starting my Minikube profile, I had the following output:
😄 [myprofile] minikube v1.28.0 on Darwin 12.3.1
🆕 Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
✨ Using the hyperkit driver based on existing profile
👍 Starting control plane node myprofile in cluster myprofile
🔄 Restarting existing hyperkit VM for "myprofile" ...
❗ This VM is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.21.8 on Docker 20.10.20 ...
🔎 Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    ▪ Using image metallb/speaker:v0.9.6
    ▪ Using image metallb/controller:v0.9.6
🌟 Enabled addons: storage-provisioner, metallb, default-storageclass
❗ /usr/local/bin/kubectl is version 1.25.4, which may have incompatibilities with Kubernetes 1.21.8.
    ▪ Want kubectl v1.21.8? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "myprofile" cluster and "default" namespace by default

Related

minikube + hostPath

I have tried desperately, without any luck, to apply a simple pod specification, even following this previous answer: Mount local directory into pod in minikube
The yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  containers:
  - image: httpd
    name: hostpath-pod
    volumeMounts:
    - mountPath: /data
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /tmp/data/
I started the minikube cluster with minikube start --mount-string="/tmp:/tmp" --mount, and there are 3 files in /tmp/data:
ls /tmp/data/
file2.txt file3.txt hello
However, this is what I get when I do kubectl describe pods:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m26s default-scheduler Successfully assigned default/hostpath-pod to minikube
Normal Pulled 113s kubelet, minikube Successfully pulled image "httpd" in 32.404370125s
Normal Pulled 108s kubelet, minikube Successfully pulled image "httpd" in 3.99427232s
Normal Pulled 85s kubelet, minikube Successfully pulled image "httpd" in 3.906807762s
Normal Pulling 58s (x4 over 2m25s) kubelet, minikube Pulling image "httpd"
Normal Created 54s (x4 over 112s) kubelet, minikube Created container hostpath-pod
Normal Pulled 54s kubelet, minikube Successfully pulled image "httpd" in 4.364295872s
Warning Failed 53s (x4 over 112s) kubelet, minikube Error: failed to start container "hostpath-pod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/tmp/data" to rootfs at "/data" caused: stat /tmp/data: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Warning BackOff 14s (x6 over 103s) kubelet, minikube Back-off restarting failed container
Not sure what I'm doing wrong here. If it helps, I'm using minikube v1.23.2, and this was the output when I started minikube:
😄 minikube v1.23.2 on Darwin 11.5.2
    ▪ KUBECONFIG=/Users/sachinthaka/.kube/config-ds-dev:/Users/sachinthaka/.kube/config-ds-prod:/Users/sachinthaka/.kube/config-ds-dev-cn:/Users/sachinthaka/.kube/config-ds-prod-cn
✨ Using the hyperkit driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing hyperkit VM for "minikube" ...
❗ This VM is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
🔎 Verifying Kubernetes components...
📁 Creating mount /tmp:/tmp ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
❗ /usr/local/bin/kubectl is version 1.18.0, which may have incompatibilites with Kubernetes 1.22.2.
    ▪ Want kubectl v1.22.2? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Anything I can try? :'(
Update 1
Changing from minikube to microk8s helped, but I'm still not seeing anything inside /data/ in the pod.
Also, changing from /tmp/ to a different folder helped in minikube. Something to do with macOS.
The OP has said that the problem is solved:
changing from /tmp/ to a different folder helped in minikube. Something to do with macOS
For some reason minikube doesn't like /tmp/
An explanation of this problem:
You cannot mount /tmp to /tmp. The problem isn't with macOS, but with the way you do it. I tried to recreate this problem in several ways. I used the Docker driver and got a very interesting error:
docker: Error response from daemon: Duplicate mount point: /tmp.
This error makes it clear what the problem is. If you mount your directory elsewhere, everything should work (which was confirmed):
Do I understand correctly, that when you changed the mount point to a different folder, does it work?
that is correct. For some reason minikube doesn't like /tmp/
I know you are using hyperkit instead of Docker in my case, but the only difference will be in the message you get on the screen. In the case of Docker, it is very clear.
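A minimal sketch of the workaround, assuming a hypothetical host folder (~/k8s-data) and an in-VM mount point other than /tmp:
# Mount the host directory at a path minikube is not already using
minikube start --mount --mount-string="$HOME/k8s-data:/mnt/data"
# ...and point the pod spec's hostPath at the new location:
#   hostPath:
#     path: /mnt/data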

Kubernetes HA master setup

I have made an HA Kubernetes cluster. First I added a node, then joined the other node with the master role.
I basically did the multi-etcd setup, which worked fine for me, and the failover testing also worked fine. The problem is that once I was done working, I drained and deleted the other node and then shut down the other machine (a VM on GCP). After that, my kubectl commands don't work... Let me share the steps:
kubectl get nodes (when multi-node is set up)
NAME         STATUS   ROLES    AGE   VERSION
instance-1   Ready    <none>   17d   v1.15.1
instance-3   Ready    <none>   25m   v1.15.1
masternode   Ready    master   18d   v1.16.0
kubectl get nodes (when I shut down my other node)
root@masternode:~# kubectl get nodes
The connection to the server k8smaster:6443 was refused - did you specify the right host or port?
Any clue?
After rebooting the server, you need to do the steps below:
sudo -i
swapoff -a
exit
strace -eopenat kubectl version
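Note that swapoff -a only lasts until the next reboot. A common companion step (my addition, not from the original answer) is to comment out the swap entry in /etc/fstab so the control plane comes back cleanly after future reboots:
# Disable swap permanently so the kubelet keeps working across reboots
sudo sed -i '/ swap / s/^\(.*\)$/#\1/' /etc/fstab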

Kube-state-metrics error: Failed to create client: ... i/o timeout

I'm running Kubernetes in virtual machines and going through the basic tutorials, currently Add logging and metrics to the PHP / Redis Guestbook example. I'm trying to install kube-state-metrics:
git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
kubectl create -f kube-state-metrics/kubernetes
but it fails.
kubectl describe pod --namespace kube-system kube-state-metrics-7d84474f4d-d5dg7
...
Warning Unhealthy 28m (x8 over 30m) kubelet, kubernetes-node1 Readiness probe failed: Get http://192.168.129.102:8080/healthz: dial tcp 192.168.129.102:8080: connect: connection refused
kubectl logs --namespace kube-system kube-state-metrics-7d84474f4d-d5dg7 -c kube-state-metrics
I0514 17:29:26.980707 1 main.go:85] Using default collectors
I0514 17:29:26.980774 1 main.go:93] Using all namespace
I0514 17:29:26.980780 1 main.go:129] metric white-blacklisting: blacklisting the following items:
W0514 17:29:26.980800 1 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0514 17:29:26.983504 1 main.go:169] Testing communication with server
F0514 17:29:56.984025 1 main.go:137] Failed to create client: ERROR communicating with apiserver: Get https://10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout
I'm unsure if this 10.96.0.1 IP is correct. My virtual machines are in a bridged network 10.10.10.0/24 and a host-only network 192.168.59.0/24. When initializing Kubernetes I used the argument --pod-network-cidr=192.168.0.0/16 so that's one more IP range that I'd expect. But 10.96.0.1 looks unfamiliar.
I'm new to Kubernetes, just doing the basic tutorials, so I don't know what to do now. How to fix it or investigate further?
EDIT - additional info:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kubernetes-master Ready master 15d v1.14.1 10.10.10.11 <none> Ubuntu 18.04.2 LTS 4.15.0-48-generic docker://18.9.2
kubernetes-node1 Ready <none> 15d v1.14.1 10.10.10.5 <none> Ubuntu 18.04.2 LTS 4.15.0-48-generic docker://18.9.2
kubernetes-node2 Ready <none> 15d v1.14.1 10.10.10.98 <none> Ubuntu 18.04.2 LTS 4.15.0-48-generic docker://18.9.2
The command I used to initialize the cluster:
sudo kubeadm init --apiserver-advertise-address=192.168.59.20 --pod-network-cidr=192.168.0.0/16
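For orientation (an editorial note; the output below is illustrative): 10.96.0.1 is not on any of your VM networks. It is the ClusterIP of the built-in kubernetes Service, allocated from kubeadm's default service CIDR, 10.96.0.0/12, which is distinct from both the node network and the pod network:
kubectl get svc kubernetes
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   15d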
The reason for this is probably an overlap of the pod network with the node network - you set the pod network CIDR to 192.168.0.0/16, which includes your host-only network (192.168.59.0/24).
To solve this, you can change the pod network CIDR to 192.168.0.0/24 (not recommended, as it leaves only 256 addresses for your pod networking).
You can also use a different range for your Calico installation. If you want to do it on a running cluster, here is an instruction.
Another way I tried:
edit the Calico manifest to a different range (for example 10.0.0.0/8), run sudo kubeadm init --apiserver-advertise-address=192.168.59.20 --pod-network-cidr=10.0.0.0/8, and apply the manifest after the init.
Another way would be to use a different CNI, like Flannel (which uses 10.244.0.0/16 by default).
You can find more information about ranges of CNI plugins here.
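A minimal sketch of the Flannel route, assuming you can rebuild the cluster (the reset step and the manifest URL are my assumptions, not part of the original answer):
# On the master: wipe the old control plane, then re-init with Flannel's default CIDR
sudo kubeadm reset
sudo kubeadm init --apiserver-advertise-address=192.168.59.20 --pod-network-cidr=10.244.0.0/16
# Install the Flannel CNI (check the project's README for the current manifest URL)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml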

kubectl cannot detect localhost:8080 with minikube locally

I am trying to run the tutorial at https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/ locally on my Ubuntu 18 machine.
$ minikube start
😄 minikube v1.0.1 on linux (amd64)
🤹 Downloading Kubernetes v1.14.1 images in the background ...
🔥 Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.39.247
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.3-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
💾 Downloading kubeadm v1.14.1
💾 Downloading kubelet v1.14.1
🚜 Pulling images required by Kubernetes v1.14.1 ...
🚀 Launching Kubernetes v1.14.1 using kubeadm ...
⌛ Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑 Configuring cluster permissions ...
🤔 Verifying component health .....
💗 kubectl is now configured to use "minikube"
🏄 Done! Thank you for using minikube!
So far, so good.
Next, I try to run
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Similar response for
$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Likewise for
$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What am I missing?
OK, so I was able to find the answer myself.
~/.kube/config was present from before, so I removed it first.
Next, when I ran the commands again, a config file was created, and it mentions the port as 8443.
So, make sure there is no old ~/.kube/config file present before starting minikube.
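A minimal sketch of that recovery sequence (backing up rather than deleting is my own precaution):
# Move the stale kubeconfig aside instead of deleting it outright
mv ~/.kube/config ~/.kube/config.bak
# minikube start writes a fresh ~/.kube/config pointing at port 8443
minikube start
kubectl config current-context   # should now print "minikube"
kubectl get nodes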

kubectl get nodes not showing workers

I am following this tutorial with 2 VMs running CentOS 7. Everything looks fine (no errors during installation/setup), but I can't see my nodes.
NOTE:
I am running this on VMware VMs
kub1 is my master and kub2 my worker node
kubectl cluster-info output:
[root@kub1 ~]# kubectl cluster-info
Kubernetes master is running at http://kub1:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@kub2 ~]# kubectl cluster-info
Kubernetes master is running at http://kub1:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
nodes:
[root@kub1 ~]# kubectl get nodes
[root@kub1 ~]# kubectl get nodes -a
[root@kub1 ~]#
[root@kub2 ~]# kubectl get nodes -a
[root@kub2 ~]# kubectl get no
[root@kub2 ~]#
cluster events:
[root@kub1 ~]# kubectl get events -a
LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
1h 1h 1 kub2.local Node Normal Starting {kube-proxy kub2.local} Starting kube-proxy.
1h 1h 1 kub2.local Node Normal Starting {kube-proxy kub2.local} Starting kube-proxy.
1h 1h 1 kub2.local Node Normal Starting {kubelet kub2.local} Starting kubelet.
1h 1h 1 node-kub2 Node Normal Starting {kubelet node-kub2} Starting kubelet.
1h 1h 1 node-kub2 Node Normal Starting {kubelet node-kub2} Starting kubelet.
/var/log/messages:
kubelet.go:1194] Unable to construct api.Node object for kubelet: can't get ip address of node node-kub2: lookup node-kub2: no such host
QUESTION: any idea why my nodes are not shown using "kubectl get nodes"?
My issue was that the KUBELET_HOSTNAME value in /etc/kubernetes/kubelet didn't match the hostname.
I commented out that line, then restarted the services, and I could see my worker after that.
Hope that helps.
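A sketch of what that fix looks like (the exact contents of /etc/kubernetes/kubelet vary by install; this excerpt assumes the old CentOS kubelet sysconfig layout, and the hostname shown is illustrative):
# /etc/kubernetes/kubelet (excerpt)
# Commented out so the kubelet registers under the machine's real hostname:
# KUBELET_HOSTNAME="--hostname-override=node-kub2"

# Then restart the services on the worker:
systemctl restart kubelet kube-proxy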
Not sure about your scenario, but I solved it after 3-4 hours of effort.
Solved
I was facing this issue because my Docker cgroup driver was different from the Kubernetes cgroup driver.
I just updated it to cgroupfs using the following commands, as mentioned in the doc.
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
Restart the Docker service: service docker restart.
Reset Kubernetes on the worker node: kubeadm reset.
Join the master again: kubeadm join <><>.
The node was then visible on the master using kubectl get nodes.
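As a quick sanity check before and after the change (my addition; the kubelet config path varies by version and install method), confirm the two drivers actually match:
# Docker's active cgroup driver
docker info --format '{{.CgroupDriver}}'
# kubelet's configured driver (newer kubeadm installs)
grep -i cgroupDriver /var/lib/kubelet/config.yaml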
I had a similar problem after installing k8s using Kubespray on Fedora 31. To debug the issue, I tried to run a random container directly using docker run, which failed with:
docker: Error response from daemon: cgroups: cgroup mountpoint does not exist: unknown.
This is a known problem caused by the cgroup v2 default on Fedora 31, and the fix is to update GRUB to boot with the previous cgroup version:
sudo dnf install grubby
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
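A reboot is required afterwards for the new kernel argument to take effect; you can verify it with cat /proc/cmdline, which should show systemd.unified_cgroup_hierarchy=0.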