I have an Ubuntu 16.04 machine acting as the Kubernetes master. I have installed Kubernetes v1.13.1 and I am using Weave for networking. I have 2 Raspberry Pi devices running the same version of Kubernetes. I created a cluster and joined the Raspberry Pis to the Ubuntu kube master. I started a deployment and everything looked to be working fine.
When I checked the logs of the container, I found out that it was not able to connect to the internet. I tried pinging but got no results. When I ran the command to describe the pod, I got the following:
Warning FailedCreatePodSandBox 42m (x3 over 42m) kubelet, node02 (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dea99f80488031b84b7b1f934343e54d877adf931071401651628505d52f55f9" network for pod "deployment-cnfc5": NetworkPlugin cni failed to set up pod "deployment-cnfc5_matrix-device" network: unable to allocate IP address: Post http://127.0.0.1:6784/ip/dea99f80488031b84b7b1f934343e54d877adf931071401651628505d52f55f9: dial tcp 127.0.0.1:6784: connect: connection refused
I have checked the directory /etc/cni/net.d and it contains 10-weave.conflist on both the master and the worker node. I have also checked the directory /opt/cni/bin and found the following on the master node:
bridge flannel ipvlan macvlan ptp tuning weave-ipam weave-plugin-2.5.1
dhcp host-local loopback portmap sample vlan weave-net
and on the worker, I got:
bridge flannel ipvlan macvlan ptp tuning weave-ipam weave-plugin-2.5.0
dhcp host-local loopback portmap sample vlan weave-net weave-plugin-2.5.1
Can anyone please let me know what I can do to resolve this issue? Thanks.
I initialized the kube master using the command below:
sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=192.168.0.142
and installed weave using:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Related
I am moving from Docker Desktop to Minikube and have been having some trouble getting MetalLB to work properly. I am starting Minikube on macOS Monterey.
I've started a Minikube profile using the command below:
minikube start -p myprofile --cpus=4 --memory='32g' --disk-size='100000mb' \
  --driver=hyperkit --kubernetes-version=v1.21.8 --addons=metallb
When I check the pods for MetalLB, they are in an ImagePullBackOff status. The pods are trying to pull images docker.io/metallb/controller:v0.9.6 and docker.io/metallb/speaker:v0.9.6 respectively.
NAME READY STATUS RESTARTS AGE
controller-5fd6788656-jvj4m 0/1 ImagePullBackOff 0 26m
speaker-ctdmw 0/1 ImagePullBackOff 0 37m
After running eval $(minikube -p myprofile docker-env) and manually pulling with docker pull docker.io/metallb/speaker:v0.9.6, I get the error:
Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on <ip-address>:53: read udp <ip-address>:49978-><ip-address>:53: i/o timeout
I'm not certain if it's useful, but after SSHing into the Minikube node, I've also verified ping google.com does not return a result.
When starting my Minikube profile, I had the following output:
[myprofile] minikube v1.28.0 on Darwin 12.3.1
Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
Using the hyperkit driver based on existing profile
Starting control plane node myprofile in cluster myprofile
Restarting existing hyperkit VM for "myprofile" ...
This VM is having trouble accessing https://k8s.gcr.io
To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
Preparing Kubernetes v1.21.8 on Docker 20.10.20 ...
Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
  - Using image metallb/speaker:v0.9.6
  - Using image metallb/controller:v0.9.6
Enabled addons: storage-provisioner, metallb, default-storageclass
/usr/local/bin/kubectl is version 1.25.4, which may have incompatibilities with Kubernetes 1.21.8.
  - Want kubectl v1.21.8? Try 'minikube kubectl -- get pods -A'
Done! kubectl is now configured to use "myprofile" cluster and "default" namespace by default
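Since ping also fails from inside the VM, a few DNS checks from the Minikube node might help pinpoint whether the VM can reach any resolver at all (nslookup may or may not ship in the minikube ISO; curl is an alternative):
minikube ssh -p myprofile
# inside the VM: which resolver is configured?
cat /etc/resolv.conf
# can the VM resolve and reach the registry at all?
nslookup registry-1.docker.io
curl -v https://registry-1.docker.io/v2/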
I am trying to set up a kube cluster using Oracle VM VirtualBox. The kubeadm command is failing to start the cluster.
It waits on the following:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
Then it fails because of the following:
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
OS: Ubuntu 16.04-xenial, Docker version: 18.09.7, Kube version: Kubernetes v1.23.5, Cluster type: Flannel
OS: Ubuntu 16.04-xenial, Docker version: 20.10.7, Kube version: Kubernetes v1.23.5, Cluster type: Calico
What I tried so far, with the help of Google:
turned off swap (which was already done)
the kube/Docker combinations listed above
restarting the kubelet service
ensured that static IPs have been allocated, and other prerequisites
other bits I do not remember
Can anyone assist? I am new to Kube.
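For what it's worth, a first step that often narrows this down is reading the kubelet's own logs and checking the cgroup driver; with Kubernetes 1.22+ the kubelet defaults to the systemd cgroup driver while Docker defaults to cgroupfs, and that mismatch commonly produces exactly this connection-refused on port 10248 (whether that is the cause here is only an assumption):
# is the kubelet running at all?
systemctl status kubelet
# the last log lines usually name the real failure (cgroup driver, swap, bad config, ...)
sudo journalctl -xeu kubelet | tail -n 50
# kubeadm for 1.22+ expects "systemd" here by default
docker info | grep -i cgroup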
I set up a Kubernetes cluster with Calico.
The setup is "simple":
1x master (local network, ok)
1x node (local network, ok)
1x node (cloud server, not ok)
All Debian Buster with Docker 19.03.
On the cloud server the calico pods do not come up:
calico-kube-controllers-token-x:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 47m (x50 over 72m) kubelet Pod sandbox changed, it will be killed and re-created.
Warning FailedMount 43m kubelet MountVolume.SetUp failed for volume "calico-kube-controllers-token-x" : failed to sync secret cache: timed out waiting for the condition
Normal SandboxChanged 3m41s (x78 over 43m) kubelet Pod sandbox changed, it will be killed and re-created.
calico-node-x:
Warning Unhealthy 43m (x5 over 43m) kubelet Liveness probe failed: calico/node is not ready: Felix is not live: Get "http://localhost:9099/liveness": dial tcp [::1]:9099: connect: connection refused
Warning Unhealthy 14m (x77 over 43m) kubelet Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/bird/bird.ctl: connect: no such file or directory
Warning BackOff 4m26s (x115 over 39m) kubelet Back-off restarting failed container
My guess is that there is something wrong with the IP/network config, but I have not figured out what.
Required ports (k8s & BGP) are forwarded from the router; I also tried connecting the master directly to the internet.
--control-plane-endpoint is a hostname and publicly resolvable.
Calico is using BGP peering (with the public IP as peer).
The entry that worries me the most: kubectl get --raw /api displays the local IP.
I tried to find a way to change this to the public IP of the master, without success.
Anyone got a clue what to try next?
After spending some additional time on analysis, the problem turned out to be that the API server address being distributed was the local IP, not the DNS name.
I created a VPN with WireGuard from the cloud node to the local master, so the local IP of the master is reachable from the cloud node.
I don't know if that is the cleanest solution, but it works.
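For reference, a minimal sketch of the WireGuard peering described above, with every key, address, and endpoint being a placeholder rather than a value from this setup:
# /etc/wireguard/wg0.conf on the cloud node (wireguard-tools)
[Interface]
Address = 10.200.0.2/24                     # tunnel address of the cloud node
PrivateKey = <cloud-node-private-key>

[Peer]
PublicKey = <master-public-key>
Endpoint = master.example.org:51820         # publicly reachable name of the local network
AllowedIPs = 10.200.0.0/24, 192.168.0.0/24  # route the master's local subnet through the tunnel
PersistentKeepalive = 25

# bring the tunnel up
wg-quick up wg0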
Run this command to verify whether the IP_AUTODETECTION_METHOD environment variable has been set in the calico-node DaemonSet:
kubectl get daemonset/calico-node -n kube-system --output json | jq '.spec.template.spec.containers[].env[] | select(.name | startswith("IP"))'
Run this command on each of your k8s nodes to find the correct network interface:
ifconfig
Explicitly set the IP_AUTODETECTION_METHOD environment variable to make sure calico-node communicates over the correct network interface of the K8s node:
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=en.*
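After setting the variable, the DaemonSet rolls out new calico-node pods; something like the following should confirm the change was picked up:
kubectl rollout status daemonset/calico-node -n kube-system
kubectl get daemonset/calico-node -n kube-system --output json | jq '.spec.template.spec.containers[].env[] | select(.name | startswith("IP"))'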
Flannel on the node restarts constantly.
The log is as follows:
root@debian:~# docker logs faa668852544
I0425 07:14:37.721766 1 main.go:514] Determining IP address of default interface
I0425 07:14:37.724855 1 main.go:527] Using interface with name eth0 and address 192.168.50.19
I0425 07:14:37.815135 1 main.go:544] Defaulting external address to interface address (192.168.50.19)
E0425 07:15:07.825910 1 main.go:241] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-arm-bg9rn': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-arm-bg9rn: dial tcp 10.96.0.1:443: i/o timeout
master configuration:
Ubuntu: 16.04
node:
embedded system with Debian rootfs (Linux 4.9)
Kubernetes version: v1.14.1
Docker version: 18.09
Flannel version: v0.11.0
I expect flannel to run normally on the node.
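As a hedged first check: the log shows flannel timing out against the cluster service IP 10.96.0.1:443, which is reached through kube-proxy's iptables rules on the node, so it is worth verifying basic reachability from the node itself (an HTTP 401/403 reply still proves the path works; only a timeout indicates a network problem):
# on the node: can the kubernetes service IP be reached at all?
curl -k --max-time 5 https://10.96.0.1:443/version
# is kube-proxy running on this node? flannel depends on the service IP that kube-proxy programs
kubectl get pods -n kube-system -o wide | grep kube-proxy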
First, for flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
kubeadm init --pod-network-cidr=10.244.0.0/16
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running
sysctl net.bridge.bridge-nf-call-iptables=1
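To make that setting survive a reboot, one option (assuming a standard systemd-based distro with /etc/sysctl.d) is:
# the bridge sysctls only exist once the br_netfilter module is loaded
sudo modprobe br_netfilter
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system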
Next, create the ClusterRole and ClusterRoleBinding (along with the rest of the flannel resources) as follows:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
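Once applied, the flannel DaemonSet pods should reach Running on every node; a quick way to confirm (labels as used in that manifest):
kubectl get pods -n kube-system -l app=flannel -o wide
kubectl get nodes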
I have a two-node Kubernetes setup in VirtualBox. The master is up and running fine, but the worker node is stuck in the "NotReady" state.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 1d v1.10.2
node NotReady <none> 1h v1.10.2
"journalctl -u kubelet" command on worker node is reporting networking related errors:
kuberuntime_manager.go:757] checking backoff for container "install-cni" in pod "kube-flannel-ds-zjlvn_kube-system(873fa36d-4b83-11e8-9997-080027afb5ab)"
remote_runtime.go:278] ContainerStatus "459643e54de7f82df8ada0f60e8f3d51d42c5ce348747a66e20ad5720155e63f" from runtime service failed: rpc error: code = U
kuberuntime_container.go:636] failed to remove pod init container "install-cni": failed to get container status "459643e54de7f82df8ada0f60e8f3d51d42c5ce34
kuberuntime_manager.go:757] checking backoff for container "install-cni" in pod "kube-flannel-ds-zjlvn_kube-system(873fa36d-4b83-11e8-9997-080027afb5ab)"
kuberuntime_manager.go:767] Back-off 10s restarting failed container=install-cni pod=kube-flannel-ds-zjlvn_kube-system(873fa36d-4b83-11e8-9997-080027afb5a
pod_workers.go:186] Error syncing pod 873fa36d-4b83-11e8-9997-080027afb5ab ("kube-flannel-ds-zjlvn_kube-system(873fa36d-4b83-11e8-9997-080027afb5ab)"), sk
cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni con
cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni con
cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni con
I am running Kubernetes version 1.10 and docker version 1.13.1. Could you please help me identify the root cause and resolution for this issue?
Well, the thing is, when you want to form a Kubernetes cluster, you are required to deploy a CNI plugin that provides networking between your pods. The error you have shown here is due to a CNI plugin not being installed or not being configured properly.
The kube-dns pod will stay in the Pending state until a CNI plugin is deployed on your cluster. Once kube-dns moves to a Running state (after deploying the CNI provider), you can run your application workloads.
If you have not deployed a CNI plugin yet, there are several to choose from:
Calico: Provides Pod networking via standard BGP. (Follow the documentation for further info)
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
Weave: Creates an overlay network.
export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
Flannel: Creates an overlay network treating each host as a subnet.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
Container traffic needs to be made visible to iptables, and you can do that by running:
sysctl net.bridge.bridge-nf-call-iptables=1
This is required by Flannel and Weave to function.
Please do refer to the documentation of each CNI plugin to decide which would be suitable for your cluster.
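Whichever plugin you pick, you can watch it come up and confirm the worker flips to Ready once the CNI config lands in /etc/cni/net.d:
# the CNI pods and kube-dns should all reach Running
kubectl get pods -n kube-system -o wide
# the worker should move from NotReady to Ready shortly afterwards
kubectl get nodes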