kubeadm init is failing - Kubernetes

I was trying to set up a Kubernetes cluster on CentOS 7 and I am facing an issue while running the kubeadm init command below.
kubeadm init --apiserver-advertise-address=10.70.6.18 --pod-network-cidr=192.168.0.0/16
I1224 18:20:55.388552 11136 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: dial tcp: lookup dl.k8s.io on 10.171.221.11:53: no such host
I1224 18:20:55.388679 11136 version.go:95] falling back to the local client version: v1.13.1
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.1: output: v1.13.1: Pulling from kube-apiserver
73e3e9d78c61: Pulling fs layer
e08dba503a39: Pulling fs layer
error pulling image configuration: Get https://storage.googleapis.com/asia.artifacts.google-containers.appspot.com/containers/images/sha256:40a63db91ef844887af73e723e40e595e4aa651ac2c5637332d719e42abc4dd2: x509: certificate signed by unknown authority
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.1: output: v1.13.1: Pulling from kube-controller-manager

On your worker node(s), try using a proxy or VPN to change your IP. I think the registry you are trying to pull from has blocked your IP.
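Before assuming the registry has blocked your IP, it may be worth checking whether something on the network path (a corporate proxy doing TLS interception, for example) is rewriting the certificate, since that is what "x509: certificate signed by unknown authority" usually indicates. A quick check with plain openssl, nothing kubeadm-specific:

# Print the issuer of the certificate this host actually receives.
# A corporate proxy/firewall CA instead of a public CA means TLS
# interception on your network is the likely cause of the x509 error.
openssl s_client -connect storage.googleapis.com:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer

If that turns out to be the case, adding the proxy's CA certificate to the system trust store (update-ca-trust on CentOS 7) is usually a cleaner fix than changing IPs.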

Related

Not able to pull registry.k8s.io in Microk8s start

I'm trying to kickstart a MicroK8s cluster, but the Calico pod stays in Pending status because of a 403 error when pulling registry.k8s.io/pause:3.7.
This is the error:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "registry.k8s.io/pause:3.7": failed to pull image "registry.k8s.io/pause:3.7": failed to pull and unpack image "registry.k8s.io/pause:3.7": failed to resolve reference "registry.k8s.io/pause:3.7": pulling from host registry.k8s.io failed with status code [manifests 3.7]: 403 Forbidden
We're talking about a new server, which might be missing some configuration.
The insecure registry, according to the MicroK8s documentation, is enabled at localhost:32000.
I've enabled DNS on MicroK8s, but nothing has changed.
If I try to pull with Docker directly, I get a forbidden error.
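No answer was recorded for this one, but since a 403 (rather than a DNS or TLS failure) means the registry actively refused the request, two quick checks can narrow down whether the problem is the server's network or MicroK8s itself. Both commands below are standard; microk8s ctr is the containerd client bundled with MicroK8s:

# Does the registry answer at all from this host? A 403 here as well
# would implicate the server's IP or an upstream proxy, not MicroK8s.
curl -I https://registry.k8s.io/v2/
# Try the exact failing image through MicroK8s' own containerd:
microk8s ctr image pull registry.k8s.io/pause:3.7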

How to install kubelet, kubectl, and kubeadm from source?

Because I have modified the Kubernetes source code and want to debug it, I would like to build and install kubelet, kubectl, and kubeadm from source. I have built kubelet, kubectl, and kubeadm and have the binaries. When I ran kubeadm to deploy a cluster, I found that just copying them to /usr/local/bin/ does not work, and I got the following errors:
kubeadm init --apiserver-advertise-address=x.x.x.x --pod-network-cidr=10.244.0.0/16
W0204 20:55:28.785179 5682 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.17.2
[preflight] Running pre-flight checks
[WARNING Service-Kubelet]: kubelet service does not exist
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileExisting-conntrack]: conntrack not found in system path
[ERROR KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster. Kubelet version: "1.18.0-alpha.2.355+845b23232125ca" Control plane version: "1.17.2"
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Could anyone tell me how to install kubelet, kubectl, and kubeadm correctly from source?
Thank you very much.
If you have modified the Kubernetes source code and want to test those changes, I suggest following the developer guide. It's hard to use kubeadm for testing a modified Kubernetes, because kubeadm is packaged with officially released Kubernetes, and it's hard to make it use your modified build.
If you have modified kubeadm itself and want to test that, you can follow the doc
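That said, to make the specific preflight errors shown above go away when bypassing the packages, something along these lines is usually needed. This is a sketch: the v1.17.2 tag is taken from the log, and the binary output directory varies by build setup:

# Build all three binaries from the same tag so the version-skew check passes.
cd $GOPATH/src/k8s.io/kubernetes
git checkout v1.17.2
make WHAT="cmd/kubelet cmd/kubeadm cmd/kubectl"
# Built binaries land under _output/; the exact subdirectory varies by build mode.
cp _output/bin/kubelet _output/bin/kubeadm _output/bin/kubectl /usr/local/bin/
# Fixes "conntrack not found in system path" (package name differs by distro).
yum install -y conntrack-tools    # Debian/Ubuntu: apt-get install -y conntrack
# kubeadm also expects a kubelet systemd service ("kubelet service does not
# exist"); when bypassing the packages, install kubelet.service and
# 10-kubeadm.conf from the kubernetes/release repo by hand.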

Why is the Kubernetes kubeadm init command unable to pull images from the k8s.gcr.io repository?

I have 2 VMs that run a Kubernetes master and a slave node that I have set up locally. Until now everything was working fine, but suddenly it started giving errors when I try to start the master with the kubeadm init command. I have copied the error below.
shayeeb@ubuntu:~$ sudo kubeadm init
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0718 11:04:57.038464 20370 kernel_validator.go:81] Validating kernel version
I0718 11:04:57.038896 20370 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver-amd64:v1.11.1]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-controller-manager-amd64:v1.11.1]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-scheduler-amd64:v1.11.1]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-proxy-amd64:v1.11.1]: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
You can also run the following command rather than writing the YAML:
kubeadm init --kubernetes-version=1.11.0 --apiserver-advertise-address=<public_ip> --apiserver-cert-extra-sans=<private_ip>
If you are using the Flannel network, run the following command:
kubeadm init --kubernetes-version=1.11.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<public_ip> --apiserver-cert-extra-sans=<internal_ip>
The latest version (v1.11.1) is not pulling. You can try specifying the version, like:
kubeadm config images pull --kubernetes-version=v1.11.0
The image seems to have been removed:
https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/kube-apiserver-arm64?gcrImageListsize=50&gcrImageListquery=%255B%257B_22k_22_3A_22_22_2C_22t_22_3A10_2C_22v_22_3A_22_5C_22v1.11_5C_22_22%257D%255D
As a workaround, pull the latest available images and ignore preflight errors:
kubeadm config images pull --kubernetes-version=v1.11.0
kubeadm init [args] --ignore-preflight-errors=all
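Before pulling, you can also list exactly which images and tags a given kubeadm version will request; kubeadm config images list is the companion of the pull subcommand used above:

kubeadm config images list --kubernetes-version=v1.11.0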
Try this approach, as it works:
You can create a config.yaml file, for example:
cat config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.44.70.201
networking:
  podSubnet: 192.168.0.0/16
kubernetesVersion: 1.11.0
and run kubeadm init --config=config.yaml
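The same file can also drive the image pre-pull step; as far as I know, kubeadm config images pull accepts --config from v1.11 on:

kubeadm config images pull --config=config.yaml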
I was getting a similar error. A proxy was causing "kubeadm config images pull" to time out.
A similar issue was mentioned in https://github.com/kubernetes/kubeadm/issues/324
https://github.com/kubernetes/kubeadm/issues/182#issuecomment-1137419094
The solution below, provided in the above issues, worked for me:
systemctl set-environment HTTP_PROXY=http://proxy.example.com
systemctl set-environment HTTPS_PROXY=http://proxy.example.com
systemctl restart containerd.service
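One caveat: systemctl set-environment does not survive a reboot. A persistent variant of the same idea, as a sketch (the proxy URL is still a placeholder):

# Equivalent but persistent: a systemd drop-in for containerd.
mkdir -p /etc/systemd/system/containerd.service.d
cat <<'EOF' >/etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com"
Environment="HTTPS_PROXY=http://proxy.example.com"
EOF
systemctl daemon-reload
systemctl restart containerd.service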

Unauthorized when trying to allow nodes to join a Kubernetes cluster

I had a two-node cluster in which one was the master and the other a slave. It had been running for the last 26 days. Today I tried to remove a node using kubeadm reset and add it again, and the kubelet was not able to start:
cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
The binary conntrack is not installed, this can cause failures in network connection cleanup.
server.go:376] Version: v1.10.2
feature_gate.go:226] feature gates: &{{} map[]}
plugins.go:89] No cloud provider specified.
server.go:233] failed to run Kubelet: cannot create certificate signing request: Unauthorized
while the join command was successful:
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "aaaaa:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://aaaaa:6443"
[discovery] Requesting info from "https://aaaaaa:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server
[discovery] Successfully established connection with API Server "aaaa:6443"
This node has joined the cluster:
Certificate signing request was sent to master and a response
was received.
The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
IMO the log line failed to run Kubelet: cannot create certificate signing request: Unauthorized is the source of the problem, but I do not know why it occurs or how to fix it.
TIA. I can give more details, but I am not sure which ones would help.
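No answer was posted here, but one common cause fits the timeline: the TLS bootstrap that kubeadm join performs authenticates the node's certificate signing request with a bootstrap token, and those tokens expire after 24 hours by default, which a 26-day-old cluster is long past. A sketch of how to check and mint a fresh one on the master (standard kubeadm subcommands; if --print-join-command is missing on your version, substitute the new token into the original join command):

kubeadm token list                          # any non-expired tokens left?
kubeadm token create --print-join-command   # mint a fresh token and print the full join command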

kubeadm join - unable to check if the container runtime is running

While adding a new node to an existing 1.9.0 cluster, kubeadm gives this error message. My cluster is running on a CentOS 7 server. The Docker daemon is running, but there is no /var/run/dockershim.sock file.
How do I resolve this error message?
kubeadm join --token
[preflight] Some fatal errors occurred:
[ERROR CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
thanks
SR
There is a discussion: I have the following problem when I want to use the client for CRI #153
that follows up with the kubeadm issue: How to get kubeadm init to not fail? #733
The issue was solved with Pull Request kubeadm init: skip checking cri socket in preflight checks #58802
Version v1.9.3 is the nearest release after the PR merge.
As a suggested workaround, you can disable the CRI check:
kubeadm init --ignore-preflight-errors=cri
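Since the question is about joining a node rather than initializing one, the same flag applies to kubeadm join (token and address elided as in the question; on releases that predate the flag, --skip-preflight-checks was its predecessor, as far as I recall):

kubeadm join --token <token> <master-ip>:6443 --ignore-preflight-errors=cri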