How to install kubelet, kubectl, and kubeadm from source? - kubernetes

Because I have modified the Kubernetes source code and want to debug it, I would like to build and install kubelet, kubectl, and kubeadm from source. I have built kubelet, kubectl, and kubeadm and have the resulting binaries. But when I run kubeadm to deploy a cluster, I found that just copying them to /usr/local/bin/ does not work, and I get the following errors:
kubeadm init --apiserver-advertise-address=x.x.x.x --pod-network-cidr=10.244.0.0/16
W0204 20:55:28.785179 5682 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.17.2
[preflight] Running pre-flight checks
[WARNING Service-Kubelet]: kubelet service does not exist
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileExisting-conntrack]: conntrack not found in system path
[ERROR KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster. Kubelet version: "1.18.0-alpha.2.355+845b23232125ca" Control plane version: "1.17.2"
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Could anyone help me install kubelet, kubectl, and kubeadm correctly from source?
Thank you very much.

If you have modified the Kubernetes source code and want to test those changes, I suggest following the developer guide for that. It is hard to use kubeadm for testing modified Kubernetes, because kubeadm ships alongside officially released Kubernetes and it is hard to make it use your modified build.
If you have modified kubeadm itself and want to test that, you can follow the doc.
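That said, here is a rough sketch of how to get past the specific preflight errors above. It assumes a Linux amd64 host, a yum-based distro with the Kubernetes package repo configured, and a clone of github.com/kubernetes/kubernetes; paths and package names may differ on your system. Build from the release tag that matches the control plane so the KubeletVersion skew check passes, and let the official packages supply the kubelet systemd unit and conntrack that the other two errors complain about:
cd kubernetes                                    # your clone of kubernetes/kubernetes
git checkout v1.17.2                             # match the control-plane version
make WHAT='cmd/kubelet cmd/kubectl cmd/kubeadm'  # binaries land in _output/bin/
# Install the official packages first so the kubelet systemd unit, its
# kubeadm drop-in, and the conntrack dependency are all in place ...
sudo yum install -y kubelet-1.17.2 kubeadm-1.17.2 kubectl-1.17.2
sudo systemctl enable kubelet
# ... then replace the packaged binaries with your own build (the systemd
# unit starts /usr/bin/kubelet by absolute path, so overwrite there,
# not /usr/local/bin)
sudo cp _output/bin/kubelet _output/bin/kubectl _output/bin/kubeadm /usr/bin/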

Related

coredns pods created by kubeadm init command keep failing

When I run the kubeadm init command, all pods are running except the coredns pods. When I describe the pods, it shows that CNI initialization failed.
Do I need a network plugin installed before running kubeadm init?
No, the network add-on is only added after kubeadm init; the documentation is explicit on this topic: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
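For example, with flannel, which matches the --pod-network-cidr=10.244.0.0/16 used elsewhere on this page (the manifest URL below is the one the flannel project published at the time and may have moved since):
# run after kubeadm init has completed and kubectl is configured;
# the coredns pods stay unready until this CNI plugin initializes
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml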

kubeadm init is failing

I was trying to set up a Kubernetes cluster on CentOS 7 and I am facing an issue while running the kubeadm init command below.
kubeadm init --apiserver-advertise-address=10.70.6.18 --pod-network-cidr=192.168.0.0/16
I1224 18:20:55.388552 11136 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: dial tcp: lookup dl.k8s.io on 10.171.221.11:53: no such host
I1224 18:20:55.388679 11136 version.go:95] falling back to the local client version: v1.13.1
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.1: output: v1.13.1: Pulling from kube-apiserver
73e3e9d78c61: Pulling fs layer
e08dba503a39: Pulling fs layer
error pulling image configuration: Get https://storage.googleapis.com/asia.artifacts.google-containers.appspot.com/containers/images/sha256:40a63db91ef844887af73e723e40e595e4aa651ac2c5637332d719e42abc4dd2: x509: certificate signed by unknown authority
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.1: output: v1.13.1: Pulling from kube-controller-manager
On your worker node(s), try using a proxy or VPN to change your IP. I think the registry you are trying to pull from has blocked your IP.
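If you go the proxy route, note that the Docker daemon does not read shell proxy variables on its own; the usual way is a systemd drop-in (a sketch; proxy.example.com:3128 is a placeholder for your proxy):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF' >/dev/null
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker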

why is the Kubernetes kubeadm init command unable to pull the images from the repository k8s.gcr.io

I have 2 VMs that run a Kubernetes master and a slave node that I have set up locally. Until now everything was working fine, but suddenly it started giving errors when I try to start the master with the kubeadm init command. I have copied the error below.
shayeeb@ubuntu:~$ sudo kubeadm init
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0718 11:04:57.038464 20370 kernel_validator.go:81] Validating kernel version
I0718 11:04:57.038896 20370 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver-amd64:v1.11.1]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-controller-manager-amd64:v1.11.1]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-scheduler-amd64:v1.11.1]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-proxy-amd64:v1.11.1]: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
You can also run the following command rather than writing the YAML:
kubeadm init --kubernetes-version=1.11.0 --apiserver-advertise-address=<public_ip> --apiserver-cert-extra-sans=<private_ip>
If you are using the flannel network, run the following command:
kubeadm init --kubernetes-version=1.11.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<public_ip> --apiserver-cert-extra-sans=<internal_ip>
The latest version (v1.11.1) is not pulling. You can try specifying the version, for example:
kubeadm config images pull --kubernetes-version=v1.11.0
The image seems to have been removed:
https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/kube-apiserver-arm64?gcrImageListsize=50&gcrImageListquery=%255B%257B_22k_22_3A_22_22_2C_22t_22_3A10_2C_22v_22_3A_22_5C_22v1.11_5C_22_22%257D%255D
As a workaround, pull the latest available images and ignore preflight errors:
kubeadm config images pull --kubernetes-version=v1.11.0
kubeadm init [args] --ignore-preflight-errors=all
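To confirm which images a given kubeadm release expects before pulling, you can also list them first (both subcommands are available from kubeadm v1.11 on, if I recall the release notes correctly):
kubeadm config images list --kubernetes-version=v1.11.0
kubeadm config images pull --kubernetes-version=v1.11.0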
Try this approach, as it works:
You can create a config.yaml file, for example:
cat config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.44.70.201
networking:
  podSubnet: 192.168.0.0/16
kubernetesVersion: 1.11.0
and run kubeadm init --config=config.yaml
I was getting a similar error. A proxy was causing "kubeadm config images pull" to time out.
A similar issue was mentioned in https://github.com/kubernetes/kubeadm/issues/324
https://github.com/kubernetes/kubeadm/issues/182#issuecomment-1137419094
The solution below, provided in the issues above, worked for me:
systemctl set-environment HTTP_PROXY=http://proxy.example.com
systemctl set-environment HTTPS_PROXY=http://proxy.example.com
systemctl restart containerd.service
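One caveat worth adding (the CIDRs below are assumptions; substitute your own service and pod ranges): also exempt local and in-cluster addresses, otherwise node-to-node and API traffic gets routed through the proxy as well.
systemctl set-environment NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,10.244.0.0/16
systemctl restart containerd.service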

kubeadm join - unable to check if the container runtime

While adding a new node to an existing 1.9.0 cluster, kubeadm gives this error message.
My cluster is running on CentOS 7. The Docker daemon is running, but there is no file /var/run/dockershim.sock.
How do I resolve this error message?
kubeadm join --token
[preflight] Some fatal errors occurred:
[ERROR CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
thanks
SR
There is a discussion: I have the following problem when I want to use the client for CRI #153,
which follows up with the kubeadm issue: How to get kubeadm init to not fail? #733
The issue was solved with Pull Request kubeadm init: skip checking cri socket in preflight checks #58802
Version v1.9.3 is the nearest release after the PR merge.
As a suggested workaround, you can disable the CRI check:
kubeadm init --ignore-preflight-errors=cri
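For the join case in this question, that looks like the following sketch (the master address, token, and hash are placeholders; note that kubeadm releases before roughly v1.10 used --skip-preflight-checks instead of --ignore-preflight-errors):
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=cri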

CoreOS v1.6.1 not starting

I am working on setting up a new Kubernetes cluster using the CoreOS documentation, with the CoreOS v1.6.1 image. I am following the CoreOS master setup documentation. I looked in the journalctl logs and I see that the kube-apiserver seems to exit and restart.
The following is a journalctl log for the kube-apiserver:
checking backoff for container "kube-apiserver" in pod "kube-apiserver-10.138.192.31_kube-system(16c7e04edcd7e775efadd4bdcb1940c4)"
Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-10.138.192.31_kube-system(16c7e04edcd7e775efadd4bdcb1940c4)
Error syncing pod 16c7e04edcd7e775efadd4bdcb1940c4 ("kube-apiserver-10.138.192.31_kube-system(16c7e04edcd7e775efadd4bdcb1940c4)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-10.138.192.31_kube-system(16c7e04edcd7e775efadd4bdcb1940c4)"
I am wondering if it's because I need to start the new etcd3 version instead of etcd2? Any hints or suggestions are appreciated.
The following is my cloud-config:
coreos:
  etcd2:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new:
    discovery: https://discovery.etcd.io/33e3f7c20be0b57daac4d14d478841b4
    # multi-region deployments, multi-cloud deployments, and Droplets without
    # private networking need to use $public_ipv4:
    advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
    initial-advertise-peer-urls: http://$private_ipv4:2380
    # listen on the official ports 2379, 2380 and one legacy port 4001:
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    listen-peer-urls: http://$private_ipv4:2380
  fleet:
    public-ip: $private_ipv4 # used for fleetctl ssh command
  units:
    - name: etcd2.service
      command: start
However, I have tried the CoreOS v1.5 images and they work fine. It's with the CoreOS v1.6 images that I am not able to get the kube-apiserver running, for some reason.
You are using etcd2, so you need to pass the flag '--storage-backend=etcd2' to your kube-apiserver in your manifest.
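For illustration, the flag goes into the command list of the kube-apiserver static pod manifest. This is a sketch based on the CoreOS guide's layout; the file path, hyperkube entrypoint, and etcd endpoint are assumptions that may differ in your setup:
# excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - /hyperkube
    - apiserver
    - --storage-backend=etcd2
    - --etcd-servers=http://127.0.0.1:2379
    # ...remaining flags from the guide unchanged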
You are using etcd2; I think you could try etcd3 instead.
You said:
I am wondering if it's because I need to start the new etcd3 version instead of etcd2? Any hints or suggestions are appreciated.
I would recommend reading this doc to learn how to upgrade etcd.