How to install kube-apiserver on CentOS? - kubernetes

I have installed etcd and Kubernetes on CentOS, and now I want to install kube-apiserver. I installed kube-apiserver via snap:
sudo yum install epel-release
sudo yum install snapd
sudo systemctl enable --now snapd.socket
sudo ln -s /var/lib/snapd/snap /snap
sudo snap install kube-apiserver
I started kube-apiserver following the guide at this link.
Unfortunately, it failed with the error etcd certificate file not found in /etc/kubernetes/apiserver/apiserver.pem. But the certificate file does exist. How can I run kube-apiserver successfully?
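For context, the etcd-related flags I am trying to pass to kube-apiserver look roughly like this (the exact paths follow the error message and my layout, so treat them as assumptions):
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --etcd-cafile=/etc/kubernetes/apiserver/ca.pem \
  --etcd-certfile=/etc/kubernetes/apiserver/apiserver.pem \
  --etcd-keyfile=/etc/kubernetes/apiserver/apiserver-key.pem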

I don't know the reason for your failure, but I suggest you install Kubernetes with kubeadm; it's a great k8s tool. If you install k8s with kubeadm, kube-apiserver will be installed as a k8s pod. The guide to installing kubeadm is at this link.
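As a rough sketch (based on the official kubeadm installation guide for CentOS/RHEL 7; the repository definition and version handling here are my assumptions, so adjust them for your environment), the installation looks like this:
# add the Kubernetes yum repository
sudo tee /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# the kubeadm docs recommend setting SELinux to permissive for this setup
sudo setenforce 0
# install the components and start kubelet
sudo yum install -y kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
# initialize the control plane; kube-apiserver then runs as a static pod managed by kubelet
sudo kubeadm init --pod-network-cidr=10.244.0.0/16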
When I run the command kubectl get pods -A:
[karl@centos-linux ~]$ kubectl get pods -A
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-64pt6                      1/1     Running   6          4d18h
kube-system   coredns-66bff467f8-xpnsr                      1/1     Running   6          4d18h
kube-system   etcd-centos-linux.shared                      1/1     Running   6          4d18h
kube-system   kube-apiserver-centos-linux.shared            1/1     Running   6          4d18h
kube-system   kube-controller-manager-centos-linux.shared   1/1     Running   6          4d18h
kube-system   kube-flannel-ds-amd64-48stf                   1/1     Running   8          4d18h
kube-system   kube-proxy-9w8gh                              1/1     Running   6          4d18h
kube-system   kube-scheduler-centos-linux.shared            1/1     Running   6          4d18h
kube-apiserver-centos-linux.shared is the kube-apiserver pod, so it was installed successfully.

I suggest using a standard tool such as kubeadm to install Kubernetes on CentOS. kubeadm init will generate the necessary certificates and install all the Kubernetes control plane components, including the Kubernetes API server.
Following this guide, you should be able to install a single-control-plane Kubernetes cluster.
kubeadm also supports clusters with multiple control plane nodes, as well as clusters with completely separate etcd nodes.
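For illustration, bootstrapping the first node of a multi-control-plane cluster looks roughly like this (a sketch; the endpoint is a placeholder for a load-balanced address in front of your control plane nodes):
# initialize the first control plane node behind a shared endpoint
kubeadm init --control-plane-endpoint "k8s-api.example.com:6443" --upload-certs --pod-network-cidr=10.244.0.0/16
# join further control plane nodes with the 'kubeadm join ... --control-plane --certificate-key ...' command printed above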

Related

How to install Kubernetes on SUSE Linux Enterprise Server 15 virtual machines?

We are trying to install Kubernetes on SUSE Enterprise Linux Server v15. We found that there is no way to install k8s using kubeadm. SUSE provides the Container as a Service Platform (CaaSP) to install k8s.
All we have is a few virtual machines and a SUSE subscription. Can we install CaaSP on them?
We could not find any documentation on installing it on virtual machines.
Is there any documentation for a step-by-step CaaSP installation on virtual machines?
Kubeadm on SLES
It is possible to install Kubernetes on SUSE Linux Enterprise Server 15 using kubeadm.
You can find a step-by-step example below.
The example was tested on the following cloud VM images:
GCP:
SUSE Linux Enterprise Server 15 SP1 x86_64
AWS:
openSUSE-Leap-15.2-v20200710-HVM-x86_64-548f7b74-f1d6-437e-b650-f6315f6d8aa3-ami-0f5745b812a5b7654.4 - ami-023643495f15f104b
suse-sles-15-sp1-v20200615-hvm-ssd-x86_64 - ami-0044ae6906d786f4b
Azure:
SUSE Enterprise Linux 15 SP1 +Patching
So it has a good chance of working with other images with only a few changes.
It was also tested on the Vagrant box trueability/sles-15-sp1, which required a few additional steps because of expired subscription keys. I used OSS repositories and ignored the expiration errors:
# add OSS repository for software installation
$ zypper addrepo http://download.opensuse.org/distribution/leap/15.2/repo/oss/ public
# add repository for installing newer Docker version
$ zypper addrepo https://download.opensuse.org/repositories/Virtualization:containers/openSUSE_Leap_15.0/Virtualization:containers.repo virt
# install symbols required by Docker:
$ zypper install libseccomp-devel
# turn off all swap partitions. Comment out the appropriate /etc/fstab entry as well.
$ swapoff -a
# The rest of the steps are similar, except for an additional argument during cluster initialization.
# This box uses btrfs for /var/lib/docker and kubeadm complains about it.
# I've just asked kubeadm to ignore that fact.
# Even with btrfs it can start and run pods, but there might be some problems with Persistent Volume usage,
# so consider using an additional xfs or ext4 partition for /var/lib/docker
$ kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
Cloud VMs:
Cloud SLES 15 SP1 images use xfs for their / filesystem and don't use swap out of the box, so kubeadm passes all pre-flight checks without errors.
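If you want to double-check that on your own image before starting (optional; this is just a quick sanity check, not part of kubeadm's pre-flight):
# show the filesystem type of / (xfs on these images) and any active swap (none expected)
$ df -T /
$ swapon --show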
# become root
$ sudo -s
# install docker
$ zypper refresh
$ zypper install docker
# configure sysctl for Kubernetes
$ cat <<EOF >> /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.forwarding=1
net.bridge.bridge-nf-call-iptables=1
EOF
# add Google repository for installing Kubernetes packages
# $ zypper addrepo --type yum --gpgcheck-strict --refresh https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 google-k8s
# or create the repo file manually:
$ cat <<EOF > /etc/zypp/repos.d/google-k8s.repo
[google-k8s]
name=google-k8s
enabled=1
autorefresh=1
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
type=rpm-md
gpgcheck=1
repo_gpgcheck=1
pkg_gpgcheck=1
EOF
# import Google repository keys
$ rpm --import https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
$ rpm --import https://packages.cloud.google.com/yum/doc/yum-key.gpg
$ rpm -q gpg-pubkey --qf '%{name}-%{version}-%{release} --> %{summary}\n'
# the following repository was needed only for the GCP image;
# other images were able to install conntrack-tools successfully using an existing repository
$ zypper addrepo https://download.opensuse.org/repositories/security:netfilter/SLE_12/security:netfilter.repo conntrack
$ zypper refresh conntrack
# conntrack presence is checked during kubeadm pre-flight checks,
# but zypper is unable to find the appropriate dependency for kubelet,
# so let's install it manually
$ zypper install conntrack-tools
# refresh Google repository cache and check if we see several versions of Kubernetes packages to choose from
$ zypper refresh google-k8s
$ zypper packages --repo google-k8s
# install latest available kubelet package
# ignore conntrack dependency and install kubelet (Solution 2 in my case)
$ zypper install kubelet
# install kubeadm package. kubectl and cri-tools are installed as kubeadm dependency
$ zypper install kubeadm
# force Docker to use the systemd cgroup driver and the overlay2 storage driver.
# Check the links at the end of the answer for details.
# BTW, kubelet would work even with the default content of this file.
$ cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# Not sure if this is necessary; it was taken from the Kubernetes documentation
$ mkdir -p /etc/systemd/system/docker.service.d
# let's start and enable the docker and kubelet services
$ systemctl start docker.service
$ systemctl enable docker.service
$ systemctl enable kubelet.service
# apply the sysctl settings configured earlier.
# net.bridge.bridge-nf-call-iptables becomes available only after the
# Docker service has started successfully
$ sysctl -p
# Now it's time to initialize the Kubernetes master node.
# (For the Vagrant box, ignore the pre-flight checks as shown earlier.)
$ kubeadm init --pod-network-cidr=10.244.0.0/16
# prepare the kubectl configuration to connect to the cluster
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Check if the API server responds to our requests.
# At this moment it's fine to see the master node in NotReady state.
$ kubectl get nodes
# Deploy Flannel network addon
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# remove the taint from the master node.
# This allows the master node to run application pods.
# At least one worker node is required if this step is skipped.
$ kubectl taint nodes --all node-role.kubernetes.io/master-
# run test pod to check if everything works fine
$ kubectl run nginx1 --image=nginx
# after some time... ~ 3-5 minutes
# check the pods' state
$ kubectl get pods -A -o wide
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
default       nginx1                              1/1     Running   0          74s     10.244.0.4   suse-test   <none>           <none>
kube-system   coredns-66bff467f8-vc2x4            1/1     Running   0          2m26s   10.244.0.2   suse-test   <none>           <none>
kube-system   coredns-66bff467f8-w4jvq            1/1     Running   0          2m26s   10.244.0.3   suse-test   <none>           <none>
kube-system   etcd-suse-test                      1/1     Running   0          2m41s   10.4.0.4     suse-test   <none>           <none>
kube-system   kube-apiserver-suse-test            1/1     Running   0          2m41s   10.4.0.4     suse-test   <none>           <none>
kube-system   kube-controller-manager-suse-test   1/1     Running   0          2m41s   10.4.0.4     suse-test   <none>           <none>
kube-system   kube-flannel-ds-amd64-mbfxp         1/1     Running   0          2m12s   10.4.0.4     suse-test   <none>           <none>
kube-system   kube-proxy-cw5xm                    1/1     Running   0          2m26s   10.4.0.4     suse-test   <none>           <none>
kube-system   kube-scheduler-suse-test            1/1     Running   0          2m41s   10.4.0.4     suse-test   <none>           <none>
# check if the test pod is working fine
$ curl 10.244.0.4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...skipped...
# basic Kubernetes installation is done
Additional materials:
Container runtimes (Kubernetes documentation page)
Giving error saying “unsupported graph driver: btrfs” in SLES when try to Kubeadm init
(If you have btrfs / partition, you can mount additional xfs or ext4 partition for /var/lib/docker to use overlay2 Docker storage driver)
OverlayFS support for SLES 12 (I would expect that SLES 15 supports it as well.)
Docker Compatibility Matrix
OS Distribution (x86_64): SLES 15
Enterprise Engine: 19.03.x
UCP: 3.2.x
DTR: 2.7.x
Storage Driver: overlay2,btrfs
Orchestration: Swarm mode, Kubernetes
DTR Storage Backend: NFSv4, NFSv3, Amazon S3, S3 Compliant Alternatives,
Azure Storage (Blob), Google Cloud Storage, OpenStack Swift,
Local Filesystem
SUSE: Docker Open Source Engine Guide (very useful book)
Use the OverlayFS storage driver
Install Linux Kernel 4.12 in openSUSE (in case you want to add AUFS support to the Linux kernel)
Materials about SUSE CaaSP
SUSE CaaS Platform Setup - Nothing to Everything in 1 Hour
(video is quite old but very useful)
CaaSP download page, 60 days free trial

Rook Ceph Operator hangs when checking for cluster status

I've set up a k8s cluster on DigitalOcean Ubuntu 18.04 LTS droplets using Calico on top of a WireGuard VPN, and was able to set up nginx-ingress with Traefik as an external LB. I'm now at the step of setting up distributed storage using Rook Ceph, following the quickstart at https://rook.io/docs/rook/master/ceph-quickstart.html, but it seems like the monitors never reach a quorum (even when there's just one). Actually, monitor a reaches quorum by itself, but neither the operator nor any other monitors seem to know that, and the operator hangs when trying to check the status.
I've tried troubleshooting network issues all the way from WireGuard to Calico and ufw. I've even set ufw to temporarily allow all traffic by default, just to make sure I wasn't allowing one port while the traffic was on another interface (I have wg0, eth1, tunl0 and the Calico interfaces).
Then I followed the Ceph troubleshooting guide, unsuccessfully: http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/#recovering-a-monitor-s-broken-monmap
I've been at this for 4 days and I'm out of solutions.
Here's how I set up the storage cluster:
cd cluster/examples/kubernetes/ceph
kubectl apply -f common.yaml
kubectl apply -f operator.yaml
kubectl apply -f cluster-test.yaml
Running kubectl get pods returns
NAME                                      READY   STATUS    RESTARTS   AGE
pod/rook-ceph-agent-9ws2p                 1/1     Running   0          24s
pod/rook-ceph-agent-v6v9n                 1/1     Running   0          24s
pod/rook-ceph-agent-x2jv4                 1/1     Running   0          24s
pod/rook-ceph-mon-a-74cc6db5c8-8s5l5      1/1     Running   0          9s
pod/rook-ceph-operator-7cd5d8bd4c-pclxp   1/1     Running   0          25s
pod/rook-discover-24cfj                   1/1     Running   0          24s
pod/rook-discover-6xsnp                   1/1     Running   0          24s
pod/rook-discover-hj4tc                   1/1     Running   0          24s
However, when I try to check the status of the monitors, from the operator pod I get:
# This hangs forever
kubectl exec -it rook-ceph-operator-7cd5d8bd4c-pclxp ceph status
# This also hangs forever
kubectl exec -it rook-ceph-operator-7cd5d8bd4c-pclxp ceph ping mon.a
# This returns [errno 2] error calling ping_monitor,
# which I guess it should, because mon.b does/should not exist,
# but I expected a response such as "mon.b does not exist"
kubectl exec -it rook-ceph-operator-7cd5d8bd4c-pclxp ceph ping mon.b
Pinging the monitor pod from the operator works just fine, by the way.
Operator logs
https://gist.github.com/figassis/0a3f499f5e3f79a430c9bd58718fd29f#file-operator-log
Monitor a logs
https://gist.github.com/figassis/0a3f499f5e3f79a430c9bd58718fd29f#file-mon-a-log
Monitor a status, obtained directly from the monitor pod via a socket
https://gist.github.com/figassis/0a3f499f5e3f79a430c9bd58718fd29f#file-mon-a-status
You can execute the ceph status command inside the Ceph toolbox pod, as sketched below.
https://github.com/rook/rook/blob/master/Documentation/ceph-toolbox.md
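A minimal sketch of that, assuming the standard toolbox manifest from the Rook examples directory (the label and namespace below match the defaults, but may differ for your Rook version):
# deploy the toolbox alongside the cluster
kubectl apply -f toolbox.yaml
# open a shell in the toolbox pod and query the cluster
TOOLS_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec -it $TOOLS_POD -- ceph status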

Kubernetes upgrade from 1.8.7 to 1.13.0

Context
We currently have 3 stable clusters on Kubernetes (v1.8.7). These clusters were created by an external team which is no longer available, and we have limited documentation. We are trying to upgrade to a higher stable version (v1.13.0). We're aware that we need to upgrade one minor version at a time, so 1.8 -> 1.9 -> 1.10 and so on.
Solved Questions
Any pointers on how to upgrade from 1.8 to 1.9 ?
We tried to install kubeadm v1.8.7 and run kubeadm upgrade plan, but it fails with this output:
[preflight] Running pre-flight checks
couldn't create a Kubernetes client from file "/etc/kubernetes/admin.conf": failed to load admin kubeconfig [open /etc/kubernetes/admin.conf: no such file or directory]
We cannot find the file admin.conf. Any suggestions on how we can regenerate it, or what information it would need?
New Question
Since we now have the admin.conf file, we installed kubectl, kubeadm and kubelet v1.9.0:
apt-get install kubelet=1.9.0-00 kubeadm=1.9.0-00 kubectl=1.9.0-00
When I run kubeadm upgrade plan v1.9.0,
I get
root@k8s-master-dev-0:/home/azureuser# kubeadm upgrade plan v1.9.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/health] FATAL: [preflight] Some fatal errors occurred:
[ERROR APIServerHealth]: the API Server is unhealthy; /healthz didn't return "ok"
[ERROR MasterNodesReady]: couldn't list masters in cluster: Get https://<k8s-master-dev-0 ip>:6443/api/v1/nodes?labelSelector=node-role.kubernetes.io%2Fmaster%3D: dial tcp <k8s-master-dev-0 ip>:6443: getsockopt: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
root@k8s-master-dev-0:/home/azureuser# kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
heapster-75f8df9884-nxn2z                   2/2     Running   0          42d
kube-addon-manager-k8s-master-dev-0         1/1     Running   2          1d
kube-addon-manager-k8s-master-dev-1         1/1     Running   4          123d
kube-addon-manager-k8s-master-dev-2         1/1     Running   2          169d
kube-apiserver-k8s-master-dev-0             1/1     Running   100        1d
kube-apiserver-k8s-master-dev-1             1/1     Running   4          123d
kube-apiserver-k8s-master-dev-2             1/1     Running   2          169d
kube-controller-manager-k8s-master-dev-0    1/1     Running   3          1d
kube-controller-manager-k8s-master-dev-1    1/1     Running   4          123d
kube-controller-manager-k8s-master-dev-2    1/1     Running   4          169d
kube-dns-v20-5d9fdc7448-smf9s               3/3     Running   0          42d
kube-dns-v20-5d9fdc7448-vtjh4               3/3     Running   0          42d
kube-proxy-cklcx                            1/1     Running   1          123d
kube-proxy-dldnd                            1/1     Running   4          169d
kube-proxy-gg89s                            1/1     Running   0          169d
kube-proxy-mrkqf                            1/1     Running   4          149d
kube-proxy-s95mm                            1/1     Running   10         169d
kube-proxy-zxnb7                            1/1     Running   2          169d
kube-scheduler-k8s-master-dev-0             1/1     Running   2          1d
kube-scheduler-k8s-master-dev-1             1/1     Running   6          123d
kube-scheduler-k8s-master-dev-2             1/1     Running   4          169d
kubernetes-dashboard-8555bd85db-4txtm       1/1     Running   0          42d
tiller-deploy-6677dc8d46-5n5cp              1/1     Running   0          42d
Let's go step by step and first generate the admin.conf file in your cluster.
You can generate the admin.conf file using the following command:
kubeadm alpha phase kubeconfig admin --cert-dir /etc/kubernetes/pki --kubeconfig-dir /etc/kubernetes/
Now you can check out my following answer on how to upgrade a Kubernetes cluster with kubeadm (the answer is for 1.10.0 to 1.10.11, but it also applies to 1.8 to 1.9; you just need to change the version of the package you download):
how to upgrade kubernetes from v1.10.0 to v1.10.11
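For reference, the per-minor-version loop from that answer boils down to roughly the following (a sketch; substitute the target version for each hop):
# install the kubeadm version matching the next minor release
apt-get install -y kubeadm=1.9.0-00
# check what the cluster can be upgraded to, then apply it
kubeadm upgrade plan
kubeadm upgrade apply v1.9.0
# finally upgrade kubelet and kubectl on every node and restart kubelet
apt-get install -y kubelet=1.9.0-00 kubectl=1.9.0-00
systemctl restart kubelet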
Hope this helps.
Any pointers on how to upgrade from 1.8 to 1.9?
Definitely kubeadm.
We tried to install kubeadm v1.8.7 and run kubeadm upgrade plan, but it fails with output:
[preflight] Running pre-flight checks
couldn't create a Kubernetes client from file "/etc/kubernetes/admin.conf": failed to load admin kubeconfig [open /etc/kubernetes/admin.conf: no such file or directory]
We cannot find the file admin.conf. Any suggestions on how we can regenerate it, or what information it would need?
kubeadm requires a couple of things:
ConfigMap in-cluster
Authentication / Credentials file
Firstly, I'd check the kube-system namespace for a kubeadm-config ConfigMap. If that exists, you should be able to continue relatively painlessly.
If it doesn't exist, you will need to go ahead and create it.
kubeadm config upload from-flags would be a good starting point. You can specify the kubelet flags from your systemd unit file and it should get you in good shape. See the sketch below.
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/#cmd-config-from-flags
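A quick way to check for the ConfigMap, plus a hedged example of uploading one if it is missing (the flag values below are placeholders for your actual cluster settings):
# does kubeadm already have its configuration stored in the cluster?
kubectl -n kube-system get configmap kubeadm-config -o yaml
# if not, upload one built from flags (placeholder values)
kubeadm config upload from-flags --kubernetes-version=v1.8.7 --pod-network-cidr=10.244.0.0/16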
Secondly, kubeadm needs a conf file with credentials. I'd imagine there's one of these in your /etc/kubernetes directory somewhere; so poke around.
This file will look like your local kubeconfigs, starting with:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:

Kubernetes v1.6.2 flannel not running on master node

I am running a 2-node cluster using Vagrant, configured with the kubeadm command. When I set up the cluster, flannel was running on all three nodes. Now I don't see flannel running on the master node, and because of this the overlay network is not working from the master node.
I used these YAML files to configure flannel:
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl get pods --all-namespaces -o wide |grep fla
kube-system kube-flannel-ds-0d3bn 2/2 Running 0 20m 192.168.15.102 node-01
kube-system kube-flannel-ds-86bzs 2/2 Running 0 20m 192.168.15.103 node-02
# k get nodes -o wide
NAME       STATUS   AGE   VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION
master01   Ready    26d   v1.6.2    <none>        CentOS Linux 7 (Core)   3.10.0-514.10.2.el7.x86_64
node-01    Ready    26d   v1.6.2    <none>        CentOS Linux 7 (Core)   3.10.0-514.10.2.el7.x86_64
node-02    Ready    26d   v1.6.2    <none>        CentOS Linux 7 (Core)   3.10.0-514.10.2.el7.x86_64
How can I start the flannel pod on my master node?
I see you are using RBAC; maybe the node does not have enough rights.
Try creating a clusterrolebinding with the necessary rights:
$ kubectl create clusterrolebinding nodeName --clusterrole=system:node --user=nodeName
Or you can use cluster-admin for testing, as sketched below.
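For example, a temporary binding for testing only (the binding name and user below are placeholders; real kubelet identities look like system:node:<nodeName>):
# very broad permissions, for debugging only
$ kubectl create clusterrolebinding test-node-admin --clusterrole=cluster-admin --user=system:node:node-01
# remove it once done
$ kubectl delete clusterrolebinding test-node-admin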

Kubernetes 1.6.2 flannel configuration on CentOS 7

Using the kubeadm command I have configured a 3-node Kubernetes cluster. Unlike earlier versions, the 1.6.2 kubeadm command configures all the Kubernetes processes automatically. For flannel I used this YAML file, kube-flannel.yml. My understanding of Kubernetes is that it creates a container and runs the process inside that container, but I see the flannel process running on the node itself, even though the /opt/bin/flanneld binary is not on my node. How is Kubernetes running flannel?
How does Kubernetes handle this? Is there a document that explains these concepts?
The flannel pod is running on the master node itself:
[root@master01 ~]# kubectl get pods -o wide --namespace=kube-system -l app=flannel
NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE
kube-flannel-ds-3694s   2/2     Running   37         3d    192.168.15.101   master01
kube-flannel-ds-mbh9b   2/2     Running   10         3d    192.168.15.102   node-01
kube-flannel-ds-vlm20   2/2     Running   12         3d    192.168.15.103   node-02
I see the flanneld process:
[root@master01 ~]# ps -fed | grep flan
root 5447 5415 0 May10 ? 00:00:08 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 5604 5582 0 May10 ? 00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done
but the flanneld binary is not on the master node:
[root@master01 ~]# ls -ld /opt/bin/flanneld
ls: cannot access /opt/bin/flanneld: No such file or directory
Thanks
SR
After some more reading I found the answer: flanneld runs inside a container.
Here are the runtime details:
https://github.com/opencontainers/runc
We can extract the flannel Docker image like below:
docker save -o flannel-v0.7.1-amd64.tar quay.io/coreos/flannel:v0.7.1-amd64
tar tvf flannel-v0.7.1-amd64.tar
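One way to see this for yourself (a sketch; the container ID is a placeholder and the exact names depend on your container runtime and flannel version) is to look for the flannel container started by the DaemonSet pod and inspect the binary inside it rather than on the host:
# list the flannel containers running on this node
docker ps | grep flannel
# the flanneld binary lives inside the container's filesystem, not under /opt/bin on the host
docker exec <flannel-container-id> ls -l /opt/bin/flanneld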