create rook-ceph cluster on minikube - ceph

I am on Ubuntu 20.04 and trying to create a rook-ceph cluster.
I have kvm2 installed to try it.
This is what I am doing, but I don't see the node accessing the storage:
minikube start --vm-driver=kvm2
minikube mount /dev/vda1 /data/ceph
and then I followed the Rook installation guide.
Is there anything I am missing? Thanks.

Create a volume with qemu-img and attach the disk to the minikube VM with virsh, like this:
sudo -S qemu-img create -f raw /var/lib/libvirt/images/minikube-box2-vm-disk1-50G 50G
virsh -c qemu:///system attach-disk minikube --source /var/lib/libvirt/images/minikube-box2-vm-disk1-50G --target vdb --cache none
virsh -c qemu:///system reboot --domain minikube
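If Rook still does not create any OSDs, it is worth confirming that the new disk is visible inside the minikube VM and is completely empty, since Ceph will only consume raw devices with no partitions or filesystem on them. A quick check, assuming lsblk is available in the minikube ISO:
minikube ssh "lsblk"
The vdb device should be listed with no partitions and no mountpoint before Rook is deployed.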

Related

Resolving Minikube metallb imagepullbackoff

I am moving from Docker Desktop to Minikube and have been having some trouble getting MetalLB to work properly. I am starting Minikube on macOS Monterey.
I've started a Minikube profile using the command below:
minikube start -p myprofile --cpus=4 --memory='32g' --disk-size='100000mb' --driver=hyperkit --kubernetes-version=v1.21.8 --addons=metallb
When I check the pods for MetalLB, they are in an ImagePullBackOff status. The pods are trying to pull images docker.io/metallb/controller:v0.9.6 and docker.io/metallb/speaker:v0.9.6 respectively.
NAME                          READY   STATUS             RESTARTS   AGE
controller-5fd6788656-jvj4m   0/1     ImagePullBackOff   0          26m
speaker-ctdmw                 0/1     ImagePullBackOff   0          37m
After running eval $(minikube -p myprofile docker-env) and manually pulling with docker pull docker.io/metallb/speaker:v0.9.6, I get the error:
Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on <ip-address>:53: read udp <ip-address>:49978-><ip-address>:53: i/o timeout
I'm not certain if it's useful, but after SSHing into the Minikube node, I've also verified ping google.com does not return a result.
When starting my Minikube profile, I had the following output:
πŸ˜„ [myprofile] minikube v1.28.0 on Darwin 12.3.1
πŸ†• Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
✨ Using the hyperkit driver based on existing profile
πŸ‘ Starting control plane node myprofile in cluster myprofile
πŸ”„ Restarting existing hyperkit VM for "myprofile" ...
❗ This VM is having trouble accessing https://k8s.gcr.io
πŸ’‘ To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.21.8 on Docker 20.10.20 ...
πŸ”Ž Verifying Kubernetes components...
β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
β–ͺ Using image metallb/speaker:v0.9.6
β–ͺ Using image metallb/controller:v0.9.6
🌟 Enabled addons: storage-provisioner, metallb, default-storageclass
❗ /usr/local/bin/kubectl is version 1.25.4, which may have incompatibilities with Kubernetes 1.21.8.
β–ͺ Want kubectl v1.21.8? Try 'minikube kubectl -- get pods -A'
πŸ„ Done! kubectl is now configured to use "myprofile" cluster and "default" namespace by default

How to install Kubernetes in SUSE Linux Enterprise Server 15 virtual machines?

We are trying to install Kubernetes on SUSE Linux Enterprise Server v15. We found that there is no way to install k8s using kubeadm; SUSE provides its Container as a Service Platform (CaaSP) to install k8s.
All we have is a few virtual machines and a SUSE subscription. Can we install CaaSP on them?
We could not find any documentation on installing it on virtual machines.
Is there any documentation for a step-by-step CaaSP installation on virtual machines?
Kubeadm on SLES
It is possible to install Kubernetes on SUSE Linux Enterprise Server 15 using kubeadm.
You can find a step-by-step example below.
The example was tested on the following cloud VM images:
GCP:
SUSE Linux Enterprise Server 15 SP1 x86_64
AWS:
openSUSE-Leap-15.2-v20200710-HVM-x86_64-548f7b74-f1d6-437e-b650-f6315f6d8aa3-ami-0f5745b812a5b7654.4 - ami-023643495f15f104b
suse-sles-15-sp1-v20200615-hvm-ssd-x86_64 - ami-0044ae6906d786f4b
Azure:
SUSE Enterprise Linux 15 SP1 +Patching
So it has a good chance of working with other images with only a few changes.
It was also tested on the Vagrant box trueability/sles-15-sp1, which required a few additional steps because of expired subscription keys. I used OSS repositories and ignored the expiration errors:
# add OSS repository for software installation
$ zypper addrepo http://download.opensuse.org/distribution/leap/15.2/repo/oss/ public
# add repository for installing newer Docker version
$ zypper addrepo https://download.opensuse.org/repositories/Virtualization:containers/openSUSE_Leap_15.0/Virtualization:containers.repo virt
# install symbols required by Docker:
$ zypper install libseccomp-devel
# turn off all swap partitions. Comment appropriate /etc/fstab entry as well.
$ swapoff -a
# The rest of the steps are the same, except for an additional argument during cluster initialization.
# This box uses btrfs for /var/lib/docker and kubeadm complains about it.
# I've just asked kubeadm to ignore that fact.
# Even with btrfs it can start and run pods, but there might be problems with Persistent Volumes,
# so consider using an additional xfs or ext4 partition for /var/lib/docker.
$ kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
Cloud VMs:
Cloud SLES 15 SP1 images use xfs for their / file system and don't use swap out of the box, so kubeadm passes all pre-flight checks without errors.
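To confirm those two assumptions on a particular image before starting, a quick check (df and swapon are standard util-linux tools and are assumed to be present in the image):
$ df -T /          # the root filesystem type should be xfs or ext4
$ swapon --show    # empty output means no active swap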
# become root
$ sudo -s
# install docker
$ zypper refresh
$ zypper install docker
# configure sysctl for Kubernetes
$ cat <<EOF >> /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.forwarding=1
net.bridge.bridge-nf-call-iptables=1
EOF
# add the Google repository for installing Kubernetes packages
#$ zypper addrepo --type yum --gpgcheck-strict --refresh https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 google-k8s
# or create the repo file directly:
$ cat <<EOF > /etc/zypp/repos.d/google-k8s.repo
[google-k8s]
name=google-k8s
enabled=1
autorefresh=1
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
type=rpm-md
gpgcheck=1
repo_gpgcheck=1
pkg_gpgcheck=1
EOF
# import Google repository keys
$ rpm --import https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
$ rpm --import https://packages.cloud.google.com/yum/doc/yum-key.gpg
$ rpm -q gpg-pubkey --qf '%{name}-%{version}-%{release} --> %{summary}\n'
# the following repository was needed only for the GCP image;
# the other images were able to install conntrack-tools from their existing repositories
$ zypper addrepo https://download.opensuse.org/repositories/security:netfilter/SLE_12/security:netfilter.repo conntrack
$ zypper refresh conntrack
# conntrack presence is checked during kubeadm pre-flight checks,
# but zypper is unable to find the appropriate dependency for kubelet,
# so let's install it manually
$ zypper install conntrack-tools
# refresh Google repository cache and check if we see several versions of Kubernetes packages to choose from
$ zypper refresh google-k8s
$ zypper packages --repo google-k8s
# install the latest available kubelet package;
# ignore the conntrack dependency when prompted (Solution 2 in my case)
$ zypper install kubelet
# install kubeadm package. kubectl and cri-tools are installed as kubeadm dependency
$ zypper install kubeadm
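# optionally double-check which versions were actually installed;
# mixing kubelet and kubeadm minor versions can cause trouble later
$ kubeadm version -o short
$ kubelet --version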
# force Docker to use the systemd cgroup driver and the overlay2 storage driver.
# Check the links at the end of the answer for details.
# BTW, kubelet would work even with the default content of this file.
$ cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# Not sure if this is necessary; it was taken from the Kubernetes documentation
$ mkdir -p /etc/systemd/system/docker.service.d
# let's start and enable the docker and kubelet services
$ systemctl start docker.service
$ systemctl enable docker.service
$ systemctl enable kubelet.service
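# optionally confirm that Docker picked up the settings from daemon.json
$ docker info | grep -E 'Cgroup Driver|Storage Driver'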
# apply the sysctl settings configured earlier.
# net.bridge.bridge-nf-call-iptables only becomes available after the Docker
# service has started successfully
$ sysctl -p
# Now it's time to initialize the Kubernetes master node.
# (For the Vagrant box, add --ignore-preflight-errors=all as shown earlier.)
$ kubeadm init --pod-network-cidr=10.244.0.0/16
# prepare the kubectl configuration to connect to the cluster
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Check if the API server responds to our requests.
# At this point it's fine to see the master node in a NotReady state.
$ kubectl get nodes
# Deploy Flannel network addon
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
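# optionally wait until the Flannel pods reach Running state before continuing
# (the app=flannel label below comes from the kube-flannel.yml manifest above)
$ kubectl -n kube-system get pods -l app=flannel -w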
# remove the taint from the master node,
# which allows the master node to run application pods.
# At least one worker node is required if this step is skipped.
$ kubectl taint nodes --all node-role.kubernetes.io/master-
# run a test pod to check if everything works fine
$ kubectl run nginx1 --image=nginx
# after some time... ~ 3-5 minutes
# check the pods' state
$ kubectl get pods -A -o wide
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
default       nginx1                              1/1     Running   0          74s     10.244.0.4   suse-test   <none>           <none>
kube-system   coredns-66bff467f8-vc2x4            1/1     Running   0          2m26s   10.244.0.2   suse-test   <none>           <none>
kube-system   coredns-66bff467f8-w4jvq            1/1     Running   0          2m26s   10.244.0.3   suse-test   <none>           <none>
kube-system   etcd-suse-test                      1/1     Running   0          2m41s   10.4.0.4     suse-test   <none>           <none>
kube-system   kube-apiserver-suse-test            1/1     Running   0          2m41s   10.4.0.4     suse-test   <none>           <none>
kube-system   kube-controller-manager-suse-test   1/1     Running   0          2m41s   10.4.0.4     suse-test   <none>           <none>
kube-system   kube-flannel-ds-amd64-mbfxp         1/1     Running   0          2m12s   10.4.0.4     suse-test   <none>           <none>
kube-system   kube-proxy-cw5xm                    1/1     Running   0          2m26s   10.4.0.4     suse-test   <none>           <none>
kube-system   kube-scheduler-suse-test            1/1     Running   0          2m41s   10.4.0.4     suse-test   <none>           <none>
# check if the test pod is working fine
$ curl 10.244.0.4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...skipped...
# basic Kubernetes installation is done
Additional materials:
Container runtimes (Kubernetes documentation page)
Giving error saying β€œunsupported graph driver: btrfs” in SLES when try to Kubeadm init
(If you have a btrfs / partition, you can mount an additional xfs or ext4 partition for /var/lib/docker in order to use the overlay2 Docker storage driver.)
OverlayFS support for SLES 12 (I would expect that SLES 15 supports it as well.)
Docker Compatibility Matrix
OS Distribution (x86_64): SLES 15
Enterprise Engine: 19.03.x
UCP: 3.2.x
DTR: 2.7.x
Storage Driver: overlay2,btrfs
Orchestration: Swarm mode, Kubernetes
DTR Storage Backend: NFSv4, NFSv3, Amazon S3, S3 Compliant Alternatives,
Azure Storage (Blob), Google Cloud Storage, OpenStack Swift,
Local Filesystem
SUSE: Docker Open Source Engine Guide (very useful book)
Use the OverlayFS storage driver
Install Linux Kernel 4.12 in openSUSE (in case you want to add AUFS support to the Linux kernel)
Materials about SUSE CaaSP
SUSE CaaS Platform Setup - Nothing to Everything in 1 Hour
(video is quite old but very useful)
CaaSP download page, 60 days free trial

About: CreateContainerError

I installed a K8S cluster on my laptop. It was running fine in the beginning, but when I restarted my laptop, some services were not running.
kube-system   coredns-5c98db65d4-9nm6m    0/1   Error                  594   12d
kube-system   coredns-5c98db65d4-qwkk9    0/1   CreateContainerError
kube-system   kube-scheduler-kubemaster   0/1   CreateContainerError
I searched online for a solution but could not find an appropriate answer.
Please help me resolve this issue.
I encourage you to look at the official Kubernetes documentation. Remember that your kubemaster should have at least the following resources: 2 CPUs or more and 2 GB or more of RAM.
Firstly install docker and kubeadm (as a root user) on each machine.
Initialize kubeadm (on master):
kubeadm init <args>
For example for Calico to work correctly, you need to pass --pod-network-cidr=192.168.0.0/16 to kubeadm init:
kubeadm init --pod-network-cidr=192.168.0.0/16
Install a pod network add-on (which one depends on what you would like to use). You can install a pod network add-on with the following command:
kubectl apply -f <add-on.yaml>
e.g. for Calico:
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
To start using your cluster, you need to run the following on the master as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of machines by running the following on each node as root:
kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the control-plane node:
kubeadm token create
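If the original join command printed by kubeadm init is lost, recent kubeadm versions can also print a complete one (a fresh token plus the correct CA cert hash) in a single step:
kubeadm token create --print-join-command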
Please, let me know if it works for you.
Did you check the status of the docker and kubelet services? If not, please run the command below and verify that the services are up and running.
systemctl status docker kubelet
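To narrow down the CreateContainerError itself, the pod events and the kubelet logs usually show the underlying reason. For example, using the pod names from the question:
kubectl -n kube-system describe pod coredns-5c98db65d4-qwkk9
kubectl -n kube-system describe pod kube-scheduler-kubemaster
journalctl -u kubelet --since "1 hour ago" | tail -n 50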

Issue with minikube / kvm2

I have this issue when running "minikube start --vm-driver kvm2":
E0109 11:23:34.536027 22169 start.go:187] Error starting host: Error
starting stopped host: Error creating VM: virError(Code=1, Domain=10,
Message='internal error: qemu unexpectedly closed the monitor:
2019-01-09 16:23:34.183+0000: Domain id=11 is tainted: host-cpu
2019-01-09T16:23:34.284194Z qemu-kvm: unrecognized feature kvm').
Result of lsmod | grep kvm:
[root@smu-ws ~]# lsmod | grep kvm
kvm_intel             225280  0
kvm                   647168  1 kvm_intel
irqbypass              16384  1 kvm
Result of virt-host-validate: everything PASS except:
QEMU: Checking for device assignment IOMMU support : WARN (No ACPI DMAR table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
Regards.
I managed to resolve this on RHEL by doing the following:
$ sudo rm /usr/local/bin/minikube
$ sudo rm -rf .minikube/ (from home directory)
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.34.1/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube
$ minikube start --vm-driver kvm2
πŸ˜„ minikube v0.34.1 on linux (amd64)
πŸ”₯ Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
πŸ’Ώ Downloading Minikube ISO ...
184.30 MB / 184.30 MB [============================================] 100.00% 0s
πŸ“Ά "minikube" IP address is 192.168.39.29
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
πŸ’Ύ Downloading kubeadm v1.13.3
πŸ’Ύ Downloading kubelet v1.13.3
🚜 Pulling images required by Kubernetes v1.13.3 ...
πŸš€ Launching Kubernetes v1.13.3 using kubeadm ...
πŸ”‘ Configuring cluster permissions ...
πŸ€” Verifying component health .....
πŸ’— kubectl is now configured to use "minikube"
πŸ„ Done! Thank you for using minikube!

Kubernetes 1.6.2 flannel configuration in centos 7

Using the kubeadm command I have configured a 3-node Kubernetes cluster. Unlike earlier versions, in 1.6.2 the kubeadm command configures all the Kubernetes processes automatically. For flannel I used this yml file: kube-flannel.yml. My understanding of Kubernetes is that it creates a container and runs the process inside that container, but I see the flannel process running on the node itself, while the /opt/bin/flanneld binary is not on my node. How is Kubernetes running flannel?
How does Kubernetes handle this? Is there a document that explains these concepts?
The flannel pod is running on the master node itself.
[root@master01 ~]# kubectl get pods -o wide --namespace=kube-system -l app=flannel
NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE
kube-flannel-ds-3694s   2/2     Running   37         3d    192.168.15.101   master01
kube-flannel-ds-mbh9b   2/2     Running   10         3d    192.168.15.102   node-01
kube-flannel-ds-vlm20   2/2     Running   12         3d    192.168.15.103   node-02
I see the flanneld process:
[root@master01 ~]# ps -fed | grep flan
root 5447 5415 0 May10 ? 00:00:08 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 5604 5582 0 May10 ? 00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done
but flanneld is not on the master node:
[root@master01 ~]# ls -ld /opt/bin/flanneld
ls: cannot access /opt/bin/flanneld: No such file or directory
Thanks
SR
After some more reading I found the answer: flanneld runs inside a container.
Here are the runtime details:
https://github.com/opencontainers/runc
We can extract the flannel Docker image like below:
docker save -o flannel-v0.7.1-amd64.tar quay.io/coreos/flannel:v0.7.1-amd64
tar tvf flannel-v0.7.1-amd64.tar
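To confirm where the binary actually lives, you can also look inside the running flannel pod instead of the saved image. The pod name below is taken from the output above; the container name kube-flannel is an assumption based on the standard flannel daemonset:
kubectl -n kube-system exec kube-flannel-ds-3694s -c kube-flannel -- ls -l /opt/bin/flanneld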