About : CreateContainerError - kubernetes

I installed a K8s cluster on my laptop. It was running fine in the beginning, but after I restarted my laptop some services were no longer running.
kube-system coredns-5c98db65d4-9nm6m 0/1 Error 594 12d
kube-system coredns-5c98db65d4-qwkk9 0/1 CreateContainerError
kube-system kube-scheduler-kubemaster 0/1 CreateContainerError
I searched online for a solution but could not find an appropriate answer.
Please help me resolve this issue.

I encourage you to look at the official Kubernetes documentation. Remember that your kubemaster should have at least the following resources: 2 or more CPUs and 2 GB or more of RAM.
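If you want to double-check those resources on the kubemaster before going further, a quick sketch using standard Linux tools (nothing Kubernetes-specific): nproc prints the number of CPUs, free -h the memory.
nproc
free -h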
First, install Docker and kubeadm (as root) on each machine.
Initialize kubeadm (on master):
kubeadm init <args>
For example, for Calico to work correctly, you need to pass --pod-network-cidr=192.168.0.0/16 to kubeadm init:
kubeadm init --pod-network-cidr=192.168.0.0/16
Install a pod network add-on (which one depends on what you would like to use). You can install it with the following command:
kubectl apply -f <add-on.yaml>
e.g. for Calico:
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
To start using your cluster, you need to run the following on the master as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of machines by running the following on each node as root:
kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the control-plane node:
kubeadm token create
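If you also want kubeadm to print the complete join command (token plus discovery hash) in one go, a small example; the --print-join-command flag is available in recent kubeadm versions:
kubeadm token create --print-join-command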
Please let me know if it works for you.

Did you check the status of the docker and kubelet services? If not, please run the command below and verify that the services are up and running.
systemctl status docker kubelet
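If both services are running, the next thing I would look at is the failing pods themselves; a hedged sketch, using one of the pod names from your output:
# show events and the exact reason behind CreateContainerError
kubectl -n kube-system describe pod coredns-5c98db65d4-qwkk9
# check the kubelet logs on the node for container-runtime errors
journalctl -u kubelet --no-pager | tail -n 50
# restart the services if either of them is stopped
systemctl restart docker kubelet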

Related

create rook-ceph cluster on minikube

I am on Ubuntu 20.04 and trying to create a rook-ceph cluster.
I have kvm2 installed to try it.
This is what I am doing, but I don't see the node accessing the storage.
minikube start --vm-driver=kvm2
minikube mount /dev/vda1 /data/ceph
and I followed the rook installation.
Is there anything I am missing? Thanks.
Create a volume using qemu-img and attach the disk with virsh.
Like this:
sudo -S qemu-img create -f raw /var/lib/libvirt/images/minikube-box2-vm-disk1-50G 50G
virsh -c qemu:///system attach-disk minikube --source /var/lib/libvirt/images/minikube-box2-vm-disk1-50G --target vdb --cache none
virsh -c qemu:///system reboot --domain minikube
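To confirm the new disk is actually visible inside the minikube VM before following the rook installation, a quick hedged check (vdb is the target name used above):
minikube ssh "lsblk"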

How to install Kubernetes on SUSE Linux Enterprise Server 15 virtual machines?

We are trying to install Kubernetes on SUSE Enterprise Linux Server v15. We found that there is no way to install k8s using kubeadm; SUSE provides the Container as a Service Platform (CaaSP) to install k8s.
All we have is a few virtual machines and a SUSE subscription. Can we install CaaSP on them?
We could not find any documentation for installing it on virtual machines.
Is there any step-by-step documentation for a CaaSP installation on virtual machines?
Kubeadm on SLES
It is possible to install Kubernetes on SUSE Linux Enterprise Server 15 using kubeadm.
You can find a step-by-step example below.
The example was tested on the following cloud VM images:
GCP :
SUSE Linux Enterprise Server 15 SP1 x86_x64
AWS :
openSUSE-Leap-15.2-v20200710-HVM-x86_64-548f7b74-f1d6-437e-b650-f6315f6d8aa3-ami-0f5745b812a5b7654.4 - ami-023643495f15f104b
suse-sles-15-sp1-v20200615-hvm-ssd-x86_64 - ami-0044ae6906d786f4b
Azure :
SUSE Enterprise Linux 15 SP1 +Patching
So it has a good chance of working with other images with only a few changes.
It was also tested on the Vagrant box trueability/sles-15-sp1, which required a few additional steps because of expired subscription keys. I used OSS repositories and ignored the expiration errors:
# add OSS repository for software installation
$ zypper addrepo http://download.opensuse.org/distribution/leap/15.2/repo/oss/ public
# add repository for installing newer Docker version
$ zypper addrepo https://download.opensuse.org/repositories/Virtualization:containers/openSUSE_Leap_15.0/Virtualization:containers.repo virt
# install symbols required by Docker:
$ zypper install libseccomp-devel
# turn off all swap partitions. Comment out the appropriate /etc/fstab entry as well.
$ swapoff -a
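# a hedged example of commenting out the swap entry in /etc/fstab
# (assumes the line contains the word "swap"; review the file afterwards)
$ sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab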
# The rest of the steps is similar, except for an additional argument during cluster initialization.
# This box uses btrfs for /var/lib/docker and kubeadm complains about it.
# I've just asked kubeadm to ignore that fact.
# Even with btrfs it can start and run pods, but there might be some problems with Persistent Volume usage,
# so consider using an additional xfs or ext4 partition for /var/lib/docker
$ kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
Cloud VMs:
Cloud SLES 15 SP1 images use xfs for their / file system and don't use swap out of the box, so kubeadm passes all pre-flight checks without errors.
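Before starting, you can quickly confirm those two assumptions on your own VM (standard Linux tools, not part of the original walkthrough):
# check the root filesystem type and that no swap is active
$ df -T /
$ swapon --show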
# become root
$ sudo -s
# install docker
$ zypper refresh
$ zypper install docker
# configure sysctl for Kubernetes
$ cat <<EOF >> /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.forwarding=1
net.bridge.bridge-nf-call-iptables=1
EOF
# add Google repository for installing Kubernetes packages
#$ zypper addrepo --type yum --gpgcheck-strict --refresh https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 google-k8s
#or
$ cat <<EOF > /etc/zypp/repos.d/google-k8s.repo
[google-k8s]
name=google-k8s
enabled=1
autorefresh=1
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
type=rpm-md
gpgcheck=1
repo_gpgcheck=1
pkg_gpgcheck=1
EOF
# import Google repository keys
$ rpm --import https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
$ rpm --import https://packages.cloud.google.com/yum/doc/yum-key.gpg
$ rpm -q gpg-pubkey --qf '%{name}-%{version}-%{release} --> %{summary}\n'
# the following repository was needed only for the GCP image
# other images were able to install conntrack-tools successfully from an existing repository
$ zypper addrepo https://download.opensuse.org/repositories/security:netfilter/SLE_12/security:netfilter.repo conntrack
$ zypper refresh conntrack
# conntrack presence is checked during kubeadm pre-flight checks,
# but zypper is unable to find an appropriate dependency for kubelet,
# so let's install it manually
$ zypper install conntrack-tools
# refresh Google repository cache and check if we see several versions of Kubernetes packages to choose from
$ zypper refresh google-k8s
$ zypper packages --repo google-k8s
# install the latest available kubelet package
# ignore the conntrack dependency and install kubelet (Solution 2 in my case)
$ zypper install kubelet
# install kubeadm package. kubectl and cri-tools are installed as kubeadm dependency
$ zypper install kubeadm
# force docker to use the systemd cgroup driver and the overlay2 storage driver.
# Check the links at the end of the answer for details.
# BTW, kubelet would work even with the default content of the file.
$ cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# Not sure if this is necessary; it was taken from the Kubernetes documentation
$ mkdir -p /etc/systemd/system/docker.service.d
# let's start and enable the docker and kubelet services
$ systemctl start docker.service
$ systemctl enable docker.service
$ systemctl enable kubelet.service
# apply the sysctl settings configured earlier.
# net.bridge.bridge-nf-call-iptables becomes available only after the
# Docker service has started successfully
$ sysctl -p
# Now it's time to initialize the Kubernetes master node.
# (For the Vagrant box, ignore the pre-flight checks as shown earlier.)
$ kubeadm init --pod-network-cidr=10.244.0.0/16
# prepare the kubectl configuration to connect to the cluster
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Check if api-server responds to our requests.
# At this moment it's fine to see master node in NotReady state.
$ kubectl get nodes
# Deploy Flannel network addon
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# remove the taint from the master node.
# This allows the master node to run application pods.
# At least one worker node is required if this step is skipped.
$ kubectl taint nodes --all node-role.kubernetes.io/master-
# run test pod to check if everything works fine
$ kubectl run nginx1 --image=nginx
# after some time... ~ 3-5 minutes
# check the pods' state
$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx1 1/1 Running 0 74s 10.244.0.4 suse-test <none> <none>
kube-system coredns-66bff467f8-vc2x4 1/1 Running 0 2m26s 10.244.0.2 suse-test <none> <none>
kube-system coredns-66bff467f8-w4jvq 1/1 Running 0 2m26s 10.244.0.3 suse-test <none> <none>
kube-system etcd-suse-test 1/1 Running 0 2m41s 10.4.0.4 suse-test <none> <none>
kube-system kube-apiserver-suse-test 1/1 Running 0 2m41s 10.4.0.4 suse-test <none> <none>
kube-system kube-controller-manager-suse-test 1/1 Running 0 2m41s 10.4.0.4 suse-test <none> <none>
kube-system kube-flannel-ds-amd64-mbfxp 1/1 Running 0 2m12s 10.4.0.4 suse-test <none> <none>
kube-system kube-proxy-cw5xm 1/1 Running 0 2m26s 10.4.0.4 suse-test <none> <none>
kube-system kube-scheduler-suse-test 1/1 Running 0 2m41s 10.4.0.4 suse-test <none> <none>
# check if the test pod is working fine
$ curl 10.244.0.4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...skipped...
# basic Kubernetes installation is done
Additional materials:
Container runtimes (Kubernetes documentation page)
Giving error saying “unsupported graph driver: btrfs” in SLES when try to Kubeadm init
(If you have btrfs / partition, you can mount additional xfs or ext4 partition for /var/lib/docker to use overlay2 Docker storage driver)
OverlayFS support for SLES 12 (I would expect that SLES 15 supports it as well.)
Docker Compatibility Matrix
OS Distribution (x86_64): SLES 15
Enterprise Engine: 19.03.x
UCP: 3.2.x
DTR: 2.7.x
Storage Driver: overlay2,btrfs
Orchestration: Swarm mode, Kubernetes
DTR Storage Backend: NFSv4, NFSv3, Amazon S3, S3 Compliant Alternatives,
Azure Storage (Blob), Google Cloud Storage, OpenStack Swift,
Local Filesystem
SUSE: Docker Open Source Engine Guide (very useful book)
Use the OverlayFS storage driver
Install Linux Kernel 4.12 in openSUSE (in case you want to add AUFS support to the Linux kernel)
Materials about SUSE CaaSP
SUSE CaaS Platform Setup - Nothing to Everything in 1 Hour
(video is quite old but very useful)
CaaSP download page, 60 days free trial

Flannel is crashing for Slave node

I am getting this result for the flannel pod on my slave node. Flannel is running fine on the master node.
kube-system kube-flannel-ds-amd64-xbtrf 0/1 CrashLoopBackOff 4 3m5s
kube-proxy running on the slave is fine, but not the flannel pod.
I have only a master and a slave node. At first it says Running, then it goes to Error and finally CrashLoopBackOff.
godfrey#master:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system kube-flannel-ds-amd64-jszwx 0/1 CrashLoopBackOff 4 2m17s 192.168.152.104 slave3 <none> <none>
kube-system kube-proxy-hxs6m 1/1 Running 0 18m 192.168.152.104 slave3 <none> <none>
I am also getting this from the logs:
I0515 05:14:53.975822 1 main.go:390] Found network config - Backend type: vxlan
I0515 05:14:53.975856 1 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
E0515 05:14:53.976072 1 main.go:291] Error registering network: failed to acquire lease: node "slave3" pod cidr not assigned
I0515 05:14:53.976154 1 main.go:370] Stopping shutdownHandler...
I have not been able to find a solution so far. Help appreciated.
As the solution came from the OP, I'm posting this answer as a community wiki.
As reported by the OP in the comments, they didn't pass the podCIDR during kubeadm init.
The following command was used to see that the flannel pod was in "CrashLoopBackoff" state:
sudo kubectl get pods --all-namespaces -o wide
To confirm that the podCIDR was not passed, check the logs of the flannel pod kube-flannel-ds-amd64-ksmmh that was in the CrashLoopBackOff state:
$ kubectl logs kube-flannel-ds-amd64-ksmmh
kubeadm init --pod-network-cidr=172.168.10.0/24 didn't pass the podCIDR to the slave nodes as expected.
Hence, to solve the problem, the command kubectl patch node slave1 -p '{"spec":{"podCIDR":"172.168.10.0/24"}}' had to be used to pass the podCIDR to each slave node.
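A quick way to verify whether each node actually got a podCIDR assigned (a hedged sketch; the jsonpath expression is standard kubectl syntax):
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'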
Please see this link: coreos.com/flannel/docs/latest/troubleshooting.html and section "Kubernetes Specific"
The described cluster configuration doesn't look correct in two respects.
First of all, a reasonable minimum PodCIDR subnet size is /16: each Kubernetes node usually gets its own /24 subnet, because it can run up to 100 pods, and a /16 leaves room for 256 such node subnets.
The PodCIDR and the ServiceCIDR (default: 10.96.0.0/12) must not overlap with your existing LAN network or with each other.
So the correct kubeadm command would look like:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
In your case the PodCIDR subnet is only a /24, and it was assigned to the master node. The slave node didn't get its own /24 subnet, so the Flannel pod showed this error in its logs:
Error registering network: failed to acquire lease: node "slave3" pod cidr not assigned
Assigning the same subnet to several nodes manually will lead to other connectivity problems.
You can find more details on Kubernetes IP subnets in GKE documentation.
The second problem is the IP subnet itself.
Recent Calico network add-on versions are able to detect the correct Pod subnet based on the kubeadm parameter --pod-network-cidr. Older versions used the predefined subnet 192.168.0.0/16, and you had to adjust it in the DaemonSet specification of the Calico YAML file:
- name: CALICO_IPV4POOL_CIDR
value: "192.168.0.0/16"
Flannel still requires the default subnet (10.244.0.0/16) to be specified for kubeadm init.
To use a custom subnet for your cluster, the Flannel "installation" YAML file should be adjusted before applying it to the cluster (see the sketch after the snippet below).
...
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
...
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
...
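A hedged sketch of how that adjustment could look in practice (10.10.0.0/16 is just an illustrative custom subnet; it has to match the --pod-network-cidr value passed to kubeadm init):
$ curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# replace the default Pod network with the custom one
$ sed -i 's|10.244.0.0/16|10.10.0.0/16|' kube-flannel.yml
$ kubectl apply -f kube-flannel.yml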
So the following should work for any version of Kubernetes and Calico:
$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Latest Calico version
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# or specific version, v3.14 in this case, which is also latest at the moment
# kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
Same for Flannel:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# For Kubernetes v1.7+
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# For older versions of Kubernetes:
# For RBAC enabled clusters:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-legacy.yml
There are many other network addons. You can find the list in the documentation:
Cluster Networking
Installing Addons

Failed to install rook on k8s cluster

I am trying to create a rook cluster inside a k8s cluster.
Setup: 1 master node, 1 worker node
These are the steps I have followed:
Master node:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo sysctl net.bridge.bridge-nf-call-ip6tables=1
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/32a765fd19ba45b387fdc5e3812c41fff47cfd55/Documentation/kube-flannel.yml
kubeadm token create --print-join-command
Worker node:
kubeadm join {master_ip_address}:6443 --token {token} --discovery-token-ca-cert-hash {hash} --apiserver-advertise-address={worker_private_ip}
Master node - Install rook - (reference - https://rook.github.io/docs/rook/master/ceph-quickstart.html):
kubectl create -f ceph/common.yaml
kubectl create -f ceph/operator.yaml
kubectl create -f ceph/cluster-test.yaml
Error while creating rook-ceph-operator pod:
(combined from similar events): Failed create pod sandbox: rpc error: code =
Unknown desc = failed to set up sandbox container "4a901f12e5af5340f2cc48a976e10e5c310c01a05a4a47371f766a1a166c304f"
network for pod "rook-ceph-operator-fdfbcc5c5-jccc9": networkPlugin cni failed to
set up pod "rook-ceph-operator-fdfbcc5c5-jccc9_rook-ceph" network: failed to set bridge addr:
"cni0" already has an IP address different from 10.244.1.1/24
Can anybody help me with this issue?
This issue occurs if you did kubeadm reset and after that reinitialized Kubernetes with kubeadm init. To fix it, clean up the old network state:
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
After this, start docker and kubelet and run kubeadm again.
Workaround
You can also try this as a simple, easy solution:
ip link delete cni0
ip link delete flannel.1
That depends on which network add-on you are using inside k8s.
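After the interfaces are recreated, a quick hedged check that cni0 came back with the expected Pod-network address (10.244.1.1/24 for this worker, per the error message above):
ip addr show cni0
ip addr show flannel.1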

minikube start error exit status 1

Here is my error when I run "minikube start" on Aliyun.
What I did:
minikube delete
kubectl config use-context minikube
minikube start --vm-driver=none
Aliyun (the 3rd-party application server) could not install VirtualBox or KVM,
so I tried to start it with --vm-driver=none.
[root#iZj6c68brirvucbzz5yyunZ home]# minikube delete
Deleting local Kubernetes cluster...
Machine deleted.
[root#iZj6c68brirvucbzz5yyunZ home]# kubectl config use-context minikube
Switched to context "minikube".
[root#iZj6c68brirvucbzz5yyunZ home]# minikube start --vm-driver=none
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0618 16:06:56.885163 500 start.go:294] Error starting cluster: kubeadm init error sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI running command: : running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
output: [init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "minikube" could not be reached
[WARNING Hostname]: hostname "minikube" lookup minikube on 100.100.2.138:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [minikube] and IPs [172.31.4.34]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/var/lib/localkube/certs/"
a kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong CA cert
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
.: exit status 1
Versions of components:
[root#iZj6c68brirvucbzz5yyunZ home]# minikube version
minikube version: v0.28.0
[root#iZj6c68brirvucbzz5yyunZ home]# uname -a
Linux iZj6c68brirvucbzz5yyunZ 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root#iZj6c68brirvucbzz5yyunZ home]# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:13:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Why does Minikube exit with status 1?
Thanks in advance.
First of all, try to clean up all traces of the previous unsuccessful minikube start. It should help with the mismatched certificate issue.
rm -rf ~/.minikube ~/.kube /etc/kubernetes
Then try to start minikube again.
minikube start --vm-driver=none
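If the restart fails again, you can usually recover the underlying kubeadm output before wiping anything else (a hedged suggestion; minikube logs exists in this minikube version, and journalctl -u kubelet applies only if kubelet runs as a systemd unit on your host):
minikube logs
journalctl -u kubelet --no-pager | tail -n 100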
If you are still running into errors, try to follow my "happy path":
(This was tested on a fresh GCP instance with Ubuntu 16 OS on board.)
# become root
sudo su
# turn off swap
swapoff -a
# edit /etc/fstab and comment out the swap partition.
# add repository key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# add repository
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# update repository cache
apt-get update
# install some software
apt-get -y install ebtables ethtool docker.io apt-transport-https kubelet kubeadm kubectl
# tune sysctl
cat <<EOF >>/etc/ufw/sysctl.conf
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
EOF
sudo sysctl --system
# download minikube
wget https://github.com/kubernetes/minikube/releases/download/v0.28.0/minikube-linux-amd64
# install minikube
chmod +x minikube-linux-amd64
mv minikube-linux-amd64 /usr/bin/minikube
# start minikube
minikube start --vm-driver=none
---This is what you should see----------
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Finished Downloading kubelet v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
===================
WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks
When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions. An example of this is below:
sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
sudo chown -R $USER $HOME/.kube
sudo chgrp -R $USER $HOME/.kube
sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
sudo chown -R $USER $HOME/.minikube
sudo chgrp -R $USER $HOME/.minikube
This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Loading cached images from config file.
-------------------
#check the results
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 18s v1.10.0
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-minikube 1/1 Running 0 9m
kube-system kube-addon-manager-minikube 1/1 Running 0 9m
kube-system kube-apiserver-minikube 1/1 Running 0 9m
kube-system kube-controller-manager-minikube 1/1 Running 0 10m
kube-system kube-dns-86f4d74b45-p99gv 3/3 Running 0 10m
kube-system kube-proxy-hlfc8 1/1 Running 0 10m
kube-system kube-scheduler-minikube 1/1 Running 0 9m
kube-system kubernetes-dashboard-5498ccf677-scdf9 1/1 Running 0 10m
kube-system storage-provisioner 1/1 Running 0 10m