minikube start error exit status 1 - minikube

Here is the error I get when I run "minikube start" on Aliyun.
What I did:
minikube delete
kubectl config use-context minikube
minikube start --vm-driver=none
I could not install VirtualBox or KVM on Aliyun (a third-party application server),
so I tried to start minikube with --vm-driver=none.
[root@iZj6c68brirvucbzz5yyunZ home]# minikube delete
Deleting local Kubernetes cluster...
Machine deleted.
[root@iZj6c68brirvucbzz5yyunZ home]# kubectl config use-context minikube
Switched to context "minikube".
[root@iZj6c68brirvucbzz5yyunZ home]# minikube start --vm-driver=none
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0618 16:06:56.885163 500 start.go:294] Error starting cluster: kubeadm init error sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI running command: : running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
output: [init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "minikube" could not be reached
[WARNING Hostname]: hostname "minikube" lookup minikube on 100.100.2.138:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [minikube] and IPs [172.31.4.34]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/var/lib/localkube/certs/"
a kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong CA cert
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
.: exit status 1
Versions of components:
[root@iZj6c68brirvucbzz5yyunZ home]# minikube version
minikube version: v0.28.0
[root@iZj6c68brirvucbzz5yyunZ home]# uname -a
Linux iZj6c68brirvucbzz5yyunZ 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@iZj6c68brirvucbzz5yyunZ home]# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:13:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Why does minikube exit with status 1?
Thanks in advance.

First of all, try to clean up all traces left by the previous unsuccessful minikube start. That should fix the certificate-mismatch issue.
rm -rf ~/.minikube ~/.kube /etc/kubernetes
Then try to start minikube again.
minikube start --vm-driver=none
If you are still running into errors, try to follow my "happy path":
(This was tested on a fresh GCP instance with Ubuntu 16.04 on board.)
# become root
sudo su
# turn off swap
swapoff -a
# edit /etc/fstab and comment out the swap partition entry.
# add repository key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# add repository
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# update repository cache
apt-get update
# install some software
apt-get -y install ebtables ethtool docker.io apt-transport-https kubelet kubeadm kubectl
# tune sysctl
cat <<EOF >>/etc/ufw/sysctl.conf
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
EOF
sudo sysctl --system
# download minikube
wget https://github.com/kubernetes/minikube/releases/download/v0.28.0/minikube-linux-amd64
# install minikube
chmod +x minikube-linux-amd64
mv minikube-linux-amd64 /usr/bin/minikube
# start minikube
minikube start --vm-driver=none
---This is what you should see----------
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Finished Downloading kubelet v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
===================
WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks
When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions. An example of this is below:
sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
sudo chown -R $USER $HOME/.kube
sudo chgrp -R $USER $HOME/.kube
sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
sudo chown -R $USER $HOME/.minikube
sudo chgrp -R $USER $HOME/.minikube
This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Loading cached images from config file.
-------------------
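The ownership fixup described in the warning can also be automated via the CHANGE_MINIKUBE_NONE_USER variable it mentions; set it before starting minikube:

```shell
# have minikube chown the generated kubeconfig and .minikube to the invoking user
export CHANGE_MINIKUBE_NONE_USER=true
minikube start --vm-driver=none
```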
# check the results
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 18s v1.10.0
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-minikube 1/1 Running 0 9m
kube-system kube-addon-manager-minikube 1/1 Running 0 9m
kube-system kube-apiserver-minikube 1/1 Running 0 9m
kube-system kube-controller-manager-minikube 1/1 Running 0 10m
kube-system kube-dns-86f4d74b45-p99gv 3/3 Running 0 10m
kube-system kube-proxy-hlfc8 1/1 Running 0 10m
kube-system kube-scheduler-minikube 1/1 Running 0 9m
kube-system kubernetes-dashboard-5498ccf677-scdf9 1/1 Running 0 10m
kube-system storage-provisioner 1/1 Running 0 10m

Related

How to install Kubernetes on SUSE Linux Enterprise Server 15 virtual machines?

We are trying to install Kubernetes on SUSE Linux Enterprise Server v15. We found that there is no way to install k8s using kubeadm; SUSE provides the Container as a Service Platform (CaaSP) to install k8s.
All we have is a few virtual machines and a SUSE subscription. Can we install CaaSP on them?
We could not find any documentation on installing it on virtual machines.
Is there any step-by-step documentation for installing CaaSP on virtual machines?
Kubeadm on SLES
It is possible to install Kubernetes on SUSE Linux Enterprise Server 15 using kubeadm.
You can find a step-by-step example below.
The example was tested on the following cloud VM images:
GCP :
SUSE Linux Enterprise Server 15 SP1 x86_x64
AWS :
openSUSE-Leap-15.2-v20200710-HVM-x86_64-548f7b74-f1d6-437e-b650-f6315f6d8aa3-ami-0f5745b812a5b7654.4 - ami-023643495f15f104b
suse-sles-15-sp1-v20200615-hvm-ssd-x86_64 - ami-0044ae6906d786f4b
Azure :
SUSE Enterprise Linux 15 SP1 +Patching
So it has a good chance of working with other images with only a few changes.
It was also tested on the Vagrant box trueability/sles-15-sp1 , which required a few additional steps because of expired subscription keys. I used OSS repositories and ignored the expiration errors:
# add OSS repository for software installation
$ zypper addrepo http://download.opensuse.org/distribution/leap/15.2/repo/oss/ public
# add repository for installing newer Docker version
$ zypper addrepo https://download.opensuse.org/repositories/Virtualization:containers/openSUSE_Leap_15.0/Virtualization:containers.repo virt
# install symbols required by Docker:
$ zypper install libseccomp-devel
# turn off all swap partitions. Comment appropriate /etc/fstab entry as well.
$ swapoff -a
# The rest of the steps are similar, except for an additional argument during cluster initialization.
# This box uses btrfs for /var/lib/docker, and kubeadm complains about it.
# I've just asked kubeadm to ignore that fact.
# Even with btrfs it can start and run pods, but there may be problems with Persistent Volume usage,
# so consider using an additional xfs or ext4 partition for /var/lib/docker.
$ kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
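A possible way to avoid the btrfs problem entirely is to mount a separate ext4 filesystem at /var/lib/docker before installing Docker. This is only a sketch, not part of the tested steps above; it assumes /dev/sdb is a spare, unused disk, so double-check the device name first:

```shell
# format the spare disk with ext4 and mount it over /var/lib/docker
# (do this before installing/starting Docker; /dev/sdb is an assumption)
mkfs.ext4 /dev/sdb
mkdir -p /var/lib/docker
mount /dev/sdb /var/lib/docker
# persist the mount across reboots
echo '/dev/sdb /var/lib/docker ext4 defaults 0 2' >> /etc/fstab
```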
Cloud VMs:
Cloud SLES 15 SP1 images use xfs for their / file system and don't use swap out of the box, so kubeadm passes all pre-flight checks without errors.
# become root
$ sudo -s
# install docker
$ zypper refresh
$ zypper install docker
# configure sysctl for Kubernetes
$ cat <<EOF >> /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.forwarding=1
net.bridge.bridge-nf-call-iptables=1
EOF
# add Google repository for installing Kubernetes packages
#$ zypper addrepo --type yum --gpgcheck-strict --refresh https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 google-k8s
#or
$ cat <<EOF > /etc/zypp/repos.d/google-k8s.repo
[google-k8s]
name=google-k8s
enabled=1
autorefresh=1
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
type=rpm-md
gpgcheck=1
repo_gpgcheck=1
pkg_gpgcheck=1
EOF
# import Google repository keys
$ rpm --import https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
$ rpm --import https://packages.cloud.google.com/yum/doc/yum-key.gpg
$ rpm -q gpg-pubkey --qf '%{name}-%{version}-%{release} --> %{summary}\n'
# the following repository was needed only for the GCP image;
# the other images were able to install conntrack-tools successfully using existing repositories
$ zypper addrepo https://download.opensuse.org/repositories/security:netfilter/SLE_12/security:netfilter.repo conntrack
$ zypper refresh conntrack
# conntrack presence is checked during kubeadm pre-flight checks,
# but zypper is unable to find the appropriate dependency for kubelet,
# so let's install it manually
$ zypper install conntrack-tools
# refresh Google repository cache and check if we see several versions of Kubernetes packages to choose from
$ zypper refresh google-k8s
$ zypper packages --repo google-k8s
# install latest available kubelet package
# ignore conntrack dependency and install kubelet (Solution 2 in my case)
$ zypper install kubelet
# install kubeadm package. kubectl and cri-tools are installed as kubeadm dependency
$ zypper install kubeadm
# force docker to use the systemd cgroup driver and the overlay2 storage driver.
# Check the links at the end of the answer for details.
# BTW, kubelet would work even with the default content of the file.
$ cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
# Not sure if this is necessary; it was taken from the Kubernetes documentation
$ mkdir -p /etc/systemd/system/docker.service.d
# let's start and enable the docker and kubelet services
$ systemctl start docker.service
$ systemctl enable docker.service
$ systemctl enable kubelet.service
# apply the sysctl settings configured earlier.
# net.bridge.bridge-nf-call-iptables becomes available only after the
# Docker service has started successfully
$ sysctl -p
# Now it's time to initialize Kubernetes master node.
# Ignore pre-flight checks for Vagrant box.
$ kubeadm init --pod-network-cidr=10.244.0.0/16
# prepare kubectl configuration to connect the cluster
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Check if api-server responds to our requests.
# At this moment it's fine to see master node in NotReady state.
$ kubectl get nodes
# Deploy Flannel network addon
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# remove the taint from the master node.
# This allows the master node to run application pods.
# At least one worker node is required if this step is skipped.
$ kubectl taint nodes --all node-role.kubernetes.io/master-
# run test pod to check if everything works fine
$ kubectl run nginx1 --image=nginx
# after some time... ~ 3-5 minutes
# check the pods' state
$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx1 1/1 Running 0 74s 10.244.0.4 suse-test <none> <none>
kube-system coredns-66bff467f8-vc2x4 1/1 Running 0 2m26s 10.244.0.2 suse-test <none> <none>
kube-system coredns-66bff467f8-w4jvq 1/1 Running 0 2m26s 10.244.0.3 suse-test <none> <none>
kube-system etcd-suse-test 1/1 Running 0 2m41s 10.4.0.4 suse-test <none> <none>
kube-system kube-apiserver-suse-test 1/1 Running 0 2m41s 10.4.0.4 suse-test <none> <none>
kube-system kube-controller-manager-suse-test 1/1 Running 0 2m41s 10.4.0.4 suse-test <none> <none>
kube-system kube-flannel-ds-amd64-mbfxp 1/1 Running 0 2m12s 10.4.0.4 suse-test <none> <none>
kube-system kube-proxy-cw5xm 1/1 Running 0 2m26s 10.4.0.4 suse-test <none> <none>
kube-system kube-scheduler-suse-test 1/1 Running 0 2m41s 10.4.0.4 suse-test <none> <none>
# check if the test pod is working fine
$ curl 10.244.0.4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...skipped...
# basic Kubernetes installation is done
Additional materials:
Container runtimes (Kubernetes documentation page)
Giving error saying "unsupported graph driver: btrfs" in SLES when trying kubeadm init
(If you have btrfs / partition, you can mount additional xfs or ext4 partition for /var/lib/docker to use overlay2 Docker storage driver)
OverlayFS support for SLES 12 (I would expect that SLES 15 supports it as well.)
Docker Compatibility Matrix
OS Distribution (x86_64): SLES 15
Enterprise Engine: 19.03.x
UCP: 3.2.x
DTR: 2.7.x
Storage Driver: overlay2,btrfs
Orchestration: Swarm mode, Kubernetes
DTR Storage Backend: NFSv4, NFSv3, Amazon S3, S3 Compliant Alternatives,
Azure Storage (Blob), Google Cloud Storage, OpenStack Swift,
Local Filesystem
SUSE: Docker Open Source Engine Guide (very useful book)
Use the OverlayFS storage driver
Install Linux Kernel 4.12 in openSUSE (in case you want to add AUFS support to the Linux kernel)
Materials about SUSE CaaSP
SUSE CaaS Platform Setup - Nothing to Everything in 1 Hour
(video is quite old but very useful)
CaaSP download page, 60 days free trial

About: CreateContainerError

I installed a K8s cluster on my laptop. It was running fine in the beginning, but when I restarted my laptop some services stopped running.
kube-system coredns-5c98db65d4-9nm6m 0/1 Error 594 12d
kube-system coredns-5c98db65d4-qwkk9 0/1 CreateContainerError
kube-system kube-scheduler-kubemaster 0/1 CreateContainerError
I searched online for a solution but could not find an appropriate answer.
Please help me resolve this issue.
I encourage you to look at the official Kubernetes documentation. Remember that your kubemaster should have at least the following resources: 2 or more CPUs and 2 GB or more of RAM.
Firstly install docker and kubeadm (as a root user) on each machine.
Initialize kubeadm (on master):
kubeadm init <args>
For example, for Calico to work correctly, you need to pass --pod-network-cidr=192.168.0.0/16 to kubeadm init:
kubeadm init --pod-network-cidr=192.168.0.0/16
Install a pod network add-on (depends on what you would like to use). You can install a pod network add-on with the following command:
kubectl apply -f <add-on.yaml>
e.g. for Calico:
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
To start using your cluster, you need to run the following on the master as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of machines by running the following on each node as root:
kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the control-plane node:
kubeadm token create
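kubeadm can also print the complete join command (token plus CA cert hash) in one step, which saves assembling it by hand:

```shell
# create a fresh bootstrap token and print the ready-to-use `kubeadm join` command
kubeadm token create --print-join-command
```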
Please let me know if it works for you.
Did you check the status of the docker and kubelet services? If not, please run the command below and verify that the services are up and running.
systemctl status docker kubelet
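If either service is down, the kubelet journal usually shows the root cause (the swap pre-flight failure is a common one). This assumes a systemd-based distro:

```shell
# show the last 50 log lines of the kubelet unit without paging
journalctl -u kubelet --no-pager -n 50
```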

How to start kubelet service?

I ran the command
systemctl stop kubelet
and then tried to start it again:
systemctl start kubelet
but it fails to start.
Here is the output of systemctl status kubelet:
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2019-06-05 15:35:34 UTC; 7s ago
Docs: https://kubernetes.io/docs/home/
Process: 31697 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 31697 (code=exited, status=255)
Because of this I am not able to run any kubectl command.
For example, kubectl get pods gives:
The connection to the server 172.31.6.149:6443 was refused - did you specify the right host or port?
Worked.
I needed to disable swap using swapoff -a
and then start the service:
systemctl start kubelet
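Note that swapoff -a only lasts until the next reboot; to make the fix permanent, comment out the swap entry in /etc/fstab. A minimal sketch (GNU sed; keeps a backup at /etc/fstab.bak):

```shell
# comment out every active swap entry in /etc/fstab so swap stays off after reboot
sudo sed -i.bak '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
```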
So I needed to reset the kubelet service.
Here are the steps:
Check the status of your docker service.
If it is stopped, start it with sudo systemctl start docker.
If it is not installed, install it:
# yum install -y kubelet kubeadm kubectl docker
Turn swap off with # swapoff -a
Now reset kubeadm with # kubeadm reset
Now try # kubeadm init
After that, check # systemctl status kubelet
It should be working.
Check nodes
kubectl get nodes
If the master node is not ready, refer to the following.
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If you are not able to create pods, check DNS:
kubectl get pods --namespace=kube-system
If the DNS pods are in the Pending state, you need to install a network add-on.
I used Calico:
kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
Now your master node is ready and you can deploy pods.

Issue with minikube / kvm2

I have this issue when running "minikube start --vm-driver kvm2":
E0109 11:23:34.536027 22169 start.go:187] Error starting host: Error
starting stopped host: Error creating VM: virError(Code=1, Domain=10,
Message='internal error: qemu unexpectedly closed the monitor:
2019-01-09 16:23:34.183+0000: Domain id=11 is tainted: host-cpu
2019-01-09T16:23:34.284194Z qemu-kvm: unrecognized feature kvm').
Result of lsmod | grep kvm:
[root@smu-ws ~]# lsmod | grep kvm
kvm_intel             225280  0
kvm                   647168  1 kvm_intel
irqbypass              16384  1 kvm
Result of virt-host-validate, everything PASS except:
QEMU: Checking for device assignment IOMMU support : WARN (No ACPI DMAR table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
Regards.
I managed to resolve this on RHEL by
$ sudo rm /usr/local/bin/minikube
$ sudo rm -rf .minikube/ (from home directory)
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.34.1/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube
$ minikube start --vm-driver kvm2
😄  minikube v0.34.1 on linux (amd64)
🔥  Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
💿  Downloading Minikube ISO ...
184.30 MB / 184.30 MB [============================================] 100.00% 0s
📶  "minikube" IP address is 192.168.39.29
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
💾  Downloading kubeadm v1.13.3
💾  Downloading kubelet v1.13.3
🚜  Pulling images required by Kubernetes v1.13.3 ...
🚀  Launching Kubernetes v1.13.3 using kubeadm ...
🔑  Configuring cluster permissions ...
🤔  Verifying component health .....
💗  kubectl is now configured to use "minikube"
🏄  Done! Thank you for using minikube!

kube-discovery fails to start when using kubeadm

I'm trying to install a cluster using kubeadm, using this guide.
I'm installing it on bare metal Ubuntu 16.04 server.
Docker is already preinstalled:
root@host# docker -v
Docker version 1.12.3, build 6b644ec
After executing the following:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
I run 'kubeadm init', and it hangs on the kube-discovery addon:
root@host# kubeadm init
Running pre-flight checks
<master/tokens> generated token: "<token>"
<master/pki> generated Certificate Authority key and certificate:
Issuer: CN=kubernetes | Subject: CN=kubernetes | CA: true
Not before: 2016-11-22 15:27:25 +0000 UTC Not After: 2026-11-20 15:27:25 +0000 UTC
Public: /etc/kubernetes/pki/ca-pub.pem
Private: /etc/kubernetes/pki/ca-key.pem
Cert: /etc/kubernetes/pki/ca.pem
<master/pki> generated API Server key and certificate:
Issuer: CN=kubernetes | Subject: CN=kube-apiserver | CA: false
Not before: 2016-11-22 15:27:25 +0000 UTC Not After: 2017-11-22 15:27:25 +0000 UTC
Alternate Names: [<ipaddress> kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local]
Public: /etc/kubernetes/pki/apiserver-pub.pem
Private: /etc/kubernetes/pki/apiserver-key.pem
Cert: /etc/kubernetes/pki/apiserver.pem
<master/pki> generated Service Account Signing keys:
Public: /etc/kubernetes/pki/sa-pub.pem
Private: /etc/kubernetes/pki/sa-key.pem
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 44.584082 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 1.003104 seconds
<master/apiclient> attempting a test deployment
<master/apiclient> test deployment succeeded
<master/discovery> created essential addon: kube-discovery, waiting for it to become ready
I can see that this pod is restarting:
root@host# kubectl get pods --all-namespaces=true
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system dummy-2088944543-dsjtb 1/1 Running 0 29m
kube-system etcd-host.test.com 1/1 Running 0 29m
kube-system kube-apiserver-host.test.com 1/1 Running 0 30m
kube-system kube-controller-manager-host.test.com 1/1 Running 0 29m
kube-system kube-discovery-1150918428-ulap3 0/1 CrashLoopBackOff 10 29m
kube-system kube-scheduler-host.test.com 1/1 Running 0 29m
root@host# kubectl logs kube-discovery-1150918428-ulap3 --namespace=kube-system
2016/11/22 13:31:32 root CA certificate does not exist: /tmp/secret/ca.pem
Do I need to provide it a certificate?
What specific version of kubernetes are you trying to install? You can check it with:
apt-get policy kubelet
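If the candidate version looks wrong, you can also list every version the repository offers and pin one explicitly when installing (the placeholder `<version>` is whatever the madison output shows; the exact strings depend on the repository at the time):

```shell
# list all kubelet versions available from the configured repositories
apt-cache madison kubelet
# then pin a matching set explicitly, e.g.:
apt-get install -y kubelet=<version> kubeadm=<version> kubectl=<version>
```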