How do I find and call the kube-apiserver in k3s / k3d (with Calico and without it)? - kubernetes

I want to configure the kube-apiserver to enable/disable admission controllers (e.g. kube-apiserver --enable-admission-plugins=NamespaceLifecycle), but I cannot find the apiserver anywhere.
When I run the following, it does not show up:
# Running this:
kubectl get pods -n kube-system
# Shows only this:
# NAME READY STATUS RESTARTS AGE
# helm-install-traefik-fvs4z 0/1 Completed 0 10d
# local-path-provisioner-5ff76fc89d-rrntw 1/1 Running 4 10d
# coredns-854c77959c-vz4s2 1/1 Running 4 10d
# metrics-server-86cbb8457f-6kl5n 1/1 Running 4 10d
# svclb-traefik-cc7zx 2/2 Running 8 10d
# calico-kube-controllers-5dc5c9f744-6bwdj 1/1 Running 4 10d
# calico-node-xcjz8 1/1 Running 4 10d
# traefik-6f9cbd9bd4-b6nk7 1/1 Running 4 10d
I thought it might be due to using Calico, but even creating a cluster without Calico still shows no kube-apiserver:
# Running this:
kubectl get pods -n kube-system
# Shows only this:
# NAME READY STATUS RESTARTS AGE
# local-path-provisioner-5ff76fc89d-d28gc 1/1 Running 0 2m31s
# coredns-854c77959c-lh78n 1/1 Running 0 2m31s
# metrics-server-86cbb8457f-xlzl2 1/1 Running 0 2m31s
# helm-install-traefik-nhxp4 0/1 Completed 0 2m31s
# svclb-traefik-hqndx 2/2 Running 0 2m21s
# traefik-6f9cbd9bd4-m42jg 1/1 Running 0 2m21s
Where is the kube-apiserver? How do I enable and disable admission controllers in k3d?

It's not running as a static pod, so it doesn't show up in kubectl get pods: in k3s the control-plane components (including the kube-apiserver) are embedded in the k3s server process. With plain k3s you would usually run that process as a systemd service unit; with k3d the same k3s server runs inside a Docker container that you can see via docker ps.
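Admission plugins are therefore configured by passing arguments through k3d/k3s rather than by editing a static pod manifest. A minimal sketch, assuming the k3d v5 flag syntax and an example cluster name of "mycluster" (older k3d releases used --k3s-server-arg instead of --k3s-arg, so check k3d cluster create --help for your version):
# create a cluster whose embedded apiserver gets the extra flag
k3d cluster create mycluster \
  --k3s-arg "--kube-apiserver-arg=enable-admission-plugins=NamespaceLifecycle@server:0"
# the server is a container, not a pod
docker ps --filter name=k3d-mycluster-server
On a plain k3s install you would instead add --kube-apiserver-arg=enable-admission-plugins=NamespaceLifecycle to the k3s server command line (e.g. in its systemd unit).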

Related

calico-kube-controller stays in pending state

I have a new install of kubernetes on Ubuntu-18 using version 1.24.3 with Calico. The calico-controller will not start:
$ sudo kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-555bc4b957-z4q2p 0/1 Pending 0 5m14s
kube-system calico-node-jz2j7 1/1 Running 0 5m15s
kube-system coredns-6d4b75cb6d-hwfx9 1/1 Running 0 5m14s
kube-system coredns-6d4b75cb6d-wdh55 1/1 Running 0 5m14s
kube-system etcd-ubuntu-18-extssd 1/1 Running 1 5m27s
kube-system kube-apiserver-ubuntu-18-extssd 1/1 Running 1 5m28s
kube-system kube-controller-manager-ubuntu-18-extssd 1/1 Running 1 5m26s
kube-system kube-proxy-t5z2r 1/1 Running 0 5m15s
kube-system kube-scheduler-ubuntu-18-extssd 1/1 Running 1 5m27s
Someone suggested setting a couple of Calico timeouts to 60 seconds, but that didn't work either.
What could be causing the calico-controller to fail to start, especially since the calico-node is running?
Also, is there a more trouble-free CNI implementation to use? Calico seems very error-prone.
I solved this by installing Weave instead:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
with the cluster initialized with this CIDR:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
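For anyone who wants to keep Calico, the scheduler's reason for the Pending state is usually visible in the pod's events; a generic check (pod name taken from the output above):
kubectl -n kube-system describe pod calico-kube-controllers-555bc4b957-z4q2p
# the Events section at the bottom typically names the blocker,
# e.g. an untolerated node taint or no node with a ready CNI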

Kubernetes API container dies constantly

I just installed from scratch a small Kubernetes test cluster on 4 Armbian/Odroid_MC1 (Debian 10) nodes. The install process is this [1], nothing fancy or special: add the k8s apt repo and install with apt.
The problem is that the API server dies constantly, roughly every 5 to 10 minutes, right after the controller-manager and the scheduler die together (they seem to stop simultaneously just before it). The API then becomes unusable for about a minute. All three services restart, things run fine for the next four to nine minutes, and the loop repeats. Logs are here [2]. This is an excerpt:
$ kubectl get pods -o wide --all-namespaces
The connection to the server 192.168.1.91:6443 was refused - did you specify the right host or port?
(a minute later)
$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-74ff55c5b-8pm9r 1/1 Running 2 88m 10.244.0.7 mc1 <none> <none>
kube-system coredns-74ff55c5b-pxdqz 1/1 Running 2 88m 10.244.0.6 mc1 <none> <none>
kube-system etcd-mc1 1/1 Running 2 88m 192.168.1.91 mc1 <none> <none>
kube-system kube-apiserver-mc1 0/1 Running 12 88m 192.168.1.91 mc1 <none> <none>
kube-system kube-controller-manager-mc1 1/1 Running 5 31m 192.168.1.91 mc1 <none> <none>
kube-system kube-flannel-ds-fxg2s 1/1 Running 5 45m 192.168.1.94 mc4 <none> <none>
kube-system kube-flannel-ds-jvvmp 1/1 Running 5 48m 192.168.1.92 mc2 <none> <none>
kube-system kube-flannel-ds-qlvbc 1/1 Running 6 45m 192.168.1.93 mc3 <none> <none>
kube-system kube-flannel-ds-ssb9t 1/1 Running 3 77m 192.168.1.91 mc1 <none> <none>
kube-system kube-proxy-7t9ff 1/1 Running 2 45m 192.168.1.93 mc3 <none> <none>
kube-system kube-proxy-8jhc7 1/1 Running 2 88m 192.168.1.91 mc1 <none> <none>
kube-system kube-proxy-cg75m 1/1 Running 2 45m 192.168.1.94 mc4 <none> <none>
kube-system kube-proxy-mq8j7 1/1 Running 2 48m 192.168.1.92 mc2 <none> <none>
kube-system kube-scheduler-mc1 1/1 Running 5 31m 192.168.1.91 mc1 <none> <none>
$ docker ps -a # (check the exited and restarted services)
CONTAINER ID NAMES STATUS IMAGE NETWORKS PORTS
0e179c6495db k8s_kube-apiserver_kube-apiserver-mc1_kube-system_c55114bd57b1bf357c8f4c0d749ae105_13 Up About a minute 66eaad223e2c
2ccb014beb73 k8s_kube-scheduler_kube-scheduler-mc1_kube-system_fe362b2b6b08ca576b7416df7f2e7845_6 Up 3 minutes 21e17680ca2d
3322f6ec1546 k8s_kube-controller-manager_kube-controller-manager-mc1_kube-system_17cf17caf36ba27e3d2ec4f113a0cf6f_6 Up 3 minutes a1ab72ce4ba2
583129da455f k8s_kube-apiserver_kube-apiserver-mc1_kube-system_c55114bd57b1bf357c8f4c0d749ae105_12 Exited (137) About a minute ago 66eaad223e2c
72268d8e1503 k8s_install-cni_kube-flannel-ds-ssb9t_kube-system_dbf3513d-dad2-462d-9107-4813acf9c23a_0 Exited (0) 5 minutes ago 263b01b3ca1f
fe013d07f186 k8s_kube-controller-manager_kube-controller-manager-mc1_kube-system_17cf17caf36ba27e3d2ec4f113a0cf6f_5 Exited (255) 3 minutes ago a1ab72ce4ba2
34ef8757b63d k8s_kube-scheduler_kube-scheduler-mc1_kube-system_fe362b2b6b08ca576b7416df7f2e7845_5 Exited (255) 3 minutes ago 21e17680ca2d
fd8e0c0ba27f k8s_coredns_coredns-74ff55c5b-8pm9r_kube-system_3b813dc9-827d-4cf6-88cc-027491b350f1_2 Up 32 minutes 15c1a66b013b
f44e2c45ed87 k8s_coredns_coredns-74ff55c5b-pxdqz_kube-system_c3b7fbf2-2064-4f3f-b1b2-dec5dad904b7_2 Up 32 minutes 15c1a66b013b
04fa4eca1240 k8s_POD_coredns-74ff55c5b-8pm9r_kube-system_3b813dc9-827d-4cf6-88cc-027491b350f1_42 Up 32 minutes k8s.gcr.io/pause:3.2 none
f00c36d6de75 k8s_POD_coredns-74ff55c5b-pxdqz_kube-system_c3b7fbf2-2064-4f3f-b1b2-dec5dad904b7_42 Up 32 minutes k8s.gcr.io/pause:3.2 none
a1d6814e1b04 k8s_kube-flannel_kube-flannel-ds-ssb9t_kube-system_dbf3513d-dad2-462d-9107-4813acf9c23a_3 Up 32 minutes 263b01b3ca1f
94b231456ed7 k8s_kube-proxy_kube-proxy-8jhc7_kube-system_cc637e27-3b14-41bd-9f04-c1779e500a3a_2 Up 33 minutes 377de0f45e5c
df91856450bd k8s_POD_kube-flannel-ds-ssb9t_kube-system_dbf3513d-dad2-462d-9107-4813acf9c23a_2 Up 34 minutes k8s.gcr.io/pause:3.2 host
b480b844671a k8s_POD_kube-proxy-8jhc7_kube-system_cc637e27-3b14-41bd-9f04-c1779e500a3a_2 Up 34 minutes k8s.gcr.io/pause:3.2 host
1d4a7bcaad38 k8s_etcd_etcd-mc1_kube-system_14b7b6d6446e21cc57f0b40571ae3958_2 Up 35 minutes 2e91dde7e952
e5d517a9c29d k8s_POD_kube-controller-manager-mc1_kube-system_17cf17caf36ba27e3d2ec4f113a0cf6f_1 Up 35 minutes k8s.gcr.io/pause:3.2 host
3a3da7dbf3ad k8s_POD_kube-apiserver-mc1_kube-system_c55114bd57b1bf357c8f4c0d749ae105_2 Up 35 minutes k8s.gcr.io/pause:3.2 host
eef29cdebf5f k8s_POD_etcd-mc1_kube-system_14b7b6d6446e21cc57f0b40571ae3958_2 Up 35 minutes k8s.gcr.io/pause:3.2 host
3631d43757bc k8s_POD_kube-scheduler-mc1_kube-system_fe362b2b6b08ca576b7416df7f2e7845_1 Up 35 minutes k8s.gcr.io/pause:3.2 host
I see no weird issues in the logs (I'm a k8s beginner). This was working until a month ago, when I reinstalled it for practice; this is probably my tenth install attempt. I've tried different options and versions and googled a lot, but can't find a solution.
What could be the reason? What else can I try? How can I get to the root of the problem?
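(Aside: generic first steps for getting at the root cause of a crash-looping apiserver are the previous container's logs and the kubelet journal; these are standard kubectl/journalctl/docker invocations, not output from this cluster:)
kubectl -n kube-system logs kube-apiserver-mc1 --previous
journalctl -u kubelet --since "30 min ago"
docker logs 583129da455f    # the exited apiserver container from the docker ps -a output above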
UPDATE 2021/02/06
The problem is not occurring anymore. Apparently, the issue was the Armbian version in this specific case. I didn't file an issue because I didn't find any clues about what specific problem to report.
The installation procedure in all cases was this:
# swapoff -a
# curl -sL get.docker.com|sh
# usermod -aG docker rodolfoap
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
# apt-get update
# apt-get install -y kubeadm kubectl kubectx # Master
# kubeadm config images pull
# kubeadm init --apiserver-advertise-address=0.0.0.0 --pod-network-cidr=10.244.0.0/16
Armbian-20.08.1 worked fine. My installation procedure has not changed since.
Armbian-20.11.3 had the issue: the API server, scheduler and coredns restarted every 5 minutes, blocking access to the API for roughly 5 out of every 8 minutes on average.
Armbian-21.02.1 works fine. It worked on the first install, same procedure.
All versions were updated to the latest kernel at the moment of install; the current one is 5.10.12-odroidxu4.
As you can see, after around two hours there are no API restarts:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE LABELS
kube-system coredns-74ff55c5b-gnvf2 1/1 Running 0 173m 10.244.0.2 mc1 k8s-app=kube-dns,pod-template-hash=74ff55c5b
kube-system coredns-74ff55c5b-wvnnz 1/1 Running 0 173m 10.244.0.3 mc1 k8s-app=kube-dns,pod-template-hash=74ff55c5b
kube-system etcd-mc1 1/1 Running 0 173m 192.168.1.91 mc1 component=etcd,tier=control-plane
kube-system kube-apiserver-mc1 1/1 Running 0 173m 192.168.1.91 mc1 component=kube-apiserver,tier=control-plane
kube-system kube-controller-manager-mc1 1/1 Running 0 173m 192.168.1.91 mc1 component=kube-controller-manager,tier=control-plane
kube-system kube-flannel-ds-c4jgv 1/1 Running 0 123m 192.168.1.93 mc3 app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-system kube-flannel-ds-cl6n5 1/1 Running 0 75m 192.168.1.94 mc4 app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-system kube-flannel-ds-z2nmw 1/1 Running 0 75m 192.168.1.92 mc2 app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-system kube-flannel-ds-zqxh7 1/1 Running 0 150m 192.168.1.91 mc1 app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-system kube-proxy-bd596 1/1 Running 0 75m 192.168.1.94 mc4 controller-revision-hash=b89db7f56,k8s-app=kube-proxy,pod-template-generation=1
kube-system kube-proxy-n6djp 1/1 Running 0 75m 192.168.1.92 mc2 controller-revision-hash=b89db7f56,k8s-app=kube-proxy,pod-template-generation=1
kube-system kube-proxy-rf4cr 1/1 Running 0 173m 192.168.1.91 mc1 controller-revision-hash=b89db7f56,k8s-app=kube-proxy,pod-template-generation=1
kube-system kube-proxy-xhl95 1/1 Running 0 123m 192.168.1.93 mc3 controller-revision-hash=b89db7f56,k8s-app=kube-proxy,pod-template-generation=1
kube-system kube-scheduler-mc1 1/1 Running 0 173m 192.168.1.91 mc1 component=kube-scheduler,tier=control-plane
Cluster is fully functional :)
I have the same problem, but with Ubuntu:
PRETTY_NAME="Ubuntu 22.04 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04 (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
The cluster works fine on:
Ubuntu 20.04 LTS
Ubuntu 18.04 LTS
I thought this might help someone else who is running Ubuntu instead of Armbian.
The solution for Ubuntu (possibly for Armbian too) is here: Issues with "stability" with Kubernetes cluster before adding networking
Apparently it is a problem with the config of containerd on those versions.
UPDATE:
The problem is that if you use sudo apt install containerd, you get version 1.5.9, which ships with SystemdCgroup = false. That setting worked for me on Ubuntu 20.04, but on Ubuntu 22.04 it doesn't. If you change it to SystemdCgroup = true, it works (containerd v1.6.2 updated this so that it is set to true by default). This will hopefully fix your problem too.
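A sketch of one way to apply that change, assuming containerd's default config path /etc/containerd/config.toml:
# write out the full default config, then switch the cgroup driver to systemd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
# restart the kubelet so the control-plane containers come back up cleanly
sudo systemctl restart kubelet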

How to resolve Kubernetes DNS issues when trying to install Weave Cloud Agents for Minikube

I was trying to install the Weave Cloud Agents for my minikube. I used the provided command
curl -Ls https://get.weave.works |sh -s -- --token=xxx
but keep getting the following error:
There was an error while performing a DNS check: checking DNS failed, the DNS in the Kubernetes cluster is not working correctly. Please check that your cluster can download images and run pods.
I have the following DNS pods:
kube-system coredns-6955765f44-7zt4x 1/1 Running 0 38m
kube-system coredns-6955765f44-xdnd9 1/1 Running 0 38m
I tried different suggestions such as https://www.jeffgeerling.com/blog/2019/debugging-networking-issues-multi-node-kubernetes-on-virtualbox or https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/. However none of them resolved my issue.
It seems to be an issue that has happened before: https://github.com/weaveworks/launcher/issues/285.
My Kubernetes version is v1.17.3.
I reproduced your issue and got the same error:
minikube v1.7.2 on Centos 7.7.1908
Docker 19.03.5
vm-driver=virtualbox
Connecting cluster to "Old Tree 34" (id: old-tree-34) on Weave Cloud
Installing Weave Cloud agents on minikube at https://192.168.99.100:8443
Performing a check of the Kubernetes installation setup.
There was an error while performing a DNS check: checking DNS failed, the DNS in the Kubernetes cluster is not working correctly. Please check that your cluster can download images and run pods.
I wasn't able to fix this problem, but I found a workaround instead: use Helm. There is a second tab, 'Helm', in 'Install the Weave Cloud Agents' with the provided command, like:
helm repo update && helm upgrade --install --wait weave-cloud \
--set token=xxx \
--namespace weave \
stable/weave-cloud
Let's install Helm and use it:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
.....
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
helm repo update
helm upgrade --install --wait weave-cloud \
> --set token=xxx \
> --namespace weave \
> stable/weave-cloud
Release "weave-cloud" does not exist. Installing it now.
NAME: weave-cloud
LAST DEPLOYED: Thu Feb 13 14:52:45 2020
NAMESPACE: weave
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME AGE
weave-agent 35s
==> v1/Pod(related)
NAME AGE
weave-agent-69fbf74889-dw77c 35s
==> v1/Secret
NAME AGE
weave-cloud 35s
==> v1/ServiceAccount
NAME AGE
weave-cloud 35s
==> v1beta1/ClusterRole
NAME AGE
weave-cloud 35s
==> v1beta1/ClusterRoleBinding
NAME AGE
weave-cloud 35s
NOTES:
Weave Cloud agents had been installed!
First, verify all Pods are running:
kubectl get pods -n weave
Next, login to Weave Cloud (https://cloud.weave.works) and verify the agents are connect to your instance.
If you need help or have any question, join our Slack to chat to us – https://slack.weave.works.
Happy hacking!
Check (wait around 10 minutes for everything to deploy):
kubectl get pods -n weave
NAME READY STATUS RESTARTS AGE
kube-state-metrics-64599b7996-d8pnw 1/1 Running 0 29m
prom-node-exporter-2lwbn 1/1 Running 0 29m
prometheus-5586cdd667-dtdqq 2/2 Running 0 29m
weave-agent-6c77dbc569-xc9qx 1/1 Running 0 29m
weave-flux-agent-65cb4694d8-sllks 1/1 Running 0 29m
weave-flux-memcached-676f88fcf7-ktwnp 1/1 Running 0 29m
weave-scope-agent-7lgll 1/1 Running 0 29m
weave-scope-cluster-agent-8fb596b6b-mddv8 1/1 Running 0 29m
[vkryvoruchko@nested-vm-image1 bin]$ kubectl get all -n weave
NAME READY STATUS RESTARTS AGE
pod/kube-state-metrics-64599b7996-d8pnw 1/1 Running 0 30m
pod/prom-node-exporter-2lwbn 1/1 Running 0 30m
pod/prometheus-5586cdd667-dtdqq 2/2 Running 0 30m
pod/weave-agent-6c77dbc569-xc9qx 1/1 Running 0 30m
pod/weave-flux-agent-65cb4694d8-sllks 1/1 Running 0 30m
pod/weave-flux-memcached-676f88fcf7-ktwnp 1/1 Running 0 30m
pod/weave-scope-agent-7lgll 1/1 Running 0 30m
pod/weave-scope-cluster-agent-8fb596b6b-mddv8 1/1 Running 0 30m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus ClusterIP 10.108.197.29 <none> 80/TCP 30m
service/weave-flux-memcached ClusterIP None <none> 11211/TCP 30m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prom-node-exporter 1 1 1 1 1 <none> 30m
daemonset.apps/weave-scope-agent 1 1 1 1 1 <none> 30m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-state-metrics 1/1 1 1 30m
deployment.apps/prometheus 1/1 1 1 30m
deployment.apps/weave-agent 1/1 1 1 31m
deployment.apps/weave-flux-agent 1/1 1 1 30m
deployment.apps/weave-flux-memcached 1/1 1 1 30m
deployment.apps/weave-scope-cluster-agent 1/1 1 1 30m
NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-state-metrics-64599b7996 1 1 1 30m
replicaset.apps/prometheus-5586cdd667 1 1 1 30m
replicaset.apps/weave-agent-69fbf74889 0 0 0 31m
replicaset.apps/weave-agent-6c77dbc569 1 1 1 30m
replicaset.apps/weave-flux-agent-65cb4694d8 1 1 1 30m
replicaset.apps/weave-flux-memcached-676f88fcf7 1 1 1 30m
replicaset.apps/weave-scope-cluster-agent-8fb596b6b 1 1 1 30m
Login to https://cloud.weave.works/ and check the same:
Started installing agents on Kubernetes cluster v1.17.2
All Weave Cloud agents are connected!

Kubernetes: how do you list components running on master?

How do you list components running on the master Kubernetes node?
I assume there should be a kubeadm or kubectl command but can't find anything.
E.g. I'm looking to see if the Scheduler is running and I've used kubeadm config view which lists:
scheduler: {}
but not sure if that means the Scheduler is not running or there's simply no config for it.
Since you installed with kubeadm, the control-plane components run as pods in the kube-system namespace, so you can run the following command to see whether the scheduler is running:
# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-4x9fp 2/2 Running 0 4d6h
coredns-86c58d9df4-bw2q9 1/1 Running 0 4d6h
coredns-86c58d9df4-gvcl9 1/1 Running 0 4d6h
etcd-k1 1/1 Running 0 4d6h
kube-apiserver-k1 1/1 Running 0 4d6h
kube-controller-manager-k1 1/1 Running 83 4d6h
kube-dash-kubernetes-dashboard-5b7cf769bc-pd2n2 1/1 Running 0 4d6h
kube-proxy-jmrrz 1/1 Running 0 4d6h
kube-scheduler-k1 1/1 Running 82 4d6h
metrics-server-8544b5c78b-k2lwt 1/1 Running 16 4d6h
tiller-deploy-5f4fc5bcc6-gvhlz 1/1 Running 0 4d6h
If you want to list all pods running on the master node (or any particular node), you can use a field selector to select the node:
kubectl get pod --all-namespaces --field-selector spec.nodeName=<nodeName>
To filter to pods in the kube-system namespace running on a particular node:
kubectl get pod -n kube-system --field-selector spec.nodeName=<nodeName>
Assuming you want to check what is running on the master node and you are unable to do that via the Kubernetes API server:
For the kubelet, since it runs as a systemd service, you can check systemctl status kubelet.service.
Other components, such as the scheduler, are run as containers by the kubelet, so you can check them with standard Docker commands such as docker ps.
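For example, a quick way to confirm the control-plane containers on that node (assuming a Docker-based container runtime; on containerd-based nodes crictl ps plays the same role):
docker ps --format '{{.Names}}' | grep -E 'kube-scheduler|kube-apiserver|kube-controller-manager|etcd'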

kubectl logs not working after creating cluster with kubeadm

I followed the guide on "Using kubeadm to Create a Cluster" but I am not able to view logs using kubectl:
root@o1:~# kubectl logs -n kube-system etcd-o1
Error from server: Get https://149.156.11.4:10250/containerLogs/kube-system/etcd-o1/etcd: tls: first record does not look like a TLS handshake
The above IP address is the cloud frontend address, not the address of the VM, which probably causes the problem. Some other kubectl commands seem to work:
root@o1:~# kubectl cluster-info
Kubernetes master is running at https://10.6.16.88:6443
KubeDNS is running at https://10.6.16.88:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@o1:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-o1 1/1 Running 0 3h
kube-system kube-apiserver-o1 1/1 Running 0 3h
kube-system kube-controller-manager-o1 1/1 Running 0 3h
kube-system kube-dns-545bc4bfd4-mhbfb 3/3 Running 0 3h
kube-system kube-flannel-ds-lw87h 2/2 Running 0 1h
kube-system kube-flannel-ds-rkqxg 2/2 Running 2 1h
kube-system kube-proxy-hnhfs 1/1 Running 0 3h
kube-system kube-proxy-qql4r 1/1 Running 0 1h
kube-system kube-scheduler-o1 1/1 Running 0 3h
Please help.
Maybe change the address in $HOME/admin.conf.
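For instance, a quick way to see which address kubectl is currently pointed at (illustrative only; the right value is whatever address your VM is actually reachable on):
grep 'server:' $HOME/admin.conf
# if it shows the cloud frontend address, edit it to the VM's address
# and copy the file to ~/.kube/config again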