kube-apiserver: constantly 5 to 10% CPU, although there is no single request

I installed kind to play around with Kubernetes.
If I use top and sort by CPU usage (key C), then I see that kube-apiserver is constantly consuming 5 to 10% CPU.
Why?
I haven't installed anything yet:
guettli@p15:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-558bd4d5db-ntg7c 1/1 Running 0 40h
kube-system coredns-558bd4d5db-sx8w9 1/1 Running 0 40h
kube-system etcd-kind-control-plane 1/1 Running 0 40h
kube-system kindnet-9zkkg 1/1 Running 0 40h
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 40h
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 40h
kube-system kube-proxy-dthwl 1/1 Running 0 40h
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 40h
local-path-storage local-path-provisioner-547f784dff-xntql 1/1 Running 0 40h
guettli@p15:~$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 40h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 40h
guettli@p15:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane,master 40h v1.21.1
guettli@p15:~$ kubectl get nodes --all-namespaces
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane,master 40h v1.21.1
I am curious. Where does the CPU usage come from? How can I investigate this?

Even in an empty cluster with just one master node, there are at least 5 components that reach out to the API server on a regular basis:
kubelet for the master node
Controller manager
Scheduler
CoreDNS
Kube proxy
This is because the API server acts as the single entry point for all components in Kubernetes: each of them watches it to learn what the cluster state should be and takes action if needed.
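If you just want a quick look at which requests are arriving, the API server exposes request counters on its metrics endpoint. A minimal sketch (the metric name can vary slightly between versions):
kubectl get --raw /metrics | grep apiserver_request_total | head -20
Each counter is labelled with the verb and resource, so the recurring watch and list requests from the components above become visible.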
If you are interested in the details, you could enable audit logs in the API server and get a very verbose file with all the requests being made.
How to do so is not the goal of this answer, but you can start from the apiserver documentation.
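For reference, a minimal sketch of what that could look like (the file paths here are assumptions; --audit-policy-file and --audit-log-path are documented kube-apiserver flags):
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
Save this as, say, /etc/kubernetes/audit-policy.yaml, then start kube-apiserver with --audit-policy-file=/etc/kubernetes/audit-policy.yaml and --audit-log-path=/var/log/kubernetes/audit.log; every request, including the periodic ones from the components listed above, will appear in the log.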

Related

Kiali dashboard not able to fetch the k8s namespace applications

I have successfully installed Istio and deployed a sample app, and the application is up and running.
root@master:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
mydata-v1-847cd777c4-kc495 2/2 Running 0 39m
mydata-v2-65bbf55977-j67xp 2/2 Running 0 39m
myweb-66dc56ccd6-5g64b 2/2 Running 0 40m
root@master:~# kubectl get pod -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-784c89f4cf-cxpcz 1/1 Running 0 15d
istio-egressgateway-bd477794-qv7n8 1/1 Running 0 15d
istio-ingressgateway-79df7c789f-qlqcf 1/1 Running 0 15d
istiod-6dc55bbdd-t5klg 1/1 Running 0 15d
jaeger-7f78b6fb65-xhz8j 1/1 Running 0 15d
kiali-dc84967d9-99lwv 1/1 Running 1 13d
prometheus-7bfddb8dbf-nd4gn 2/2 Running 35 15d
Next, I changed the Kiali service from ClusterIP to NodePort to access the dashboard from the browser:
kubectl patch svc kiali -n istio-system --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"},{"op":"replace","path":"/spec/ports/0/nodePort","value":30010}]'
Finally, I was able to access the dashboard via the NodePort at http://machineip_port/, and I could see my k8s namespaces, but without any apps; please find the attached screenshot.
Could someone please help? I have been running into this issue for the last week.
The problem is that
"Namespaces that do not exist at the time of install but are created
later in the future will not be accessible by Kiali" (see the Kiali documentation).
So, first, keep in mind that you should not edit Kiali's ConfigMap directly, but only the Kiali custom resource, which the Kiali Operator watches.
Run kubectl edit kiali kiali in the namespace where that custom resource lives.
Then add the following under spec:
spec:
  deployment:
    accessible_namespaces:
    - "**"
This will give Kiali access to all current namespaces and to any you'll create in the future.
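If you prefer a one-liner, the same change can be applied as a merge patch (a sketch; it assumes the custom resource is named kiali and lives in istio-system):
kubectl patch kiali kiali -n istio-system --type=merge -p '{"spec":{"deployment":{"accessible_namespaces":["**"]}}}'
The operator picks up the change and redeploys Kiali with access to all namespaces.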

Cannot delete kubernetes service with no deployment [closed]

I cannot force-delete a Kubernetes service, even though I don't have any deployments at the moment.
~$ kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/etcd-kubernetes-master 1/1 Running 0 26m
kube-system pod/kube-apiserver-kubernetes-master 1/1 Running 0 26m
kube-system pod/kube-controller-manager-kubernetes-master 1/1 Running 0 26m
kube-system pod/kube-flannel-ds-amd64-5h46j 0/1 CrashLoopBackOff 9 26m
kube-system pod/kube-proxy-ltz4v 1/1 Running 0 26m
kube-system pod/kube-scheduler-kubernetes-master 1/1 Running 0 26m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 48m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-flannel-ds-amd64 1 1 0 1 0 <none> 47m
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 <none> 47m
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 <none> 47m
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 <none> 47m
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 <none> 47m
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 48m
~$ kubectl get deployments --all-namespaces
No resources found
Please help me stop and delete the Kubernetes service.
It's not mandatory to have a Deployment for pods. Generally, the system pods running in the kube-system namespace are created directly as static pods.
You can delete a pod via kubectl delete po podname, a daemonset via kubectl delete ds daemonsetname, and a service via kubectl delete svc servicename.
The kubernetes service (in the default namespace) and kube-dns (in kube-system) are managed by the Kubernetes control plane and will be recreated automatically upon removal; I also don't think you have a reason to delete them.
Everything in the list you posted is a core Kubernetes component and should not be deleted. If you used kubeadm to create the cluster, you can run kubeadm reset to destroy the whole cluster.
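For illustration, a sketch using names taken from the listing above:
kubectl delete po kube-flannel-ds-amd64-5h46j -n kube-system
kubectl delete ds kube-flannel-ds-amd64 -n kube-system
sudo kubeadm reset   # only if the goal is to destroy the entire cluster
Note that deleting a pod owned by a daemonset only makes the daemonset recreate it; delete the daemonset itself to remove its pods for good.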

VirtualBox Kubernetes NodePort access

Morning,
I have a simple nginx setup that uses a NodePort service on an alternate port, 30000. I cannot figure out how to actually access it from my workstation, which is where VirtualBox is installed.
Some basic stuff:
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes               ClusterIP   10.96.0.1        <none>        443/TCP          25h
nginx-55bc597fbf-zb2ml   ClusterIP   10.101.124.73    <none>        8080/TCP         24h
nginx-service-np         NodePort    10.105.157.230   <none>        8082:30000/TCP   22h
user-login-service       NodePort    10.106.129.60    <none>        5000:31395/TCP   38m
I am using flannel
kubectl cluster-info
Kubernetes master is running at https://192.168.56.101:6443
KubeDNS is running at https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 25h v1.15.1
k8s-worker1 Ready <none> 94m v1.15.1
k8s-worker2 Ready <none> 98m v1.15.1
I set up VirtualBox NAT port forwarding, which is supposed to forward 30000 to 80, and also forwarded 31395 to 31396 for the user-login-service.
Trying to access the master IP at https://192.168.56.101:80 or https://192.168.56.101:31396 fails. I tried http as well, but cluster-info shows the master using https, and kubernetes is on 443/tcp.
The master and the workers each have two adapters. One adapter is NAT and allows outbound traffic (e.g., for apt-get commands); it assigns the same 10.0.3.15 address to all three nodes.
The other adapter is host-only and gives the servers their addresses on the 192.168.56.0 network. I set those as static using netplan.
The three servers can see each other fine, and external traffic works.
/etc/netplan# kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-4xg8h                    1/1     Running   17         120m
coredns-5c98db65d4-xn797                    1/1     Running   17         120m
etcd-k8s-master                             1/1     Running   8          25h
kube-apiserver-k8s-master                   1/1     Running   8          25h
kube-controller-manager-dashap-k8s-master   1/1     Running   12         25h
kube-flannel-ds-amd64-6fw7x                 1/1     Running   0          25h
kube-flannel-ds-amd64-hd4ng                 1/1     Running   0          122m
kube-flannel-ds-amd64-z2wls                 1/1     Running   0          126m
kube-proxy-g8k5l                            1/1     Running   0          25h
kube-proxy-khn67                            1/1     Running   0          126m
kube-proxy-zsvqs                            1/1     Running   0          122m
kube-scheduler-k8s-master                   1/1     Running   10         25h
weave-net-2l5cs                             2/2     Running   0          44m
weave-net-n4zmr                             2/2     Running   0          44m
weave-net-v6t74                             2/2     Running   0          44m
This is my first setup, so troubleshooting is hard for me. Any help on how to reach the two services from the browser on my workstation, rather than from within the nodes, would be appreciated.
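For reference, a NodePort service listens on every node's own IP at the node port itself, so with the host-only addresses above the services should be reachable directly, without any VirtualBox NAT rules (a sketch; it assumes nginx serves plain HTTP):
curl http://192.168.56.101:30000   # nginx-service-np (8082:30000/TCP)
curl http://192.168.56.101:31395   # user-login-service (5000:31395/TCP)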

Kubernetes Dashboard is not opening

My master node IP address is 192.168.56.101. There is no worker node connected to the master yet.
master@kmaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster Ready master 125m v1.15.1
master@kmaster:~$
When I deployed the kubernetes-dashboard using the command below, why is the IP address of kubernetes-dashboard-5c8f9556c4-f2jpz 192.168.189.6?
Similarly, the other pods also have different IP addresses.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml
master@kmaster:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-7bd78b474d-r2bwg 1/1 Running 0 113m 192.168.189.2 kmaster <none> <none>
kube-system calico-node-dsgqt 1/1 Running 0 113m 192.168.56.101 kmaster <none> <none>
kube-system coredns-5c98db65d4-n2wml 1/1 Running 0 114m 192.168.189.3 kmaster <none> <none>
kube-system coredns-5c98db65d4-v5qc8 1/1 Running 0 114m 192.168.189.1 kmaster <none> <none>
kube-system etcd-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-apiserver-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-controller-manager-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-proxy-bgtmr 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-scheduler-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-f2jpz 1/1 Running 0 107m 192.168.189.6 kmaster <none> <none>
kubernetes-dashboard kubernetes-metrics-scraper-86456cdd8f-w45w2 1/1 Running 0 107m 192.168.189.4 kmaster <none> <none>
master@kmaster:~$
I am also not able to access the kubernetes-dashboard UI. I am using the link
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
and the KubeDNS link https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy is not working either,
but accessing the Kubernetes master at https://192.168.56.101:6443 does work.
master@kmaster:~$ kubectl cluster-info
Kubernetes master is running at https://192.168.56.101:6443
KubeDNS is running at https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Any suggestions?
Solution (see comments): Don't mix your physical and overlay network ranges.
Accessing KubeDNS is only possible with DNS as the protocol, not HTTP. If you want to query the DNS service, you need kubectl port-forward, not the HTTP (API) proxy.
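A minimal sketch of querying it that way (kubectl port-forward carries only TCP, so the DNS query has to use TCP as well; the local port 5353 is an arbitrary choice):
kubectl port-forward svc/kube-dns -n kube-system 5353:53
dig +tcp @127.0.0.1 -p 5353 kubernetes.default.svc.cluster.local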
If you try to access the dashboard at localhost:8081, you have to run kubectl proxy --port 8081 from your console to set up the proxy between your localhost and the k8s apiserver.
If you want to access dashboard from apiserver directly without the local proxy, try the following url https://192.168.56.101:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy (assuming your service name is kubernetes-dashboard)
You can also run kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 443, then access the dashboard with https://localhost:443
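One caveat with that last command: binding local port 443 requires root privileges, so a sketch with an unprivileged local port is often more convenient (8443 is an arbitrary choice):
kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 8443:443
Then open https://localhost:8443 in the browser.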

What is POD and SERVICE in kubectl commands?

I am probably missing some of the basics. The kubectl logs command usage is the following:
"kubectl logs [-f] [-p] POD [-c CONTAINER] [options]"
The list of my pods is the following:
ubuntu@master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-master 1/1 Running 0 24m
kube-system kube-apiserver-master 1/1 Running 0 24m
kube-system kube-controller-manager-master 1/1 Running 0 24m
kube-system kube-discovery-982812725-3kt85 1/1 Running 0 24m
kube-system kube-dns-2247936740-kimly 3/3 Running 0 24m
kube-system kube-proxy-amd64-gwv99 1/1 Running 0 20m
kube-system kube-proxy-amd64-r08h9 1/1 Running 0 24m
kube-system kube-proxy-amd64-szl6w 1/1 Running 0 14m
kube-system kube-scheduler-master 1/1 Running 0 24m
kube-system kubernetes-dashboard-1655269645-x3uyt 1/1 Running 0 24m
kube-system weave-net-4g1g8 1/2 CrashLoopBackOff 7 14m
kube-system weave-net-8zdm3 1/2 CrashLoopBackOff 8 20m
kube-system weave-net-qm3q5 2/2 Running 0 24m
I assume POD in the logs command is anything from the second (NAME) column above. So, I tried the following commands.
ubuntu@master:~$ kubectl logs etcd-master
Error from server: pods "etcd-master" not found
ubuntu@master:~$ kubectl logs weave-net-4g1g8
Error from server: pods "weave-net-4g1g8" not found
ubuntu@master:~$ kubectl logs weave-net
Error from server: pods "weave-net" not found
ubuntu@master:~$ kubectl logs weave
Error from server: pods "weave" not found
So, what is the POD in the logs command?
I have the same question about services as well. How do I identify a SERVICE to supply to a command, for example the describe command?
ubuntu@master:~$ kubectl get services --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 100.64.0.1 <none> 443/TCP 40m
kube-system kube-dns 100.64.0.10 <none> 53/UDP,53/TCP 39m
kube-system kubernetes-dashboard 100.70.83.136 <nodes> 80/TCP 39m
ubuntu@master:~$ kubectl describe service kubernetes-dashboard
Error from server: services "kubernetes-dashboard" not found
ubuntu@master:~$ kubectl describe services kubernetes-dashboard
Error from server: services "kubernetes-dashboard" not found
Also, is it normal that weave-net-8zdm3 is in CrashLoopBackOff state? It seems I have one for each connected worker. If it is not normal, how can I fix it? I have found a similar question here: kube-dns and weave-net not starting, but it does not give any practical answer.
Thanks for your help!
It seems you are running your pods in a namespace other than default.
kubectl get pods --all-namespaces returns your pods, but kubectl logs etcd-master returns not found because kubectl logs only looks in the default namespace unless told otherwise. Your pods live in kube-system, so run kubectl logs etcd-master --namespace=kube-system (or the short form -n kube-system).
The same thing goes for your services.
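Concretely, with the listing above (a sketch; the container name weave is an assumption based on the standard weave-net daemonset):
kubectl logs etcd-master -n kube-system
kubectl logs weave-net-4g1g8 -n kube-system -c weave   # multi-container pod: pick a container with -c
kubectl describe service kubernetes-dashboard -n kube-system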