Cannot delete kubernetes service with no deployment [closed] - kubernetes

I cannot force delete the Kubernetes service, even though I don't have any deployments at the moment.
~$ kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/etcd-kubernetes-master 1/1 Running 0 26m
kube-system pod/kube-apiserver-kubernetes-master 1/1 Running 0 26m
kube-system pod/kube-controller-manager-kubernetes-master 1/1 Running 0 26m
kube-system pod/kube-flannel-ds-amd64-5h46j 0/1 CrashLoopBackOff 9 26m
kube-system pod/kube-proxy-ltz4v 1/1 Running 0 26m
kube-system pod/kube-scheduler-kubernetes-master 1/1 Running 0 26m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 48m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-flannel-ds-amd64 1 1 0 1 0 <none> 47m
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 <none> 47m
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 <none> 47m
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 <none> 47m
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 <none> 47m
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 48m
~$ kubectl get deployments --all-namespaces
No resources found
Please help me stop and delete the Kubernetes service.

It's not mandatory to have a Deployment for pods. Generally, the system pods running in the kube-system namespace are created directly as static pods.
You can delete a pod via kubectl delete po podname, a DaemonSet via kubectl delete ds daemonsetname, and a service via kubectl delete svc servicename.
The services kubernetes (in the default namespace) and kube-dns (in kube-system) are managed by the Kubernetes control plane and will be recreated automatically if you remove them. Also, I don't think you have a reason to delete those.
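For example, if you actually wanted to remove the flannel DaemonSets from the listing above (only a sketch; do this only if you intend to drop flannel as your CNI), and to see the recreation behaviour of the managed service:
kubectl delete ds kube-flannel-ds-amd64 kube-flannel-ds-arm kube-flannel-ds-arm64 kube-flannel-ds-ppc64le kube-flannel-ds-s390x -n kube-system
kubectl delete svc kubernetes
kubectl get svc kubernetes   # reappears almost immediately, recreated by the API server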

The resources you posted are all core Kubernetes components; they should not be deleted. If you used kubeadm to create the cluster, you can run kubeadm reset to destroy the whole cluster.
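A sketch of a typical teardown on the node, assuming a kubeadm-created cluster (kubeadm reset itself prints hints about CNI and iptables state it does not clean up):
sudo kubeadm reset
sudo rm -rf /etc/cni/net.d      # leftover CNI configuration, e.g. from flannel
rm -rf $HOME/.kube/config       # kubeconfig pointing at the destroyed cluster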

Related

What is the difference between CNI calico and calico tigera? [closed]

I am unsure what the difference between "plain calico"
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
and the "calico tigera" (operator) is.
helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
helm install calico projectcalico/tigera-operator --version v3.24.1\
--create-namespace -f values.yaml --namespace tigera-operator
I only really need a CNI, ideally the least convoluted one.
My impression is that Tigera is somehow a "new, extended version", and it makes me sad to suddenly see a much fuller K8s cluster because of it (it seems as if the Calico developers mainly wanted funding and needed to blow up the complexity for the fame of their product, but I might be wrong, hence the question).
root@cp:~# kubectl get all -A | grep -e 'NAMESPACE\|calico'
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver pod/calico-apiserver-8665d9fcfb-6z7sv 1/1 Running 0 7m30s
calico-apiserver pod/calico-apiserver-8665d9fcfb-95rlh 1/1 Running 0 7m30s
calico-system pod/calico-kube-controllers-78687bb75f-ns5nj 1/1 Running 0 8m3s
calico-system pod/calico-node-2q8h9 1/1 Running 0 7m43s
calico-system pod/calico-typha-6d48dfd49d-p5p47 1/1 Running 0 7m47s
calico-system pod/csi-node-driver-9gjc4 2/2 Running 0 8m4s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
calico-apiserver service/calico-api ClusterIP 10.105.6.52 <none> 443/TCP 7m30s
calico-system service/calico-kube-controllers-metrics ClusterIP 10.105.39.117 <none> 9094/TCP 8m3s
calico-system service/calico-typha ClusterIP 10.102.152.6 <none> 5473/TCP 8m5s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
calico-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 8m4s
calico-system daemonset.apps/csi-node-driver 1 1 1 1 1 kubernetes.io/os=linux 8m4s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
calico-apiserver deployment.apps/calico-apiserver 2/2 2 2 7m30s
calico-system deployment.apps/calico-kube-controllers 1/1 1 1 8m3s
calico-system deployment.apps/calico-typha 1/1 1 1 8m4s
NAMESPACE NAME DESIRED CURRENT READY AGE
calico-apiserver replicaset.apps/calico-apiserver-8665d9fcfb 2 2 2 7m30s
calico-system replicaset.apps/calico-kube-controllers-78687bb75f 1 1 1 8m3s
calico-system replicaset.apps/calico-typha-588b4ff644 0 0 0 8m4s
calico-system replicaset.apps/calico-typha-6d48dfd49d 1 1 1 7m47s
Tigera is the company behind Calico; its commercial platform is marketed as a Cloud-Native Application Protection Platform (CNAPP).
If all you want is a CNI, the first option, plain Calico, is what you need.
A CNI plugin by itself is a small network plugin that wires pods up and allocates IP addresses; the tigera-operator install manages the whole Calico networking stack on top of that (node agents, Typha, kube-controllers, and the optional Calico API server), which is why you see so many extra objects.
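To make the difference concrete: with the operator you describe the desired installation in an Installation custom resource and the operator creates everything in calico-system for you, whereas the plain calico.yaml applies the DaemonSet, Deployments and RBAC objects directly into kube-system. A minimal sketch of such a resource, assuming the operator's operator.tigera.io/v1 API (the CIDR is only an example and must match your cluster's pod network):
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - cidr: 192.168.0.0/16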

kube-apiserver: constantly 5 to 10% CPU: Although there is no single request

I installed kind to play around with Kubernetes.
If I use top and sort by CPU usage (key C), then I see that kube-apiserver is constantly consuming 5 to 10% CPU.
Why?
I haven't installed anything yet:
guettli@p15:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-558bd4d5db-ntg7c 1/1 Running 0 40h
kube-system coredns-558bd4d5db-sx8w9 1/1 Running 0 40h
kube-system etcd-kind-control-plane 1/1 Running 0 40h
kube-system kindnet-9zkkg 1/1 Running 0 40h
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 40h
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 40h
kube-system kube-proxy-dthwl 1/1 Running 0 40h
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 40h
local-path-storage local-path-provisioner-547f784dff-xntql 1/1 Running 0 40h
guettli@p15:~$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 40h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 40h
guettli@p15:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane,master 40h v1.21.1
guettli@p15:~$ kubectl get nodes --all-namespaces
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane,master 40h v1.21.1
I am curious. Where does the CPU usage come from? How can I investigate this?
Even in an empty cluster with just one master node, there are at least 5 components that reach out to the API server on a regular basis:
kubelet for the master node
Controller manager
Scheduler
CoreDNS
Kube proxy
This is because the API server is the single entry point through which every component in Kubernetes learns what the cluster state should be and takes action if needed.
If you are interested in the details, you could enable audit logs in the API server and get a very verbose file with all the requests being made.
How to do so is not the goal of this answer, but you can start from the apiserver documentation.
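A quick look without audit logs is the API server's own request metrics, e.g.:
kubectl get --raw /metrics | grep apiserver_request_total | head
For audit logs, a rough sketch (not a complete setup): a minimal policy that records request metadata, plus the flags the kube-apiserver static pod would need; with kind you would additionally have to mount these paths into the control-plane node via the kind cluster configuration.
# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
# flags added to the kube-apiserver command in its static pod manifest
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit.log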

How to expose Kubernetes Dashboard with Nginx Ingress? [closed]

The whole cluster consists of 3 nodes and everything seems to run correctly:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default ingress-nginx-controller-5c8d66c76d-wk26n 1/1 Running 0 12h
ingress-nginx-2 ingress-nginx-2-controller-6bfb65b8-9zcjm 1/1 Running 0 12h
kube-system calico-kube-controllers-684bcfdc59-2p72w 1/1 Running 1 (7d11h ago) 7d11h
kube-system calico-node-4zdwr 1/1 Running 2 (5d10h ago) 7d11h
kube-system calico-node-g5zt7 1/1 Running 0 7d11h
kube-system calico-node-x4whm 1/1 Running 0 7d11h
kube-system coredns-8474476ff8-jcj96 1/1 Running 0 5d10h
kube-system coredns-8474476ff8-v5rvz 1/1 Running 0 5d10h
kube-system dns-autoscaler-5ffdc7f89d-9s7rl 1/1 Running 2 (5d10h ago) 7d11h
kube-system kube-apiserver-node1 1/1 Running 2 (5d10h ago) 7d11h
kube-system kube-controller-manager-node1 1/1 Running 3 (5d10h ago) 7d11h
kube-system kube-proxy-2x8fg 1/1 Running 2 (5d10h ago) 7d11h
kube-system kube-proxy-pqqv7 1/1 Running 0 7d11h
kube-system kube-proxy-wdb45 1/1 Running 0 7d11h
kube-system kube-scheduler-node1 1/1 Running 3 (5d10h ago) 7d11h
kube-system nginx-proxy-node2 1/1 Running 0 7d11h
kube-system nginx-proxy-node3 1/1 Running 0 7d11h
kube-system nodelocaldns-6mrqv 1/1 Running 2 (5d10h ago) 7d11h
kube-system nodelocaldns-lsv8x 1/1 Running 0 7d11h
kube-system nodelocaldns-pq6xl 1/1 Running 0 7d11h
kubernetes-dashboard dashboard-metrics-scraper-856586f554-6s52r 1/1 Running 0 4d11h
kubernetes-dashboard kubernetes-dashboard-67484c44f6-gp8r5 1/1 Running 0 4d11h
The Dashboard service works fine as well:
$ kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.233.20.30 <none> 8000/TCP 4d11h
kubernetes-dashboard ClusterIP 10.233.62.70 <none> 443/TCP 4d11h
What I did recently, was creating an Ingress to expose the Dashboard to be available globally:
$ cat ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
spec:
  defaultBackend:
    service:
      name: kubernetes-dashboard
      port:
        number: 443
After applying the configuration above, it looks like it works correctly:
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
dashboard <none> * 80 10h
However, trying to access the Dashboard on any of the URLs below, both http and https, returns Connection Refused error:
https://10.11.12.13/api/v1/namespaces/kube-system/services/kube-dns/proxy
https://10.11.12.13/api/v1/
https://10.11.12.13/
What did I miss in this configuration? Additional comment: I don't want to assign any domain to the Dashboard; at the moment it's fine to access it by IP address.
Ingress is a namespaced resource, and the kubernetes-dashboard pod is located in the "kubernetes-dashboard" namespace, so you need to move the Ingress to the "kubernetes-dashboard" namespace.
To list all namespaced k8s resources:
kubectl api-resources --namespaced=true
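A sketch of the same Ingress with the namespace set; because the dashboard service itself serves HTTPS on port 443, the ingress-nginx backend-protocol annotation is usually needed as well:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  defaultBackend:
    service:
      name: kubernetes-dashboard
      port:
        number: 443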
Are you running MetalLB or a similar load balancer for nginx, or are you using a NodePort as the ingress endpoint?
You need to reach the Ingress either via a load balancer IP or via the nginx NodePort, and AFAIK you will need a hostname/DNS entry for the Ingress rule.
If you just want to access the dashboard without a hostname, you don't need an Ingress at all but a LoadBalancer or NodePort service in front of the dashboard pods.
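For example, one way to skip the Ingress entirely (a sketch: the patch switches the existing dashboard service to a NodePort, after which it is reachable on https://<node-ip>:<allocated-port>):
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard   # shows the allocated port, e.g. 443:3xxxx/TCP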

Understanding Kubernetes networking, pods with same ip

I checked the pods in the kube-system namespace and noticed that some pods share the same IP address. The pods that share an IP address appear to be on the same node.
The Kubernetes documentation says that "every Pod gets its own IP address" (https://kubernetes.io/docs/concepts/cluster-administration/networking/). I'm confused as to how some pods ended up with the same IP.
This was reported in issue 51322 and can depend on the network plugin you are using.
The issue was seen when using the basic kubenet network plugin on Linux.
Sometimes a reset/reboot can help.
I suspect the nodes have been configured with overlapping podCIDRs in such cases.
The pod CIDRs can be checked with kubectl get node -o jsonpath='{.items[*].spec.podCIDR}'
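A slightly friendlier variant that prints the CIDR next to each node name (plain kubectl custom-columns, nothing cluster-specific assumed):
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR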
Please check the manifests of the pods that have the same IP address as their node. If they have hostNetwork: true set, then this is not an issue.
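For example (substitute one of your own kube-system pod names), the following prints true for host-networked pods such as etcd or kube-apiserver:
kubectl get pod <pod-name> -n kube-system -o jsonpath='{.spec.hostNetwork}'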
[screenshot: master-node after logging in using PuTTY]
[screenshot: worker-node01 after logging in using PuTTY]
It clearly shows a separate CIDR for the Weave network. So it depends on the network plug-in, and in some cases the plug-in overrides the pod CIDR provided during initialization.
[screenshot: after re-deploying across the new node, worker-node02]
Yes. I have checked my two-node cluster created using kubeadm on VMs running on AWS.
In the manifest files for the static pods, hostNetwork: true is set.
The pods are:
-rw------- 1 root root 2100 Feb 4 16:48 etcd.yaml
-rw------- 1 root root 3669 Feb 4 16:48 kube-apiserver.yaml
-rw------- 1 root root 3346 Feb 4 16:48 kube-controller-manager.yaml
-rw------- 1 root root 1385 Feb 4 16:48 kube-scheduler.yaml
I have checked with both Weave and Flannel.
All other pods get their IPs from the CIDR that was set during cluster initialization by kubeadm:
kubeadm init --pod-network-cidr=10.244.0.0/16
ubuntu@master-node:~$ kubectl get all -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default pod/my-nginx-deployment-5976fbfd94-2n2ff 1/1 Running 0 20m 10.244.1.17 worker-node01
default pod/my-nginx-deployment-5976fbfd94-4sghq 1/1 Running 0 20m 10.244.1.12 worker-node01
default pod/my-nginx-deployment-5976fbfd94-57lfp 1/1 Running 0 20m 10.244.1.14 worker-node01
default pod/my-nginx-deployment-5976fbfd94-77nrr 1/1 Running 0 20m 10.244.1.18 worker-node01
default pod/my-nginx-deployment-5976fbfd94-m7qbn 1/1 Running 0 20m 10.244.1.15 worker-node01
default pod/my-nginx-deployment-5976fbfd94-nsxvm 1/1 Running 0 20m 10.244.1.19 worker-node01
default pod/my-nginx-deployment-5976fbfd94-r5hr6 1/1 Running 0 20m 10.244.1.16 worker-node01
default pod/my-nginx-deployment-5976fbfd94-whtcg 1/1 Running 0 20m 10.244.1.13 worker-node01
kube-system pod/coredns-f9fd979d6-nghhz 1/1 Running 0 63m 10.244.0.3 master-node
kube-system pod/coredns-f9fd979d6-pdbrx 1/1 Running 0 63m 10.244.0.2 master-node
kube-system pod/etcd-master-node 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/kube-apiserver-master-node 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/kube-controller-manager-master-node 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/kube-proxy-8k9s4 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/kube-proxy-ln6gb 1/1 Running 0 37m 172.31.3.75 worker-node01
kube-system pod/kube-scheduler-master-node 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/weave-net-jc92w 2/2 Running 1 24m 172.31.8.115 master-node
kube-system pod/weave-net-l9rg2 2/2 Running 1 24m 172.31.3.75 worker-node01
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 443/TCP 63m
kube-system service/kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP,9153/TCP 63m k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
kube-system daemonset.apps/kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 63m kube-proxy k8s.gcr.io/kube-proxy:v1.19.16 k8s-app=kube-proxy
kube-system daemonset.apps/weave-net 2 2 2 2 2 24m weave,weave-npc ghcr.io/weaveworks/launcher/weave-kube:2.8.1,ghcr.io/weaveworks/launcher/weave-npc:2.8.1 name=weave-net
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
default deployment.apps/my-nginx-deployment 8/8 8 8 20m nginx nginx app=my-nginx-deployment
kube-system deployment.apps/coredns 2/2 2 2 63m coredns k8s.gcr.io/coredns:1.7.0 k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
default replicaset.apps/my-nginx-deployment-5976fbfd94 8 8 8 20m nginx nginx app=my-nginx-deployment,pod-template-hash=5976fbfd94
kube-system replicaset.apps/coredns-f9fd979d6 2 2 2 63m coredns k8s.gcr.io/coredns:1.7.0 k8s-app=kube-dns,pod-template-hash=f9fd979d6
ubuntu@master-node:~$
I will add another worker node and check.
Note: I was also testing with a one-master, three-worker cluster, where pods were getting IPs from some other CIDRs (10.38.x.x and 10.39.x.x). I am not sure why, but the order in which the steps are followed seems to matter; I could not fix that cluster.

What is POD and SERVICE in kubectl commands?

I am probably missing something basic. The kubectl logs command usage is the following:
"kubectl logs [-f] [-p] POD [-c CONTAINER] [options]"
list of my pods is the following:
ubuntu@master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-master 1/1 Running 0 24m
kube-system kube-apiserver-master 1/1 Running 0 24m
kube-system kube-controller-manager-master 1/1 Running 0 24m
kube-system kube-discovery-982812725-3kt85 1/1 Running 0 24m
kube-system kube-dns-2247936740-kimly 3/3 Running 0 24m
kube-system kube-proxy-amd64-gwv99 1/1 Running 0 20m
kube-system kube-proxy-amd64-r08h9 1/1 Running 0 24m
kube-system kube-proxy-amd64-szl6w 1/1 Running 0 14m
kube-system kube-scheduler-master 1/1 Running 0 24m
kube-system kubernetes-dashboard-1655269645-x3uyt 1/1 Running 0 24m
kube-system weave-net-4g1g8 1/2 CrashLoopBackOff 7 14m
kube-system weave-net-8zdm3 1/2 CrashLoopBackOff 8 20m
kube-system weave-net-qm3q5 2/2 Running 0 24m
I assume POD in the logs command is anything from the second ("NAME") column above. So I try the following commands.
ubuntu@master:~$ kubectl logs etcd-master
Error from server: pods "etcd-master" not found
ubuntu@master:~$ kubectl logs weave-net-4g1g8
Error from server: pods "weave-net-4g1g8" not found
ubuntu@master:~$ kubectl logs weave-net
Error from server: pods "weave-net" not found
ubuntu@master:~$ kubectl logs weave
Error from server: pods "weave" not found
So, what is the POD in the logs command?
I have got the same question about services as well. How do I identify a SERVICE to supply to a command, for example to the 'describe' command?
ubuntu@master:~$ kubectl get services --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 100.64.0.1 <none> 443/TCP 40m
kube-system kube-dns 100.64.0.10 <none> 53/UDP,53/TCP 39m
kube-system kubernetes-dashboard 100.70.83.136 <nodes> 80/TCP 39m
ubuntu@master:~$ kubectl describe service kubernetes-dashboard
Error from server: services "kubernetes-dashboard" not found
ubuntu@master:~$ kubectl describe services kubernetes-dashboard
Error from server: services "kubernetes-dashboard" not found
Also, is it normal that weave-net-8zdm3 is in CrashLoopBackOff state? It seems I have one for each connected worker. If it is not normal, how can I fix it? I found a similar question here: kube-dns and weave-net not starting, but it does not give a practical answer.
Thanks for your help!
It seems your pods are running in a namespace other than default.
kubectl get pods --all-namespaces returns your pods, but kubectl logs etcd-master returns "not found" because kubectl logs looks in the default namespace (there is no --all-namespaces flag for logs). Pass the namespace shown in the first column of your listing, e.g. kubectl logs etcd-master --namespace=kube-system (or -n kube-system).
The same thing goes for your services.
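For example, since kubernetes-dashboard also lives in kube-system according to your kubectl get services output:
kubectl describe service kubernetes-dashboard -n kube-system
kubectl logs kubernetes-dashboard-1655269645-x3uyt -n kube-system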