pods "kubernetes-dashboard-7ffd448895-56tlr" not found - kubernetes

Currently I'm using microk8s to run a local cluster.
When I run k get pods -A, this result is shown:
...
kube-system kubernetes-dashboard-7ffd448895-56tlr 1/1 Running 1 3d14h
...
OK, so there's a kubernetes-dashboard pod running in the kube-system namespace.
I tried to port-forward that pod's port 443 to local port 10443, and this result shows up:
$ k port-forward kubernetes-dashboard-7ffd448895-56tlr 10443:443
Error from server (NotFound): pods "kubernetes-dashboard-7ffd448895-56tlr" not found
I mean, the pod is right there, but the command keeps saying it isn't.
I don't understand this result and I'm stuck with no progress.
How can I port-forward my pod?

The result of k get pods -A indicates that the pod is in the namespace kube-system. Unless a resource is in the default namespace, you must specify the namespace:
k port-forward -n kube-system kubernetes-dashboard-7ffd448895-56tlr 10443:443
Alternatively, you can update your context to use a different namespace by default:
kubectl config set-context --current --namespace=kube-system
After you do that, you can work with resources in kube-system without passing -n kube-system.
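If you do change your context's default namespace this way, you can check which namespace is currently active and switch back later, e.g.:
# Show the namespace set for the active context (empty output means "default")
kubectl config view --minify | grep namespace:
# Switch back to the default namespace
kubectl config set-context --current --namespace=default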

Related

Kubectl: No resources found even though there are pods running in the namespace

I have 2 pods running in the default namespace, as shown below:
NAMESPACE NAME READY STATUS RESTARTS AGE
default alpaca-prod 1/1 Running 0 36m
default alpaca-test 1/1 Running 0 4m26s
kube-system coredns-78fcd69978-xd7jw 1/1 Running 0 23h
But when I try to get deployments, I do not see any:
kubectl get deployments
No resources found in default namespace.
Can someone explain this behavior?
I am running Kubernetes on Minikube.
I think these are pods that were created without a Deployment, StatefulSet, or DaemonSet.
You can run such a pod with a command like:
kubectl run nginx-test --image=nginx -n default
Pods created via a DaemonSet usually end with -xxxxx.
Pods created via a Deployment usually end with -xxxxxxxxxx-xxxxx.
Pods created via a StatefulSet usually end with -0, -1, etc.
Pods created without an owning resource keep exactly the name you specified, e.g. nginx-test, nginx, etc.
So my guess is that yours are standalone Pod resources (the last option).
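You can confirm this by checking the pod's ownerReferences; for a standalone pod the output is empty (using the alpaca-prod pod name from your listing):
# Prints the owning controller (ReplicaSet, DaemonSet, ...) if there is one
kubectl get pod alpaca-prod -n default -o jsonpath='{.metadata.ownerReferences}'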

Pods not found while using kubectl port-forward

I want to forward ports with
kubectl port-forward ...
But for this I need to find out the pod name, so I run the command
kubectl -n main_sp get pods
and get this list:
NAME READY STATUS RESTARTS AGE
main-ms-hc-78469b74c-7lfdh 1/1 Running 0 13h
I'm trying
kubectl port-forward main-ms-hc-78469b74c-7lfdh 8080:80
and I get
Error from server (NotFound): pods "main-ms-hc-78469b74c-7lfdh" not found
What am I doing wrong?
You need to specify the namespace when using port-forward as well:
$ kubectl port-forward -n main_sp main-ms-hc-78469b74c-7lfdh 8080:80
To port-forward a pod:
$ kubectl port-forward -n <namespace> <pod-name> <local-port>:<target-port>
To port-forward a pod via service name:
$ kubectl port-forward -n <namespace> svc/<service-name> <local-port>:<target-port>
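kubectl can also pick a pod for you from a deployment, which avoids looking up the generated pod name at all (placeholders are illustrative):
# Forward to a pod selected from the deployment
$ kubectl port-forward -n <namespace> deploy/<deployment-name> <local-port>:<target-port>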

Add some labels to a deployment?

I'm a beginner with K8s and I have a question about labels in Kubernetes. In a YouTube video (in French, here), I saw the following:
The presenter creates three deployments with these commands, then runs kubectl get deployments and kubectl get deployments --show-labels:
kubectl run monnginx --image nginx --labels "env=prod,group=front"
kubectl run monnginx2 --image nginx --labels "env=dev,group=front"
kubectl run monnginx3 --image nginx --labels "env=prod,group=back"
root@kubmaster:~# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
monnginx 1/1 1 1 46s
monnginx2 1/1 1 1 22s
monnginx3 1/1 1 1 10s
root@kubmaster:~# kubectl get deployments --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
monnginx 1/1 1 1 46s env=prod,group=front
monnginx2 1/1 1 1 22s env=dev,group=front
monnginx3 1/1 1 1 10s env=prod,group=back
Currently, if I try to do the same things :
root@kubermaster:~# kubectl run mynginx --image nginx --labels "env=prod,group=front"
pod/mynginx created
root@kubermaster:~# kubectl run mynginx2 --image nginx --labels "env=dev,group=front"
pod/mynginx2 created
root@kubermaster:~# kubectl run mynginx3 --image nginx --labels "env=dev,group=back"
pod/mynginx3 created
When I try the command kubectl get deployments --show-labels, the output is:
No resources found in default namespace.
But if I try kubectl get pods --show-labels, the output is:
NAME READY STATUS RESTARTS AGE LABELS
mynginx 1/1 Running 0 2m39s env=prod,group=front
mynginx2 1/1 Running 0 2m32s env=dev,group=front
mynginx3 1/1 Running 0 2m25s env=dev,group=back
If I follow every step from the video, there should be a way to put labels on deployments... But the command kubectl create deployment does not accept the --labels flag:
Error: unknown flag: --labels
Can someone explain why I get this error and how to add labels?
Thanks a lot!
That's because kubectl create deployment doesn't support a --labels flag. But you can use kubectl label to add labels to your deployment.
Examples:
# Update deployment 'my-deployment' with the label 'unhealthy' and the value 'true'.
$ kubectl label deployment my-deployment unhealthy=true
# Update deployment 'my-deployment' with the label 'status' and the value 'unhealthy', overwriting any existing value.
$ kubectl label --overwrite deployment my-deployment status=unhealthy
It works with other Kubernetes objects too.
Format: kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N
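You can check the result afterwards (my-deployment is the example name used above):
# List the deployment together with its labels
$ kubectl get deployment my-deployment --show-labels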
I think the problem is something different.
Until Kubernetes 1.17 the command kubectl run created a deployment.
Since Kubernetes 1.18 the command kubectl run creates a pod.
From the Kubernetes 1.18 release notes:
kubectl run has removed the previously deprecated generators, along with flags
unrelated to creating pods. kubectl run now only creates pods. See specific
kubectl create subcommands to create objects other than pods. (#87077,
#soltysh) [SIG Architecture, CLI and Testing]
As of 2022 you can use the following imperative command to create a pod with a label:
kubectl run POD_NAME --image IMAGE_NAME -l app=myapp
where app=myapp is the label as a key=value pair (labels are key=value, not key:value).
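If you specifically need a Deployment with labels rather than a bare pod, one option is to apply a small manifest instead of using kubectl create deployment. This is just a sketch; the name mynginx-deploy and the label values are illustrative:
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx-deploy
  labels:
    env: prod
    group: front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mynginx-deploy
  template:
    metadata:
      labels:
        app: mynginx-deploy
        env: prod
        group: front
    spec:
      containers:
      - name: nginx
        image: nginx
EOF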

Inquiring pod and service subnets from inside Kubernetes cluster

How can one inquire the Kubernetes pod and service subnets in use (e.g. 10.244.0.0/16 and 10.96.0.0/12 respectively) from inside a Kubernetes cluster in a portable and simple way?
For instance, kubectl get cm -n kube-system kubeadm-config -o yaml reports podSubnet and serviceSubnet. But this is not fully portable because a cluster may have been set up by another means than kubeadm.
kubectl get cm -n kube-system kube-proxy -o yaml reports clusterCIDR (i.e. pod subnet) and kubectl get pod -n kube-system kube-apiserver-master1 -o yaml reports the value
passed as command-line option --service-cluster-ip-range to kube-apiserver (i.e. service subnet). master1 stands for the name of any control plane node. But this seems a bit complex.
Is there a better way available e.g. with the Kubernetes 1.17 API?
I don't think it would be possible to obtain what you want in a portable and simple way.
If you don't specify the CIDR parameters, default values are assigned.
Since there are many ways to run Kubernetes, either as unmanaged clusters (kubeadm, minikube, k3s, microk8s) or managed by cloud providers (GKE, Azure, AWS), it's hard to find one way to list all CIDRs in every environment. Another obstacle can be the version of Kubernetes or of the CNI plugin.
In the Kubernetes 1.17 release notes you can find this information:
Deprecate the default service IP CIDR. The previous default was 10.0.0.0/24 which will be removed in 6 months/2 releases. Cluster admins must specify their own desired value, by using --service-cluster-ip-range on kube-apiserver.
As an example with kubeadm: $ kubeadm init --pod-network-cidr 10.100.0.0/12 --service-cidr 10.99.0.0/12
There are a few ways to get the pod and service CIDRs:
$ kubectl cluster-info dump | grep -E '(service-cluster-ip-range|cluster-cidr)'
"--service-cluster-ip-range=10.99.0.0/12",
"--cluster-cidr=10.100.0.0/12",
$ kubeadm config view | grep Subnet
podSubnet: 10.100.0.0/12
serviceSubnet: 10.99.0.0/12
But if you check all the pods in this cluster, some of them have IPs starting with 192.168.190.X or 192.168.137.X:
$ kubectl get pods -A -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx 1/1 Running 0 62m 192.168.190.129 kubeadm-worker <none> <none>
kube-system calico-kube-controllers-77c5fc8d7f-9n6m5 1/1 Running 0 118m 192.168.137.66 kubeadm-master <none> <none>
kube-system calico-node-2kx2v 1/1 Running 0 117m 10.128.0.4 kubeadm-worker <none> <none>
kube-system calico-node-8xqd9 1/1 Running 0 118m 10.128.0.3 kubeadm-master <none> <none>
kube-system coredns-66bff467f8-sgmkw 1/1 Running 0 120m 192.168.137.65 kubeadm-master <none> <none>
kube-system coredns-66bff467f8-t84ht 1/1 Running 0 120m 192.168.137.67 kubeadm-master <none> <none>
If you describe the CNI pods, you can find yet another CIDR:
CALICO_IPV4POOL_CIDR: 192.168.0.0/16
On GKE, for example, you will have:
Node CIDRs:
$ kubectl describe node | grep CIDRs
PodCIDRs: 10.52.1.0/24
PodCIDRs: 10.52.0.0/24
PodCIDRs: 10.52.2.0/24
$ gcloud container clusters describe cluster-2 --zone=europe-west2-b | grep Cidr
clusterIpv4Cidr: 10.52.0.0/14
clusterIpv4Cidr: 10.52.0.0/14
clusterIpv4CidrBlock: 10.52.0.0/14
servicesIpv4Cidr: 10.116.0.0/20
servicesIpv4CidrBlock: 10.116.0.0/20
podIpv4CidrSize: 24
servicesIpv4Cidr: 10.116.0.0/20
Honestly I don't think there is an easy and portable way to list all podCidrs and serviceCidrs in one simple command.
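One trick that is often portable enough for the service CIDR (a sketch, not guaranteed on every distribution; the service name cidr-probe is just illustrative): ask the API server to create a Service with an obviously invalid ClusterIP and read the valid range from the rejection:
# Nothing is created because the IP is rejected; the error message
# usually ends with "The range of valid IPs is <service CIDR>".
kubectl create service clusterip cidr-probe --clusterip=1.1.1.1 --tcp=80:80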

How to delete kubernetes dashboard ui pod using kubectl

I installed the Kubernetes dashboard UI, but the status is always Pending. Now I want to delete the pending pod. Querying the current pods:
[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kubernetes-dashboard-7d75c474bb-b2lwd 0/1 Pending 0 8d
Now I am trying to delete the pod:
[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl delete pod kubernetes-dashboard-7d75c474bb-b2lwd
Error from server (NotFound): pods "kubernetes-dashboard-7d75c474bb-b2lwd" not found
How can I delete the kubernetes dashboard pod?
You should either specify the namespace with your kubectl delete command or set the namespace context before executing the command.
kubectl config set-context --current --namespace=kube-system
kubectl delete pod kubernetes-dashboard-7d75c474bb-b2lwd
Unless you've previously specified otherwise, kubectl operations are performed against the default namespace. You don't have a Pod called kubernetes-dashboard-7d75c474bb-b2lwd in the default namespace, but as you can see in the kubectl get pods --all-namespaces output, you do have it in the kube-system namespace. So, instead, do:
kubectl delete pod kubernetes-dashboard-7d75c474bb-b2lwd --namespace=kube-system
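Note that the -7d75c474bb-b2lwd suffix indicates the pod is managed by a Deployment (through a ReplicaSet), so deleting only the pod will just make it get recreated. If your goal is to remove the dashboard itself, delete the owning Deployment instead (assuming it was installed as a Deployment named kubernetes-dashboard in kube-system):
kubectl delete deployment kubernetes-dashboard -n kube-system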