Listing Pods with name and field selector - Kubernetes

I need to list all the pods with status Completed and with a given name.
user@host:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
App1-something 1/1 Running 570 2d
App2-something 1/1 completed 597 2d
App3-something 1/1 completed 570 2d
App4-something 1/1 Running 597 2d
Using a field selector I can list completed pods, but I'm not able to find the right command to list the required pod with a specific name.
Something to get the output below:
App3-something 1/1 completed 570 2d
kubectl get pod --field-selector=status.phase==Succeeded, combined somehow with the pod name App3-something

You can use a comma to combine multiple conditions, like --field-selector=metadata.name=app3-something,status.phase=Succeeded.
kubectl get pod --field-selector=metadata.name=App3-something,status.phase=Succeeded
Reference: https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/
--
C:\>kubectl get pods
NAME READY STATUS RESTARTS AGE
app1-something 1/1 Running 0 62s
app2-something 1/1 Running 0 56s
app3-something 1/1 Running 0 52s
C:\>kubectl get pod --field-selector=metadata.name=app3-something,status.phase=Running
NAME READY STATUS RESTARTS AGE
app3-something 1/1 Running 0 57s

Use grep on the output of the kubectl command:
kubectl get pod --field-selector=status.phase==Succeeded | grep -n 'App3-something'
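To get just the matching row, as in the desired output, you can also drop the header line (a small variation, assuming the same pod name):
kubectl get pod --field-selector=status.phase=Succeeded --no-headers | grep 'App3-something'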

Related

How are pods in kube-system namespace managed?

I'm trying to understand how Kubernetes works, so I tried this operation on my minikube:
~ kubectl delete pod --all -n kube-system
pod "coredns-f9fd979d6-5n4b6" deleted
pod "etcd-minikube" deleted
pod "kube-apiserver-minikube" deleted
pod "kube-controller-manager-minikube" deleted
pod "kube-proxy-879lg" deleted
pod "kube-scheduler-minikube" deleted
It's okay. The pods were deleted as expected. But if I run kubectl get pods -n kube-system I will see:
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-5d25r 1/1 Running 0 50s
etcd-minikube 1/1 Running 0 50s
kube-apiserver-minikube 1/1 Running 0 50s
kube-controller-manager-minikube 1/1 Running 0 50s
kube-proxy-nlw69 1/1 Running 0 43s
kube-scheduler-minikube 1/1 Running 0 49s
Okay. I thought it was a ReplicaSet or DaemonSet:
➜ ~ kubectl get ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 18m
➜ ~ kubectl get rs -n kube-system
NAME DESIRED CURRENT READY AGE
coredns-f9fd979d6 1 1 1 18m
That is true for coredns and kube-proxy. But what about the others (apiserver, etcd, controller-manager and scheduler)? Why are they still alive?
The control plane pods run as static Pods. Static Pods are not managed by control plane controllers such as DaemonSet or ReplicaSet; they are managed directly by the kubelet daemon on the local node.
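A quick way to confirm this (a sketch, assuming the standard kubeadm/minikube layout, where static Pod manifests live in /etc/kubernetes/manifests):
# On the node itself (e.g. via minikube ssh): the manifests the kubelet watches
ls /etc/kubernetes/manifests
# The mirror Pod visible through the API server is owned by the Node object, not by a controller
kubectl get pod kube-apiserver-minikube -n kube-system -o jsonpath='{.metadata.ownerReferences[0].kind}'
# prints: Node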

Kubernetes: how do you list components running on master?

How do you list components running on the master Kubernetes node?
I assume there should be a kubeadm or kubectl command, but I can't find anything.
E.g. I'm looking to see if the Scheduler is running, and I've used kubeadm config view, which lists:
scheduler: {}
but I'm not sure whether that means the Scheduler is not running or there's simply no config for it.
Since you installed with kubeadm, the control plane components run as pods in the kube-system namespace, so you can run the following command to see if the scheduler is running.
# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-4x9fp 2/2 Running 0 4d6h
coredns-86c58d9df4-bw2q9 1/1 Running 0 4d6h
coredns-86c58d9df4-gvcl9 1/1 Running 0 4d6h
etcd-k1 1/1 Running 0 4d6h
kube-apiserver-k1 1/1 Running 0 4d6h
kube-controller-manager-k1 1/1 Running 83 4d6h
kube-dash-kubernetes-dashboard-5b7cf769bc-pd2n2 1/1 Running 0 4d6h
kube-proxy-jmrrz 1/1 Running 0 4d6h
kube-scheduler-k1 1/1 Running 82 4d6h
metrics-server-8544b5c78b-k2lwt 1/1 Running 16 4d6h
tiller-deploy-5f4fc5bcc6-gvhlz 1/1 Running 0 4d6h
If you want to know all the pods running on the master node (or any particular node), you can use a field selector to select the node.
kubectl get pod --all-namespaces --field-selector spec.nodeName=<nodeName>
To filter only pods in the kube-system namespace running on a particular node:
kubectl get pod -n kube-system --field-selector spec.nodeName=<nodeName>
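For example, using the node name from the output above (assuming the master node is named k1, as the pod name suffixes suggest):
kubectl get pod -n kube-system --field-selector spec.nodeName=k1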
Assuming that you want to check what is running on the master node and you are unable to do that via the Kubernetes API server:
The kubelet runs as a systemd service, so you can check it with systemctl status kubelet.service.
Other components, such as the scheduler, are run as containers by the kubelet, so you can check them with standard Docker commands such as docker ps.
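For example, directly on the master node (a sketch assuming a kubeadm install with the Docker container runtime):
sudo systemctl status kubelet.service          # the kubelet itself, managed by systemd
sudo docker ps --filter name=kube-scheduler    # control plane containers started by the kubelet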

Prometheus operator: alertmanager-main-0 pending and then restarting

What happened?
Kubernetes version: 1.12
Prometheus operator: release-0.1
I follow the README:
$ kubectl create -f manifests/
# It can take a few seconds for the above 'create manifests' command to fully create the following resources, so verify the resources are ready before proceeding.
$ until kubectl get customresourcedefinitions servicemonitors.monitoring.coreos.com ; do date; sleep 1; echo ""; done
$ until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
$ kubectl apply -f manifests/ # This command sometimes may need to be done twice (to workaround a race condition).
and then I run the command and the output is shown like this:
[root@VM_8_3_centos /data/hansenwu/kube-prometheus/manifests]# kubectl get pod -n monitoring
NAME READY STATUS RESTARTS AGE
alertmanager-main-0 2/2 Running 0 66s
alertmanager-main-1 1/2 Running 0 47s
grafana-54f84fdf45-kt2j9 1/1 Running 0 72s
kube-state-metrics-65b8dbf498-h7d8g 4/4 Running 0 57s
node-exporter-7mpjw 2/2 Running 0 72s
node-exporter-crfgv 2/2 Running 0 72s
node-exporter-l7s9g 2/2 Running 0 72s
node-exporter-lqpns 2/2 Running 0 72s
prometheus-adapter-5b6f856dbc-ndfwl 1/1 Running 0 72s
prometheus-k8s-0 3/3 Running 1 59s
prometheus-k8s-1 3/3 Running 1 59s
prometheus-operator-5c64c8969-lqvkb 1/1 Running 0 72s
[root@VM_8_3_centos /data/hansenwu/kube-prometheus/manifests]# kubectl get pod -n monitoring
NAME READY STATUS RESTARTS AGE
alertmanager-main-0 0/2 Pending 0 0s
grafana-54f84fdf45-kt2j9 1/1 Running 0 75s
kube-state-metrics-65b8dbf498-h7d8g 4/4 Running 0 60s
node-exporter-7mpjw 2/2 Running 0 75s
node-exporter-crfgv 2/2 Running 0 75s
node-exporter-l7s9g 2/2 Running 0 75s
node-exporter-lqpns 2/2 Running 0 75s
prometheus-adapter-5b6f856dbc-ndfwl 1/1 Running 0 75s
prometheus-k8s-0 3/3 Running 1 62s
prometheus-k8s-1 3/3 Running 1 62s
prometheus-operator-5c64c8969-lqvkb 1/1 Running 0 75s
I don't know why the pod alertmanager-main-0 goes Pending and then restarts.
When I look at the events, they show:
72s Warning FailedCreate StatefulSet create Pod alertmanager-main-0 in StatefulSet alertmanager-main failed error: The POST operation against Pod could not be completed at this time, please try again.
72s Warning FailedCreate StatefulSet create Pod alertmanager-main-0 in StatefulSet alertmanager-main failed error: The POST operation against Pod could not be completed at this time, please try again.
Most likely the alertmanager does not get enough time to start correctly.
Have a look at this answer : https://github.com/coreos/prometheus-operator/issues/965#issuecomment-460223268
You can set the paused field to true on the Alertmanager resource, and then modify the StatefulSet to check whether extending the liveness/readiness probes solves your issue.
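A rough sketch of that workaround, assuming the kube-prometheus defaults (the Alertmanager custom resource is named main in the monitoring namespace):
# Pause the operator's reconciliation so manual StatefulSet edits are not reverted
kubectl -n monitoring patch alertmanager main --type merge -p '{"spec":{"paused":true}}'
# Then raise the probe delays/timeouts on the generated StatefulSet
kubectl -n monitoring edit statefulset alertmanager-main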

Kubernetes - master node does not become Ready

I'm starting a Kubernetes cluster of 3 nodes (1 master, 2 workers).
Trying to go by steps described in Ansible playbook - https://gitlab.com/LinarNadyrov/gcp/tree/master
I apply playbook steps 1, 2 and 3 sequentially.
After that, I connect to the master to check the status:
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
NAME STATUS ROLES AGE VERSION
master NotReady master 17m v1.13.0
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-7jc4b 0/1 Pending 0 3h45m
coredns-86c58d9df4-929xf 0/1 Pending 0 3h45m
etcd-officemasterkub 1/1 Running 2 7h26m
kube-apiserver-officemasterkub 1/1 Running 2 7h26m
kube-controller-manager-officemasterkub 1/1 Running 2 7h26m
kube-flannel-ds-5jhbx 0/1 Pending 0 7h20m
kube-flannel-ds-wqfvs 0/1 Pending 0 7h20m
kube-proxy-gmngj 1/1 Running 2 7h27m
kube-proxy-ppbqp 1/1 Running 1 7h20m
kube-proxy-r2rn6 1/1 Running 1 7h20m
kube-scheduler-officemasterkub 1/1 Running 2 7h26m
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
The status is NotReady.
Could anyone help me with this?
What's the problem? What should be done to fix it? Maybe I missed something?
Thanks in advance!
Линар Надыров, the problem here is with your flannel YAML file. You did not specify any resources in the DaemonSet, so there are no flannel pods spawning.
I did not check any further, as that is reason enough for this issue to occur. You can use the upstream YAML if this is for testing purposes, or edit yours according to the provided example.
In your file, change line 43 to:
shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >> pod_network_setup.txt
You can find more about DaemonSets here.
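After the playbook applies that manifest, a quick check that the DaemonSet actually spawned pods (assuming the upstream kube-flannel.yml, which labels its pods app=flannel):
kubectl get ds -n kube-system
kubectl get pods -n kube-system -l app=flannel -o wide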

kube-dns kubedns/dnsmasq/sidecar fails to start

This is a really odd issue I've started to experience. Everything was working without issue; however, now when I start up a cluster (kubeadm) and set up flannel, kube-dns never starts. Eventually, it errors out with the following output from kubectl describe:
Error: failed to start container "sidecar": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:240: creating new parent process caused \\\"container_linux.go:1245: running lstat on namespace path \\\\\\\"/proc/7420/ns/ipc\\\\\\\" caused \\\\\\\"lstat /proc/7420/ns/ipc: no such file or directory\\\\\\\"\\\"\\n\""}
Any ideas what this error really means? I get the same looking error for dnsmasq and kubedns as well.
I am using the switch "--pod-network-cidr 10.244.0.0/16" as always. As I said, this was working, and then a few days later, it's not...
Here's the get pods output:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-machiato-0 1/1 Running 0 3m
kube-system kube-apiserver-machiato-0 1/1 Running 0 3m
kube-system kube-controller-manager-machiato-0 1/1 Running 0 2m
kube-system kube-dns-2258483030-pd8qj 0/3 ContainerCreating 0 3m
kube-system kube-flannel-ds-0z0dd 2/2 Running 0 1m
kube-system kube-flannel-ds-3dccg 2/2 Running 0 1m
kube-system kube-proxy-gc8ft 1/1 Running 0 3m
kube-system kube-proxy-tjgzn 1/1 Running 0 1m
kube-system kube-scheduler-machiato-0 1/1 Running 0 3m
Eventually, "ContainerCreating" switches to "CrashLoopBackOff" then I see the lstat error above.
Most likely it's an overlay network issue. Can you check the DNS pods' logs and see if there are any error messages?
kubectl logs -n kube-system kube-dns-2258483030-pd8qj -c kubedns
kubectl logs -n kube-system kube-dns-2258483030-pd8qj -c dnsmasq
kubectl logs -n kube-system kube-dns-2258483030-pd8qj -c sidecar
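If the containers never get far enough to produce logs (as with the lstat error above), the pod events and the kubelet logs on the node may say more, for example:
kubectl describe pod -n kube-system kube-dns-2258483030-pd8qj
journalctl -u kubelet --no-pager | tail -n 50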