Getting pod logs (Kubernetes) for a failed pod - mongodb

When I list all the pods with kubectl get pod, I get:
NAME READY STATUS RESTARTS AGE
balanced-7bbf659597-sk478 1/1 Running 0 5d13h
hello-minikube-68ff6fd96-xxbx8 1/1 Running 0 5d13h
mongo-express-5bf4b56f47-5s2b6 1/1 Running 8 (13h ago) 14h
mongodb-deployment-844789cd64-fjlgk 0/1 CrashLoopBackOff 101 (4m30s ago) 13h
But when I try to get the logs of the failed pod with kubectl logs mongodb-deployment-844789cd64-fjlgk, nothing happens, and when I describe the pod with kubectl describe mongodb-deployment-844789cd64-fjlgk I get this error:
error: the server doesn't have a resource type "mongodb-deployment-844789cd64-fjlgk"

Your pod isn't available when you try to see the log because its container has crashed; you must access the log of the previous instantiation using the following command:
kubectl logs mongodb-deployment-844789cd64-fjlgk --previous
With the --previous flag, kubectl will dump the logs of the previous instantiation of your container.
As for the describe error: kubectl describe needs a resource type, so run kubectl describe pod mongodb-deployment-844789cd64-fjlgk (with pod in front of the name) instead.
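If --previous prints nothing either, the last terminated state recorded in the pod status usually still tells you why the container exited; a minimal sketch using the pod name from this question:
kubectl get pod mongodb-deployment-844789cd64-fjlgk -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}{" "}{.status.containerStatuses[0].lastState.terminated.exitCode}{"\n"}'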

Related

Why is pod not found? Error from server (NotFound): pods "curl-pod" not found

I can list pods in my prod namespace
kubectl get pods -n prod
NAME READY STATUS RESTARTS AGE
curl-pod 1/1 Running 1 (32m ago) 38m
web 1/1 Running 1 (33m ago) 38m
But I get an error:
kubectl describe pods curl-pod
Error from server (NotFound): pods "curl-pod" not found
Getting events shows:
Normal Scheduled pod/curl-pod Successfully assigned prod/curl-pod to minikube
Why?
Kubernetes scopes resources by namespace, so you must specify the namespace; otherwise kubectl uses the default namespace.
So, you must type:
kubectl describe pod/curl-pod -n prod
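As a convenience, you can also make prod the default namespace for your current kubectl context so the -n prod flag is no longer needed; a small sketch:
kubectl config set-context --current --namespace=prod
kubectl describe pod curl-pod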

kubectl status.phase=Running returns wrong results

When I run:
kubectl get pods --field-selector=status.phase=Running
I see:
NAME READY STATUS RESTARTS AGE
k8s-fbd7b 2/2 Running 0 5m5s
testm-45gfg 1/2 Error 0 22h
I don't understand why this command gives me pods that are in the Error status.
According to the K8s API, there is no such thing as STATUS=Error.
How can I get only the pods that are in this Error status?
When I run:
kubectl get pods --field-selector=status.phase=Failed
It tells me that there are no pods in that status.
Using the kubectl get pods --field-selector=status.phase=Failed command you can display all Pods in the Failed phase.
Failed means that all containers in the Pod have terminated, and at least one container has terminated in failure (see: Pod phase):
Failed - All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.
In your example, both Pods are in the Running phase because at least one container is still running in each of these Pods:
Running - The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.
You can check the current phase of Pods using the following command:
$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
Let's check how this command works:
$ kubectl get pods
NAME READY STATUS
app-1 1/2 Error
app-2 0/1 Error
$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
app-1 Running
app-2 Failed
As you can see, only the app-2 Pod is in the Failed phase. There is still one container running in the app-1 Pod, so this Pod is in the Running phase.
To list all pods with the Error status, you can simply use:
$ kubectl get pods -A | grep Error
default app-1 1/2 Error
default app-2 0/1 Error
Additionally, it's worth mentioning that you can check the state of all containers in Pods:
$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].state}{"\n"}{end}'
app-1 {"terminated":{"containerID":"containerd://f208e2a1ff08c5ce2acf3a33da05603c1947107e398d2f5fbf6f35d8b273ac71","exitCode":2,"finishedAt":"2021-08-11T14:07:21Z","reason":"Error","startedAt":"2021-08-11T14:07:21Z"}} {"running":{"startedAt":"2021-08-11T14:07:21Z"}}
app-2 {"terminated":{"containerID":"containerd://7a66cbbf73985efaaf348ec2f7a14d8e5bf22f891bd655c4b64692005eb0439b","exitCode":2,"finishedAt":"2021-08-11T14:08:50Z","reason":"Error","startedAt":"2021-08-11T14:08:50Z"}}
You can simply grep the Error pods using:
kubectl get pods --all-namespaces | grep Error
Remove all error pods from the cluster
kubectl delete pod `kubectl get pods --namespace <yournamespace> | awk '$3 == "Error" {print $1}'` --namespace <yournamespace>
Most Pod failures return explicit error states that can be observed in the status field.
Error:
Your pod has crashed; it was scheduled on a node successfully but crashed after that. To debug it further you can use different methods or commands:
kubectl describe pod <Pod name > -n <Namespace>
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#my-pod-is-crashing-or-otherwise-unhealthy
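Two more commands that usually surface the crash reason are the pod's recent events and the logs of the previous (crashed) container instance; a sketch with placeholder names:
kubectl get events -n <namespace> --field-selector involvedObject.name=<pod-name>
kubectl logs <pod-name> -n <namespace> --previous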
Here is an overkill go-template based attempt:
kubectl get pods -o go-template='{{range $index, $element := .items}}{{range .status.containerStatuses}}{{range .state }}{{if .reason }}{{if (eq .reason "Error") }}{{$element.metadata.name}} {{$element.metadata.namespace}}{{"\n"}}{{end}}{{end}}{{end}}{{end}}{{end}}'
job1-stn45 default
My pod status:
k get pod
NAME READY STATUS RESTARTS AGE
foo 1/1 Running 1 2d11h
nginx-0 1/1 Running 3 5d10h
nginx-2 1/1 Running 3 5d10h
nginx-1 1/1 Running 3 5d10h
job1-stn45 0/1 Error 0 113m
update-test-27145740-82z7s 0/1 ImagePullBackOff 0 96m
update-test-27145500-7f2l9 0/1 ImagePullBackOff 0 5h36m

Status ErrImageNeverPull: unable to set up Kubernetes dashboard on Mac

I am using Docker Desktop to set up Kubernetes. I have used the below command to install the Kubernetes dashboard on Mac:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/test-resources/kubernetes-dashboard-local.yaml
When I use kubectl get pod --namespace=kube-system, I get:
NAME READY STATUS RESTARTS AGE
coredns-6dcc67dcbc-2qdls 1/1 Running 0 4h50m
coredns-6dcc67dcbc-nqm76 1/1 Running 0 4h50m
etcd-docker-desktop 1/1 Running 0 4h49m
kube-apiserver-docker-desktop 1/1 Running 0 4h50m
kube-controller-manager-docker-desktop 1/1 Running 0 4h49m
kube-proxy-pq9pv 1/1 Running 0 4h50m
kube-scheduler-docker-desktop 1/1 Running 0 4h49m
kubernetes-dashboard-local-599bb4877f-6nnkz 0/1 ErrImageNeverPull 0 138m
kubernetes-metrics-scraper-head-787ff8f87-rrq67 1/1 Running 0 138m
I wanted to know what that "ErrImageNeverPull" status of the pod means. It is not even allowing me to describe/delete it by that name:
kubectl describe pod kubernetes-dashboard-local-599bb4877f-6nnkz
Error from server (NotFound): pods "kubernetes-dashboard-local-599bb4877f-6nnkz" not found
How do I fix or get rid of that so that I can successfully proceed further?
That YAML file specifies:
image: kubernetes/kubernetes-dashboard-amd64:head
imagePullPolicy: Never
So ErrImageNeverPull means that (a) that exact image name doesn't exist on the node where the pod is scheduled, and (b) imagePullPolicy: Never tells it to not try to fetch it.
Since the pod is not in the default namespace, you need to provide the --namespace kube-system option to every kubectl command that tries to interact with it (not just get pod but also describe pod, delete deployment, etc.).
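For example, the describe call that failed above works once the namespace is supplied:
kubectl describe pod kubernetes-dashboard-local-599bb4877f-6nnkz --namespace kube-system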
It looks like you've pulled a deployment spec from inside the dashboard's test tree, which is intended to be used by a developer actively working on the dashboard code. The installation instructions have a different YAML file to use. (The link to the GitHub repo in those instructions is probably more stable than a link to a version-specific YAML file.)
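A minimal sketch of switching over, assuming you want to drop the dev-only manifest and deploy a tagged release's recommended.yaml instead (v2.7.0 here is just an example tag; use whichever version the installation instructions point to):
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/test-resources/kubernetes-dashboard-local.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml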

Error from server (NotFound): podmetrics.metrics.k8s.io "mem-example/memory-demo" not found

I am following this tutorial: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/
I have created the memory pod demo and I am trying to get the metrics from the pod but it is not working.
I installed the metrics server by cloning: https://github.com/kubernetes-incubator/metrics-server
And then running this command from top level:
kubectl create -f deploy/1.8+/
I am using kubernetes version 1.10.11.
The pod is definitely created:
λ kubectl get pod memory-demo --namespace=mem-example
NAME READY STATUS RESTARTS AGE
memory-demo 1/1 Running 0 6m
But the metrics command does not work and gives an error:
λ kubectl top pod memory-demo --namespace=mem-example
Error from server (NotFound): podmetrics.metrics.k8s.io "mem-example/memory-demo" not found
What did I do wrong?
There are some patches to be applied to the metrics-server deployment to get the metrics working. Follow the steps below.
kubectl delete -f deploy/1.8+/
Wait till the metrics server gets undeployed, then run the below command:
kubectl create -f https://raw.githubusercontent.com/epasham/docker-repo/master/k8s/metrics-server.yaml
master $ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-6zg78 1/1 Running 0 2h
coredns-78fcdf6894-gk4sb 1/1 Running 0 2h
etcd-master 1/1 Running 0 2h
kube-apiserver-master 1/1 Running 0 2h
kube-controller-manager-master 1/1 Running 0 2h
kube-proxy-f5z9p 1/1 Running 0 2h
kube-proxy-ghbvn 1/1 Running 0 2h
kube-scheduler-master 1/1 Running 0 2h
metrics-server-85c54d44c8-rmvxh 2/2 Running 0 1m
weave-net-4j7cl 2/2 Running 1 2h
weave-net-82fzn 2/2 Running 1 2h
master $ kubectl top pod -n kube-system
NAME CPU(cores) MEMORY(bytes)
coredns-78fcdf6894-6zg78 2m 11Mi
coredns-78fcdf6894-gk4sb 2m 9Mi
etcd-master 14m 90Mi
kube-apiserver-master 24m 425Mi
kube-controller-manager-master 26m 62Mi
kube-proxy-f5z9p 2m 19Mi
kube-proxy-ghbvn 3m 17Mi
kube-scheduler-master 8m 14Mi
metrics-server-85c54d44c8-rmvxh 1m 19Mi
weave-net-4j7cl 2m 59Mi
weave-net-82fzn 1m 60Mi
Check and verify the below lines in the metrics-server deployment manifest:
command:
- /metrics-server
- --metric-resolution=30s
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls
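If you would rather not hand-edit the manifest, the same kubelet flags can be appended with a JSON patch; this is only a sketch, and it assumes the Deployment is named metrics-server in kube-system and that its first container already has a command: list like the one above:
kubectl -n kube-system patch deployment metrics-server --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/command/-", "value": "--kubelet-preferred-address-types=InternalIP"},
  {"op": "add", "path": "/spec/template/spec/containers/0/command/-", "value": "--kubelet-insecure-tls"}
]'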
On Minikube, I had to wait for 20-25 minutes after enabling the metrics-server addon. I was getting the same error for those 20-25 minutes, but later I could see the output without attempting any other solution.
I faced a similar issue:
Error from server (NotFound): podmetrics.metrics.k8s.io "default/apple-app" not found
I followed two steps and I was able to resolve the issue.
Download the latest customized components.yaml, which is their official file used for easy deployment.
Add the following change:
# - /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
to the args/command section of the deployment specification. I have commented out the first line because it is already the entrypoint of the image used by the Kubernetes metrics-server.
$ docker image inspect k8s.gcr.io/metrics-server-amd64:v0.3.6 -f {{.ContainerConfig.Entrypoint}}
[/metrics-server]
Whether you include it or not, it doesn't matter.
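For context, in recent components.yaml files this list lives under the container's args: (the image entrypoint already supplies /metrics-server, which is why that line can stay commented out); the edited section looks roughly like this sketch, with any other release-specific arguments omitted:
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.6
  args:
  # - /metrics-server
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP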
Note: You have to wait a few seconds for it to work properly.
After this running the top command will work for you.
$ kubectl top pod apple-app
NAME CPU(cores) MEMORY(bytes)
apple-app 1m 3Mi
I know this is an old thread; maybe someone will find this answer useful.
You have to check out the following repo:
https://github.com/kubernetes-incubator/metrics-server
Go to the root of the repo and check out release-0.3.2.
Remove the default metrics server with:
kubectl delete -f deploy/1.8+/
Download the components YAML:
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
Edit the downloaded components.yaml by adding the following lines to the args section. You will then see these two lines there:
args:
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls=true
There is only one args parameter in that file.
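After saving the edit, apply the manifest so the patched metrics server is actually deployed (a small sketch; k8s-app=metrics-server is the pod label that manifest uses):
kubectl apply -f components.yaml
kubectl -n kube-system get pods -l k8s-app=metrics-server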
Deploy your pod/deployment and you should be able to do:
kubectl top pod <pod-name>

kube-dns kubedns/dnsmasq/sidecar fails to start

This is a really odd issue I've started to experience. Everything was working without issue; however, now when I start up a cluster (kubeadm) and set up flannel, kube-dns never starts up. Eventually, it errors out with the following output from kubectl describe:
Error: failed to start container "sidecar": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:240: creating new parent process caused \\\"container_linux.go:1245: running lstat on namespace path \\\\\\\"/proc/7420/ns/ipc\\\\\\\" caused \\\\\\\"lstat /proc/7420/ns/ipc: no such file or directory\\\\\\\"\\\"\\n\""}
Any ideas what this error really means? I get the same-looking error for dnsmasq and kubedns as well.
I am using the switch "--pod-network-cidr 10.244.0.0/16" as always. As I said, this was working, and then a few days later, it's not...
Here's the get pods output:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-machiato-0 1/1 Running 0 3m
kube-system kube-apiserver-machiato-0 1/1 Running 0 3m
kube-system kube-controller-manager-machiato-0 1/1 Running 0 2m
kube-system kube-dns-2258483030-pd8qj 0/3 ContainerCreating 0 3m
kube-system kube-flannel-ds-0z0dd 2/2 Running 0 1m
kube-system kube-flannel-ds-3dccg 2/2 Running 0 1m
kube-system kube-proxy-gc8ft 1/1 Running 0 3m
kube-system kube-proxy-tjgzn 1/1 Running 0 1m
kube-system kube-scheduler-machiato-0 1/1 Running 0 3m
Eventually, "ContainerCreating" switches to "CrashLoopBackOff" then I see the lstat error above.
Most likely it's an overlay network issue. Can you check the DNS pod's log messages and see if there are any errors?
kubectl logs -n kube-system kube-dns-2258483030-pd8qj -c kubedns
kubectl logs -n kube-system kube-dns-2258483030-pd8qj -c dnsmasq
kubectl logs -n kube-system kube-dns-2258483030-pd8qj -c sidecar
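If those logs are empty because the containers never actually start (the lstat error comes from the container runtime rather than from kube-dns itself), the pod's events and the node-level runtime logs are the next place to look; a sketch that assumes the node runs kubelet and Docker under systemd:
kubectl -n kube-system describe pod kube-dns-2258483030-pd8qj
# on the node where the pod is scheduled:
journalctl -u kubelet --since "1 hour ago"
journalctl -u docker --since "1 hour ago"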