Pod is not visible after some time - kubernetes

I deployed a pod in a Kubernetes cluster. The deployment was successful and I was able to see my pod running. But after some time my pod is missing from the list of workloads. Why is this so?

Related

While deploying Kafka on on-premises k8s the status of the pod is pending for a long time

I am trying to use Helm charts for deploying Kafka and ZooKeeper in a local k8s cluster, but when checking the status of the respective pods they show PENDING for a long time and are not assigned to any node, even though I have 2 worker nodes running which are healthy.
I tried deleting the pods and redeploying, but I landed in the same situation and am not able to make the pods run. I need help on how I can get these pods running.

Kubernetes pod failed to update

We have a GitLab CI/CD pipeline to deploy pods via Kubernetes. However, the updated pod is always pending and the deleted pod is always stuck at terminating.
The controller and scheduler are both okay.
If I describe the pending pod, it shows that it is scheduled but nothing else.
These are the pending pod's logs:
$ kubectl logs -f robo-apis-dev-7b79ccf74b-nr9q2 -n xxx -f
Error from server (BadRequest): container "robo-apis-dev" in pod "robo-apis-dev-7b79ccf74b-nr9q2" is waiting to start: ContainerCreating
What could be the issue? Our Kubernetes cluster never had this issue before.
Okay, it turns out we used to have an NFS server backing a PVC. But we moved to AWS EKS recently and cleaned up the NFS servers. Maybe there were some resources from the nodes still on the NFS server. Once we temporarily rolled back the NFS server, the pods started to move to the RUNNING state.
The issue was discussed here - Orphaned pod https://github.com/kubernetes/kubernetes/issues/60987
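If you run into something similar, a quick way to check whether a stale volume or PVC is the blocker (a rough sketch; the namespace and pod name below are simply the ones from this question) is:
kubectl get pvc -n xxx
kubectl describe pod robo-apis-dev-7b79ccf74b-nr9q2 -n xxx
# the Events section of the describe output typically shows FailedMount / FailedAttachVolume messages when a volume is the problem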

Airflow Kubernetes Executor pods go into "NotReady" state instead of being deleted

Installed Airflow in Kubernetes using the repo https://airflow-helm.github.io/charts and airflow-stable/airflow with version 8.1.3, so I have Airflow v2.0.1 installed. I have it set up using an external Postgres database and the Kubernetes executor.
What I have noticed is that when Airflow-related pods are done, they go into a "NotReady" status. This happens with the update-db pod at startup and also with pods launched by the Kubernetes executor. When I go into Airflow and look at the tasks, some are successful and some are failures, but either way the related pods end up in "NotReady" status. In the values file I set the values below, thinking they would delete the pods when they are done. I've gone through the logs and made sure one of the DAGs ran as intended; the related task was successful, and of course the related pod still went into "NotReady" status when it was done.
The values below are located in Values.airflow.config.
AIRFLOW__KUBERNETES__DELETE_WORKER_PODS: "true"
AIRFLOW__KUBERNETES__DELETE_WORKER_PODS_ON_FAILURE: "true"
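For reference, in the chart's values.yaml these entries sit roughly like this (a sketch; the surrounding keys may differ between chart versions):
# sketch only - nesting follows Values.airflow.config
airflow:
  config:
    AIRFLOW__KUBERNETES__DELETE_WORKER_PODS: "true"
    AIRFLOW__KUBERNETES__DELETE_WORKER_PODS_ON_FAILURE: "true"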
So I'm not really sure what I'm missing. Has anyone seen this behavior? It's also really strange that the upgrade-db pod is doing this too.
(Screenshot: kubectl get pods for the namespace Airflow is deployed in, showing the "NotReady" pods.)
Figured it out. The k8s namespace had auto-injection of a Linkerd sidecar container for each pod. You would have to either use the Celery executor or set up some sort of k8s job to clean up completed pods and jobs that don't get cleaned up because the Linkerd container keeps running forever in those pods.
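Another option that may work, depending on how your chart exposes pod annotations, is to opt the worker pods out of injection with the standard Linkerd annotation (a sketch; where exactly this annotation goes depends on your chart version and executor pod template):
metadata:
  annotations:
    linkerd.io/inject: disabled   # tells the Linkerd injector to skip this pod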

AWS EKS Kubernetes Deployments are not ready, NodePort and LoadBalancer are not reachable

I am trying to deploy pods on an EKS cluster. Below are some screenshots which show that the AWS EKS cluster is created and active and the node group is also active. Now when I try to deploy any pod like nginx, WordPress or something else, they are not in the ready state. I tried deploying the Kubernetes dashboard and it is in the ready state, but I do not know why the others are not, and that's why their URLs are not reachable.
Also, while checking the logs it says the following:
Error from server (NotFound): pods "deployment-2048-64549f6964-87d59" not found
Your pods are in the Pending state. If a Pod is stuck in Pending, it means that it cannot be scheduled onto a node. This can happen because there are insufficient resources of one type or another that prevent the pod from being scheduled.
You can look at the output of kubectl describe <deployment/pod_name>. There will be messages from the scheduler about why it cannot schedule your pod.
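As a rough sketch (the deployment name is inferred from the pod name in the question, and <namespace> is a placeholder):
kubectl describe deployment deployment-2048 -n <namespace>
kubectl describe pod deployment-2048-64549f6964-87d59 -n <namespace>
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp
# the Events at the end of the describe output and the event list show the scheduler's reason (e.g. insufficient CPU or memory)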

kubectl get pod status always ContainerCreating

k8s version: 1.12.1
I created a pod via the API on a node and allocated an IP (through flanneld). When I used the kubectl describe pod command, I could not get the pod IP, and there was no such IP in etcd storage.
It was only a few minutes later that the IP could be obtained, and then the kubectl get pod STATUS was Running.
Has anyone ever encountered this problem?
As MatthiasSommer mentioned in a comment, the process of creating a pod might take a while.
If the pod stays in the ContainerCreating status for a longer time, you can check what is stopping it from changing to the Running status with the command:
kubectl describe pod <pod_name>
Why might creating a pod take a longer time?
Depending on what is included in the manifest, a pod can share a namespace, mount storage volumes, secrets and ConfigMaps, have resources assigned, etc.
kube-apiserver validates and configures data for API objects.
kube-scheduler needs to check and collect resource requirements, constraints, etc., and assign the pod to a node.
kubelet runs on each node and ensures that all containers fulfill the pod specification and are healthy.
kube-proxy also runs on each node and is responsible for the pod's networking.
As you can see, there are many requests, validations and syncs involved, and it takes a while to create a pod that fulfills all requirements.
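If you want to watch this happen while a pod is coming up, these commands (pod name and namespace are placeholders) show the status transitions and the events emitted by the scheduler and kubelet as they act:
kubectl get pod <pod_name> -n <namespace> -w
kubectl get events -n <namespace> -w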