I am trying to create a deployment on a K8s cluster with one master and two worker nodes. The cluster runs on 3 AWS EC2 instances, and I have been using this environment for quite some time to play with Kubernetes. Three days ago, the status of all the pods started changing from Running to ContainerCreating. Only the pods scheduled on the master show as Running; the pods on the worker nodes show as ContainerCreating. When I run kubectl describe pod <podname>, it shows the following events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 34s default-scheduler Successfully assigned nginx-8586cf59-5h2dp to ip-172-31-20-57
Normal SuccessfulMountVolume 34s kubelet, ip-172-31-20-57 MountVolume.SetUp succeeded for volume "default-token-wz7rs"
Warning FailedCreatePodSandBox 4s kubelet, ip-172-31-20-57 Failed create pod sandbox.
Normal SandboxChanged 3s kubelet, ip-172-31-20-57 Pod sandbox changed, it will be killed and re-created.
This error has been bugging me, and searching online for related errors didn't turn up anything specific. I did kubeadm reset on the cluster, including the master and worker nodes, and brought the cluster up again. The node status shows Ready, but I run into the same problem whenever I try to create a deployment, for example with the command below:
kubectl run nginx --image=nginx --replicas=2
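Failed create pod sandbox on worker nodes is often caused by the CNI network plugin failing on those nodes rather than by the workload itself. A hedged first check (the plugin name depends on what you installed, e.g. flannel, calico, or weave) is whether the network plugin pods in kube-system are healthy on the workers, and what the kubelet on an affected worker is logging:
kubectl get pods -n kube-system -o wide
# On the affected worker node:
sudo journalctl -u kubelet --no-pager | tail -n 50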
This can occur if you specify a limit or request on memory and use the wrong unit.
Below triggered the message:
resources:
  limits:
    cpu: "300m"
    memory: "256m"
  requests:
    cpu: "50m"
    memory: "64m"
The correct version would be:
resources:
  limits:
    cpu: "300m"
    memory: "256Mi"
  requests:
    cpu: "50m"
    memory: "64Mi"
It might help someone else, but I spent a weekend on this until I noticed I had requested 1000m of memory instead of 1000Mi...
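In Kubernetes resource quantities the lowercase suffix m means milli-units, so memory: "256m" asks for 0.256 bytes, while "256Mi" asks for 256 mebibytes. To double-check which units actually got applied to a running Deployment, a hedged example (substitute your own Deployment name for nginx):
kubectl get deployment nginx -o jsonpath='{.spec.template.spec.containers[0].resources}'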
I run k8s on a few DO droplets and was stuck on this very issue. No other info was given - just the FailedCreatePodSandBox complaining about a file I had never seen before.
I spent a lot of time trying to figure it out; the only thing that fixed the issue for me was restarting my master and each node entirely. That got things going instantly.
sudo shutdown -r now
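If a full reboot is not an option, a lighter-weight variant that may have the same effect (a hedged sketch, assuming systemd-managed services and the Docker runtime) is restarting the container runtime and the kubelet on each node:
sudo systemctl restart docker    # or containerd, depending on your runtime
sudo systemctl restart kubelet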
I'm getting the below error whenever I try to apply an ingress resource/rules yaml file:
failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": EOF
It seems there are multiple variants of the error failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": <error>, where the trailing error can be any of the following:
context deadline exceeded
x509: certificate signed by unknown authority
Temporary Redirect
EOF
no endpoints available for service "ingress-nginx-controller-admission"
...and many more.
My Observations:
As soon as the ingress resource/rules yaml is applied, the above error is shown and the Ingress Controller restarts, as shown below:
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-5cf97b7d74-zvrr6 1/1 Running 6 30m
ingress-nginx-controller-5cf97b7d74-zvrr6 0/1 OOMKilled 6 30m
ingress-nginx-controller-5cf97b7d74-zvrr6 0/1 CrashLoopBackOff 6 30m
ingress-nginx-controller-5cf97b7d74-zvrr6 0/1 Running 7 31m
ingress-nginx-controller-5cf97b7d74-zvrr6 1/1 Running 7 32m
One possible solution (though I'm not sure) is mentioned here:
https://stackoverflow.com/a/69289313/12241977
But I'm not sure whether it would work on managed Kubernetes services like AWS EKS, as we don't have access to the kube-apiserver.
Also, the "kind: ValidatingWebhookConfiguration" section has the below field in its yaml:
clientConfig:
  service:
    namespace: ingress-nginx
    name: ingress-nginx-controller-admission
    path: /networking/v1/ingresses
So what does "path: /networking/v1/ingresses" do, and where does it reside? Or simply, where can we find this path?
I checked the validating webhook using the below command, but I was not able to figure out where to find the above path:
kubectl describe validatingwebhookconfigurations ingress-nginx-admission
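For what it's worth, the path is not a file on disk; it is the URL path of the HTTPS request the kube-apiserver sends to the admission service when an Ingress is created or updated. A hedged way to inspect the live webhook configuration and to confirm the admission service actually has endpoints behind it:
kubectl get validatingwebhookconfiguration ingress-nginx-admission -o yaml
kubectl -n ingress-nginx get svc ingress-nginx-controller-admission
kubectl -n ingress-nginx get endpoints ingress-nginx-controller-admission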
Setup Details
I installed using the Bare-metal method exposed with NodePort
Ingress Controller Version - v1.1.0
Kubernetes Cluster Version (AWS EKS): 1.21
Ok, I got this working now:
I was getting the status "OOMKilled" (Out Of Memory). So what I did was add a "limits:" section under the "resources:" section of the Deployment yaml, as below:
resources:
  requests:
    cpu: 100m
    memory: 90Mi
  limits:
    cpu: 200m
    memory: 190Mi
Now, it works fine for me.
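As a follow-up sanity check (assuming metrics-server is installed in the cluster), you can watch the controller's actual memory consumption and confirm it stays below the new limit:
kubectl top pod -n ingress-nginx
kubectl -n ingress-nginx get pods -w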
When I run minikube, I get ErrImageNeverPull intermittently, and I am not sure why. I set imagePullPolicy: Never on these Deployments (so the images built inside minikube's Docker daemon are used), and I verified that everything works fine. However, sometimes phpmyadmin shows ErrImageNeverPull, sometimes wordpress shows ErrImageNeverPull, and so on. The environment is a laptop running macOS Catalina.
I don't know the exact reason; what could be the cause?
kubectl logs wordpress-deployment-5545dcd6f5-h6mfx
Error from server (BadRequest): container "wordpress" in pod "wordpress-deployment-5545dcd6f5-h6mfx" is waiting to start: ErrImageNeverPull
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 9m14s (x2 over 9m14s) default-scheduler persistentvolumeclaim "wordpress-pv-claim" not found
Normal Scheduled 9m11s default-scheduler Successfully assigned default/wordpress-deployment-5545dcd6f5-h6mfx to minikube
Warning Failed 6m55s (x13 over 9m8s) kubelet, minikube Error: ErrImageNeverPull
Warning ErrImageNeverPull 4m8s (x26 over 9m8s) kubelet, minikube Container image "wordpress-dockerfile" is not present with pull policy of Never
Oh, and of course I also ran the following command:
# eval $(minikube docker-env)
eval $(minikube -p minikube docker-env)
Again, the strange part is that I have confirmed that all of this works correctly, and it only happens intermittently.
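One hedged check for the intermittent failures is whether the locally built image is actually present in minikube's Docker daemon at that moment (image name taken from the events above):
eval $(minikube -p minikube docker-env)
docker images | grep wordpress-dockerfile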
I fixed the problem. The cause is that I was running it on a personal laptop, and the number of Pods being created was probably more than the laptop could handle. When I ran the same thing on an actual desktop, all 10 out of 10 Pods ran fine without any errors. When I originally ran minikube start, I did not pass any cpu or memory options, and it seems the cause of the error was that the total resource usage was not taken into account.
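If you want to make the resource envelope explicit instead of relying on the defaults, minikube accepts it at start time; a hedged example (pick values the machine can actually spare):
minikube start --cpus=4 --memory=8192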
I am new to Kubernetes and trying to learn, but I am stuck with an error that I cannot find an explanation for. The Pods and Deployments in my cluster show as running perfectly in the CLI, but after a while they keep crashing and the Pods have to restart.
I did some research before posting here, and the way I understood it, I should create a Deployment so that a ReplicaSet manages my Pods' lifecycle instead of deploying Pods independently. But as you can see, the Pods in the Deployment are crashing as well.
kubectl get pods
operator-5bf8c8484c-fcmnp 0/1 CrashLoopBackOff 9 34m
operator-5bf8c8484c-phptp 0/1 CrashLoopBackOff 9 34m
operator-5bf8c8484c-wh7hm 0/1 CrashLoopBackOff 9 34m
operator-pod 0/1 CrashLoopBackOff 12 49m
kubectl describe pods operator
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/operator-pod to workernode
Normal Created 30m (x5 over 34m) kubelet, workernode Created container operator-pod
Normal Started 30m (x5 over 34m) kubelet, workernode Started container operator-pod
Normal Pulled 29m (x6 over 34m) kubelet, workernode Container image "operator-api_1:java" already present on machine
Warning BackOff 4m5s (x101 over 33m) kubelet, workernode Back-off restarting failed container
deployment yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: operator
  labels:
    app: java
spec:
  replicas: 3
  selector:
    matchLabels:
      app: call
  template:
    metadata:
      labels:
        app: call
    spec:
      containers:
      - name: operatorapi
        image: operator-api_1:java
        ports:
        - containerPort: 80
Can someone help me out? How can I debug this?
The reason is most probably that the process running in the container finished its task and terminated after a while; the pod is then restarted by the kubelet.
To solve this, I recommend checking the process running in the container and keeping it running in the foreground indefinitely. You can wrap the process in a loop, or override the container's command in the deployment.yaml (see the sketch after the reference below).
Here is a reference to help you understand and debug the reason for a pod failure:
https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/
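For example, here is a minimal sketch of overriding the container command so it stays in the foreground while you debug (the command is a placeholder, not the real entrypoint of operator-api_1:java, and it assumes the image contains a shell):
containers:
- name: operatorapi
  image: operator-api_1:java
  # Placeholder to keep the container alive for debugging; replace with the real long-running command.
  command: ["sh", "-c", "while true; do sleep 3600; done"]
  ports:
  - containerPort: 80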
There are several ways to debug such a scenario and I recommend viewing Kubernetes documentation for best-practices. I typically have success with the following 2 approaches:
Logs: You can view the logs for the application using the command below:
kubectl logs -l app=call
If you have multiple containers within that pod, you can filter it down:
kubectl logs -l app=call -c operatorapi
Events: You can get a lot of information from events, as shown below (sorted by timestamp). Keep in mind that there can be a lot of noise in events depending on the number of apps and services you have, so you may need to filter them down further:
kubectl get events --sort-by='.metadata.creationTimestamp'
Feel free to share the output from those two and I can help you debug further.
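If the event stream turns out to be too noisy, you can also narrow it down to a single object; a hedged example using the standalone pod from above:
kubectl get events --field-selector involvedObject.name=operator-pod --sort-by='.metadata.creationTimestamp'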
I have an AKS cluster with a mix of Windows and Linux nodes and an nginx-ingress.
This all worked great, but a few days ago all my windows pods have become unresponsive.
Everything is still green on the K8s dashboard, but they don't respond to HTTP requests and kubectl exec fails.
All the linux pods still work.
I created a new deployment with the exact same image and other properties, and this new pod works, responds to HTTP and kubectl exec works.
Q: How can I find out why my old pods died? How can I prevent this from occurring again in the future?
Note that this is a test cluster, so I have the luxury of being able to investigate, if this was prod I would have burned and recreated the cluster already.
Details:
https://aks-test.progress-cloud.com/eboswebApi/ is one of the old pods, https://aks-test.progress-cloud.com/eboswebApi2/ is the new pod.
When I look at the nginx log, I see a lot of connect() failed (111: Connection refused) while connecting to upstream.
When I try kubectl exec -it <podname> --namespace <namespace> -- cmd I get one of two behaviors:
Either the command immediately returns without printing anything, or I get an error:
container 1dfffa08d834953c29acb8839ea2d4c6b78b7a530371d98c16b15132d49f5c52 encountered an error during CreateProcess: failure in a Windows system call: The remote procedure call failed and did not execute. (0x6bf) extra info: {"CommandLine":"cmd","WorkingDirectory":"C:\\inetpub\\wwwroot","Environment":{...},"EmulateConsole":true,"CreateStdInPipe":true,"CreateStdOutPipe":true,"ConsoleSize":[0,0]}
command terminated with exit code 126
kubectl describe pod works on both.
The only difference I could find was that on the old pod, I don't get any events:
Events: <none>
whereas on the new pod I get a bunch of them for pulling the image etc:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 39m default-scheduler Successfully assigned ingress-basic/ebos-webapi-test-2-78786968f4-xmvfw to aksnpwin000000
Warning Failed 38m kubelet, aksnpwin000000 Error: failed to start container "ebos-webapi-test-2": Error response from daemon: hcsshim::CreateComputeSystem ebos-webapi-test-2: The binding handle is invalid.
(extra info: {"SystemType":"Container","Name":"ebos-webapi-test-2","Owner":"docker","VolumePath":"\\\\?\\Volume{dac026db-26ab-11ea-bb33-e3730ff9432d}","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\ebos-webapi-test-2","Layers":[{"ID":"8c160b6e-685a-58fc-8c4b-beb407ad09b4","Path":"C:\\ProgramData\\docker\\windowsfilter\\12061f29088664dc41c0836c911ed7ced1f6d7ed38b1c932c25cd8ca85a3a88e"},{"ID":"6a230a46-a97c-5e30-ac4a-636e62cd9253","Path":"C:\\ProgramData\\docker\\windowsfilter\\8c0ce5a9990bc433c4d937aa148a4251ef55c1aa7caccf1b2025fd64b4feee97"},{"ID":"240d5705-d8fe-555b-a966-1fc304552b64","Path":"C:\\ProgramData\\docker\\windowsfilter\\2b334b769fe19d0edbe1ad8d1ae464c8d0103a7225b0c9e30fdad52e4b454b35"},{"ID":"5f5d8837-5f62-5a76-a706-9afb789e45e4","Path":"C:\\ProgramData\\docker\\windowsfilter\\3d1767755b0897aaae21e3fb7b71e2d880de22473f0071b0dca6301bb6110077"},{"ID":"978503cb-b816-5f66-ba41-ed154db333d5","Path":"C:\\ProgramData\\docker\\windowsfilter\\53d2e85a90d2b8743b0502013355df5c5e75448858f0c1f5b435281750653520"},{"ID":"d7d0d14e-b097-5104-a492-da3f9396bb06","Path":"C:\\ProgramData\\docker\\windowsfilter\\38830351b46e7a0598daf62d914eb2bf01e6eefde7ac560e8213f118d2bd648c"},{"ID":"90b1c608-be4c-55a1-a787-db3a97670149","Path":"C:\\ProgramData\\docker\\windowsfilter\\84b71fda82ea0eacae7b9382eae2a26f3c71bf118f5c80e7556496f21e754126"},{"ID":"700711b2-d578-5d7c-a17f-14165a5b3507","Path":"C:\\ProgramData\\docker\\windowsfilter\\08dd6f93c96c1ac6acd3d2e8b60697340c90efe651f805809dbe87b6bd26a853"},{"ID":"270de12a-461c-5b0c-8976-a48ae0de2063","Path":"C:\\ProgramData\\docker\\windowsfilter\\115de87074fadbc3c44fc33813257c566753843f8f4dd7656faa111620f71f11"},{"ID":"521250bb-4f30-5ac4-8fcd-b4cf45866627","Path":"C:\\ProgramData\\docker\\windowsfilter\\291e51f5f030d2a895740fae3f61e1333b7fae50a060788040c8d926d46dbe1c"},{"ID":"6dded7bf-8c1e-53bb-920e-631e78728316","Path":"C:\\ProgramData\\docker\\windowsfilter\\938e721c29d2f2d23a00bf83e5bc60d92f9534da409d0417f479bd5f06faa080"},{"ID":"90dec4e9-89fe-56ce-a3c2-2770e6ec362c","Path":"C:\\ProgramData\\docker\\windowsfilter\\d723ebeafd1791f80949f62cfc91a532cc5ed40acfec8e0f236afdbcd00bbff2"},{"ID":"94ac6066-b6f3-5038-9e1b-d5982fcefa00","Path":"C:\\ProgramData\\docker\\windowsfilter\\00d1bb6fc8abb630f921d3651b1222352510d5821779d8a53d994173a4ba1126"},{"ID":"037c6d16-5785-5bea-bab4-bc3f69362e0c","Path":"C:\\ProgramData\\docker\\windowsfilter\\c107cf79e8805e9ce6d81ec2a798bf4f1e3b9c60836a40025272374f719f2270"}],"ProcessorWeight":5000,"HostName":"ebos-webapi-test-2-78786968f4-xmvfw","MappedDirectories":[{"HostPath":"c:\\var\\lib\\kubelet\\pods\\c44f445c-272b-11ea-b9bc-ae0ece5532e1\\volumes\\kubernetes.io~secret\\default-token-n5tnc","ContainerPath":"c:\\var\\run\\secrets\\kubernetes.io\\serviceaccount","ReadOnly":true,"BandwidthMaximum":0,"IOPSMaximum":0,"CreateInUtilityVM":false}],"HvPartition":false,"NetworkSharedContainerName":"4c9bede623553673fde0da6e8dc92f9a55de1ff823a168a35623ad8128f83ecb"})
Normal Pulling 38m (x2 over 38m) kubelet, aksnpwin000000 Pulling image "progress.azurecr.io/eboswebapi:release-2019-11-11_16-41"
Normal Pulled 38m (x2 over 38m) kubelet, aksnpwin000000 Successfully pulled image "progress.azurecr.io/eboswebapi:release-2019-11-11_16-41"
Normal Created 38m (x2 over 38m) kubelet, aksnpwin000000 Created container ebos-webapi-test-2
Normal Started 38m kubelet, aksnpwin000000 Started container ebos-webapi-test-2
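A few hedged starting points for the investigation (the old pod name is a placeholder, and --previous only returns output if the container has restarted at least once):
kubectl logs <old-pod-name> -n ingress-basic --previous
kubectl describe node aksnpwin000000
kubectl get events -n ingress-basic --sort-by='.metadata.creationTimestamp'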
I get this error when I try to deploy a Deployment with ten replicas:
0/2 nodes are available: 1 Insufficient memory, 1 node(s) had taints that the pod didn't tolerate.
I don't understand why it says two nodes. Isn't it the same node with the same problem?
I have a lot of RAM free (1 GB).
How can I fix this error without adding another node?
This is what I have in the deployment yaml file for resources:
limits:
  cpu: 1000m
  memory: 1000Mi
requests:
  cpu: 100m
  memory: 200Mi
Server:
Master:
  CPU: 2
  RAM: 2 GB (1 GB free)
Slave:
  CPU: 2
  RAM: 2 GB (1 GB free)
I think you have multiple issues here.
First, regarding the format of the error message you get:
0/2 nodes are available: 1 Insufficient memory, 1 node(s) had taints that the pod didn't tolerate.
The first part is clear: you have 2 nodes in total and the pod could not be scheduled on any of them. What follows is a list of conditions that prevented scheduling on a node. One node can be affected by multiple issues, for example low memory and insufficient CPU, so the numbers can add up to more than the total number of nodes.
The second issue is that the requests you write in your YAML file apply per replica. If you instantiate the same pod with a 100Mi memory request 5 times, they need 500Mi in total. You want to run 10 pods which each request 200Mi of memory, so you need 2000Mi of free memory.
Your error message already implies that there is not enough memory on one node. I would recommend inspecting both nodes via kubectl describe node <node-name> to find out how much free memory Kubernetes "sees" there. Kubernetes always reserves the full amount of memory a pod requests, regardless of how much the pod actually uses.
The taint in your error message tells you that the other node, possibly the master, has a taint which is not tolerated by the deployment. For more about taints and tolerations, see the documentation. In short, find out which taint on the node prevents the scheduling and remove it via kubectl taint nodes <node-name> <taint-key>- (see the sketch below).
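A hedged sketch of those inspection and cleanup steps (node names are placeholders, and the exact taint key depends on your cluster; the control-plane taint shown is the common kubeadm default):
kubectl describe node <node-name> | grep -A 8 "Allocated resources"
kubectl describe node <node-name> | grep Taints
# Only if the master carries the default kubeadm taint and you are happy to schedule workloads on it:
kubectl taint nodes <master-node-name> node-role.kubernetes.io/master:NoSchedule-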