Kubernetes continuously killing and recreating last pod - kubernetes

The last (3rd) container is continuously being deleted and recreated by Kubernetes. It goes from the Running to the Terminating state. The Kubernetes UI shows the status as: 'Terminated: ExitCode:${state.terminated.exitCode}'
My deployment YAML:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: openapi
spec:
  scaleTargetRef:
    kind: Deployment
    name: openapi
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75
---
kind: Service
apiVersion: v1
metadata:
  name: openapi
spec:
  selector:
    app: openapi
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8443
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: openapi
spec:
  template:
    metadata:
      labels:
        app: openapi
    spec:
      containers:
      - name: openapi
        image: us.gcr.io/PROJECT_ID/openapi:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
Portion of Output of kubectl get events -n namespace:
Pod Normal Created kubelet Created container
Pod Normal Started kubelet Started container
Pod Normal Killing kubelet Killing container with id docker://openapi:Need to kill Pod
ReplicaSet Normal SuccessfulCreate replicaset-controller (combined from similar events): Created pod: openapi-7db5f8d479-p7mcl
ReplicaSet Normal SuccessfulDelete replicaset-controller (combined from similar events): Deleted pod: openapi-7db5f8d479-pgmxf
HorizontalPodAutoscaler Normal SuccessfulRescale horizontal-pod-autoscaler New size: 2; reason: Current number of replicas above Spec.MaxReplicas
HorizontalPodAutoscaler Normal SuccessfulRescale horizontal-pod-autoscaler New size: 3; reason: Current number of replicas below Spec.MinReplicas
Deployment Normal ScalingReplicaSet deployment-controller Scaled up replica set openapi-7db5f8d479 to 3
Deployment Normal ScalingReplicaSet deployment-controller Scaled down replica set openapi-7db5f8d479 to 2
Output of kubectl describe pod -n default openapi-7db5f8d479-2d2nm for a pod that spawned and was killed (a different pod with a different unique ID spawns each time a pod is killed by Kubernetes):
Name: openapi-7db5f8d479-2d2nm
Namespace: default
Node: gke-testproject-default-pool-28ce3836-t4hp/10.150.0.2
Start Time: Thu, 23 Nov 2017 11:50:17 +0000
Labels: app=openapi
pod-template-hash=3861948035
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"openapi-7db5f8d479","uid":"b7b3e48f-ceb2-11e7-afe7-42010a960003"...
kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container openapi
Status: Terminating (expires Thu, 23 Nov 2017 11:51:04 +0000)
Termination Grace Period: 30s
IP:
Created By: ReplicaSet/openapi-7db5f8d479
Controlled By: ReplicaSet/openapi-7db5f8d479
Containers:
openapi:
Container ID: docker://93d2f1372a7ad004aaeb34b0bc9ee375b6ed48609f505b52495067dd0dcbb233
Image: us.gcr.io/testproject-175705/openapi:latest
Image ID: docker-pullable://us.gcr.io/testproject-175705/openapi@sha256:54b833548cbed32db36ba4808b33c87c15c4ecde673839c3922577f30b
Port: 8080/TCP
State: Terminated
Reason: Error
Exit Code: 143
Started: Thu, 23 Nov 2017 11:50:18 +0000
Finished: Thu, 23 Nov 2017 11:50:35 +0000
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-61k6c (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-61k6c:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-61k6c
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21s default-scheduler Successfully assigned openapi-7db5f8d479-2d2nm to gke-testproject-default-pool-28ce3836-t4hp
Normal SuccessfulMountVolume 21s kubelet, gke-testproject-default-pool-28ce3836-t4hp MountVolume.SetUp succeeded for volume "default-token-61k6c"
Normal Pulling 21s kubelet, gke-testproject-default-pool-28ce3836-t4hp pulling image "us.gcr.io/testproject-175705/openapi:latest"
Normal Pulled 20s kubelet, gke-testproject-default-pool-28ce3836-t4hp Successfully pulled image "us.gcr.io/testproject-175705/openapi:latest"
Normal Created 20s kubelet, gke-testproject-default-pool-28ce3836-t4hp Created container
Normal Started 20s kubelet, gke-testproject-default-pool-28ce3836-t4hp Started container
Normal Killing 3s kubelet, gke-testproject-default-pool-28ce3836-t4hp Killing container with id docker://openapi:Need to kill Pod

Check the pod logs using the commands below:
kubectl get events -w -n namespace
and
kubectl describe pod -n namespace pod_name
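Exit code 143 means the container received SIGTERM, i.e. the kubelet stopped it (here because the HPA keeps rescaling the Deployment between 2 and 3 replicas, as the events show) rather than the process crashing on its own. A couple of extra commands that can help, shown as a sketch using the names from the output above:
kubectl describe hpa openapi                                   # see the autoscaler's scale up/down decisions and current metrics
kubectl logs -n default openapi-7db5f8d479-2d2nm               # application output from the killed pod
kubectl logs -n default openapi-7db5f8d479-2d2nm --previous    # output of the previous container instance, if it was restarted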

Related

CrashLoopBackOff error when creating k8s replica set

I've created a ReplicaSet on Kubernetes using a YAML file. The ReplicaSet is created, but the pods are not starting and are giving a CrashLoopBackOff error.
Please see the YAML file and the pod status below:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: new-replica-set
  labels:
    app: new-replica-set
    type: front-end
spec:
  template:
    metadata:
      name: myimage
      labels:
        app: myimg-app
        type: front-end
    spec:
      containers:
      - name: my-busybody
        image: busybox
  replicas: 4
  selector:
    matchLabels:
      type: front-end
Here is the output when listing the pods:
new-replica-set-8v4l2 0/1 CrashLoopBackOff 10 (38s ago) 27m
new-replica-set-kd6nq 0/1 CrashLoopBackOff 10 (44s ago) 27m
new-replica-set-nkkns 0/1 CrashLoopBackOff 10 (21s ago) 27m
new-replica-set-psjcc 0/1 CrashLoopBackOff 10 (40s ago) 27m
Output of the describe command:
$ kubectl describe pods new-replica-set-8v4l2
Name: new-replica-set-8v4l2
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Wed, 03 Nov 2021 19:57:54 -0700
Labels: app=myimg-app
type=front-end
Annotations: <none>
Status: Running
IP: 172.17.0.14
IPs:
IP: 172.17.0.14
Controlled By: ReplicaSet/new-replica-set
Containers:
my-busybody:
Container ID: docker://67dec2d3a1e6d73fa4e67222e5d57fd980a1e6bf6593fbf3f275474e36956077
Image: busybox
Image ID: docker-pullable://busybox@sha256:15e927f78df2cc772b70713543d6b651e3cd8370abf86b2ea4644a9fba21107f
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 03 Nov 2021 22:12:32 -0700
Finished: Wed, 03 Nov 2021 22:12:32 -0700
Ready: False
Restart Count: 16
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lvnh6 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-lvnh6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 138m default-scheduler Successfully assigned default/new-replica-set-8v4l2 to minikube
Normal Pulled 138m kubelet Successfully pulled image "busybox" in 4.009613585s
Normal Pulled 138m kubelet Successfully pulled image "busybox" in 4.339635544s
Normal Pulled 138m kubelet Successfully pulled image "busybox" in 2.293243043s
Normal Created 137m (x4 over 138m) kubelet Created container my-busybody
Normal Started 137m (x4 over 138m) kubelet Started container my-busybody
Normal Pulled 137m kubelet Successfully pulled image "busybox" in 2.344639501s
Normal Pulling 136m (x5 over 138m) kubelet Pulling image "busybox"
Normal Pulled 136m kubelet Successfully pulled image "busybox" in 1.114394958s
Warning BackOff 61s (x231 over 138m) kubelet Back-off restarting failed container
How do I fix this?
Also, what is the best way to debug these errors?
busybox defaults to the docker command sh, which opens a shell. Because the container is not started with a terminal attached, the sh process exits immediately after container startup, leading to the CrashLoopBackOff status of your pods.
Try switching to an image that is intended to run a long-running/always-running process, e.g. nginx, or define a command (= docker ENTRYPOINT equivalent) and args (= docker CMD equivalent), e.g.:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: new-replica-set
  labels:
    app: new-replica-set
    type: front-end
spec:
  template:
    metadata:
      name: myimage
      labels:
        app: myimg-app
        type: front-end
    spec:
      containers:
      - name: my-busybody
        image: busybox
        command: ["sh"]
        args: ["-c", "while true; do echo Hello from busybox; sleep 100;done"]
  replicas: 4
  selector:
    matchLabels:
      type: front-end
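Note that a ReplicaSet does not roll out template changes to pods it has already created, so the old CrashLoopBackOff pods need to go away before the fix takes effect. A quick sketch, assuming the manifest above is saved as new-replica-set.yaml:
# Recreate the ReplicaSet so new pods pick up the command/args
kubectl delete rs new-replica-set
kubectl apply -f new-replica-set.yaml
# Watch the pods come up and stay Running
kubectl get pods -l type=front-end -w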

minikube: how to add second container to the pod?

I am looking for help on how to add a second container to an existing pod (I mean one pod with two containers). The error is CrashLoopBackOff. I am using minikube. Any assistance in resolving the issue will be appreciated.
wordpress.yaml
--------XXXX------------------
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      - name: wordpress
        image: wordpress
        ports:
        - containerPort: 80
#watch kubectl get pods
Every 2.0s: kubectl get pods cent8-minikube: Wed Jun 10 16:47:30 2020
NAME READY STATUS RESTARTS AGE
my-nginx-6b474476c4-9p4cn 1/1 Running 0 5h41m
my-nginx-6b474476c4-m2xkd 1/1 Running 0 5h41m
my-nginx-9f44b5996-744n5 1/2 CrashLoopBackOff 9 21m
my-nginx-9f44b5996-vl6g2 1/2 CrashLoopBackOff 9 22m
test-minikube-f4df69575-2sbl5 1/1 Running 0 26h
[root@cent8-minikube ~]# kubectl describe pod my-nginx-9f44b5996-vl6g2
Name: my-nginx-9f44b5996-vl6g2
Namespace: default
Priority: 0
Node: cent8-minikube/192.168.194.128
Start Time: Wed, 10 Jun 2020 16:25:14 -0700
Labels: app=nginx
pod-template-hash=9f44b5996
Annotations: <none>
Status: Running
IP: 172.17.0.8
IPs:
IP: 172.17.0.8
Controlled By: ReplicaSet/my-nginx-9f44b5996
Containers:
nginx:
Container ID: docker://5e4cfd4e726373916a105a95644a7a286966482e33eaaac986e44514aef86606
Image: nginx:1.14.2
Image ID: docker-pullable://nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 10 Jun 2020 16:25:16 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lqnj8 (ro)
wordpress:
Container ID: docker://5e37277badca1658a10d4d826428f538a45d5e0eaecabd5e196f8b6ab5848ec7
Image: wordpress
Image ID: docker-pullable://wordpress@sha256:ff8be61894e74b6a005ab54ba73aa7084b6dbd11605f12ac383549763918bf09
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 10 Jun 2020 16:46:37 -0700
Finished: Wed, 10 Jun 2020 16:46:38 -0700
Ready: False
Restart Count: 9
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lqnj8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-lqnj8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lqnj8
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 23m default-scheduler Successfully assigned default/my-nginx-9f44b5996-vl6g2 to cent8-minikube
Normal Pulled 23m kubelet, cent8-minikube Container image "nginx:1.14.2" already present on machine
Normal Created 23m kubelet, cent8-minikube Created container nginx
Normal Started 23m kubelet, cent8-minikube Started container nginx
Normal Pulling 22m (x4 over 23m) kubelet, cent8-minikube Pulling image "wordpress"
Normal Pulled 22m (x4 over 23m) kubelet, cent8-minikube Successfully pulled image "wordpress"
Normal Created 22m (x4 over 23m) kubelet, cent8-minikube Created container wordpress
Normal Started 22m (x4 over 23m) kubelet, cent8-minikube Started container wordpress
Warning BackOff 3m35s (x95 over 23m) kubelet, cent8-minikube Back-off restarting failed container
[root@cent8-minikube ~]#
[root@cent8-minikube ~]# kubectl describe pod my-nginx-9f44b5996-744n5
Name: my-nginx-9f44b5996-744n5
Namespace: default
Priority: 0
Node: cent8-minikube/192.168.194.128
Start Time: Wed, 10 Jun 2020 16:25:33 -0700
Labels: app=nginx
pod-template-hash=9f44b5996
Annotations: <none>
Status: Running
IP: 172.17.0.10
IPs:
IP: 172.17.0.10
Controlled By: ReplicaSet/my-nginx-9f44b5996
Containers:
nginx:
Container ID: docker://9e3d6f0073e51eb475c2f2677fa413509f49a07c955d04b09417811d37ba8433
Image: nginx:1.14.2
Image ID: docker-pullable://nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 10 Jun 2020 16:25:35 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lqnj8 (ro)
wordpress:
Container ID: docker://2ca8f22ab14b88973dca8d4d486f82a5b0d9bc7b84960882cff0a81afd744bf4
Image: wordpress
Image ID: docker-pullable://wordpress@sha256:ff8be61894e74b6a005ab54ba73aa7084b6dbd11605f12ac383549763918bf09
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 10 Jun 2020 16:46:41 -0700
Finished: Wed, 10 Jun 2020 16:46:42 -0700
Ready: False
Restart Count: 9
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lqnj8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-lqnj8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lqnj8
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned default/my-nginx-9f44b5996-744n5 to cent8-minikube
Normal Pulled 26m kubelet, cent8-minikube Container image "nginx:1.14.2" already present on machine
Normal Created 26m kubelet, cent8-minikube Created container nginx
Normal Started 26m kubelet, cent8-minikube Started container nginx
Normal Pulling 25m (x4 over 26m) kubelet, cent8-minikube Pulling image "wordpress"
Normal Pulled 25m (x4 over 26m) kubelet, cent8-minikube Successfully pulled image "wordpress"
Normal Created 25m (x4 over 26m) kubelet, cent8-minikube Created container wordpress
Normal Started 25m (x4 over 26m) kubelet, cent8-minikube Started container wordpress
Warning BackOff 67s (x116 over 26m) kubelet, cent8-minikube Back-off restarting failed container
[root@cent8-minikube ~]#
[root@cent8-minikube ~]# kubectl logs my-nginx-9f44b5996-vl6g2 -c wordpress
WordPress not found in /var/www/html - copying now...
Complete! WordPress has been successfully copied to /var/www/html
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.8. Set the 'ServerName' directive globally to suppress this message
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
How do I add a second container to the pod?
Where can I check the logs of the crashed container, and how do I debug it?
Could you guide me on how to fix this error?
To check the logs of a container in a pod:
kubectl logs my-pod -c my-container
To check the logs of the previous instance of a container in a pod:
kubectl logs my-pod -c my-container --previous
In your case this translates to:
kubectl logs my-nginx-9f44b5996-vl6g2 -c nginx
kubectl logs my-nginx-9f44b5996-vl6g2 -c wordpress
Running WordPress and nginx in the same pod is probably not a good idea. Check this guide on multi-container pods.
Check this guide on running WordPress on Kubernetes.
The important part of your error is this:
Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down
If you run multiple containers in a single pod, they share a network namespace. When the Service forwards to port 80 of the pod, it could reach either container. In this example, that means you can't have two containers in the same pod listening on the same port.
A better practice would be to split these two components into two separate Deployments, each with a matching Service. To do this:
Make two copies of your existing Deployment. In one, delete all mentions of Wordpress. In the other, delete the Nginx container and otherwise globally replace nginx with wordpress.
Make two copies of your existing Service. Change the second one to be type: ClusterIP, and globally replace nginx with wordpress.
In your Nginx proxy configuration, where you proxy_pass to the Wordpress container, change its backend to http://my-wordpress-svc/.
It would be very routine to wind up with 5 separate YAML files for this (two Deployments, two Services, one ConfigMap) and you can run kubectl apply -f on a directory to install them all in one shot.
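For illustration, a minimal sketch of the WordPress half after the split; the names my-wordpress and my-wordpress-svc are placeholders (the latter matching the proxy_pass backend mentioned above), not something from the original manifests:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-wordpress
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-wordpress-svc
spec:
  type: ClusterIP        # internal only; nginx proxies to it
  selector:
    app: wordpress
  ports:
  - port: 80
The nginx Deployment and its LoadBalancer Service stay as they are, with the WordPress container removed from the pod template.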
While I agree that separate applications belong in separate Pods, sometimes it's easier to keep them together. In the current scenario this could be achieved as follows.
WordPress is hard to configure, so moving nginx to a separate port is more feasible (if not totally straightforward):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
data:
  default.conf.template: |
    server {
      listen ${NGINX_PORT};
      server_name localhost;
      location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
      }
      error_page 500 502 503 504 /50x.html;
      location = /50x.html {
        root /usr/share/nginx/html;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        env:
        - name: NGINX_PORT
          value: "81"
        volumeMounts:
        - mountPath: /etc/nginx/templates/
          name: config
        ports:
        - containerPort: 81
      - name: wordpress
        image: wordpress
        ports:
        - containerPort: 80
      volumes:
      - name: config
        configMap:
          name: nginx
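One follow-up detail, which is my assumption since the Service isn't repeated here: the original my-nginx-svc forwards port 80 straight to the pod, which after this change means it reaches the WordPress container. If the Service should keep pointing at nginx, its targetPort has to follow the new port, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 81   # nginx now listens on 81 inside the pod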

kubernetes pod stuck in waiting

When a pod gets stuck in a Waiting state, what can I do to find out why it's Waiting?
For instance, I have a deployment to AKS which uses ACI.
When I deploy the YAML file, a number of the pods get stuck in a Waiting state. Running kubectl describe pod selenium121157nodechrome-7bf598579f-kqfqs returns:
State: Waiting
Reason: Waiting
Ready: False
Restart Count: 0
kubectl logs selenium121157nodechrome-7bf598579f-kqfqs returns nothing.
How can I find out what is the pod Waiting for?
Here's the YAML deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aci-helloworld2
spec:
  replicas: 20
  selector:
    matchLabels:
      app: aci-helloworld2
  template:
    metadata:
      labels:
        app: aci-helloworld2
    spec:
      containers:
      - name: aci-helloworld
        image: microsoft/aci-helloworld
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/role: agent
        beta.kubernetes.io/os: linux
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - key: azure.com/aci
        effect: NoSchedule
Here's the output from a describe pod that's been Waiting for 5 minutes:
matt#Azure:~/2020$ kubectl describe pod aci-helloworld2-86b8d7866d-b9hgc
Name: aci-helloworld2-86b8d7866d-b9hgc
Namespace: default
Priority: 0
Node: virtual-node-aci-linux/
Labels: app=aci-helloworld2
pod-template-hash=86b8d7866d
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/aci-helloworld2-86b8d7866d
Containers:
aci-helloworld:
Container ID: aci://95919def19c28c2a51a806928030d84df4bc6b60656d026d19d0fd5e26e3cd86
Image: microsoft/aci-helloworld
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: Waiting
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hqrj8 (ro)
Volumes:
default-token-hqrj8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hqrj8
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/os=linux
kubernetes.io/role=agent
type=virtual-kubelet
Tolerations: azure.com/aci:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
virtual-kubelet.io/provider
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/aci-helloworld2-86b8d7866d-b9hgc to virtual-node-aci-linux
Based on the official documentation, if your pod is in a Waiting state it means that it was scheduled on a node but can't run on that machine, with a problem with the image pointed out as the most common cause. You can try to run your image manually with docker pull and docker run to rule out issues with the image.
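For example, a sketch of that check (the aci-helloworld image serves HTTP on container port 80, as in the Deployment above):
# Confirm the image can be pulled and starts outside Kubernetes
docker pull microsoft/aci-helloworld
docker run --rm -p 8080:80 microsoft/aci-helloworld
# then open http://localhost:8080 in a browser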
The output of kubectl describe <pod-name> should give you some information, especially the Events section at the bottom. Here's an example of what it can look like:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/testpod to cafe
Normal BackOff 50s (x6 over 2m16s) kubelet, cafe Back-off pulling image "busybox"
Normal Pulling 37s (x4 over 2m17s) kubelet, cafe Pulling image "busybox"
It could also be an issue with your nodeSelector and tolerations, but again that would show up in the events once you describe your pod.
Let me know if this helps and what the output from describe pod looks like.
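If the describe output stays as sparse as above, filtering the namespace events for that pod can also surface scheduling or virtual-kubelet errors; a sketch using the pod name from your describe:
kubectl get events --field-selector involvedObject.name=aci-helloworld2-86b8d7866d-b9hgc --sort-by=.lastTimestamp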

Why is an application deployment's status "Available: 0" when the service is deployed properly in minikube?

I am trying to deploy the back-end component of my application for testing REST APIs. I have dockerized the components and created an image in minikube. I have created a YAML file for deploying and creating services. Now when I try to deploy it through sudo kubectl create -f frontend-deployment.yaml, it deploys without any error, but when I check the status of the deployments this is what is shown:
NAME READY UP-TO-DATE AVAILABLE AGE
back 0/3 3 0 2m57s
Interestingly the service corresponding to this deployment is available.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
back ClusterIP 10.98.73.249 <none> 8080/TCP 3m9s
I also tried to create the deployment by running the deployment statements individually, like sudo kubectl run back --image=back --port=8080 --image-pull-policy Never, but the result was the same.
Here is what my deployment.yaml file looks like:
kind: Service
apiVersion: v1
metadata:
  name: back
spec:
  selector:
    app: back
  ports:
  - protocol: TCP
    port: 8080
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: back
spec:
  selector:
    matchLabels:
      app: back
  replicas: 3
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
      - name: back
        image: back
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
How can I get this deployment up and running, given that it causes an internal server error on the front-end side of my application?
Description of the pod back:
Name: back-7fd9995747-nlqhq
Namespace: default
Priority: 0
Node: minikube/10.0.2.15
Start Time: Mon, 15 Jul 2019 12:49:52 +0200
Labels: pod-template-hash=7fd9995747
run=back
Annotations: <none>
Status: Running
IP: 172.17.0.7
Controlled By: ReplicaSet/back-7fd9995747
Containers:
back:
Container ID: docker://8a46e16c52be24b12831bb38d2088b8059947d099299d15755d77094b9cb5a8b
Image: back:latest
Image ID: docker://sha256:69218763696932578e199b9ab5fc2c3e9087f9482ac7e767db2f5939be98a534
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 15 Jul 2019 12:49:54 +0200
Finished: Mon, 15 Jul 2019 12:49:54 +0200
Ready: False
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c247f (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-c247f:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c247f
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6s default-scheduler Successfully assigned default/back-7fd9995747-nlqhq to minikube
Normal Pulled 4s (x2 over 5s) kubelet, minikube Container image "back:latest" already present on machine
Normal Created 4s (x2 over 5s) kubelet, minikube Created container back
Normal Started 4s (x2 over 5s) kubelet, minikube Started container back
Warning BackOff 2s (x2 over 3s) kubelet, minikube Back-off restarting failed container
As you can see zero of three Pods have Ready status:
NAME READY AVAILABLE
back 0/3 0
To find out what is going on you should check the underlying Pods:
$ kubectl get pods -l app=back
and then look at the Events in their description:
$ kubectl describe pod back-...
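If the events only show the generic Back-off restarting failed container message, the container's own output is usually more telling; a sketch using the pod name from the describe above:
# Logs from the container that exited with code 1
kubectl logs back-7fd9995747-nlqhq
# Logs from the previous attempt, if the current container hasn't logged anything yet
kubectl logs back-7fd9995747-nlqhq --previous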

How can I deploy multiple deployments in Kubernetes cluster at the same time?

I have multiple .NET microservices in my architecture, and for each one I have created a Deployment object; now I am attempting to deploy to the Azure Container Service running Kubernetes. When I run kubectl apply -f services.yml I always have a few failing pods, but if I run these deployments individually then they all work.
The error I receive, in summary, is:
failed to start container and The system cannot find the file specified.
The pod status shows as: CrashLoopBackOff
My services file looks like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: service1-deployment
  labels:
    app: service1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service1
  template:
    metadata:
      labels:
        app: service1
    spec:
      containers:
      - name: service1
        image: iolregistry.azurecr.io/panviva/doc:v1
        command: ["service1.exe"]
      imagePullSecrets:
      - name: regsecret
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: service2-deployment
  labels:
    app: service2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service2
  template:
    metadata:
      labels:
        app: service2
    spec:
      containers:
      - name: service2
        image: iolregistry.azurecr.io/panviva/doc:v1
        command: ["service2.exe"]
      imagePullSecrets:
      - name: regsecret
In reality I have many more deployments in the file, but you get the idea.
I have tried splitting these deployments into different files and calling kubectl apply -f service1.yml -f service2.yml, but I still receive the same error. I believe it has something to do with Kubernetes starting multiple pods at once. How can I fix this?
EDIT:
Describing a failing pod yields the following result:
Name: getdocumentservice-deployment-528941145-cpt30
Namespace: default
Node: e527bacs9002/10.240.0.5
Start Time: Fri, 29 Dec 2017 05:02:31 +0000
Labels: app=getdocumentservice
pod-template-hash=528941145
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"getdocumentservice-deployment-528941145","uid":"793ff632-ec55-11...
Status: Running
IP:
Controlled By: ReplicaSet/getdocumentservice-deployment-528941145
Containers:
getdocumentservice:
Container ID: docker://774d7bd23ce3da64a747db4c3737123a56069de97c7b3c3cd11e898e3c9e0e42
Image: iolregistry.azurecr.io/panviva/doc:v1
Image ID: docker-pullable://iolregistry.azurecr.io/panviva/doc@sha256:1bc4f4840707c0174a6d9665828042b04045da2d30e77d96fa325c2f3ae245a6
Port: <none>
Command:
./modules/GetDocumentService/GetDocumentService.exe
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: ContainerCannotRun
Message: container 774d7bd23ce3da64a747db4c3737123a56069de97c7b3c3cd11e898e3c9e0e42 encountered an error during CreateProcess: failure in a Windows system call: The system cannot find the file specified. (0x2) extra info: {"ApplicationName":"","CommandLine":"./modules/GetDocumentService/GetDocumentService.exe","User":"","WorkingDirectory":"C:\\app\\m
odules","Environment":{"KUBERNETES_PORT":"tcp://10.0.0.1:443","KUBERNETES_PORT_443_TCP":"tcp://10.0.0.1:443","KUBERNETES_PORT_443_TCP_ADDR":"10.0.0.1","KUBERNETES_PORT_443_TCP_PORT":"443","KUBERNETES_PORT_443_TCP_PROTO":"tcp","KUBERNETES_SERVICE_HOST":"10.0.0.1","KUBERNETES_SERVICE_PORT":"443","KUBERNETES_SERVICE_PORT_HTTPS":"443"},"EmulateConsole":false,"Creat
eStdInPipe":true,"CreateStdOutPipe":true,"CreateStdErrPipe":true,"ConsoleSize":[0,0]}
Exit Code: 128
Started: Fri, 29 Dec 2017 05:18:41 +0000
Finished: Fri, 29 Dec 2017 05:18:41 +0000
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9l4dp (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-9l4dp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9l4dp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 20m default-scheduler Successfully assigned getdocumentservice-deployment-528941145-cpt30 to e527bacs9002
Normal SuccessfulMountVolume 20m kubelet, e527bacs9002 MountVolume.SetUp succeeded for volume "default-token-9l4dp"
Normal Pulled 4m (x9 over 20m) kubelet, e527bacs9002 Container image "iolregistry.azurecr.io/panviva/doc:v1" already present on machine
Normal Created 4m (x9 over 20m) kubelet, e527bacs9002 Created container
Warning Failed 4m (x9 over 20m) kubelet, e527bacs9002 Error: failed to start container "getdocumentservice": Error response from daemon: {"message":"container getdocumentservice encountered an error during CreateProcess: failure in a Windows system call: The system cannot find the file specified. (0x2) extra info: {\"ApplicationName\":\"\"
,\"CommandLine\":\"./modules/GetDocumentService/GetDocumentService.exe\",\"User\":\"\",\"WorkingDirectory\":\"C:\\\\app\\\\modules\",\"Environment\":{\"KUBERNETES_PORT\":\"tcp://10.0.0.1:443\",\"KUBERNETES_PORT_443_TCP\":\"tcp://10.0.0.1:443\",\"KUBERNETES_PORT_443_TCP_ADDR\":\"10.0.0.1\",\"KUBERNETES_PORT_443_TCP_PORT\":\"443\",\"KUBERNETES_PORT_443_TCP_PROTO\
":\"tcp\",\"KUBERNETES_SERVICE_HOST\":\"10.0.0.1\",\"KUBERNETES_SERVICE_PORT\":\"443\",\"KUBERNETES_SERVICE_PORT_HTTPS\":\"443\"},\"EmulateConsole\":false,\"CreateStdInPipe\":true,\"CreateStdOutPipe\":true,\"CreateStdErrPipe\":true,\"ConsoleSize\":[0,0]}"}