Kubernetes Pod showing status "Completed" without any jobs - kubernetes

I'm hosting an Angular website that connects to a C#-backend inside a Kubernetes Cluster. When I use a certain function on the website that I can't describe in more detail, the pod shows status "Completed", then goes into "CrashLoopBackOff" and then restarts. The problem is, there are no jobs set up for this Pod (in fact, I didn't even know Jobs are a thing until one hour ago). So my main question would be: How can a Pod go into the "Completed" status without running any jobs?
My .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo
  namespace: my-namespace
  labels:
    app: my-demo
spec:
  replicas: 1
  template:
    metadata:
      name: my-demo-pod
      labels:
        app: my-demo
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
      - name: my-demo-container
        image: myregistry.azurecr.io/my.demo:latest
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 1
            memory: 800M
          requests:
            cpu: .1
            memory: 300M
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-registry-secret
  selector:
    matchLabels:
      app: my-demo
---
apiVersion: v1
kind: Service
metadata:
  name: my-demo-service
  namespace: my-namespace
spec:
  ports:
  - protocol: TCP
    port: 80
    name: my-demo-port
  selector:
    app: my-demo

A Completed status indicates that the application called by the CMD or ENTRYPOINT exited with a non-error (i.e. 0) status code. This Completed -> CrashLoopBackOff -> Running cycle usually indicates that the process started when the container launches exits (or daemonizes itself into the background) instead of staying in the foreground, which Kubernetes sees as the process 'completing', hence the status.
Check that the ENTRYPOINT in your Dockerfile or the command in your pod template starts the right process (with the appropriate flags) so that it keeps running in the foreground. You can also check the logs of the previous container instance (e.g. using kubectl logs --previous) to see what output the application gave before it exited.
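As a concrete illustration (a minimal sketch only, assuming an nginx-based image serving the Angular build, which is not confirmed by the question), the web server must be kept in the foreground so that PID 1 does not exit:

FROM nginx:alpine
COPY dist/ /usr/share/nginx/html/
# "daemon off;" keeps nginx in the foreground; if the entrypoint script
# ends or backgrounds the server, the container exits with code 0 and
# the pod is reported as Completed.
CMD ["nginx", "-g", "daemon off;"]

And to see what the previous container instance printed before it exited:

kubectl logs <pod-name> --previous -n my-namespace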

Related

Labels in Deployment Spec & template

In the below yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: my-nginx # Line 6
spec: # Line 7
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx # Line 11
  template:
    metadata:
      labels:
        app: my-nginx # Line 15
    spec: # Line 16
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "128Mi" # 128 MB
            cpu: "200m" # 200 millicpu (0.2 CPU, or 20% of one CPU)
The Deployment is given a label (app: my-nginx) at Line 6.
The Deployment spec at Line 7 uses the Pod spec defined at Line 16.
What is the purpose of the selector field with matchLabels?
What is the purpose of the template field with labels?
I've added comments to explain the role of the labels:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: my-nginx # LABEL-A: <-- this label is for managing the Deployment itself; it can be used to filter Deployments by this label.
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx # LABEL-B: <-- this field defines how the Deployment finds which Pods to manage.
  template:
    metadata:
      labels:
        app: my-nginx # LABEL-C: <-- this is the label of the Pod; it must be the same as LABEL-B.
    spec:
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "128Mi" # 128 MB
            cpu: "200m" # 200 millicpu (0.2 CPU, or 20% of one CPU)
LABEL-A: this label is for managing the Deployment itself; it can be used to filter Deployments by this label, for example:
kubectl get deployments.apps -l app=my-nginx
LABEL-B: there must be some place where we tell the Deployment's replication controller (ReplicaSet) which Pods to manage. This field defines how the Deployment finds the Pods it owns; based on these Pod labels, the controller ensures the desired number of replicas are ready.
LABEL-C: this is the label applied to the Pods, which the selector in LABEL-B uses to find and monitor them; it must be the same as LABEL-B.
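As a small sketch (assuming the Deployment above has been applied to the current namespace), the same labels can be queried from both sides:

# Filter Deployments by the metadata label (LABEL-A)
kubectl get deployments -l app=my-nginx

# List the Pods the selector (LABEL-B) matches, i.e. the Pods carrying
# the template label (LABEL-C)
kubectl get pods -l app=my-nginx

# Show the value of the "app" label as an extra column
kubectl get deployments -L app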

Graceful scaledown of stateful apps in Kubernetes

I have a stateful application deployed in a Kubernetes cluster. Now the challenge is how to scale down the cluster gracefully, so that each pod, while terminating during the scale-down, completes its pending tasks and then shuts down gracefully. The scenario is similar to what is explained below, but in my case the terminating pods will have a few in-flight tasks still to be processed.
https://medium.com/@marko.luksa/graceful-scaledown-of-stateful-apps-in-kubernetes-2205fc556ba9
Is there official feature support for this in the Kubernetes API?
Kubernetes version: v1.11.0
Host OS: linux/amd64
CRI version: Docker 1.13.1
UPDATE:
Possible solution: while performing a StatefulSet scale-down, the preStop hook of the terminating pod(s) sends a notification to a queue with the metadata of the respective task(s) still to be completed; afterwards, a Kubernetes Job completes those tasks. Please comment on whether this is a recommended approach from a Kubernetes perspective.
Thanks In Advance!
Regards,
Balu
Your pod will be scaled down only after the in-progress work is completed. You may additionally configure a lifecycle preStop hook in the deployment manifest, which will gracefully stop your application. This is one of the best practices to follow. Please refer to this for a detailed explanation and syntax.
Updated Answer
This is the YAML I deployed locally; I then generated load to raise the CPU utilization and trigger the HPA.
Deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: containous/whoami
        resources:
          requests:
            cpu: 30m
          limits:
            cpu: 40m
        ports:
        - name: web
          containerPort: 80
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - echo "Starting Sleep"; date; sleep 600; echo "Pod will be terminated now"
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: whoami
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: whoami
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 40
  # - type: Resource
  #   resource:
  #     name: memory
  #     targetAverageUtilization: 10
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: whoami
Once the pod is deployed, execute the commands below to generate load: the first starts an interactive busybox pod, and the while loop is run from inside that shell.
kubectl run -i --tty load-generator --image=busybox /bin/sh
while true; do wget -q -O- http://whoami-service.default.svc.cluster.local; done
Once the extra replicas were created, I stopped the load and the pods were terminated after 600 seconds. This scenario worked for me, and I believe it would behave similarly for a StatefulSet as well. Hope this helps.
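One caveat worth noting (my addition, not part of the original answer): the default terminationGracePeriodSeconds is 30 seconds, and the preStop hook counts against that grace period, so a 600-second sleep will normally be cut short unless the grace period is extended in the pod spec. A minimal sketch of the relevant fields, with an assumed value of 660 seconds:

spec:
  # pod spec (i.e. spec.template.spec of the Deployment)
  # give the kubelet enough time for the preStop hook to finish
  # before it sends SIGKILL (the default is 30 seconds)
  terminationGracePeriodSeconds: 660
  containers:
  - name: whoami
    image: containous/whoami
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo 'Starting Sleep'; sleep 600"]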

Kubernetes : Deploy only in one node-pool

I'm currently creating a Kubernetes cluster for a production environment.
In my cluster, I have 2 node pools, let's call them api-pool and web-pool.
In my api-pool, I have 2 nodes with 4 CPU and 15 GB of RAM each.
I'm trying to deploy 8 replicas of my API in my api-pool; each replica should have 1 CPU and 3.5Gi of RAM.
My api.deployment.yaml looks something like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-dev
spec:
  replicas: 8
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: api-docker
        image: //MY_IMAGE
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: api-dev-env
        - secretRef:
            name: api-dev-secret
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: "1"
            memory: "3.5Gi"
But my problem is that Kubernetes deploys the pods on nodes in my web-pool as well as in my api-pool, whereas I want those pods to be deployed only in my api-pool.
I tried labelling the nodes of the api-pool and using a selector that matches those labels, but it doesn't work and I'm not sure it's supposed to work that way.
How can I tell Kubernetes to deploy those 8 replicas only in my api-pool?
You can use a nodeSelector, which is the simplest recommended form of node selection constraint.
Label the nodes of the api-pool with pool=api:
kubectl label nodes nodename pool=api
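To verify the label was applied (a quick check, not part of the original answer), you can list only the labelled nodes:
kubectl get nodes -l pool=api --show-labels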
Then add a nodeSelector to the pod spec:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-dev
spec:
  replicas: 8
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: api-docker
        image: //MY_IMAGE
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: api-dev-env
        - secretRef:
            name: api-dev-secret
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: "1"
            memory: "3.5Gi"
      nodeSelector:
        pool: api
For more advanced use cases you can use node affinity.
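A minimal sketch of the node-affinity equivalent, assuming the same pool=api node label (the field names follow the standard Kubernetes affinity API):

spec:
  # pod spec (spec.template.spec of the Deployment)
  affinity:
    nodeAffinity:
      # hard requirement: only schedule onto nodes labelled pool=api
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: pool
            operator: In
            values:
            - api
  containers:
  - name: api-docker
    image: //MY_IMAGE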

How to set dynamic IP to property file?

I have deployed 2 pods which need to talk to another pod (let's say Pod A).
Pod A requires the IP addresses of the services of the deployed pods, so I need to set those IP addresses in the config property file needed for Pod A.
As the IP addresses are dynamic (i.e. if a pod crashes they change), I need to set them dynamically.
Currently I deploy the 2 pods, run
kubectl get ep
set those IP addresses in the config property file, then build the Dockerfile, push it and use that image for deployment.
These are my deployment and svc files, in which the image djtijare/a2ipricing refers to the config file:
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo-pricing
spec:
  ports:
  - name: spring-boot-pricing
    port: 8084
    targetPort: 8084
  selector:
    app: spring-boot-demo-pricing
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: spring-boot-demo-pricing
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: spring-boot-demo-pricing
    spec:
      containers:
      - name: spring-boot-demo-pricing
        image: djtijare/a2ipricing:v1
        imagePullPolicy: IfNotPresent
        # envFrom:
        # - configMapRef:
        #     name: spring-boot-demo-config-map
        resources:
          requests:
            cpu: 100m
            memory: 1Gi
        ports:
        - containerPort: 8084
      nodeSelector:
        disktype: ssd
So how can I set the IPs of those 2 pods dynamically in the config file, and then build and push the Docker image?
I think you should consider using headless Services.
Sometimes you don't need or want load-balancing and a single service IP. In this case, you can create what are termed "headless" Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes' implementation. For example, you could implement a custom Operator built upon this API.
For such Services, a cluster IP is not allocated, kube-proxy does not handle these Services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the Service has selectors defined.
For your example, if you set the Service's spec.clusterIP to None, you can run nslookup -type=A spring-boot-demo-pricing, which will show you the IPs of the pods attached to this Service.
/ # nslookup -type=A spring-boot-demo-pricing
Server: 10.11.240.10
Address: 10.11.240.10:53
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.2.20
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.12
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.13
And here is the YAML I've used:
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo-pricing
  labels:
    app: spring-boot-demo-pricing
spec:
  ports:
  - name: spring-boot-pricing
    port: 8084
    targetPort: 8084
  clusterIP: None
  selector:
    app: spring-boot-demo-pricing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-demo-pricing
  labels:
    app: spring-boot-demo-pricing
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spring-boot-demo-pricing
  template:
    metadata:
      labels:
        app: spring-boot-demo-pricing
    spec:
      containers:
      - name: spring-boot-demo-pricing
        image: djtijare/a2ipricing:v1
        imagePullPolicy: IfNotPresent
        # envFrom:
        # - configMapRef:
        #     name: spring-boot-demo-config-map
        resources:
          requests:
            cpu: 100m
            memory: 1Gi
        ports:
        - containerPort: 8084
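In practice this means Pod A does not need pod IPs baked into its image at all: the property file can reference the stable DNS name of the Service, and the individual pod A records behind the headless Service are resolved at runtime. A small sketch (the property key names are illustrative assumptions, not from the question):

# application.properties (illustrative key names)
pricing.service.host=spring-boot-demo-pricing.default.svc.cluster.local
pricing.service.port=8084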

kubernetes "n" independent pods with identity 1 to n

Requirement 1 (routes):
I need to be able to route to "n" independent Kubernetes deployments. Such as:
http://building-1-building
http://building-2-building
http://building-3-building
... n ...
Each building is receiving traffic that is independent of the others.
Requirement 2 (independence):
If the pod for building-2-deployment dies, I would like only building-2-deployment to be restarted and the other n-1 deployments to be unaffected.
Requirement 3 (kill then replace):
If the pod for building-2-deployment is unhealthy, I would like it to be killed and then a new one created, not a replacement created first and the sick one killed afterwards.
When I update the image and issue a "kubectl apply -f building.yaml", I would like each deployment to be shut down and then a new one started with the new software. In other words, not create a second and then kill the first.
Requirement 4 (yaml): This application is created and updated with a yaml file so that it's repeatable and archivable.
kubectl create -f building.yaml
kubectl apply -f building.yaml
Partial solution:
The following YAML creates routes (requirement 1) and operates each deployment independently (requirement 2), but fails to kill before starting a replacement (requirement 3).
This partial solution is a little verbose, as each deployment is replicated "n" times where only the "n" is changed.
I would appreciate a suggestion that solves all the requirements.
apiVersion: v1
kind: Service
metadata:
  name: building-1-service # http://building-1-service
spec:
  ports:
  - port: 80
    targetPort: 80
  type: NodePort
  selector:
    app: building-1-pod # matches the label of the pods being created by the deployment
---
apiVersion: v1
kind: Service
metadata:
  name: building-2-service # http://building-2-service
spec:
  ports:
  - port: 80
    targetPort: 80
  type: NodePort
  selector:
    app: building-2-pod # matches the label of the pods being created by the deployment
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: building-1-deployment # name of the deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: building-1-pod # matches the label of the pods being created
  template:
    metadata:
      labels:
        app: building-1-pod # label of the pod, matches the selector above and the route "location /building_1/" in nginx.conf
    spec:
      containers:
      - name: building-container # name of the docker container
        image: us.gcr.io//proj-12345/building:2018_03_19_19_45
        resources:
          limits:
            cpu: "1"
          requests:
            cpu: "10m"
        ports:
        - containerPort: 80
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: building-2-deployment # name of the deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: building-2-pod # matches the label of the pods being created
  template:
    metadata:
      labels:
        app: building-2-pod # label of the pod, matches the selector above and the route "location /building_2/" in nginx.conf
    spec:
      containers:
      - name: building-container # name of the docker container
        image: us.gcr.io//proj-12345/building:2018_03_19_19_45
        resources:
          limits:
            cpu: "1"
          requests:
            cpu: "10m"
        ports:
        - containerPort: 80
You are in luck. This is exactly what StatefulSets are for.
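A minimal sketch of how this could look with a StatefulSet (the Service name, labels and replica count are assumptions carried over from the question's naming, not a definitive implementation). Each replica gets a stable identity building-0, building-1, ..., is restarted independently, and with the default RollingUpdate strategy an old pod is terminated before its replacement is created:

apiVersion: v1
kind: Service
metadata:
  name: building            # headless Service gives each pod a stable DNS name
spec:
  clusterIP: None
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: building
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: building
spec:
  serviceName: building     # must reference the headless Service above
  replicas: 3               # pods are named building-0, building-1, building-2
  selector:
    matchLabels:
      app: building
  template:
    metadata:
      labels:
        app: building
    spec:
      containers:
      - name: building-container
        image: us.gcr.io//proj-12345/building:2018_03_19_19_45
        ports:
        - containerPort: 80

Each replica is then reachable inside the cluster at a stable per-pod DNS name such as building-0.building.<namespace>.svc.cluster.local, which can play the role of the per-building routes from requirement 1.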