Labels in Deployment Spec & template - kubernetes

In the YAML below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: my-nginx          # Line 6
spec:                      # Line 7
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx        # Line 11
  template:
    metadata:
      labels:
        app: my-nginx      # Line 15
    spec:                  # Line 16
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "128Mi"  # 128 MiB
            cpu: "200m"      # 200 millicpu (0.2 CPU, i.e. 20% of one CPU)
The Deployment is given a label (app: my-nginx) at Line 6.
The Deployment spec at Line 7 uses the Pod spec defined at Line 16.
What is the purpose of the selector field with matchLabels?
What is the purpose of the template field with labels?

I tried to add comments to explain the role of the labels:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: my-nginx        # LABEL-A: label on the Deployment itself; can be used to filter Deployments by this label.
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx      # LABEL-B: defines how the Deployment finds which Pods to manage.
  template:
    metadata:
      labels:
        app: my-nginx    # LABEL-C: the label of the Pod; must match LABEL-B.
    spec:
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "128Mi"  # 128 MiB
            cpu: "200m"      # 200 millicpu (0.2 CPU, i.e. 20% of one CPU)
LABEL-A: a label on the Deployment object itself. It can be used to manage the Deployment, for example to filter Deployments by label:
kubectl get deployments.apps -l app=my-nginx
LABEL-B: this is where we tell the Deployment (via its ReplicaSet) which Pods to manage. The selector defines how the Deployment finds those Pods, and based on it the ReplicaSet controller ensures the right number of matching Pods are running and ready.
LABEL-C: the label applied to the Pods created from the template. LABEL-B selects on it, so it must match LABEL-B.
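As a quick illustration (a sketch, assuming the manifest above has been applied; the label key and value are the ones from that manifest), the same label can be queried at each level with standard kubectl label selectors:

kubectl get deployments -l app=my-nginx    # filter Deployments by LABEL-A
kubectl get pods -l app=my-nginx           # filter the Pods carrying LABEL-C
kubectl get deployment my-nginx -o jsonpath='{.spec.selector.matchLabels}'    # print LABEL-B, the selector

Deleting one of the matching Pods and watching the Deployment recreate it is a simple way to see LABEL-B and LABEL-C working together.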

Related

Horizontal Pod Autoscaler: which exact value does the HPA take?

I have the following manifests:
The app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-demo-deployment
  labels:
    app: hpa-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hpa-nginx
  template:
    metadata:
      labels:
        app: hpa-nginx
    spec:
      containers:
      - name: hpa-nginx
        image: stacksimplify/kubenginx:1.0.0
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "500Mi"
            cpu: "200m"
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-demo-service-nginx
  labels:
    app: hpa-nginx
spec:
  type: LoadBalancer
  selector:
    app: hpa-nginx
  ports:
  - port: 80
    targetPort: 80
and its HPA:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-demo-declarative
spec:
  maxReplicas: 10                      # define max replica count
  minReplicas: 1                       # define min replica count
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-demo-deployment
  targetCPUUtilizationPercentage: 20   # target CPU utilization
Notice that in the HPA the target CPU is set to 20%.
My question: which value does the 20% refer to? Is it requests.cpu (i.e. 100m)? Or limits.cpu (i.e. 200m)? Or something else?
Thank you!
It's based on resources.requests.cpu.
For per-pod resource metrics (like CPU), the controller fetches the metrics from the resource metrics API for each Pod targeted by the HorizontalPodAutoscaler. Then, if a target utilization value is set, the controller calculates the utilization value as a percentage of the equivalent resource request on the containers in each Pod
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-a-horizontalpodautoscaler-work
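As a rough sketch of the arithmetic (using the manifests above and the scaling formula from the documentation page linked above): with requests.cpu: 100m and targetCPUUtilizationPercentage: 20, the HPA tries to keep average usage at about 20m per pod. The controller computes approximately

desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)

so if a single pod averages 80m (80% of its request), the HPA would scale to ceil(1 * 80 / 20) = 4 replicas. The limits.cpu of 200m only caps how much each pod may use; it does not enter this calculation.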

How does k8s correlate ReplicaSets and Pods?

Say I have the following k8s config file with two identical Deployments that differ only by name.
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: hello-world
  name: hello-world-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world-2
        image: rancher/hello-world:v0.1.2
        resources:
          limits:
            cpu: "1"
            memory: 100M
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: hello-world
  name: hello-world-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world-2
        image: rancher/hello-world:v0.1.2
        resources:
          limits:
            cpu: "1"
            memory: 100M
As I understand it, k8s correlates ReplicaSets and Pods by their labels. Therefore, with the above configuration, I would expect some problem, or that k8s would forbid this configuration.
However, it turns out everything is fine. Apart from the labels, is there something else k8s uses to correlate a ReplicaSet and its Pods?
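One way to poke at this (a sketch; the pod name is a placeholder) is to inspect the metadata Kubernetes adds beyond the labels written in the manifest:

kubectl get pods --show-labels    # Pods also carry a pod-template-hash label added by the Deployment controller
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'    # the controller that owns this Pod
kubectl get rs -o wide            # the ReplicaSets created by the two Deployments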

Kubernetes: Deploy only in one node-pool

I'm currently creating a Kubernetes cluster for a production environment.
In my cluster I have 2 node pools; let's call them api-pool and web-pool.
In my api-pool I have 2 nodes with 4 CPU and 15 GB of RAM each.
I'm trying to deploy 8 replicas of my API in my api-pool; each replica should request 1 CPU and 3.5Gi of RAM.
My api.deployment.yaml looks something like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-dev
spec:
  replicas: 8
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: api-docker
        image: //MY_IMAGE
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: api-dev-env
        - secretRef:
            name: api-dev-secret
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: "1"
            memory: "3.5Gi"
But my problem is that Kubernetes is deploying the pods on nodes in my web-pool as well as in my api-pool, whereas I want those pods to be deployed only in my api-pool.
I tried labelling the nodes of the api-pool and using a selector that matches those labels, but it doesn't work, and I'm not sure it's supposed to work that way.
How can I tell K8s to deploy those 8 replicas only in my api-pool?
You can use a nodeSelector, which is the simplest recommended form of node selection constraint.
Label the nodes of the api-pool with pool=api:
kubectl label nodes nodename pool=api
Then add a nodeSelector to the Pod spec:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-dev
spec:
  replicas: 8
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: api-docker
        image: //MY_IMAGE
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: api-dev-env
        - secretRef:
            name: api-dev-secret
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: "1"
            memory: "3.5Gi"
      nodeSelector:
        pool: api
For more advanced use cases you can use node affinity.
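For reference, a sketch of the same constraint expressed with node affinity, reusing the pool=api node label from above. It goes in the Pod template's spec, in place of (or alongside) nodeSelector; requiredDuringSchedulingIgnoredDuringExecution makes it a hard requirement, like nodeSelector:

    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: pool
                operator: In
                values:
                - api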

Kubernetes Pod showing status "Completed" without any Jobs

I'm hosting an Angular website that connects to a C# backend inside a Kubernetes cluster. When I use a certain function on the website (which I can't describe in more detail), the pod shows the status "Completed", then goes into "CrashLoopBackOff", and then restarts. The problem is that there are no Jobs set up for this Pod (in fact, I didn't even know Jobs were a thing until an hour ago). So my main question is: how can a Pod go into the "Completed" status without running any Jobs?
My .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo
  namespace: my-namespace
  labels:
    app: my-demo
spec:
  replicas: 1
  template:
    metadata:
      name: my-demo-pod
      labels:
        app: my-demo
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
      - name: my-demo-container
        image: myregistry.azurecr.io/my.demo:latest
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 1
            memory: 800M
          requests:
            cpu: .1
            memory: 300M
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-registry-secret
  selector:
    matchLabels:
      app: my-demo
---
apiVersion: v1
kind: Service
metadata:
  name: my-demo-service
  namespace: my-namespace
spec:
  ports:
  - protocol: TCP
    port: 80
    name: my-demo-port
  selector:
    app: my-demo
Completed status indicates that the application called by the CMD or ENTRYPOINT exited with a non-error (i.e. 0) status code. This Completed -> CrashLoopBackOff -> Running cycle usually indicates that the process started when the container starts does not stay running as a long-lived foreground process: it finishes and exits, which Kubernetes sees as the process 'completing', hence the status.
Check that the ENTRYPOINT in your Dockerfile, or the command in your pod template, starts the right process (with the appropriate flags) so that it keeps running in the foreground. You can also check the logs of the previous container instance (e.g. using kubectl logs --previous) to see what output the application gave.
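For reference, the usual commands for that kind of investigation (the pod name is a placeholder; the namespace is the one from the manifest):

kubectl logs <pod-name> --previous -n my-namespace    # output of the container instance that just exited
kubectl describe pod <pod-name> -n my-namespace       # last exit code and the reason for restarts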

kubernetes "n" independent pods with identity 1 to n

Requirement 1 (routes):
I need to be able to route to "n" independent Kubernetes deployments, such as:
http://building-1-building
http://building-2-building
http://building-3-building
... n ...
Each building is receiving traffic that is independent of the others.
Requirement 2 (independence):
If the pod for building-2-deployment dies, I would like only building-2-deployment to be restarted and the other n-1 deployments to be unaffected.
Requirement 3 (kill then replace):
If the pod for building-2-deployment is unhealthy, I would like it to be killed and then a new one created, not a replacement created first and the sick one killed afterwards.
When I update the image and issue a "kubectl apply -f building.yaml", I would like each deployment to be shut down and then a new one started with the new software. In other words, do not create a second and then kill the first.
Requirement 4 (yaml): This application is created and updated with a yaml file so that it's repeatable and archivable.
kubectl create -f building.yaml
kubectl apply -f building.yaml
Partial Solution:
The following yaml creates routes (requirement 1) and operates each deployment independently (requirement 2), but fails to kill before starting a replacement (requirement 3).
This partial solution is a little verbose, as each deployment is replicated "n" times with only the "n" changed.
I would appreciate a suggestion that solves all the requirements.
apiVersion: v1
kind: Service
metadata:
  name: building-1-service    # http://building-1-service
spec:
  ports:
  - port: 80
    targetPort: 80
  type: NodePort
  selector:
    app: building-1-pod       # matches name of pod being created by deployment
---
apiVersion: v1
kind: Service
metadata:
  name: building-2-service    # http://building-2-service
spec:
  ports:
  - port: 80
    targetPort: 80
  type: NodePort
  selector:
    app: building-2-pod       # matches name of pod being created by deployment
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: building-1-deployment # name of the deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: building-1-pod     # matches name of pod being created
  template:
    metadata:
      labels:
        app: building-1-pod   # name of pod, matches name in deployment and route "location /building_1/" in nginx.conf
    spec:
      containers:
      - name: building-container    # name of docker container
        image: us.gcr.io//proj-12345/building:2018_03_19_19_45
        resources:
          limits:
            cpu: "1"
          requests:
            cpu: "10m"
        ports:
        - containerPort: 80
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: building-2-deployment # name of the deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: building-2-pod     # matches name of pod being created
  template:
    metadata:
      labels:
        app: building-2-pod   # name of pod, matches name in deployment and route "location /building_2/" in nginx.conf
    spec:
      containers:
      - name: building-container    # name of docker container
        image: us.gcr.io//proj-12345/building:2018_03_19_19_45
        resources:
          limits:
            cpu: "1"
          requests:
            cpu: "10m"
        ports:
        - containerPort: 80
You are in luck. This is exactly what StatefulSets are for.
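A minimal sketch of that direction (names and the image are taken from the question; the headless Service is an assumption, since a StatefulSet requires a serviceName): a StatefulSet gives each Pod a stable identity (building-0, building-1, ...), each Pod gets its own DNS name through the headless Service, and an unhealthy or updated Pod is terminated before its successor is created, which matches the kill-then-replace requirement.

apiVersion: v1
kind: Service
metadata:
  name: building            # headless Service required by the StatefulSet
spec:
  clusterIP: None
  selector:
    app: building-pod
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: building
spec:
  serviceName: building
  replicas: 3                # Pods are named building-0, building-1, building-2
  selector:
    matchLabels:
      app: building-pod
  template:
    metadata:
      labels:
        app: building-pod
    spec:
      containers:
      - name: building-container
        image: us.gcr.io//proj-12345/building:2018_03_19_19_45
        ports:
        - containerPort: 80

Each Pod is then reachable inside the cluster as building-0.building, building-1.building, and so on, which can stand in for the per-building Services and the per-building Deployment copies.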