How to trigger a Kubernetes/OpenShift job restart whenever a specific pod in the cluster restarts?

For example, I have a pod running a server, and I have a job in my cluster that does some YAML patching on the server deployment.
Is there a way to set up some kind of trigger that will rerun the job whenever the respective deployment changes?

You can add your job's logic to the deployment as an initContainer, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      initContainers:
        - name: init
          image: centos:7
          command:
            - "/bin/bash"
            - "-c"
            - "do something useful"
      containers:
        - name: nginx
          image: nginx
In this case, every time you roll out the deployment, the job defined in initContainers will run.
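If you need to re-run that logic without changing the pod template, a rollout can also be triggered manually. For example, assuming the deployment above is named example:
$ kubectl rollout restart deployment/example
Each restart creates a new ReplicaSet, so the initContainer runs again before the new pods become ready.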

Related

StatefulSet: longer rolling update leads to version mismatch

The application is deployed on K8s using a StatefulSet because it is stateful in nature. Around 250+ pods are running, and HPA has been implemented too, which can scale up to 400 pods.
When a new deployment occurs, it takes a long time (~10-15 minutes) to update all pods in rolling-update fashion.
Problem: end users get responses from 2 versions of pods until all pods are replaced with the new revision.
I have been searching for an architecture where the overall deployment time can be reduced. The best solution I have found so far is a BLUE/GREEN strategy, but it has a lot of impact on integrated services like monitoring, logging, telemetry etc. because of the 2 naming conventions.
Ideally I am looking for something like maxSurge for Deployments, where new pods are created first and then traffic is shifted to them. For a StatefulSet, however, RollingUpdate does not support maxSurge, and the controller deletes and recreates each pod in the StatefulSet based on ordinal index, from largest to smallest.
The solution is to do a partitioned rolling update combined with a canary deployment.
Let's suppose we have the StatefulSet workload defined by the following YAML file:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
    version: "1.20"
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
    version: "1.20"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # Label selector that determines which Pods belong to the StatefulSet
                 # Must match spec: template: metadata: labels
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx # Pod template's label selector
        version: "1.20"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: nginx:1.20
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
You can patch the StatefulSet to set a partition and then change the image and the version label; only pods with an ordinal greater than or equal to the partition are updated. In this case, with 3 pods and partition: 2, only the last pod (web-2) will get the new image:
$ kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
$ kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"nginx:1.21"}]'
$ kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/metadata/labels/version", "value":"1.21"}]'
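To verify that only the canary pod picked up the new image, you can list the pods with their version label shown as a column (-L adds a label column; this assumes the labels from the manifest above):
$ kubectl get pods -l app=nginx -L version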
At this point, you have a pod with the new image and version label ready to use, but since the version label is different, the traffic is still going to the other two pods. If you change the version in the yaml file and apply the new configuration, the rollout will be transparent, since there is already a pod ready to migrate the traffic:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
    version: "1.21"
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
    version: "1.21"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # Label selector that determines which Pods belong to the StatefulSet
                 # Must match spec: template: metadata: labels
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx # Pod template's label selector
        version: "1.21"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
$ kubectl apply -f file-name.yaml
Once traffic is migrated to the pod containing the new image and version label, you should patch the StatefulSet again and remove the partition with the command kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'
Note: You will need to be very careful with the size of the partition, since the remaining old pods will handle all of the traffic for some time.
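With a larger StatefulSet you do not have to jump straight from the canary to partition 0; you can lower the partition in steps so each patch moves a few more pods to the new revision. A sketch, using the same web StatefulSet:
$ kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}'
$ kubectl rollout status statefulset/web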

How to make "kubectl apply -f <file.yaml> --force=true" take effect inside a deployed container (kubectl exec console)?

I am trying to redeploy the exact same existing image, but after changing a secret in the Azure Vault. Since it is the same image, kubectl apply does not deploy it. I tried to force the deployment by adding the --force=true option. Now the deploy took place and the new secret value is visible in the dashboard ConfigMap, but not inside the API container when I check from a kubectl exec console prompt.
Below is one of the 3 deployment manifests (YAML files) for the service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tube-api-deployment
  namespace: tube
spec:
  selector:
    matchLabels:
      app: tube-api-app
  replicas: 3
  template:
    metadata:
      labels:
        app: tube-api-app
    spec:
      containers:
        - name: tube-api
          image: ReplaceImageName
          ports:
            - name: tube-api
              containerPort: 80
          envFrom:
            - configMapRef:
                name: tube-config-map
      imagePullSecrets:
        - name: ReplaceRegistrySecret
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: tube
spec:
  ports:
    - name: api-k8s-port
      protocol: TCP
      port: 8082
      targetPort: 3000
  selector:
    app: tube-api-app
I think it is not happening because, when we update a ConfigMap, only the files in volumes referencing it are updated automatically; it is then up to the container process to detect that they have changed and reload them. Currently, there is no built-in way to signal an application when a new version of a ConfigMap is deployed. It is up to the application (or some helper script) to look for the config to change and reload it.
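Since this manifest injects the ConfigMap through envFrom, the environment variables are only read when the container starts, so the usual workaround is simply to restart the pods after changing the ConfigMap. For example:
$ kubectl -n tube rollout restart deployment tube-api-deployment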

Kubernetes "kubectl apply" does not update existing deployments

I have a .NET Core web application. Its image is pushed to an Azure Container Registry, and I deploy it to my Azure Kubernetes Service using
kubectl apply -f testdeployment.yaml
with the YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: mycontainerregistry.azurecr.io/myweb:latest
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: my-registry-key
This works splendidly, but when I change some code, push a new image to the registry and run
kubectl apply -f testdeployment.yaml
again, the AKS website does not get updated until I remove the deployment with
kubectl delete deployment myweb
What should I do to make it overwrite whatever is deployed? I would like to add something to my YAML file. (I'm trying to use this for continuous delivery in Azure DevOps.)
I believe what you are looking for is imagePullPolicy. The default is IfNotPresent, which means the image will not be pulled again if it is already present on the node.
https://kubernetes.io/docs/concepts/containers/images/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: mycontainerregistry.azurecr.io/myweb
          imagePullPolicy: Always
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: my-registry-key
To ensure that the pod is recreated, rather run:
kubectl delete -f testdeployment.yaml && kubectl apply -f testdeployment.yaml
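A less disruptive alternative, assuming imagePullPolicy: Always is set as above, is to restart the rollout instead of deleting the objects; the deployment then replaces the pods one by one and re-pulls the image:
$ kubectl rollout restart deployment/myweb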
kubectl does not see any changes in your deployment YAML file, so it will not make any changes. That is one of the problems with using the latest tag.
Tag your image with an incremental version or build number and replace latest with that tag in your CI pipeline (for example with envsubst or similar). This way kubectl knows the image has changed, and you also know which version of the image is running; the latest tag could point to any image version.
Simplified example for Azure DevOps:
# <snippet>
image: mycontainerregistry.azurecr.io/myweb:${TAG}
# </snippet>
Pipeline YAML:
stages:
  - stage: Build
    jobs:
      - job: Build
        variables:
          - name: TAG
            value: $(Build.BuildId)
        steps:
          - script: |
              envsubst '${TAG}' < deployment-template.yaml > deployment.yaml
            displayName: Replace Environment Variables
Alternatively, you could use another tool like Replace Tokens (different syntax: #{TAG}#).
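If you would rather not template the manifest at all, another option is to let the pipeline set the image tag directly after kubectl apply; a sketch, reusing the deployment and container names from the question, run from an Azure DevOps script step so $(Build.BuildId) is expanded by the pipeline:
$ kubectl set image deployment/myweb myweb=mycontainerregistry.azurecr.io/myweb:$(Build.BuildId)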
First delete the deployment by running the command below against the path of the deployment file:
kubectl delete -f .\deployment-file-name.yaml
Earlier I used to get
deployment.apps/deployment-file-name unchanged
meaning the old deployment config remained cached.
This happens when you are fixing errors or typos in the deployment YAML and the old config stays around after the error is cleared; only a kubectl delete -f .\deployment-file-name.yaml removed it for me.
Afterwards you can deploy again with
kubectl apply -f .\deployment-file-name.yaml
Sample YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-file-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
        - name: platformservice
          image: /platformservice:latest

What will the custom scheduler name be in Kubernetes?

I have created a service account and a pod for a custom scheduler. So what will my custom scheduler name be? Will it be the pod name, the service name, or something else?
Generally, you define the scheduler name while writing the scheduler itself. Then you build a Docker image for the scheduler and run it as a deployment in Kubernetes.
That scheduler will then schedule your pods (based on how you wrote your scheduling logic).
You should watch the following talk by Kelsey Hightower on how to write a custom scheduler and use it:
https://www.youtube.com/watch?v=IYcL0Un1io0
Here is the toy scheduler source code you can refer to:
https://github.com/kelseyhightower/scheduler
Hope this gives you a brief idea.
EDIT:
Kelsey Hightower's scheduler (link mentioned above) has to be deployed in the following way:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: scheduler
  name: scheduler
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: scheduler
      name: scheduler
    spec:
      containers:
        - name: scheduler
          image: kelseyhightower/scheduler:0.4.0
        - name: kubectl
          image: kelseyhightower/kubectl:1.3.4
          args:
            - "proxy"
Then, whenever you deploy new pods that should use that scheduler, you need to provide schedulerName in the YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      schedulerName: hightower
      containers:
        - name: nginx
          image: "nginx:1.11.1-alpine"
          resources:
            requests:
              cpu: "500m"
              memory: "128M"
That schedulerName should be the name of the scheduler defined in your code.
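To double-check which scheduler actually placed a pod, you can read it back from the pod spec; for example, for the nginx deployment above:
$ kubectl get pods -l app=nginx -o jsonpath='{.items[0].spec.schedulerName}'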

Pod naming in a Google cluster

I created a deployment like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: scs-db-sink
spec:
  selector:
    matchLabels:
      app: scs-db-sink
  replicas: 1
  template:
    metadata:
      labels:
        app: scs-db-sink
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: service-pool
      containers:
        - name: scs-db-sink
          image: 'IMAGE_NAME'
          imagePullPolicy: Always
          ports:
            - containerPort: 1068
kubectl get pods shows me that the pod is running:
scs-db-sink-74c4b6cd6b-tchm9 1/1 Running 0 16m
Question:
How can I set the pod name to be scs-db-sink-0 and have it increase to scs-db-sink-1 when I scale up?
Thanks
A Deployment's pods are named <replicaset-name>-<random-suffix>, where the ReplicaSet name is <deployment-name>-<random-suffix>. The ReplicaSet is created automatically by the Deployment, so you can't achieve the naming you expect with a Deployment.
However, you can use a StatefulSet in this case. A StatefulSet's pods get stable ordinal names (<statefulset-name>-0, <statefulset-name>-1, ...). Check about StatefulSets here.
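A minimal sketch of what that could look like for this workload, assuming a headless Service named scs-db-sink exists to govern the StatefulSet (everything else is carried over from the question); the pods are then named scs-db-sink-0, scs-db-sink-1, and so on as you scale up:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: scs-db-sink
spec:
  serviceName: scs-db-sink   # assumed headless Service
  replicas: 1
  selector:
    matchLabels:
      app: scs-db-sink
  template:
    metadata:
      labels:
        app: scs-db-sink
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: service-pool
      containers:
        - name: scs-db-sink
          image: 'IMAGE_NAME'
          imagePullPolicy: Always
          ports:
            - containerPort: 1068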