Kubernetes: "kubectl apply" does not update existing deployments

I have a .NET Core web application. The image is pushed to an Azure Container Registry. I deploy it to my Azure Kubernetes Service using
kubectl apply -f testdeployment.yaml
with the YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: mycontainerregistry.azurecr.io/myweb:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-registry-key
This works splendidly, but when I change some code, push a new image to the container registry and run
kubectl apply -f testdeployment.yaml
again, the AKS website does not get updated until I remove the deployment with
kubectl delete deployment myweb
What should I do to make it overwrite whatever is deployed? I would like to add something to my YAML file. (I'm trying to use this for continuous delivery in Azure DevOps.)

I believe what you are looking for is imagePullPolicy. The default is IfNotPresent, which means a newer image will not be pulled if one with that tag is already present on the node.
https://kubernetes.io/docs/concepts/containers/images/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: mycontainerregistry.azurecr.io/myweb
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-registry-key
To ensure that the pod is recreated, run the following instead:
kubectl delete -f testdeployment.yaml && kubectl apply -f testdeployment.yaml
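On kubectl 1.15 or newer, a rolling restart is a gentler alternative to delete-and-apply; combined with imagePullPolicy: Always it forces new pods that re-pull the image:
kubectl rollout restart deployment myweb
kubectl rollout status deployment myweb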

kubectl does not see any changes in your deployment YAML file, so it will not make any changes. That is one of the problems with using the latest tag.
Tag your image with an incremental version or build number and replace latest with that tag in your CI pipeline (for example with envsubst or similar). This way kubectl knows the image has changed, and you also know which version of the image is running. The latest tag could be any image version.
Simplified example for Azure DevOps:
deployment-template.yaml (snippet):
image: mycontainerregistry.azurecr.io/myweb:${TAG}
Pipeline YAML:
stages:
- stage: Build
  jobs:
  - job: Build
    variables:
    - name: TAG
      value: $(Build.BuildId)
    steps:
    - script: |
        envsubst '${TAG}' < deployment-template.yaml > deployment.yaml
      displayName: Replace Environment Variables
Alternatively, you could use another tool such as the Replace Tokens task (which uses a different syntax: #{TAG}#).
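A hedged sketch of the follow-on deploy step, appended under steps and assuming kubectl on the agent is already configured against the AKS cluster (otherwise use a Kubernetes service connection/task):
    - script: |
        kubectl apply -f deployment.yaml
      displayName: Deploy to AKS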

First delete the deployment by running the command below from the directory containing the deployment file:
kubectl delete -f .\deployment-file-name.yaml
Earlier I used to get
deployment.apps/deployment-file-name unchanged
meaning kubectl saw no difference between the file and the live object.
This tends to happen while you are fixing errors or typos in the deployment YAML: the previously applied configuration sticks around once the error is cleared.
Only a kubectl delete -f .\deployment-file-name.yaml cleared it for me.
Afterwards you can deploy again with
kubectl apply -f .\deployment-file-name.yaml
Sample YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-file-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: /platformservice:latest
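Before resorting to a delete, it can help to check whether kubectl would actually change anything; on reasonably recent kubectl versions, kubectl diff compares the file against the live object:
kubectl diff -f .\deployment-file-name.yaml
If the output is empty, the "unchanged" message simply means the live object already matches your file.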

Related

How to trigger a Kubernetes/OpenShift job restart whenever a specific pod in the cluster restarts?

For example, I have a pod running a server, and I have a job in my cluster that does some YAML patching on the server's deployment.
Is there a way to set up some kind of trigger that will rerun the job whenever the respective deployment changes?
You can add your job's spec to the deployment as an initContainer, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      initContainers:
      - name: init
        image: centos:7
        command:
        - "/bin/bash"
        - "-c"
        - "do something useful"
      containers:
      - name: nginx
        image: nginx
In this case, every time you roll out the deployment, the job defined in initContainers will run.
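For example, after triggering a new rollout you can confirm the init step ran by watching the rollout and reading the init container's logs (a sketch; names taken from the manifest above):
kubectl rollout status deployment example
kubectl logs deployment/example -c init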

How can "kubectl apply -f <file.yaml> --force=true" make its changes visible inside a deployed container's exec console?

I am trying to redeploy the exact same existing image after changing a secret in the Azure Vault. Because the image is unchanged, kubectl apply does not redeploy it. I tried to force the deploy by adding the --force=true option. The deploy now takes place and the new secret value is visible in the ConfigMap on the dashboard, but not inside the API container when I check from a kubectl exec console prompt.
Below is one of the 3 deployment manifests (YAML files) for the service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tube-api-deployment
  namespace: tube
spec:
  selector:
    matchLabels:
      app: tube-api-app
  replicas: 3
  template:
    metadata:
      labels:
        app: tube-api-app
    spec:
      containers:
      - name: tube-api
        image: ReplaceImageName
        ports:
        - name: tube-api
          containerPort: 80
        envFrom:
        - configMapRef:
            name: tube-config-map
      imagePullSecrets:
      - name: ReplaceRegistrySecret
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: tube
spec:
  ports:
  - name: api-k8s-port
    protocol: TCP
    port: 8082
    targetPort: 3000
  selector:
    app: tube-api-app
I think it is not happening because of how ConfigMap updates propagate. When you update a ConfigMap, the files in any volumes that reference it are eventually updated, and it is then up to the process in the container to detect the change and reload. Environment variables injected with envFrom, as in this manifest, are only read at container startup and are never refreshed while the pod is running. There is currently no built-in way to signal an application that a new version of a ConfigMap has been deployed; it is up to the application (or a helper script) to watch for config changes and reload, or the pods have to be restarted.
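One hedged workaround, assuming you can script against the cluster: stamp the pod template with a checksum of the ConfigMap so that every config change alters the template and triggers a fresh rollout (the checksum/config annotation name is just a convention):
CONFIG_HASH=$(kubectl get configmap tube-config-map -n tube -o yaml | sha256sum | cut -d' ' -f1)
kubectl patch deployment tube-api-deployment -n tube -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/config\":\"$CONFIG_HASH\"}}}}}"
On kubectl 1.15+, kubectl rollout restart deployment tube-api-deployment -n tube achieves a restart with less ceremony, though it is not tied to the config change.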

Kubernetes cluster not pulling images created by Skaffold

This is my skaffold file:
apiVersion: skaffold/v1
kind: Config
metadata:
  name: app-skaffold
build:
  artifacts:
  - image: myappservice
    context: api-server
deploy:
  helm:
    releases:
    - name: myapp
      chartPath: chart/myapp
And in my Helm templates folder I have only one manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: apiserver
        image: myappservice
        ports:
        - containerPort: 5050
        env:
        - name: a-key
          valueFrom:
            secretKeyRef:
              name: secret-key
              key: secret-key-value
But every time I run:
$ skaffold dev
and check my pods' status with $ kubectl get pods, I get ErrImagePull statuses.
This started when I added Helm to the stack; it was working when I used kubectl only.
In the deploy section of my skaffold.yaml file, I previously had:
deploy:
  kubectl:
    manifests:
    - ./k8s-manifests/*.yaml
and it was working fine. The only thing I did was move the manifest file into the templates folder of my Helm chart and change the skaffold.yaml file as shown above.
What am I missing?
I ran into a registry issue where all my images suddenly disappeared after an ingress config change and Skaffold (1.0.0) couldn't load anything. The only way I could fix it was by deleting my entire cluster and re-creating it.
This probably won't help, but it's worth a shot.
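As a hedged aside, separate from the answer above: with the Helm deployer, Skaffold only injects the built, tagged image into the release if the chart is wired to receive it. Under the skaffold/v1 schema this was done through the release's values mapping; the keys below are an assumption meant to illustrate the wiring, so check them against your Skaffold version:
deploy:
  helm:
    releases:
    - name: myapp
      chartPath: chart/myapp
      values:
        image: myappservice
and the chart template would then read the image from that value instead of hard-coding the name:
image: {{ .Values.image }}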

unable to recognize "deployment.yml": yaml: line 3: mapping values are not allowed in this context

I have set up a GitLab CI/CD pipeline which builds and deploys Docker images to Kubernetes. I'm using YAML-based deployment to Kubernetes. When I run the pipeline, the gitlab-runner always throws "unable to recognize yaml line 3: mapping values are not allowed in this context", but when I run it directly using kubectl create -f deployment.yaml, it works correctly.
Here are the first few lines of the YAML file. I have already validated the YAML formatting. The error is thrown at line 3.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: configserver
  name: configserver
spec:
  ports:
  - name: http
    port: 8888
  selector:
    app: configserver
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: configserver
  name: configserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: configserver
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: configserver
    spec:
      containers:
      - image: config-server:latest
        name: configserver
        ports:
        - containerPort: 8888
        resources: {}
      restartPolicy: Always
Is this something to do with GitLab?
Thanks.
EDIT:
Here's the relevant part of my .gitlab-ci.yml
stages:
- build
- deploy
build:
  stage: build
  script:
  - mvn clean install -DskipTests
  - docker-compose -f docker-compose-istio.yml build
  - docker-compose -f docker-compose-istio.yml push
deploy:
  stage: deploy
  script:
  - kubectl apply -f itp-ms-deploy.yml
  - kubectl apply -f itp-ms-gateway.yml
  - kubectl apply -f itp-ms-autoscale.yml
  when: manual
  only:
  - master
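One hedged way to narrow this down is to inspect the file exactly as the runner sees it and run a client-side dry run inside the deploy job, since the copy that fails in CI may not be byte-for-byte the one that works locally (file name taken from the pipeline above; use plain --dry-run on kubectl older than 1.18):
  - cat -A itp-ms-deploy.yml | head -n 10
  - kubectl apply --dry-run=client -f itp-ms-deploy.yml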

How to write a Kubernetes deployment that picks up the latest image built with GCP Cloud Build

I am trying to do CI/CD with GCP Cloud Build.
I have a k8s cluster ready in GCP; check the deployment manifest below.
I have a cloudbuild.yaml ready to build a new image, push it to the registry, and change the deployment image; check the cloudbuild YAML below.
Previously I pushed the Docker image with the latest tag and used the same tag in the deployment, but it didn't pull the latest image, so I have now changed it to use the $COMMIT_SHA tag. I am not able to figure out how to pass the new image, tagged with the commit SHA, to the deployment.
nginx-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mynginx
spec:
  replicas: 3
  minReadySeconds: 50
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: gcr.io/foods-io/cloudbuildtest-image:latest
        name: nginx
        ports:
        - containerPort: 80
cloudbuild.yaml
steps:
#step 1
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/cloudbuildtest-image:$COMMIT_SHA', '.']
#step 2
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/cloudbuildtest-image:$COMMIT_SHA']
#step 3
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/mynginx', 'nginx=gcr.io/foods-io/cloudbuildtest-image:$COMMIT_SHA']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=cloudbuild-test'
images:
- 'gcr.io/$PROJECT_ID/cloudbuildtest-image'
Note: I repeat, previously I was using the latest tag on the image, and since the deployment used the same tag I expected step 3 in cloudbuild to pull the new image, but it didn't. That is why I made the change above to the TAG, but now I am wondering how to make the corresponding change to the deployment manifest. Is using Helm the only solution here?
You need a step to replace the tag in your deployment.yaml. One way to do it is to use an environment variable and envsubst to replace it.
Change deployment.yaml:
- image: gcr.io/foods-io/cloudbuildtest-image:$COMMIT_SHA
Use some bash script to replace the variable (in an ubuntu step, for example):
envsubst '$COMMIT_SHA' < deployment.yaml > nginx-deployment.yaml
Alternative using sed:
sed -e 's/$COMMIT_SHA/'"$COMMIT_SHA"'/g' deployment.yaml > /workspace/nginx-deployment.yaml
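A hedged sketch of how a render-and-apply could replace step 3 of the cloudbuild.yaml above. Cloud Build substitutes $COMMIT_SHA inside args, so the existing :latest tag in nginx-deployment.yaml can serve as the placeholder; the ubuntu builder and the rendered file path are assumptions:
- name: 'ubuntu'
  args: ['bash', '-c', 'sed "s|:latest|:$COMMIT_SHA|g" nginx-deployment.yaml > /workspace/rendered.yaml']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', '/workspace/rendered.yaml']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=cloudbuild-test'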