Helm upgrade doesn't pull new container - kubernetes

I built a simple NodeJS API, pushed the Docker image to a repo, and deployed it to my k8s cluster with helm install (works perfectly fine).
The pullPolicy is Always.
Now I want to update the source code and deploy the updated version of my app. I bumped the version in all files, built and pushed the new Docker image, and tried helm upgrade, but it seems like nothing happened.
With helm list I can see that a new revision was deployed, but the changes to the source code were not deployed.
watch kubectl get pods also shows that no new pods were created, the way you would expect with kubectl apply ...
What did I do wrong?

Helm will roll out changes to Kubernetes objects only if there are changes to roll out. If you use :latest, there is no change to be applied to the deployment file, ergo no rolling update of the pods. To keep using latest, you need to add something (e.g. a label with the SHA / version) that will change and cause the deployment to get updated by Helm. Also keep in mind that you will usually need imagePullPolicy: Always as well.

Possible workaround:
spec:
  template:
    metadata:
      labels:
        date: "{{ now | unixEpoch }}"
Add it to your Deployment or StatefulSet YAML.
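For context, a minimal sketch of how this might look inside a chart's templates/deployment.yaml; the mychart.fullname helper and the .Values.image.* keys are assumptions that may differ in your chart. Note the date label sits only under spec.template.metadata.labels, not under the (immutable) selector:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}   # hypothetical helper from _helpers.tpl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
        date: "{{ now | unixEpoch }}"        # changes on every render, forcing a rollout
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: Always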

It's worth noting that there's nothing special about the latest tag. In other words, it doesn't mean what we would normally think, i.e. "the most recent version".
It's just a string of characters from a container standpoint. It could be anything, like "blahblah".
The runtime (Docker or Kubernetes) will just look to see if it already has an image with that tag and only pull the new image if that tag doesn't exist locally.
Given that "latest" doesn't actually mean anything, the best practice, if you want to be updating images constantly, is to use the actual version of the code itself as the image tag. Then, when deploying, have your infrastructure specifically deploy the newest version using the correct tag.
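A hedged sketch of what that can look like in a build script, assuming a Git checkout and a chart that exposes an image.tag value; the myrepo/myapp image name and ./chart path are placeholders:
TAG=$(git rev-parse --short HEAD)               # use the commit SHA as the image tag
docker build -t "myrepo/myapp:$TAG" .
docker push "myrepo/myapp:$TAG"
helm upgrade --install myapp ./chart --set image.tag="$TAG"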

This is the way I solved it in the deployment script in .gitlab.yaml; you can do something similar in any of your deployment scripts.
export SAME_SHA=$(helm get values service-name | grep SHA | wc -l)
if [ "$SAME_SHA" -eq 1 ]; then helm uninstall service-name; fi
helm upgrade --install service-name -f service-values.yml .
This may not be the best approach for production, as you may end up uninstalling a live service, but for me the production SHAs are never the same, so this works.

Related

Helm rollback to previous build is not reflecting in deployments [duplicate]

Using kubectl rollouts to update my images, but need to also keep my deployment object in version control

In my CI/CD, I am:
generating a new image with a unique tag, e.g. foo:dev-1339, and pushing it to my image repo (ECR).
Then I am using a rolling update to update my deployment.
kubectl rolling-update frontend --image=foo:dev-1339
But I have a conflict here.
What if I also need to update some part of my deployment object as stored in a deployment.yaml file? Let's say I harden a health check or add a parameter.
Then, when I re-apply my deployment object as a whole, it will not be in sync with the current ReplicaSet: the tag will get reverted and I will lose that image update as it exists in the cluster.
How do I avoid this race condition?
A typical solution here is to use a templating layer like Helm or Kustomize.
In Helm, you'd keep your Kubernetes YAML specifications in a directory structure called a chart, but with optional templating. You can specify things like
image: myname/myapp:{{ .Values.tag | default "latest" }}
and then deploy the chart with
helm install myapp --name myapp --set tag=20191211.01
Helm keeps track of these values (in Secret objects in the cluster) so they don't get tracked in source control. You could check in a YAML-format file with settings and use helm install -f to reference that file instead.
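For example, a small settings file could pin the tag and be referenced at install time (prod-values.yaml is a placeholder name, and the chart is assumed to read .Values.tag as above):
# prod-values.yaml (hypothetical file)
tag: "20191211.01"
You would then run helm install myapp --name myapp -f prod-values.yaml, and helm get values myapp later shows what the release was deployed with.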
In Kustomize, your CI tool would need to create a kustomization.yaml file for per-deployment settings, but then could set
images:
- name: myname/myapp
  newTag: "20191211.01"
If you trust your CI tool to commit to source control then it can check this modified file in as part of its deployment sequence.
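As a sketch, the CI step could use kustomize edit instead of writing the file by hand; the directory and image names are placeholders:
cd deploy/                                         # directory containing kustomization.yaml
kustomize edit set image myname/myapp=myname/myapp:20191211.01
git commit -am "Deploy myname/myapp:20191211.01"   # optional: record the pinned tag in source control
kubectl apply -k .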
Imperative vs Declarative workflow
There are two fundamental ways of using kubectl to apply changes to your cluster. The imperative way, where you run commands directly, is a good fit for experimentation and development environments. kubectl rolling-update is an example of an imperative command. See Managing Kubernetes using Imperative Commands.
For a production environment, it is recommended to use a declarative workflow: edit manifest files, store them in a Git repository, and automatically start a CI/CD job when you commit or merge. kubectl apply -f <file>, or more interestingly kubectl apply -k <dir>, is an example of this workflow. See Declarative Management using Config Files or, more interestingly, Declarative Management using Kustomize.
CICD for building image and deployment
Building an artifact from source code, including a container image, may be done in a CI/CD pipeline. Managing application config and applying it to the Kubernetes cluster may also be done in a CI/CD pipeline. You may want to automate it all, e.g. for doing Continuous Deployment, and combine both pipelines into a single long pipeline. This is a more complicated setup and there is no single answer for how to do it. When the build part is done, it may trigger an update of the image field in the app configuration repository to trigger the configuration pipeline.
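A hedged sketch of such a combined pipeline in GitLab CI terms, assuming a chart under ./chart and the predefined CI_REGISTRY_IMAGE / CI_COMMIT_SHORT_SHA variables (the job and release names are placeholders):
stages: [build, deploy]

build-image:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  script:
    - helm upgrade --install myapp ./chart --set image.tag="$CI_COMMIT_SHORT_SHA"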
Unfortunately there is no solution, either from the command line or through the YAML files.
As per the doc here, "...a Deployment is a higher-level controller that automates rolling updates of applications declaratively, and therefore is recommended" over the use of Replication Controllers and kubectl rolling-update. Updating the image of a Deployment will trigger the Deployment's rollout.
An approach could be to update the Deployment configuration YAML (or JSON) under version control in the source repo and apply the changed Deployment configuration from version control to the cluster.
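A minimal sketch of that approach, assuming the manifest lives at k8s/deployment.yaml and its image line has the form image: foo:<tag> (both are assumptions for this example):
NEW_TAG=dev-1339
sed -i "s|image: foo:.*|image: foo:${NEW_TAG}|" k8s/deployment.yaml   # update the tag in the tracked manifest
git commit -am "Deploy foo:${NEW_TAG}"                                # keep version control as the source of truth
kubectl apply -f k8s/deployment.yaml                                  # apply exactly what is committed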

Helm on Minikube: update local image

I have Minikube (v1.1.0) running locally with Helm (v2.13.1) initialized and have connected the local Docker daemon with Minikube by running eval $(minikube docker-env). In the code base of my application I created a chart with helm create chart. The first few lines of ./chart/values.yml I changed to:
image:
  repository: app-development
  tag: latest
  pullPolicy: Never
I build the image locally and install/upgrade the chart with Helm:
docker build . -t app-development
helm upgrade --install example ./chart
Now, this works perfectly the first time, but if I make changes to the application I would like to run the above two commands again to upgrade the image. Is there any way to get this working?
Workaround:
To get the expected behaviour I can delete the chart from Minikube and install it again:
docker build . -t app-development
helm del --purge example
helm install example ./chart
When you make a change like this, Kubernetes is looking for some change in the Deployment object. If it sees that you want 1 Pod running app-development:latest, and it already has 1 Pod running an image named app-development:latest, then it's in the right state and it doesn't need to do anything (even if the local image that has that tag has changed).
The canonical advice here is to never use the :latest tag with Kubernetes. Every time you build an image, use a distinct tag (a time stamp or the current source control commit ID are easy unique things). With Helm it's easy enough to inject this based on a value you pass in:
image: app-development:{{ .Values.tag | default "latest" }}
This sort of build sequence would look a little more like
TAG=$(date +%Y%m%d-%H%M%S)
docker build -t "app-development:$TAG" .
helm upgrade --install example ./chart --set "tag=$TAG"
If you're actively developing your component you may find it easier to try to separate out "hacking on code" from "deploying into Kubernetes" as much as you can. Some amount of this tends to be inevitable, but Kubernetes really isn't designed to be a live development environment.
One way you could solve this problem is using Minikube and Cloud Code from Google. When you initialize Cloud Code in your project, it creates a skaffold.yaml at the root location. You can put the Helm chart for the same project in the same code base. Go ahead and edit this configuration to match the folder location of the Helm chart:
deploy:
  helm:
    releases:
    - name: <chart_name>
      chartPath: <folder path relative to this file>
Now, when you click on Cloud Code at the bottom of your Visual Studio Code editor (or any supported editor), it should give you the following options (screenshot: https://i.stack.imgur.com/vXK4U.png).
Select "Run on Kubernetes" from the list.
The only change you'll have to make in your Helm chart is to read the image URL from the Skaffold YAML using a profile:
profiles:
- name: prod
  deploy:
    helm:
      releases:
      - name: <helm_chart_name>
        chartPath: helm
        skipBuildDependencies: true
        artifactOverrides:
          image: <url_production_image_url>
This will read the image from the configured URL, whereas locally it should read from the Docker daemon. Cloud Code also provides hot update / deployment when you make changes to any file, so there is no need to always mention an image tag while testing locally. Once you're happy with the code, update the image with the latest version number, which should trigger a deployment in your integration / dev environment.
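For reference, a minimal top-level skaffold.yaml tying the locally built image to the Helm release might look roughly like this; the apiVersion, image name, release name and chart path are assumptions for this sketch:
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
  - image: app-development              # image built from the local Dockerfile
deploy:
  helm:
    releases:
    - name: example
      chartPath: chart
      artifactOverrides:
        image: app-development          # injects the freshly built tag into the chart's image value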

How to bind kubernetes resource to helm release

If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the StatefulSet to a previous Helm release? (e.g. by specifying some tags in the YAML file)
As far as I know, you cannot do it.
Yes, you can always create resources via templates before installing the Helm chart.
However, I have never seen a solution to your question.

Can I modify container's environment variables without restarting pod using kubernetes

I have a running pod and I want to change one of its container's environment variables and have it take effect immediately. Can I achieve that? If I can, how do I do that?
Simply put, and in kube terms, you cannot.
The environment for a Linux process is established at process startup, and there are certainly no kube tools that can achieve such a goal.
For example, if you make a change to your Deployment (I assume you use one to create the pods), it will roll the underlying pods.
Now, that said, there is a really hacky solution reported under "Is there a way to change the environment variables of another process in Unix?" that involves using GDB.
Also, remember that even if you could do that, there is still application logic that would need to watch for such changes instead of, as is usually the case now, just evaluating configuration from the environment during startup.
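For completeness, that GDB trick boils down to attaching to the process and calling putenv(); this is only a sketch, it needs gdb inside the container plus ptrace permissions, and the variable name is a placeholder:
kubectl exec -it <pod_name> -- gdb -p 1          # attach to PID 1, the container's main process
(gdb) call (int) putenv("MY_VAR=new-value")      # changes that process's copy of the environment
(gdb) detach
(gdb) quit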
This worked for me:
kubectl set env RESOURCE/NAME KEY_1=VAL_1 ... KEY_N=VAL_N
Check the official documentation here.
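For example, against a Deployment (the names are placeholders; note that changing the pod template this way rolls out new pods):
kubectl set env deployment/my-app LOG_LEVEL=debug    # set or update a variable
kubectl set env deployment/my-app LOG_LEVEL-         # remove it again
kubectl set env deployment/my-app --list             # show the variables currently configured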
Another approach for running pods: you can get into the pod's command line and change the variables at runtime.
kubectl exec -it <pod_name> -- /bin/bash
Then run
export VAR1=VAL1 && export VAR2=VAL2 && your_cmd
I'm not aware of any way to do it, and I can't think of a real-world scenario where this makes much sense.
Usually you have to restart a process for it to notice the changed environment variables, and the easiest way to do that is to restart the pod.
The solution closest to what you seem to want is to create a Deployment and then use kubectl edit (kubectl edit deploy/name) to modify its environment variables. A new pod is started and the old one is terminated after you save.
Kubernetes is designed in such a way that any changes to the pod should be redeployed through the config. If you go messing with pods that have already been deployed you can end up with weird clusters that are hard to debug.
If you really want to you can run additional commands in your running pod using kubectl exec, but this is only recommended for debug purposes.
kubectl exec -it <pod_name> -- export VARIABLENAME=<thing>
If you are using Helm 3 or later, according to the documentation:
Automatically Roll Deployments
Often times ConfigMaps or Secrets are injected as configuration files in containers or there are other external dependencies changes that require rolling pods. Depending on the application a restart may be required should those be updated with a subsequent helm upgrade, but if the deployment spec itself didn't change the application keeps running with the old configuration resulting in an inconsistent deployment.
The sha256sum function can be used to ensure a deployment's annotation section is updated if another file changes:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]
In the event you always want to roll your deployment, you can use a similar annotation step as above, instead replacing with a random string so it always changes and causes the deployment to roll:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
[...]
Both of these methods allow your Deployment to leverage the built in update strategy logic to avoid taking downtime.
NOTE: In the past we recommended using the --recreate-pods flag as another option. This flag has been marked as deprecated in Helm 3 in favor of the more declarative method above.
It is hard to change from the outside, but it is easy to change from the inside: your app running in the pod can change it. Just expose an API to change the environment variable.
You can use a ConfigMap mounted as a volume to update the app's configuration on the go.
Refer: https://itnext.io/how-to-automatically-update-your-kubernetes-app-configuration-d750e0ca79ab
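A minimal sketch of that pattern, assuming a ConfigMap named app-config (all names and paths here are placeholders); the kubelet refreshes the mounted files while the pod keeps running, whereas plain env/envFrom values only change on a pod restart:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # placeholder name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myrepo/my-app:1.0.0      # placeholder image
        volumeMounts:
        - name: config
          mountPath: /etc/app           # the app re-reads files here to pick up changes
      volumes:
      - name: config
        configMap:
          name: app-config              # assumed ConfigMap holding the configuration files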