Kubernetes + Helm - only restart pods if new version/change

Whenever I run my basic deploy command, everything is redeployed in my environment. Is there any way to tell Helm to only apply things if there were changes made or is this just the way it works?
I'm running:
helm upgrade --atomic MyInstall . -f CustomEnvironmentData.yaml
I didn't see anything in the Helm Upgrade documentation that seemed to indicate this capability.
I don't want to bounce my whole environment unless I have to.

There's no way to tell Helm to do this, but also no need. If you submit an object to the Kubernetes API server that exactly matches something that's already there, generally nothing will happen.
For example, say you have a Deployment object that specifies image: my/image:{{ .Values.tag }} and replicas: 3. You submit this once with tag: 20200904.01. Now you run the helm upgrade command you show, with that tag value unchanged in the CustomEnvironmentData.yaml file. This will in fact trigger the deployment controller inside Kubernetes. That sees that it wants 3 pods to exist with the image my/image:20200904.01. Those 3 pods already exist, so it does nothing.
(This is essentially the same as the "don't use the latest tag" advice: if you try to set image: my/image:latest, and redeploy your Deployment with this tag, since the Deployment spec is unchanged Kubernetes won't do anything, even if the version of the image in the registry has changed.)
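For concreteness, a minimal Deployment template along those lines might look like the sketch below; the resource name and labels are made up for illustration and are not from the question:

# templates/deployment.yaml (hypothetical names, minimal sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
    spec:
      containers:
        - name: app
          image: my/image:{{ .Values.tag }}

As long as tag in CustomEnvironmentData.yaml stays at 20200904.01, the rendered manifest is identical to what is already in the cluster, so the Deployment controller has nothing to do and the running pods are left alone.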

You should probably use helm diff upgrade
https://github.com/databus23/helm-diff
$ helm diff upgrade -h
Show a diff explaining what a helm upgrade would change.

This fetches the currently deployed version of a release
and compares it to a chart plus values.
This can be used to visualize what changes a helm upgrade will
perform.

Usage:
  diff upgrade [flags] [RELEASE] [CHART]

Examples:
  helm diff upgrade my-release stable/postgresql --values values.yaml

Flags:
  -h, --help                   help for upgrade
      --detailed-exitcode      return a non-zero exit code when there are changes
      --post-renderer string   the path to an executable to be used for post rendering. If it exists in $PATH, the binary will be used, otherwise it will try to look for the executable at the given path
      --reset-values           reset the values to the ones built into the chart and merge in any new values
      --reuse-values           reuse the last release's values and merge in any new values
      --set stringArray        set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
      --suppress stringArray   allows suppression of the values listed in the diff output
  -q, --suppress-secrets       suppress secrets in the output
  -f, --values valueFiles      specify values in a YAML file (can specify multiple) (default [])
      --version string         specify the exact chart version to use. If this is not specified, the latest version is used

Global Flags:
      --no-color   remove colors from the output
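For the release from the question, the workflow would look roughly like this (assuming the plugin is installed as described in its README):

helm plugin install https://github.com/databus23/helm-diff
helm diff upgrade MyInstall . -f CustomEnvironmentData.yaml

An empty diff means the upgrade would change nothing; with --detailed-exitcode the command also returns a non-zero exit code when there are changes, which is useful for gating CI pipelines.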

Related

Add key value pair to helm values.yaml file

I have a use case to write a randomly generated password back to values.yaml file.
I don't want to use the install or upgrade command; I'm having a hard time reading through the documentation.
Apart from going the Helm way, I tried updating the values.yaml file with the yq command, but it removes the comments in the file.
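For what it's worth, mikefarah's Go implementation of yq (v4) generally preserves comments on in-place edits; a rough sketch, with a made-up key name and password source, might be:

# sketch, assuming yq v4 (https://github.com/mikefarah/yq)
PASSWORD="$(openssl rand -base64 16)" \
  yq -i '.auth.password = strenv(PASSWORD)' values.yaml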

upgrade helm chart upon patching one of its objects

Do we need to explicitly upgrade the Helm chart "Release-1" when we patch one of its objects separately, e.g. the CronJob "CJ1"?
In my case, I have patched the cron job to run every minute.
I did not, however, upgrade the Helm chart that deployed the cron job.
kubectl get cj CJ1 -o yaml does show that the schedule has been changed from the old schedule to the new one: "* * * * *".
However, the job is not actually running at "* * * * *".
When you say patch, I presume you are referring to editing the object with kubectl edit ... or in any other way that applies the change without going through helm upgrade?
Generally speaking, if you follow DevOps and GitOps best practices, any change you make should go through git (be version controlled). If you patch an object separately/manually, then your code no longer represents what you have deployed, so the next time you upgrade the chart you are going to get the version without the patch (lose your changes).
So if you want to keep the changes that you have applied separately/manually, then yes: change your code, then upgrade the chart.
If in the long term it doesn't matter and you are just playing around ... then you don't have to do anything as the change you want is already in Kubernetes.
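As a sketch of the two approaches (the chart path and the cronjob.schedule value name are assumptions about how the chart is written, not from the question):

# One-off patch outside Helm (the next helm upgrade will revert it):
kubectl patch cronjob CJ1 -p '{"spec":{"schedule":"* * * * *"}}'

# The Helm/GitOps way: change the value in your chart, then upgrade:
helm upgrade Release-1 ./my-chart --set cronjob.schedule="* * * * *"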

Helm upgrade fails, seemingly due to exceeding a ConfigMap size limit

I was using helm upgrade -i xxx myfoo to install/upgrade myfoo.
I followed all the standard docs, but this failure ALWAYS happens as soon as I get to the 7th upgrade!
When I started the 7th upgrade, it gave me the failure below:
Error: UPGRADE FAILED: ConfigMap "myfoo.v7" is invalid: []: Too long: must have at most 1048576 characters
This is really frustrating! Why do this happen?
Thank you for your help, I now understand what's going on.
I had added some larger files to the chart/ dir, which caused the Helm package size to exceed 1 MB.
After removing these larger files, it works normally again. And I now understand why there is a .helmignore file: it is used to tell Helm not to include such files in the final packaged chart (*.tgz).
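A .helmignore at the chart root uses .gitignore-style patterns; a minimal sketch (the patterns below are only examples) could look like:

# .helmignore - keep large or irrelevant files out of the packaged *.tgz
*.tar.gz
*.dump
docs/
.git/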

Passing long configuration file to Kubernetes

I like the working methodology of Kubernetes: use a self-contained image and pass the configuration in a ConfigMap, as a volume.
This worked great until I tried to do the same thing with a Liquibase container. The SQL is very long, ~1.5K lines, and Kubernetes rejects it as too long.
Error from Kubernetes:
The ConfigMap "liquibase-test-content" is invalid: metadata.annotations: Too long: must have at most 262144 characters
I thought of passing the .sql files via a hostPath, but as I understand it, the hostPath's content is probably not going to be there.
Is there any other way to pass configuration from the K8s directory to pods? Thanks.
The error you are seeing is not about the size of the actual ConfigMap contents, but about the size of the last-applied-configuration annotation that kubectl apply automatically creates on each apply. If you use kubectl create -f foo.yaml instead of kubectl apply -f foo.yaml, it should work.
Please note that in doing this you will lose the ability to use kubectl diff and do incremental updates (without replacing the whole object) with kubectl apply.
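For example, assuming the ConfigMap manifest lives in a file called liquibase-configmap.yaml (the file name is just for illustration):

# First creation: no last-applied-configuration annotation is written
kubectl create -f liquibase-configmap.yaml

# Later updates replace the whole object instead of patching it incrementally
kubectl replace -f liquibase-configmap.yaml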
Since 1.18 you can use server-side apply to circumvent the problem.
kubectl apply --server-side=true -f foo.yml
where --server-side=true runs the apply operation on the server instead of the client.
This will properly detect conflicts with other actors, including client-side apply, and fail in that case:
Apply failed with 4 conflicts: conflicts with "kubectl-client-side-apply" using apiextensions.k8s.io/v1:
- .status.conditions
- .status.storedVersions
- .status.acceptedNames.kind
- .status.acceptedNames.plural
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See http://k8s.io/docs/reference/using-api/api-concepts/#conflicts
If the changes are intended, you can simply use the first option:
kubectl apply --server-side=true --force-conflicts -f foo.yml
You can use an init container for this. Essentially, put the .sql files on GitHub or S3 or really any location you can read from and populate a directory with it. The semantics of the init container guarantee that the Liquibase container will only be launched after the config files have been downloaded.
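A rough sketch of that pattern, using a hypothetical download URL and the public curl and Liquibase images (the paths and names are assumptions, not from the question):

apiVersion: v1
kind: Pod
metadata:
  name: liquibase
spec:
  volumes:
    - name: sql              # scratch space shared between the containers
      emptyDir: {}
  initContainers:
    - name: fetch-sql
      image: curlimages/curl
      # hypothetical URL; GitHub raw, S3, an internal server, etc. all work
      command: ["sh", "-c", "curl -fsSL -o /sql/changelog.sql https://example.com/changelog.sql"]
      volumeMounts:
        - name: sql
          mountPath: /sql
  containers:
    - name: liquibase
      image: liquibase/liquibase
      volumeMounts:
        - name: sql
          mountPath: /liquibase/changelog

Kubernetes guarantees the init container finishes successfully before the liquibase container starts, so the SQL file is in place by the time it is needed.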

What is the best way of creating Helm Charts in a single repository for different deployment environments?

We are using Helm charts for deploying a service to several environments on a Kubernetes cluster. For each environment there is a list of variables like the database URL, Docker image tag, etc. What is the most obvious and correct way of defining the Helm values.yaml in a case where all the Helm template files remain the same for every environment except for a few parameters like those mentioned above?
One way to do this would be using multiple value files, which helm now allows. Assume you have the following values files:
values1.yaml:
image:
  repository: myimage
  tag: 1.3
values2.yaml:
image:
  pullPolicy: Always
These can both be used on command line with helm as:
$ helm install -f values1.yaml,values2.yaml <mychart>
In this case, these values will be merged into:
image:
  repository: myimage
  tag: 1.3
  pullPolicy: Always
You can see the values that will be used by giving the "--dry-run --debug" options to the "helm install" command.
Order is important. If the same value appears in both files, the values from values2.yaml will take precedence, as it was specified last. Each chart also comes with a values file. Those values will be used for anything not specified in your own values file, as if it were first in the list of values files you provided.
In your case, you could specify all the common settings in values1.yaml and override them as necessary with values2.yaml.
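In practice this usually ends up as one values file per environment layered on top of the chart's defaults; the file names below are just a convention, not something Helm requires:

# values.yaml          - common defaults shipped with the chart (used automatically)
# values-staging.yaml  - staging overrides (database URL, image tag, ...)
# values-prod.yaml     - production overrides

helm install my-service . -f values-staging.yaml    # staging
helm install my-service . -f values-prod.yaml       # production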