I have a ConfigMap in my chart, and it has a different name in each release. When release 3 is deployed, only my-cm-bea2 exists.
Release revision    ConfigMap name
1                   my-cm-3287
2                   my-cm-475f
3                   my-cm-bea2
How can I preserve the last 2 versions of the ConfigMap? For example, when release 3 is deployed, keep both my-cm-bea2 and my-cm-475f and delete my-cm-3287.
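For context, here is a hypothetical sketch of how such a per-release name might be produced; the checksum-based suffix and the .Values.config key are assumptions for illustration, not taken from the actual chart:

# Hypothetical templates/configmap.yaml: the name changes whenever the
# rendered data changes, so each release revision can leave behind a
# differently named ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-cm-{{ .Values.config | toYaml | sha256sum | trunc 4 }}
data:
  {{- .Values.config | toYaml | nindent 2 }}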
Is it a good practice to add helm release revision labels to pods?
helm-revision: {{ .Release.Revision }}
Here is the list of standard labels that Helm recommends (a revision label is not included there): https://helm.sh/docs/chart_best_practices/labels/#standard-labels
It is evident from this issue in another chart that adding a revision label to a pod will cause it to restart every time the chart is deployed: https://github.com/Azure/secrets-store-csi-driver-provider-azure/issues/121
I have a Helm chart for an app that has 2 components as 2 Deployments in the chart. The requirement is to use that label in the init container of app component B so that it waits for the pods of app component A (of the current Helm release revision only, avoiding the failed pods of the previous revision) to become ready.
Are there any other caveats about adding this label to a pod?
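To illustrate the idea, something like the following could go into component B's Deployment template. All names are made up, and it assumes component A's pods carry an app: component-a label, an init image that provides kubectl, and RBAC that allows the init container to read pods:

# Hypothetical excerpt from component B's Deployment template.
spec:
  template:
    metadata:
      labels:
        app: component-b
        helm-revision: "{{ .Release.Revision }}"
    spec:
      initContainers:
        - name: wait-for-component-a
          image: bitnami/kubectl:latest   # any image with kubectl works
          command:
            - sh
            - -c
            - |
              # Block until at least one component A pod from the current
              # release revision reports Ready.
              until kubectl wait pod \
                  -l app=component-a,helm-revision={{ .Release.Revision }} \
                  --for=condition=Ready --timeout=30s; do
                echo "still waiting for component A, revision {{ .Release.Revision }}"
                sleep 5
              done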
I have a unique scenario where I have to deploy two different deployments. I have created a Helm chart in which I programmatically change part of the deployment at run time (by passing overrides when applying the chart).
My Helm chart is very simple: it consists of a namespace and a deployment.
Now, when I apply the Helm chart the first time, it overrides the values of the first deployment at runtime: say it has attribute_name, which is substituted with the value Var_A as expected. It works well; it creates the namespace and the deployment with attribute_name set to Var_A. So far so good...
...but when I next apply the Helm chart to deploy my second deployment, which needs attribute_name to be Var_B, it does not get applied, because Helm complains that the namespace already exists (rightly so).
I am wondering how to implement this.
Would I need a separate Helm chart just for the namespace and another Helm chart for the deployments? Any recommendations?
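For illustration only (names and values keys are hypothetical, not taken from the original chart), the chart described above boils down to roughly these two templates, with attribute_name supplied as an override at install time, e.g. --set attribute_name=Var_A:

# templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
---
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
  namespace: {{ .Values.namespace }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: app
          image: {{ .Values.image }}
          env:
            - name: attribute_name
              value: {{ .Values.attribute_name | quote }}

Installing this a second time as a separate release fails at the Namespace object, since it already exists and is owned by the first release, which is why the question of splitting the namespace into its own chart comes up.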
Are there any problems when using Helm 2 and Helm 3 in parallel on the same cluster?
The reason behind this is that the Terraform Helm provider is still not available for Helm 3, but for another application we'd like to proceed with Helm 3.
Have you maybe tried this? Or did you run into some problems?
Helm 2 and Helm 3 can be installed concurrently to manage the same cluster. This works when Helm 2 uses ConfigMaps for storage, as Helm 3 uses Secrets for storage. There is, however, a conflict when Helm 2 uses Secrets for storage and stores the release in the same namespace as the release. The conflict occurs because Helm 3 uses different tags and ownership for the Secret objects than Helm 2 does. It can therefore try to create a release that it thinks does not exist, but will then fail because Helm 2 already has a secret with that name in that namespace.
Additionally, Helm 2 releases can be migrated so that Helm 3 manages releases previously handled by Helm 2; see https://github.com/helm/helm-2to3. This also works when Helm 2 uses ConfigMaps for storage, as Helm 3 uses Secrets for storage. There is again a conflict, however, when Helm 2 uses Secrets, because of the same naming convention.
A possible way around this would be for Helm 3 to use a different naming convention for release versions.
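To make the storage difference concrete, here is illustrative metadata only (the release name, namespace, and label values are examples, not taken from a real cluster) for the two kinds of release records:

# Helm 3: one Secret per release revision, in the release's own namespace.
apiVersion: v1
kind: Secret
type: helm.sh/release.v1
metadata:
  name: sh.helm.release.v1.myapp.v1
  namespace: myapp-ns
  labels:
    owner: helm
    name: myapp
    status: deployed
    version: "1"
---
# Helm 2: one ConfigMap (or Secret, with Tiller's --storage=secret) per
# revision, in Tiller's namespace (typically kube-system), with different labels.
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp.v1
  namespace: kube-system
  labels:
    OWNER: TILLER
    NAME: myapp
    STATUS: DEPLOYED
    VERSION: "1"

The differing label keys and name formats are what the answer above refers to as different tags, ownership, and naming conventions.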
There is no problem using them in parallel. However, you need to treat them as separate tools, meaning that Helm 3 won't list (or otherwise manage) your releases from Helm 2, and vice versa.
I have a set of YAML files which are of different kinds, like:
1 PVC
1 PV (The above PVC claims this PV)
1 Service
1 StatefulSet (the above Service is for this StatefulSet)
1 ConfigMap (the above StatefulSet uses this ConfigMap)
Does the install order of these objects matter when bringing up an application that uses them?
If you do kubectl apply -f dir on a directory containing all of those files, then it should work, at least if you have the latest version, as there have been bugs raised and addressed in this area.
However, there are some dependencies which aren't hard dependencies and for which there is ongoing discussion. For this reason, some people choose to order the resources themselves, or to use a deployment tool like Helm, which deploys resources in a defined order.
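As an illustration (all names, the image, and the hostPath volume source are made up for this sketch), a single multi-document file applied with kubectl apply -f can simply list the dependencies before the objects that consume them, roughly mirroring the order Helm would use:

apiVersion: v1
kind: ConfigMap               # consumed by the StatefulSet below
metadata:
  name: demo-config
data:
  greeting: hello
---
apiVersion: v1
kind: PersistentVolume        # bound by the PVC below
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /tmp/demo-pv        # single-node/dev volume source, for the sketch only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: ""        # bind to the static PV above, not a default class
  volumeName: demo-pv
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  clusterIP: None             # headless Service for the StatefulSet
  selector:
    app: demo
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo
spec:
  serviceName: demo
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx:1.25
          envFrom:
            - configMapRef:
                name: demo-config
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: demo-pvc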
Does Kubernetes support a gated launch, also called a gray release? For example, I deploy an nginx service in Kubernetes at version 1.10.2 with replicas = 10. Then I want to upgrade the service to 1.11.5, so I modify the deployment and run kubectl rollout status deployment nginx, and I find that all 10 pods have been set to 1.11.5. How can I reach the state where 2 pods run version 1.11.5 and 8 pods remain on the old 1.10.2?
This pattern is referred to as a canary deployment in the documentation (see the page linked).
In short (a sketch of the resulting canary manifest follows this list):
add a differentiating label, say track: stable, to your pods in the deployment (do this once and roll it out)
make a copy of the deployment file and name it foo-canary (make sure you also change the name in the file)
change that label to track: canary
change to replicas: 2
change the image or whatever else you need to, and deploy it
when satisfied with the result, change the original deployment and roll it out
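Here is a sketch of the canary manifest those steps produce, using hypothetical names (foo, foo-canary) and the nginx versions from the question:

# foo-canary: a copy of the stable deployment with a different name, the
# track label flipped to canary, 2 replicas, and the new image. It keeps
# the same app label so the existing Service still selects its pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-canary
spec:
  replicas: 2
  selector:
    matchLabels:
      app: foo
      track: canary
  template:
    metadata:
      labels:
        app: foo
        track: canary
    spec:
      containers:
        - name: nginx
          image: nginx:1.11.5

Scaling the original track: stable deployment down to 8 replicas then gives the 2-new / 8-old split asked about, provided the Service selector matches only app: foo and not the track label.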