Suppose I have a deployment defined in a Helm chart with subcharts.
Does "helm install --atomic ..." roll back the deployment of all charts and subcharts if any chart or subchart deployment fails?
In other words, is the whole deployment, including subcharts, atomic?
I have a custom application Helm chart with an Ingress object which is deployed in production.
Now I need to migrate the Ingress object from the Helm chart to Terraform, to give another team control over the object.
Technically, accepting downtime is not a problem.
But I want to keep the Ingress object from being undeployed by the Helm chart during the migration, as there is a Let's Encrypt certificate attached to it.
So is there a way to tell Helm to keep the Ingress object when I remove the Ingress from the chart source during helm upgrade?
I found the answer myself in the Helm annotations: https://helm.sh/docs/howto/charts_tips_and_tricks/#tell-helm-not-to-uninstall-a-resource
That means you deploy the Ingress again via the Helm chart with the annotation "helm.sh/resource-policy": keep.
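For illustration, a minimal sketch of what the annotated Ingress could look like (the name, host, and backend are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Tells Helm to leave this resource in the cluster when it is removed
    # from the chart or when the release is uninstalled.
    "helm.sh/resource-policy": keep
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80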
Then you remove the Ingress from the Helm chart and deploy the chart again.
Now the Ingress is still deployed in Kubernetes, but it is no longer under the control of the Helm release.
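You can verify this state (release, namespace, and object names are placeholders): the Ingress should be gone from the release manifest but still present in the cluster:

helm get manifest my-release -n my-namespace | grep -i ingress
kubectl get ingress -n my-namespace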
The next step is to model the Ingress in Terraform and import the resource via terraform import.
The last step is to check with terraform plan that the imported resource corresponds completely to the Ingress as coded in Terraform.
That's it.
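As a rough sketch of those two Terraform steps (the resource address kubernetes_ingress_v1.my_app and the namespace/name import ID are assumptions based on the Terraform Kubernetes provider; adjust them to your code):

terraform import kubernetes_ingress_v1.my_app my-namespace/my-app
terraform plan    # should report no changes once the code matches the live object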
You can just keep the Helm chart as it is and add the details in Terraform; I think it will work.
Terraform will run the plan and apply the Helm release; if you configure the Helm release to roll out and there are no changes, no update will be applied to resources like the Ingress, Deployments, etc.
With terraform, you can use the Helm provider: https://registry.terraform.io/providers/hashicorp/helm/latest/docs
I am trying to use Helm 3 to install Kubeflow 1.3 with Istio 1.9 on Kubernetes 1.16. Kubeflow does not provide an official Helm chart, so I figured it out by myself.
But Helm does not guarantee ordering: pods of other Deployments and StatefulSets can come up before the Istio mutating webhook and istiod are up. For example, if pod A comes up earlier without an istio-proxy sidecar and pod B comes up later with one, the two cannot communicate with each other.
Are there any simple best practices so this works as expected each time I deploy? That is to say, how do I make sure my installation with Helm is atomic?
Thank you in advance.
UPDATE:
I tried three approaches:
mark resources as pre-install, post-install, etc.
using subcharts
decouple one chart into several charts
And I adopted the third. The issue with the first is that Helm hooks are designed for Jobs: a resource can be marked as a hook, but it will not be deleted when using helm uninstall, since a resource cannot hold two Helm hooks at the same time (the annotation keys conflict). The issue with the second is that Helm installs subcharts and the parent chart at the same time, and calls the hooks of subcharts and the parent chart at the same time as well.
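A minimal sketch of the third approach, assuming the single chart has been split into hypothetical istio-base, istiod, and kubeflow charts; --wait makes each install block until its resources are ready before the next one starts:

helm install istio-base ./charts/istio-base -n istio-system --create-namespace --wait
helm install istiod ./charts/istiod -n istio-system --wait
helm install kubeflow ./charts/kubeflow -n kubeflow --create-namespace --wait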
Helm does not guarantee order.
Not completely. Helm collects all of the resources in a given chart and its dependencies, groups them by resource type, and then installs them in the following order:
Namespace
NetworkPolicy
ResourceQuota
LimitRange
PodSecurityPolicy
PodDisruptionBudget
ServiceAccount
Secret
SecretList
ConfigMap
StorageClass
PersistentVolume
PersistentVolumeClaim
CustomResourceDefinition
ClusterRole
ClusterRoleList
ClusterRoleBinding
ClusterRoleBindingList
Role
RoleList
RoleBinding
RoleBindingList
Service
DaemonSet
Pod
ReplicationController
ReplicaSet
Deployment
HorizontalPodAutoscaler
StatefulSet
Job
CronJob
Ingress
APIService
Additionally:
That is say, make sure my installation with Helm is atomic
you should know that:
Helm does not wait until all of the resources are running before it exits.
You generally have no control over the order if you are using Helm. You can try to use Init Containers to check that your pods' dependencies are available before the main containers run. You can read more about it here. Another workaround is to add a health check that keeps restarting the pod until everything is okay.
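As an illustration of the init-container idea, a minimal sketch (the istiod service name and the DNS check are assumptions; adapt them to your cluster):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  initContainers:
    - name: wait-for-istiod
      image: busybox:1.36
      # Block until the istiod service resolves in cluster DNS. This only
      # proves the service exists, not that istiod is ready; swap in a real
      # readiness check if you need a stronger guarantee.
      command: ['sh', '-c', 'until nslookup istiod.istio-system.svc.cluster.local; do sleep 2; done']
  containers:
    - name: my-app
      image: my-app:latest   # placeholder image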
See also:
this article about checking your helm deployments.
the question Helm Subchart order of execution in an umbrella chart, which has a good explanation
this question
related topic on github
I am using the official Elastic Helm chart to deploy an Elasticsearch cluster.
I had to create some Kubernetes objects, such as a NetworkPolicy, in addition to the Helm values.yaml file.
I am wondering if it's possible to "attach" these objects to the Helm release, so I can delete them when doing helm delete?
No, you can't.
Helm uses $ helm get manifest <release-name> -n <namespace> to list the items to delete. So, unless your objects are part of that list, you won't be able to delete them with $ helm delete.
I am using Spinnaker to deploy Helm charts, using the Bake (Manifest) stage to create the artifact and the Deploy (Manifest) stage to deploy the chart.
I didn't find any option for the release name of helm install in the Spinnaker stages. I even spun up a Helm pod in the k8s cluster and tried to list the releases; even after a successful Helm chart deployment with Spinnaker, I didn't see any release name.
How can I control the Helm release name using the above Spinnaker stages?
Spinnaker doesn't install Helm charts using the standard helm install/upgrade commands.
It takes the Helm chart as input; the Bake (Manifest) stage renders the chart into a single manifest file, and the Deploy (Manifest) stage applies that manifest directly to Kubernetes.
So, to answer your question: you can't control the standard Helm release name or chart version, because the Kubernetes cluster has no context of that Helm chart.
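The bake step is roughly equivalent to rendering the chart locally and applying the output, which is why no Helm release ever exists in the cluster (the release and chart names below are placeholders):

helm template my-release ./my-chart > manifest.yaml
kubectl apply -f manifest.yaml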
I deploy my Helm releases to isolated namespaces.
Deleting a namespace deletes all the resources in it - except the Helm deployment.
Deleting a Helm deployment deletes all resources in it - except the namespace.
I have to do this, which seems redundant:
helm del `helm ls NAMESPACE --short` --purge
kubectl delete namespace NAMESPACE
I'd rather just delete my namespace and have the Helm deploy also purged - is this possible?
Deleting a namespace deletes all the resources in it - except the Helm deployment
This can't be (deleting a namespace implies deleting everything in it; there aren't any exceptions), and must mean that the state representing Helm's concept of a deployment doesn't live in that namespace. Helm stores these as ConfigMaps in the TILLER_NAMESPACE. See here and here.
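You can see that state directly, assuming a default Helm 2 setup where Tiller lives in kube-system and labels its release ConfigMaps with OWNER=TILLER:

kubectl get configmaps -n kube-system -l OWNER=TILLER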
It's not surprising that if you create some resources with Helm and then go "under the hood" and delete those resources directly via kubectl, the deployment won't disappear from Helm's state of the world.
Deleting a Helm deployment deletes all resources in it - except the namespace
That sounds like expected behaviour. Presumably you created the namespace out of band with kubectl, it's not part of your Helm deployment. So deleting the Helm deployment wouldn't delete that namespace.
If you kubectl create namespace NS and helm install CHART --namespace NS then it's not surprising that to clean up, you need to helm delete the release and then kubectl delete the namespace.
The only way I could imagine to do that would be for the Helm chart itself to both create a namespace and create all subsequent namespace-scoped resources within that namespace. Here is an example that appears to do such a thing.
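A minimal sketch of that pattern (the namespace value and the ConfigMap are placeholders): the chart templates both the Namespace and the namespaced resources, so deleting the release removes them together:

apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  namespace: {{ .Values.namespace }}
data:
  key: value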
There is an open issue about cleaning up all resources deployed by Helm; follow the link: https://github.com/helm/helm/issues/1464
Hopefully it will be addressed in a future release.