I deploy my Helm releases to isolated namespaces.
Deleting a namespace deletes all the resources in it - except the Helm deployment.
Deleting a Helm deployment deletes all resources in it - except the namespace.
I have to do this, which seems redundant:
helm del `helm ls NAMESPACE --short` --purge
kubectl delete namespace NAMESPACE
I'd rather just delete my namespace and have the Helm deploy also purged - is this possible?
Deleting a namespace deletes all the resources in it - except the Helm deployment
This can't be (deleting a namespace implies deleting everything in it; there aren't any exceptions), and it must mean that the state representing Helm's concept of a deployment doesn't live in that namespace. Helm stores these as ConfigMaps in the TILLER_NAMESPACE. See here and here.
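For example, with Helm 2 and a stock Tiller install (kube-system as the TILLER_NAMESPACE, and the OWNER=TILLER label Tiller puts on its release records - both assumptions about a default setup), you can see that state with something like:
kubectl get configmaps --namespace kube-system -l OWNER=TILLER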
It's not surprising that if you create some resources with Helm and then go "under the hood" and delete those resources directly via kubectl, Helm's record of that release won't disappear.
Deleting a Helm deployment deletes all resources in it - except the namespace
That sounds like expected behaviour. Presumably you created the namespace out of band with kubectl; it's not part of your Helm deployment, so deleting the Helm deployment wouldn't delete that namespace.
If you kubectl create namespace NS and helm install CHART --namespace NS, then it's not surprising that, to clean up, you need to helm delete the release and then kubectl delete the namespace.
The only way I could imagine to do that would be for the Helm chart itself to both create a namespace and create all subsequent namespace-scoped resources within that namespace. Here is an example that appears to do such a thing.
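A minimal sketch of that idea (not the linked example; the .Values.namespace value name and resource names are just illustrations) would be a template such as templates/namespace.yaml that creates the namespace and renders the chart's other resources into it:

apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config              # illustrative resource owned by the chart
  namespace: {{ .Values.namespace }}
data:
  greeting: hello

Since Helm creates Namespace objects before namespaced resources, the rest of the chart lands in the freshly created namespace, and deleting the release removes the namespace along with everything else.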
There is an open issue about cleaning up all resources deployed from Helm; follow the link --> https://github.com/helm/helm/issues/1464
Hopefully it will be addressed in a future release.
I am trying to use Helm 3 to install Kubeflow 1.3 with Istio 1.9 on Kubernetes 1.16. Kubeflow does not provide an official Helm chart, so I put one together myself.
But Helm does not guarantee order. Pods of other Deployments and StatefulSets could come up before the Istio mutating webhook and istiod are up. For example, if pod A comes up earlier without an istio-proxy sidecar and pod B comes up later with one, they cannot communicate with each other.
Are there any simple best practices so this works out as expected each time I deploy? That is to say, make sure my installation with Helm is atomic?
Thank you in advance.
UPDATE:
I tried three approaches:
mark resources as pre-install, post-install, etc.
using subcharts
decouple one chart into several charts
I adopted the third. The issue with the first is that Helm hooks are designed for Jobs; a resource can be marked as a Helm hook, but it would not be deleted when using helm uninstall, since a resource cannot hold two Helm hooks at the same time (the annotation key conflicts). The issue with the second is that Helm installs subcharts and the parent chart at the same time, and calls the hooks of subcharts and the parent chart at the same time as well.
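For reference, a resource is marked as a hook through the single helm.sh/hook annotation key, which is where the conflict above comes from. Roughly (the Job name, image, and command are illustrative only):

apiVersion: batch/v1
kind: Job
metadata:
  name: wait-for-istiod                 # illustrative name
  annotations:
    "helm.sh/hook": pre-install         # only one helm.sh/hook annotation key per resource
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: wait
          image: busybox:1.36
          command: ["sh", "-c", "sleep 10"]   # placeholder for a real readiness check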
Helm does not guarantee order.
Not completely. Helm collects all of the resources in a given chart and its dependencies, groups them by resource type, and then installs them in the following order:
Namespace
NetworkPolicy
ResourceQuota
LimitRange
PodSecurityPolicy
PodDisruptionBudget
ServiceAccount
Secret
SecretList
ConfigMap
StorageClass
PersistentVolume
PersistentVolumeClaim
CustomResourceDefinition
ClusterRole
ClusterRoleList
ClusterRoleBinding
ClusterRoleBindingList
Role
RoleList
RoleBinding
RoleBindingList
Service
DaemonSet
Pod
ReplicationController
ReplicaSet
Deployment
HorizontalPodAutoscaler
StatefulSet
Job
CronJob
Ingress
APIService
Additionally:
That is to say, make sure my installation with Helm is atomic
you should know that:
Helm does not wait until all of the resources are running before it exits.
You generally have no control over the order if you are using Helm. You can try to use init containers to validate that your pods have all their dependencies before they run. You can read more about it here. Another workaround is to add a health check to make sure everything is okay; if not, the pod will restart until it succeeds.
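A minimal sketch of the init container idea (the istiod.istio-system service name assumes a default Istio install; the pod name and images are placeholders) that blocks the application container until istiod's in-cluster DNS entry exists:

apiVersion: v1
kind: Pod
metadata:
  name: example-app                     # placeholder
spec:
  initContainers:
    - name: wait-for-istiod
      image: busybox:1.36
      command:
        - sh
        - -c
        - until nslookup istiod.istio-system.svc.cluster.local; do echo waiting for istiod; sleep 5; done
  containers:
    - name: app
      image: nginx:1.25                 # placeholder application image

Note this only proves the istiod Service exists, not that the sidecar injection webhook is serving, so treat it as a mitigation rather than a guarantee.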
See also:
this article about checking your helm deployments.
the question Helm Subchart order of execution in an umbrella chart, which has a good explanation
this question
related topic on github
Using the official Elastic Helm chart to deploy an Elasticsearch cluster.
I had to create some k8s objects, such as a NetworkPolicy, in addition to the Helm values.yaml file.
I am wondering if it's possible to "attach" this object to the Helm release, so I can delete it when doing helm delete?
No, you can't.
Helm uses $ helm get manifest <release-name> -n <namespace> to list the items to delete. So, unless your objects are part of that list, you won't be able to delete them with $ helm delete.
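You can check what the release actually owns; for example (the release and namespace names here are placeholders), a NetworkPolicy created by hand will simply not show up in:
helm get manifest elasticsearch -n logging | grep "kind: NetworkPolicy"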
We use kustomize to create a unique ConfigMap for our Deployments whenever the ConfigMap data changes. We are now left with a number of old ConfigMaps which are no longer in use by any Pods - I can find them in Rancher, but that's a pain - how can I automate cleaning up those ConfigMaps that are no longer used by any Pods?
I've tried running: kubectl get configmaps --namespace mynamespace --output=json
I was hoping to see a reverse reference to the Pod that's using it - but I can't find the right info in there.
If your ConfigMaps can be identified using a label, you can just use the --prune flag of kubectl apply to get rid of the dangling resources. If you add it to your deployment pipelines, the orphaned resources should gradually be cleaned from the cluster.
See this comment for how people are using this in conjunction with kustomize.
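A rough sketch of that (the overlay path and the app=myapp label selector are assumptions for illustration): apply the rendered kustomize output with pruning enabled, so ConfigMaps no longer present in the output get deleted:
kustomize build overlays/prod | kubectl apply -f - --prune -l app=myapp
Only objects matching the label selector are considered for pruning, so make sure the generated ConfigMaps carry that label.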
I am trying to delete all the resources of a deployed microservice (Deployments, Services, PVCs and PVs) associated with a Helm release using "jx step helm delete {release_name} -n {db-backup}".
This deletes all the resources except the PVs, and I am unable to use the traditional Kubernetes commands in the Jenkinsfile to delete the PVs.
Please let me know the jx command to delete PVs.
I have integrated GitLab with a Kubernetes cluster hosted on AWS. Currently it builds and deploys the code from GitLab to the default namespace. I have created two namespaces in Kubernetes, one for production and one for development. What are the steps if I want it to be deployed to a dev or a production namespace? Do I need to make changes at the GitLab level or at the Kubernetes level?
This is done at the Kubernetes level. Whether you're using Helm or kubectl, you can specify the desired namespace in the command.
As in:
kubectl create -f deployment.yaml --namespace <desired-namespace>
helm install stable/gitlab-ce --namespace <desired-namespace>
Alternatively, you can just change your current namespace to the desired namespace and install as you did before. By default, Helm charts or Kubernetes YAML files will install into your current namespace unless specified otherwise.
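For example, to switch the namespace your current kubectl context points at (the namespace name here is a placeholder):
kubectl config set-context --current --namespace=development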