Helm delete and reinstall deployment. Wait or not to wait? - kubernetes

I have situation where I am deploying some chart, let's call it "myChart".
Let's suppose I have a pipeline, where I am doing below:
helm delete myChart_1.2 -n <myNamespace>
And right after, I am installing a new one:
helm install myChart_1.3 -n <myNamespace>
Does Kubernetes, or maybe Helm, know that all the resources should be deleted first before the new ones are installed?
For instance, there might be some PVCs and PVs that are still not deleted. Is that a problem? Should I add some wait before deploying?

Helm delete (a.k.a. uninstall) should remove the objects managed by a given release before exiting.
Still, when the command returns, you could be left with resources in a Terminating state, pending actual deletion.
Typically this happens with PVCs that are still attached to a running container, or with objects such as ReplicaSets or Pods: your Helm chart most likely installs Deployments, DaemonSets, StatefulSets, ... and those top-level objects may appear to be deleted while their child objects are still being terminated.
This shouldn't be an issue for Helm itself, assuming your application is installed under a generated name and your chart can create multiple instances of the same application in the same cluster/namespace without them overlapping (i.e. all resources managed through Helm have unique names, which is not always the case).
If your chart is hosted on a public repository, let us know what to check. And if you're not one of the maintainers of that chart, beware that Helm charts can go from amazing to very bad, depending on who's contributing and which use cases have been covered so far.
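If your pipeline needs to block until everything from the old release is actually gone before re-installing, you can check and wait explicitly. A minimal sketch, assuming your chart sets the usual app.kubernetes.io/instance label to the release name (adjust the selector to your chart):
#See what is still terminating after "helm delete"
kubectl get pods,pvc -n <myNamespace>
#Block until the remaining pods are actually gone
kubectl wait --for=delete pod -l app.kubernetes.io/instance=myChart -n <myNamespace> --timeout=120s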

Kubernetes (and Helm by extension) will never clean up PVCs that have been created as part of StatefulSets. This is intentional (see relevant documentation) to avoid accidental loss of data.
Therefore, if you do have PVCs created from StatefulSets in your chart and if your pipeline re-installs your Helm chart under the same name, ensure that PVCs are deleted explicitly after running "helm delete", e.g. with a separate "kubectl delete" command.
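A minimal sketch of that cleanup step, assuming the chart labels its PVCs with the release instance label (adjust the selector to whatever labels your chart actually sets):
#Remove the StatefulSet PVCs left behind by the release, and wait until they are really gone
kubectl delete pvc -n <myNamespace> -l app.kubernetes.io/instance=myChart
kubectl wait --for=delete pvc -n <myNamespace> -l app.kubernetes.io/instance=myChart --timeout=120s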

Related

Helm deploy does not update/remove code that was changed prior on Cronjobs or Deployments

I am new to the K8s realm but enjoy this service. I do have two issues which I was expecting Helm to be able to handle.
My question revolves around helm upgrade. Let's say I have a volumeMount of a directory in a Deployment or CronJob. When I update my YAML file to remove the mount, it still exists after Helm completes.
So what I do is edit the YAML of, say, deployment.yaml or cronjobs.yaml directly in the cluster (I use k9s). However, CronJobs are the hard part, since I have to delete them and run helm upgrade again. (Please let me know if there is a better way of updating CronJobs, because when I update Deployments the pods are updated, but with CronJobs they aren't.)
Would appreciate any help with my questions on how best to work with k8s.

For how long should I keep the storage driver Secret in my cluster?

I'm using Helm 3.4.2 to upgrade my charts on my AKS cluster, and I noticed that every time I deploy something new, it creates a new secret called sh.helm.v... This is the first time I'm using Helm.
I was reading the docs and found that since version 3.x Helm uses Secrets as the default storage driver. Cool, but every time I deploy it creates a new secret, and I'm not sure whether keeping them all in my cluster is the best thing to do.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that just lives there,
or
can I remove the older ones? Like, deploy v5 now and erase v1, v2, v3, keeping v4 and v5 for some reason. If it's OK to do that, does anyone have a clue how to do it? Using bash or kubectl?
Thanks a lot!
So yes, there are a few major changes in Helm 3 compared to Helm 2.
Secrets are now used as the default storage driver
In Helm 3, Secrets are now used as the default storage driver. Helm 2 used ConfigMaps by default to store release information. In Helm 2.7.0, a new storage backend that uses Secrets for storing release information was implemented, and it is now the default starting in Helm 3.
Also:
Release Names are now scoped to the Namespace
In Helm 3, information about a particular release is now stored in the same namespace as the release itself. With this greater alignment to native cluster namespaces, the helm list command no longer lists all releases by default. Instead, it will list only the releases in the namespace of your current kubernetes context (i.e. the namespace shown when you run kubectl config view --minify). It also means you must supply the --all-namespaces flag to helm list to get behaviour similar to Helm 2.
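For example:
#Helm 3 only lists releases in the namespace of your current context by default
helm list -n <namespace>
#To see releases across the whole cluster, as Helm 2 did by default
helm list --all-namespaces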
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that just lives there, or can I remove the older ones?
I don't think it's good practice to remove anything manually. If it's not strictly necessary, better not to touch them. However, you can delete unused ones if you're sure you won't need the old revisions in the future.
#To check all secrets created by Helm:
kubectl get secret -l "owner=helm" --all-namespaces
#To delete a revision, simply remove the corresponding secret:
kubectl delete secret -n <namespace> <secret-name>
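If you want to see the revision Secrets of a single release before removing any, you can filter on the labels Helm 3 puts on its release Secrets (this assumes the standard owner/name labels; <release-name> is a placeholder):
#List all revision Secrets belonging to one release
kubectl get secret -n <namespace> -l "owner=helm,name=<release-name>"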
Btw (just FYI), taking into account that Helm 3 is scoped to namespaces, you can also remove a whole deployment (release) by deleting its corresponding namespace.
And one last remark that may be useful: you can pass --history-max to helm upgrade to limit the maximum number of revisions saved per release. Use 0 for no limit (default 10).
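For example (release and chart names are placeholders):
#Keep at most 5 release Secrets per release; older revision Secrets are pruned automatically
helm upgrade <release-name> <chart> -n <namespace> --history-max 5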

Add Sidecar container to running pod(s)

I have Helm deployment scripts for a vendor application which we are operating. For the logging solution, I need to add a sidecar container for Fluent Bit to push the logs to an aggregated log server (Splunk in this case).
Now, to define this sidecar container, I want to avoid changing the vendor-defined deployment scripts. Instead, I want some alternative way to attach the sidecar container to the running pod(s).
So far I have understood that sidecar container can be defined inside the same deployment script (deployment configuration).
Answering the question in the comments:
thanks @david. This has to be done before the deployment. I was wondering if I could attach a sidecar container to an already deployed (running) pod.
You can't attach an additional container to a running Pod. You can, however, update (patch) the resource definition; this will force the resource to be recreated with the new specification.
There is a github issue about this feature which was closed with the following comment:
After discussing the goals of SIG Node, the clear consensus is that the containers list in the pod spec should remain immutable. #27140 will be better addressed by kubernetes/community#649, which allows running an ephemeral debugging container in an existing pod. This will not be implemented.
-- Github.com: Kubernetes: Issues: Allow containers to be added to a running pod
Answering the part of the post:
Now to define this sidecar container, I want to avoid changing vendor defined deployment scripts. Instead i want some alternative way to attach the sidecar container to the running pod(s).
Below I've included two methods to add a sidecar to a Deployment. Both of those methods will reload the Pods to match the new specification:
Use $ kubectl patch
Edit the Helm Chart and use $ helm upgrade
In both cases, I encourage you to check how Kubernetes handles updates of its resources. You can read more by following below links:
Kubernetes.io: Docs: Tutorials: Kubernetes Basics: Update: Update
Medium.com: Platformer blog: Enable rolling updates in Kubernetes with zero downtime
Use $ kubectl patch
The way to completely avoid editing the Helm charts would be to use:
$ kubectl patch
This method will "patch" the existing Deployment/StatefulSet/DaemonSet and add the sidecar. The downside is that it's not automated like Helm, and you would need to create a "patch" for every resource (each Deployment/StatefulSet/DaemonSet, etc.). Also, any subsequent update from another source like Helm would override this "patch".
Documentation about updating API objects in place:
Kubernetes.io: Docs: Tasks: Manage Kubernetes objects: Update api object kubectl patch
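A rough sketch of such a patch, adding a Fluent Bit sidecar with a strategic merge patch (the Deployment name, namespace and image are placeholders, not taken from your vendor chart; strategic merge patches merge the containers list by name, so the new container is appended):
kubectl patch deployment vendor-app -n <namespace> --type strategic --patch '
spec:
  template:
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:2.2
'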
Edit the Helm Chart and use $ helm upgrade
This method will require editing the Helm charts. The changes made like adding a sidecar will persist through the updates. After making the changes you will need to use the $ helm upgrade RELEASE_NAME CHART.
You can read more about it here:
Helm.sh: Docs: Helm: Helm upgrade
A Pod's container list is immutable in Kubernetes, as mentioned by dawid-kruk. Therefore, modifying the pod description will cause the containers to be recreated.
You can modify the owning resource (Deployment, etc.) using the kubectl patch command; don't forget to reapply the patch as necessary.
Alternatively, the two following options will inject the sidecar without having to modify/fork the upstream chart or mangle deployed resources.
#1 Mutating admission controller
A mutating admission webhook can modify resources; see https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
You can use a generic policy framework like OPA.
Or a specific webhook like fluentd-sidecar-injector (not tested)
#2 Support arbitrary sidecars in the Helm chart
You could submit a feature request to the chart maintainer to support arbitrary sidecar injection, as the Prometheus chart does; see https://stackoverflow.com/a/62910122/1260896

Restart Pod when secrets gets updated

We are using Secrets as environment variables in our pods, but every time we update a Secret we have to redeploy the pods for the change to take effect. We are looking for a mechanism where pods get restarted automatically whenever a Secret gets updated. Any help on this?
Thanks in advance.
There are many ways to handle this.
First, use Deployment instead of "naked" Pods that are not managed. The Deployment will create new Pods for you, when the Pod template is changed.
Second, managing Secrets can be a bit tricky. It would be great if you can use a setup with the Kustomize SecretGenerator: each new Secret then gets a unique name, and that unique name is reflected in the Deployment automatically, so your pods are automatically recreated whenever a Secret is changed. This matches your original problem. When the Secret and Deployment are handled this way, you apply the changes with:
kubectl apply -k <folder>
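A minimal sketch of such a setup (the secret name app-secret and its literal value are placeholders): the secretGenerator appends a hash of the content to the Secret's name, and Kustomize rewrites every reference to app-secret in deployment.yaml, so the pod template changes and the pods are recreated whenever the Secret data changes. The kustomization.yaml could look like:
resources:
- deployment.yaml
secretGenerator:
- name: app-secret
  literals:
  - API_KEY=changeme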
If you mount your Secrets into the pod as volumes, they get updated automatically and you don't have to restart your pod, as mentioned here.
Another approach is Stakater Reloader, which can roll your Deployments when the ConfigMaps, Secrets, etc. they use change.
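A hedged example of the Reloader approach (it assumes the Stakater Reloader controller is already installed in the cluster; the Deployment name is a placeholder):
#With this annotation, Reloader rolls the Deployment whenever a ConfigMap/Secret it references changes
kubectl annotate deployment my-app reloader.stakater.com/auto="true" -n <namespace>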
There are multiple ways of doing this:
Simply restart the pod: this can be done manually (see the rollout restart example after this list), or
you could use the kapp-controller operator provided by VMware Carvel (documentation); with kapp-controller you can reload Secrets/ConfigMaps without needing to restart the pods yourself (it effectively runs helm template <package> periodically and applies the changes if it finds any differences in the rendered templates). Check out my design for reloading the log level without needing to restart the pod.
Using service bindings https://servicebinding.io/
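A minimal example of the manual restart mentioned in the first option (the Deployment name is a placeholder); a rollout restart recreates the pods so they pick up the new Secret values:
kubectl rollout restart deployment/my-app -n <namespace>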

Accidentally deleted Kubernetes namespace

I have a Kubernetes cluster on Google Cloud. I accidentally deleted a namespace which had a few pods running in it. Luckily, the pods are still running, but the namespace is in Terminating state.
Is there a way to restore it back to active state? If not, what would the fate of my pods running in this namespace be?
Thanks
A few interesting articles about backing up and restoring Kubernetes cluster using various tools:
https://medium.com/@pmvk/kubernetes-backups-and-recovery-efc33180e89d
https://blog.kubernauts.io/backup-and-restore-of-kubernetes-applications-using-heptios-velero-with-restic-and-rook-ceph-as-2e8df15b1487
https://www.digitalocean.com/community/tutorials/how-to-back-up-and-restore-a-kubernetes-cluster-on-digitalocean-using-heptio-ark
https://www.revolgy.com/blog/kubernetes-in-production-snapshotting-cluster-state
I guess they may be more useful in the future than in your current situation. If you don't have any backup, unfortunately there isn't much you can do.
Please note that all of those articles use namespace deletion to simulate a disaster scenario, so you can imagine the consequences of such an operation. The results may not be visible immediately: you may see your pods running for some time, but eventually namespace deletion removes all Kubernetes resources in the given namespace, including LoadBalancers and PersistentVolumes. It may take some time, and some resources may not be deleted yet because they are still used by another resource (e.g. a PersistentVolume by a running Pod).
You can try to run this script to dump all your resources that are still available to YAML files; however, some modification may be needed, as you will not be able to list objects belonging to the deleted namespace anymore. You may need to add the --all-namespaces flag to list them.
You may also try to dump any resource that is still available manually. If you can still see some resources like Pods, Deployments, etc., and kubectl get works on them, you can save their definitions to a YAML file:
kubectl get deployment nginx-deployment -o yaml > deployment_backup.yaml
Once you have your resources backed up you should be able to recreate your cluster more easily.
Back up most resource configuration regularly:
kubectl get all --all-namespaces -o yaml > all-deploy-resources.yaml
but note that this does not include all resources.
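A hedged, more complete variant is to iterate over every listable API resource instead of relying on "get all" (the output file names are illustrative):
for r in $(kubectl api-resources --verbs=list -o name); do
  kubectl get "$r" --all-namespaces -o yaml > "backup-$r.yaml" 2>/dev/null
done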
Another way is to use Ark/Velero:
https://github.com/vmware-tanzu/velero (Backup and migrate Kubernetes applications and their persistent volumes https://velero.io)