Delete initialized hashicorp's vault - hashicorp-vault

I am trying to learn HashiCorp Vault and am still fairly unfamiliar with dev vs prod server mode. I accidentally initialized a prod-mode server with
vault operator init and discarded the unseal keys and root token.
I installed Vault on minikube through the Helm chart. I tried deleting the Helm chart and even the Helm repo, then reinstalling. Despite doing so, Vault is still initialized. I attempted deleting /opt/vault/data as another Stack Overflow post suggests, but it has no content anyway, so I'm back to the initial problem.
How do I reinitialize Vault?

I would check if you have a leftover PVC from the old installation. Just delete it and reinstall again.
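For reference, a minimal way to check for and remove a leftover claim from the previous install (the namespace and PVC name below are assumptions; adjust them to whatever the chart created in your cluster):

# List PVCs left behind by the old release (the "vault" namespace is an assumption)
kubectl get pvc -n vault

# Delete the leftover claim so the next install starts with empty storage
# ("data-vault-0" is a typical name for the chart's data PVC, not guaranteed)
kubectl delete pvc data-vault-0 -n vault

# Then reinstall the chart, e.g.
helm install vault hashicorp/vault -n vault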

Related

Helm stuck on "another operation is in progress" but no other releases or installs exist

I know the title sounds like "did you even try to google", but helm gives me the typical: another operation (install/upgrade/rollback) is in progress
What I can't figure out is that there are no actual releases anywhere that are actually in progress.
I've run helm list --all --all-namespaces and the list is just blank. Same with running helm history against any namespace I can think of. Nothing, all just blank. I've even deleted the namespace (and everything in it) that the app was initially installed in, and it's still broken.
I've also found answers to delete secrets, which I have, and it doesn't help.
Is there some way to hard reset helm's state? Because all the answers I find on this topic involve rolling back, uninstalling, or deleting stuck releases, and none exist on this entire cluster.
Helm is v3.8.1 if that helps. Thanks for any help on this, it's driving me crazy.
I ended up figuring this out. The pipeline running helm executed from a GitLab runner that runs on one cluster, but uses a Kubernetes context to target my desired cluster. At some point, the Kubernetes context wasn't loaded correctly, and a bad install went to the host cluster the runner lived on.
So while I was targeting a different cluster, the helm command saw the bad install on the cluster local to the runner, which is why helm list --all didn't show anything on the target cluster.
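If you hit something similar, a quick sanity check is to confirm which cluster helm is actually talking to before digging into release state (a sketch; the context name below is a placeholder):

# Show which context/cluster kubectl and helm will use by default
kubectl config current-context

# Run helm against an explicitly named context to rule out ambiguity
# ("target-cluster" is a placeholder for your real context name)
helm list --all --all-namespaces --kube-context target-cluster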

How to pass configuration via argocd and crossplane

We are trying to create an environment using Crossplane and Argo CD. Crossplane generates the database and saves the credentials to a secret on the management cluster. We then copy the credentials from the management cluster into a secret on our destination cluster.
Now we need to pass the credentials from secret A to secret B, which the application knows about. The issue is that Argo CD doesn't use helm install but helm template, so the lookup function doesn't work. We thought about using Vault as a middleman, but we are not sure how to load values from the secret into Vault.
Anyway, if you have encountered such an issue or have some sort of a solution, we'd be very happy to hear it.
Thank you
You need to commit the (encrypted) secrets somewhere for ArgoCD to pick them up. That is the whole point of GitOps.
Alternatively, you can try https://argo-cd.readthedocs.io/en/stable/user-guide/parameters/, but this is considered a temporary workaround.
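As a rough illustration of that approach (a sketch only — the names are placeholders, and the manifest would need to be encrypted before committing, e.g. with SOPS or Sealed Secrets, neither of which is prescribed by the answer above):

# db-credentials.yaml - committed to the Git repo that Argo CD syncs
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # the "secret B" your application already reads
  namespace: my-app         # placeholder namespace
type: Opaque
stringData:
  username: <db-user>       # values copied from the Crossplane-generated secret
  password: <db-password>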

For how long should I keep the storage driver Secret in my cluster?

I'm using helm 3.4.2 to upgrade my charts on my AKS cluster, and I saw that every time I deploy something new, it creates a new secret called sh.helm.v... This is the first time I'm using helm.
I was reading the docs and found that in version 3.x helm uses Secrets as the default storage driver. Cool, but every time I deploy it creates a new secret, and I'm not sure whether keeping them all in my cluster is the best approach.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret and it lives there,
or
can I remove the older ones? Like, deploy v5 now and erase v1, v2, v3 and keep v4 and v5 for some reason. If it's OK to do that, does anyone have a clue how to do it? Using bash or kubectl?
Thanks a lot!
So yes, there are a few major changes in Helm 3 compared to Helm 2.
Secrets are now used as the default storage driver
In Helm 3, Secrets are now used as the default storage driver. Helm 2 used ConfigMaps by default to store release information. In Helm 2.7.0, a new storage backend that uses Secrets for storing release information was implemented, and it is now the default starting in Helm 3.
Also
Release Names are now scoped to the Namespace
In Helm 3, information about a particular release is now stored in the same namespace as the release itself. With this greater alignment to native cluster namespaces, the helm list command no longer lists all releases by default. Instead, it will list only the releases in the namespace of your current Kubernetes context (i.e. the namespace shown when you run kubectl config view --minify). It also means you must supply the --all-namespaces flag to helm list to get behaviour similar to Helm 2.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret and it lives there, or can I remove the older ones?
I don't think it's good practice to remove anything manually. If it is not strictly necessary, better not touch them. However, you can delete unused ones if you are sure you will not need the old revisions in the future.
# To check all secrets that were created by helm:
kubectl get secret -l "owner=helm" --all-namespaces
# To delete a revision, simply remove the corresponding secret:
kubectl delete secret -n <namespace> <secret-name>
Btw (just FYI), taking into account that Helm 3 is scoped to namespaces, you can simply delete a deployment by deleting its corresponding namespace.
And one last remark that may be useful for you: you can pass --history-max to helm upgrade to limit the maximum number of revisions saved per release. Use 0 for no limit (the default is 10).
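A minimal sketch of what that looks like (the release and chart names are placeholders):

# Keep at most 3 revisions (release secrets) for this release;
# older sh.helm.release.v1.* secrets are pruned automatically
helm upgrade my-release ./my-chart --history-max 3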

How to restart Kubernetes pod when a secret is updated in Hashicorp Vault?

Have successfully implemented Vault with Kubernetes, and applications running in K8s are getting their environment variables from HashiCorp Vault. Everything is great! But I want to take a step forward and restart the pod whenever a change is made to the secret in Vault; as of now, we have to restart the pod manually to refresh the environment variables whenever we change a Vault secret. How can this be achieved? I have heard about confd but am not sure how it can be implemented!
Use Reloader https://github.com/stakater/Reloader. We found it quite useful in our cluster. It does a rolling update, so you can change config with zero downtime too. Also, if you make an error in a configmap, you can easily roll back.
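As a rough sketch of how that is wired up (the names and image are placeholders; check the Reloader README for the exact annotations supported by your version):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # placeholder name
  annotations:
    # Reloader rolls this deployment whenever a ConfigMap/Secret it references changes
    reloader.stakater.com/auto: "true"
spec:
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
    spec:
      containers:
        - name: app
          image: myregistry/my-app:latest   # placeholder image
          envFrom:
            - secretRef:
                name: my-app-secret         # the secret whose changes trigger the rollout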
A couple ideas, depending on how much effort you want to put into it:
Just restart the pod every so often. A hacky way to do this is with a liveness probe, like this answer. Drawback is you can't use the liveness probe as a real health check without additional scripting.
Create an operator that polls Vault for changes and instructs Kubernetes to restart the pod when a change is detected. Not sure if Vault has an events API that you could use for that.
https://www.vaultproject.io/docs/agent/template#renewals-and-updating-secrets
If a secret or token isn't renewable or leased, Vault Agent will fetch the secret every 5 minutes. This is not configurable. Non-renewable secrets include (but are not limited to) KV Version 2.
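If you are using the Vault Agent injector, the wiring looks roughly like this (a sketch; the role, KV path, and image are assumptions about your Vault setup — note the agent keeps a rendered file under /vault/secrets up to date rather than refreshing environment variables, and it does not restart the pod by itself):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-app"                                    # placeholder Vault role
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/my-app"  # placeholder KV path
    spec:
      containers:
        - name: app
          image: myregistry/my-app:latest   # placeholder image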

Azure DevOps > Helm > Azure Kubernetes Deployment - Deletes Azure File share when deployment is deleted

TL;DR
My pods' mounted Azure file shares are (inconsistently) being deleted by either Kubernetes or Helm when I delete a deployment.
Explanation
I've recently transitioned to using Helm for deploying Kubernetes objects on my Azure Kubernetes Cluster via the DevOps release pipeline.
I've started to see some unexpected behaviour in relation to the Azure File Shares that I mount to my Pods (as Persistent Volumes with associated Persistent Volume Claims and a Storage Class) as part of the deployment.
Whilst I've been finalising my deployment, I've been pushing it out via the Azure DevOps release pipeline using the built-in Helm tasks, which have been working fine. When I've wanted to fix or improve the process, I've then either manually deleted the objects in the Kubernetes Dashboard (UI) or used PowerShell (command line) to delete the deployment.
For example:
helm delete myapp-prod-73
helm del --purge myapp-prod-73
Not every time, but more frequently, I'm seeing the underlying Azure File Shares also being deleted as I'm working through this process. There's very little around the web on this, but I've also seen an article outlining similar issues over at: https://winterdom.com/2018/07/26/kubernetes-azureFile-dynamic-volumes-deleting.
Has anyone in the community come across this issue?
Credit goes to https://twitter.com/tomasrestrepo here on pointing me in the right direction (the author of the article I mentioned above).
The behaviour here was a consequence of having the Reclaim Policy on the Storage Class & Persistent Volume set to "Delete". When switching over to Helm, I began following their commands to delete / purge the releases as I was testing. What I didn't realise was that deleting the release would also mean that Helm / K8s would reach out and delete the underlying volume (in this case an Azure File share). This is documented over at: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#delete
I'll leave this Q & A here for anyone else that misses this subtlety with the way in which Storage Classes, Persistent Volumes (PVs) & the underlying storage operate under K8s / Helm.
Note: I think this issue was made slightly more obscure by the fact that I was manually creating the Azure File share (through the Azure Portal) and trying to mount it as a static volume (as per https://learn.microsoft.com/en-us/azure/aks/azure-files-volume) within my Helm chart, and that the underlying volume wasn't always deleted immediately when the release was deleted (sometimes an hour later?).
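For anyone wanting to guard against this, one option (a sketch; the PV name is a placeholder) is to switch the reclaim policy of the PersistentVolume to Retain, so deleting the release or the claim no longer removes the underlying share:

# Check the current reclaim policy of your persistent volumes
kubectl get pv

# Patch an existing PV so the underlying Azure File share is kept
# when its claim (and the Helm release) is deleted
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'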