Are there any problems using helm 2 and helm 3 in parallel?

Are there any problems when using Helm 2 and Helm 3 in parallel on the same cluster?
The reason is that the Terraform Helm provider is still not available for Helm 3, but for another application we'd like to proceed with Helm 3.
Has anyone tried this, or run into any problems?

Helm 2 and Helm 3 can be installed concurrently to manage the same cluster. This works when Helm 2 uses ConfigMaps for storage, because Helm 3 uses Secrets for storage. There is, however, a conflict when Helm 2 uses Secrets for storage and stores the release data in the same namespace as the release. The conflict occurs because Helm 3 uses different labels and ownership for the secret objects than Helm 2 does. Helm 3 can therefore try to create a release that it thinks does not exist, but then fail because Helm 2 already has a secret with that name in that namespace.
Additionally, Helm 2 releases can be migrated so that Helm 3 manages them, using the helm-2to3 plugin (https://github.com/helm/helm-2to3). This also works when Helm 2 uses ConfigMaps for storage, because Helm 3 uses Secrets. There is again a conflict, however, when Helm 2 uses Secrets, because of the same naming convention.
A possible solution around this would be for Helm 3 to use a different naming convention for release versions.
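As a rough sketch, the migration with the helm-2to3 plugin can look like this (my-release is a placeholder release name; check the plugin's README for the full set of commands and flags):
# Install the 2to3 plugin into the Helm 3 client
$ helm plugin install https://github.com/helm/helm-2to3
# Move the Helm 2 client configuration (repos, plugins, etc.) over to Helm 3
$ helm 2to3 move config
# Convert a single Helm 2 release to Helm 3, dry-running first
$ helm 2to3 convert my-release --dry-run
$ helm 2to3 convert my-release
# Once every release is migrated, remove the Helm 2 data and Tiller
$ helm 2to3 cleanup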

There is no problem using them in parallel. However, you need to treat them as somewhat separate tools: Helm 3 won't list (or otherwise manage) your releases from Helm 2, and vice versa.

Related

How to share a software product that has multiple microservices and specific Kubernetes configurations

I am new to Kubernetes and I have a question.
Say I have a software product made up of 5 microservices. It is deployed on my Kubernetes cluster (an OCP cluster) and it is working fine. Each microservice has its Kubernetes configuration files on the cluster: deployment, service, configmap YAML files, etc.
Suppose I want to sell the product to customers and they will set it up in their own production Kubernetes cluster (maybe OCP, Amazon EKS, etc.).
What is the appropriate way to deliver the product to the customers?
Way 1 - Send all the YAML files (deployment.yaml, service.yaml, configmap.yaml) for each microservice, so 5 deployment.yaml files, 5 service.yaml files, 5 configmap.yaml files, etc. Upon receiving those files, the customers manually apply them in their Kubernetes cluster and all the pods come up.
Way 2 - Provide a Helm chart for each microservice, so 5 Helm charts in total for the 5 microservices. Upon receiving the charts, the customers install all the YAML files with helm install.
Way 3 - Or is there any other, better way?
Note: the customers will be given access to a private Docker image repository for pulling the microservice images.
Way 1: No. There is tooling for this; you shouldn't need to send YAML files around.
Way 2: This is the best way, but with some edits. Create 5 Helm charts that manage the separate components of your product. Then create a 6th chart that depends on the other 5. This way you only have to give 1 "product" chart to the consumer, and that chart then pulls in everything it needs to run; a sketch of such an umbrella chart is shown below. This is usually how it's done. See projects like loki-stack from Grafana: they have a Helm chart called 'loki-stack' and that chart has dependencies on Grafana, Loki, Prometheus, Promtail, etc. Your consumers will then just helm install my-product and Helm will take care of getting the 5 service charts.
Way 3: There are many other ways to do this, but they are all somewhat specific to particular implementations. For example, you can use OAM + KubeVela. However, if a consumer doesn't use OAM + KubeVela, it's a problem. IMO Helm is the standard approach for now, until more people start using tools like KubeVela.
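For Way 2, a rough sketch of what the umbrella chart's Chart.yaml could look like (the chart names, versions and repository URL below are placeholders):
apiVersion: v2
name: my-product
version: 1.0.0
# The 5 microservice charts are pulled in as dependencies of this single "product" chart
dependencies:
  - name: service-a
    version: 1.0.0
    repository: https://charts.example.com
  - name: service-b
    version: 1.0.0
    repository: https://charts.example.com
  # ...one entry per microservice chart
After running helm dependency update and helm package, the packaged product chart bundles its dependencies, so the customer only runs a single helm install.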

How to run multiple hazelcast clusters in one deployment?

I'm deploying Hazelcast on k8s using the Helm chart on GitHub, currently on revision 5.3.2.
How would one go about running two clusters, say dev_cache and qa_cache, in one Helm deployment, each with different members? Is that possible?
I see the fields
hazelcast:
  javaOptions:
  existingConfigMap: xxx
and
  configurationFiles: # any additional Hazelcast configuration files
in the values.yaml but am unable to find any documentation on how to use them.
One Helm deployment (release) always runs one Hazelcast cluster. You need to run the Helm command twice to create 2 separate Hazelcast clusters.
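A rough sketch, assuming the chart is available as hazelcast/hazelcast from an added repo and that dev-values.yaml / qa-values.yaml are hypothetical files holding each cluster's settings:
# Two separate releases, one Hazelcast cluster each
$ helm install dev-cache hazelcast/hazelcast -n dev --create-namespace -f dev-values.yaml
$ helm install qa-cache hazelcast/hazelcast -n qa --create-namespace -f qa-values.yaml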

Update helm chart values for different environments

I have Helm charts created for a microservice that I have built, and everything is working as expected. Now I have created a new k8s namespace and I want to deploy the same Helm charts as in my old namespace. However, there is just one value that needs to be different, while everything else remains the same.
Do I have to create another values.yaml for the new namespace, copy everything over, and update the one field I want changed? Or is there another way? I do not want to use the --set method of passing updates on the command line.
David suggested the right way. You can use a different values.yaml in which you specify the values for the namespace you want to deploy the chart to:
$ helm install -f another-namespace-values.yaml <my-release> .
It's also entirely possible to install a Helm chart with multiple values files; when several -f flags are given, the rightmost file takes precedence.
For more reading, please check the values section of the Helm docs.
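For example (new-namespace-values.yaml is a hypothetical file containing only the one changed value):
$ helm install my-release . -n new-namespace -f values.yaml -f new-namespace-values.yaml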

For how long should I keep the storage driver Secret in my cluster?

I'm using Helm 3.4.2 to upgrade my charts on my AKS cluster, and I saw that every time I deploy something new, it creates a new secret called sh.helm.v... This is the first time I'm using Helm.
I was reading the docs and found that in version 3.x Helm uses Secrets as the default storage driver. Cool, but every time I deploy it creates a new secret, and I'm not sure whether it is best to keep them all in my cluster.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that lives there,
or
can I remove the older ones? Like, deploy v5 now and erase v1, v2, v3, keeping v4 and v5 for some reason. If it's OK to do that, does anyone have a clue how to do it? Using bash or kubectl?
Thanks a lot!
So yes, there are a few major changes in Helm 3 compared to Helm 2.
Secrets are now used as the default storage driver
In Helm 3, Secrets are now used as the default storage driver. Helm 2 used ConfigMaps by default to store release information. In Helm 2.7.0, a new storage backend that uses Secrets for storing release information was implemented, and it is now the default starting in Helm 3.
Also
Release Names are now scoped to the Namespace
In Helm 3, information about a particular release is now stored in the same namespace as the release itself. With this greater alignment to native cluster namespaces, the helm list command no longer lists all releases by default. Instead, it will list only the releases in the namespace of your current kubernetes context (i.e. the namespace shown when you run kubectl config view --minify). It also means you must supply the --all-namespaces flag to helm list to get behaviour similar to Helm 2.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that lives there, or can I remove the older ones?
I don't think it's good practice to remove anything manually. If it is not absolutely necessary, it's better not to touch them. However, you can delete unused ones if you are sure you will not need the old revisions in the future.
# To list all secrets created by Helm:
kubectl get secret -l "owner=helm" --all-namespaces
# To delete a revision, simply remove the corresponding secret:
kubectl delete secret -n <namespace> <secret-name>
Btw (just FYI), taking into account the fact that Helm 3 is scoped to namespaces, you can simply delete a deployment by deleting its corresponding namespace.
And one last remark that may be useful: you can pass --history-max to helm upgrade to limit the maximum number of revisions saved per release. Use 0 for no limit (the default is 10).
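For example (the release, chart and namespace names below are placeholders):
# Keep at most 5 stored revisions (release secrets) per release
$ helm upgrade my-release ./my-chart -n my-namespace --history-max 5
# Inspect the revisions that are currently kept
$ helm history my-release -n my-namespace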

Dynamically refresh pods on secrets update on kubernetes while using helm chart

I am creating deployment and service manifest files using Helm charts, and also secrets with Helm, but separately, not together with the deployments and services.
Secrets are being loaded as environment variables at the pod level.
We are looking to refresh or restart pods when we update secrets with new content.
Kubernetes does not itself support this feature at the moment, and there is a feature request in the works (https://github.com/kubernetes/kubernetes/issues/22368).
You can use one of the custom solutions available to achieve this, and one of the popular ones is Reloader.
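As a rough sketch, once Reloader is installed in the cluster, you annotate the Deployment so its pods are rolled whenever the referenced Secret changes (all names and the image below are placeholders; see https://github.com/stakater/Reloader for the exact annotation keys and installation steps):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # Trigger a rolling restart whenever the named Secret is updated
    secret.reloader.stakater.com/reload: "my-app-secrets"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest
          envFrom:
            - secretRef:
                name: my-app-secrets
There is also a reloader.stakater.com/auto: "true" annotation that watches every ConfigMap and Secret referenced by the workload instead of one specific Secret.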