Why do many official Helm charts include passwords/credentials in values.yaml? - kubernetes

I noticed that many official charts hosted in the Google or Bitnami repositories include username/password credentials directly in the values file, without giving you the ability to use an existing secret.
Can you help me understand the reason behind such an approach?
As far as I know, including clear-text credentials in Helm charts is not a best practice.

Because it's quicker and it will work out of the box.
If you try to use an already existing secret, it has to be in the same namespace as the deployment. If it isn't, it has to be copied there and you have to validate that it was copied correctly.
You should not rely on unchanged charts in a production environment, as @switchboard.op mentioned.
I think most apps that are being rewritten for GoogleCloudPlatform/click-to-deploy are using secrets.

I think the maintainers expect you to override those default values when you create a release for something that's worth protecting. You can do this with your own values file or with the --set flag at install time.
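For example, here is a minimal sketch of both options, assuming a chart that exposes an auth.rootPassword value (the exact key is chart-specific; check the chart's values.yaml):

# my-values.yaml
auth:
  rootPassword: "use-a-strong-generated-secret"

$ helm install my-release bitnami/mysql -f my-values.yaml
# or override a single value on the command line
$ helm install my-release bitnami/mysql --set auth.rootPassword=use-a-strong-generated-secret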

I think it's not a good practice; however, as mentioned by @Crou, it's quick. In general, I think each Helm chart should give you an option to define your credentials as secrets.
We do it in the Hazelcast Helm Charts by giving this option. I think that's the best practice for Helm charts.
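A common way to provide that option (the key and helper names below are illustrative, not taken from any particular chart) is an existingSecret value that the templates fall back on:

# values.yaml
# If set, credentials are read from this pre-created Secret
# instead of the password value below.
existingSecret: ""
password: ""

# templates/deployment.yaml (excerpt)
env:
  - name: APP_PASSWORD
    valueFrom:
      secretKeyRef:
        name: {{ .Values.existingSecret | default (printf "%s-credentials" .Release.Name) }}
        key: password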

Related

How to manage external Helm Chart dependencies

I'm very curious how I should combine my own Helm chart with a set of contributed charts, e.g. the ones from Bitnami.
So say I have my own chart that defines my backend application, how do I add a dependency on the contributed MySQL chart (instead of crafting it manually myself)?
Another use case would be to add ArgoCD to my own custom chart. How do I do this in a declarative manner, without running the needed commands on the cluster manually?
I'm aware of helm repo add and helm install for 3rd-party charts, but these commands do not lend themselves well to CI/CD because they are not idempotent. I would like my chart to be declarative, so that a single helm install also installs all listed dependencies (a sketch of what I mean is below). The weird thing is that I cannot find anything about this topic online.
Would love your feedback!
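For context, what I have in mind is declaring the dependencies in my own chart's Chart.yaml, something like this (names, versions and repository URLs are illustrative):

# Chart.yaml of my backend chart
apiVersion: v2
name: my-backend
version: 0.1.0
dependencies:
  - name: mysql
    version: 9.4.1
    repository: https://charts.bitnami.com/bitnami
  - name: argo-cd
    version: 5.19.0
    repository: https://argoproj.github.io/argo-helm

$ helm dependency update ./my-backend   # pulls the listed charts into charts/
$ helm install my-backend ./my-backend  # installs the parent chart plus its dependencies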

Helm Chart: How do I install dependencies first?

I've been developing a prototype chart that depends on some custom resource definitions that are defined in one of the child charts.
To be more specific, I'm trying to create the resources defined in the strimzi-kafka-operator within my helm chart and would like the dependency to be explicitly installed first. I followed the helm documentation and added the following to my Chart.yaml
dependencies:
  - name: strimzi-kafka-operator
    version: 0.16.2
    repository: https://strimzi.io/charts/
I ran:
$ helm dep up ./prototype-chart
$ helm install ./prototype-chart
> Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "KafkaTopic" in version "kafka.strimzi.io/v1beta1"
which shows that it's trying to deploy my chart before my dependency. What is the correct way to install dependencies first and then my parent chart?
(For reference, here is the question I opened directly with Strimzi on GitHub, where they informed me they aren't sure how to use their Helm chart as a dependency:
https://github.com/strimzi/strimzi-kafka-operator/issues/2552
)
Regarding CRDs: the fact that Helm by default won't manage those is a feature, not a bug. It will still install them if they are not present, but it won't modify or delete existing CRDs. The previous version of Helm (v2) did, but (speaking from experience) that can get you into all sorts of trouble if you're not careful. Quoting from the link you referenced:
There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much community discussion due to the danger for unintentional data loss. [...] One of the distinct disadvantages of the crd-install method used in Helm 2 was the inability to properly validate charts due to changing API availability (a CRD is actually adding another available API to your Kubernetes cluster). If a chart installed a CRD, helm no longer had a valid set of API versions to work against. [...] With the new crds method of CRD installation, we now ensure that Helm has completely valid information about the current state of the cluster.
The idea here is that Helm should operate only at the level of release data (adding/removing deployments, storage, etc.); but with CRDs, you're actually modifying an extension to the Kubernetes API itself, potentially inadvertently breaking other releases that use the same definitions. Consider being on a team that has a "library" of CRDs shared between several charts, and wanting to uninstall one of them: with v2, Helm would happily let you modify or even delete those CRDs at will, with no checks on whether or how they were used in other releases. Changes to CRDs are changes to your control plane / core API, and should be treated as such; you're modifying global resources.
In short: with v3, Helm positions itself more as a "developer" tool to define, template, and manage releases; CRDs, however, are meant to be managed independently, e.g. by a "cluster administrator". At the end of the day, it's a win for all sides, since developers can set up and tear down deployments at will, with confidence that it's not going to break functionality elsewhere... and whoever's on call won't have to deal with alerts if/when you accidentally delete/modify a CRD and break things in production :)
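A practical workaround for your case (a sketch; the release names below are just examples) is to install the operator chart as its own release first, so its CRDs exist in the cluster before your chart's resources are validated:

$ helm repo add strimzi https://strimzi.io/charts/
$ helm install strimzi-operator strimzi/strimzi-kafka-operator --version 0.16.2
# the KafkaTopic CRD now exists, so the parent chart can be rendered and validated
$ helm dep up ./prototype-chart
$ helm install prototype ./prototype-chart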
See also the extensive discussion here for more context behind this decision.
Hope this helps!

What is the right way to manage changes in kubernetes manifests?

I've been using terraform for a while and I really like it. I also set up Atlantis so that my team could have a "GitOps" flow. This is my current process:
Add or remove resources from Terraform files
Push changes to GitHub and create a pull request
Atlantis picks up changes and creates a terraform plan
When the PR is approved, Atlantis applies the changes
I recently found myself needing to set up a few managed Kubernetes clusters using Amazon EKS. While Terraform is capable of creating most of the basic infrastructure, it falls short when setting up some of the k8s resources (no support for gateways or ingress, no support for alpha/beta features, etc). So instead I've been relying on a manual approach using kubectl:
Add the resource to an existing file or create a new file
Add a line to a makefile that runs the appropriate command (kubectl apply or create) on the new file
If I'm using a helm chart, add a line with helm template and then kubectl apply (I didn't really like using tiller, and helm3 is getting rid of it anyway)
If I want to delete a resource, I do it manually with kubectl delete
This process feels nowhere near as clean as what we're doing in Terraform. There are several key problems:
There's no real dry-run. Using kubectl --dry-run or kubectl diff doesn't really work; it's only a client-side diff. Server-side diff functions are currently in alpha.
There's no state file. If I delete stuff from the manifests, I have to remember to also delete it from the cluster manually.
No clear way to achieve gitops. I've looked at Weaveworks Flux but that seems to be geared more towards deploying applications.
The makefile is getting more and more complicated. It doesn't feel like this is scalable.
I should acknowledge that I'm fairly new to Kubernetes, so might be overlooking something obvious.
Is there a way for me to achieve a process similar to what I have in Terraform, within the Kubernetes universe?
This is more of an opinion question, so I'll answer with an opinion. If you want to manage configuration, you can try some of these tools:
If you want to use existing YAML files (configurations) and use something at a higher level you can try kustomize.
If you want to manage Kubernetes configurations using Jsonnet you should take a look at Ksonnet. Keep in mind that Ksonnet will not be supported in the future.
If you just want to run a helm upgrade automatically, there is no ready-made tool for that yet. You will have to build something at this point to orchestrate everything. For example, we ended up creating an in-house tool that does this.
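As a small sketch of the kustomize option (file names are illustrative): you keep your existing manifests and add a kustomization.yaml next to them, then apply the whole set in one go.

# kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml

$ kubectl apply -k .   # build the kustomization and apply it
$ kubectl diff -k .    # preview what would change before applying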

Create custom helm charts

I'm using Helm charts to deploy microservices. Executing helm create generates a basic chart with a deployment, service and ingress, but I have a few other configurations such as a horizontal pod autoscaler and a pod disruption budget.
What I currently do is copy the YAML and change it accordingly, but this takes a lot of time and I don't see it as the correct way / best practice.
helm create <chartname>
I want to know how to create Helm charts that also include these extra configurations.
Bitnami's guide to creating your first helm chart describes helm create as "the best way to get started" and says that "if you already have definitions for your application, all you need to do is replace the generated YAML files for your own". The approach is also suggested in the official helm docs and the chart developer guide. So you are acting on best advice.
It would be cool if there were a wizard you could use to take existing kubernetes yaml files and make a helm chart from them. One tool like this that is currently available is chartify. It is listed on helm's related projects page (and I couldn't see any others that would be relevant).
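If you go the helm create route, you can simply add more template files to the generated chart for the extra resources. A minimal sketch of a templates/hpa.yaml driven by values (the value keys and the mychart.fullname helper name are illustrative; use whatever your chart defines, and pick the HPA API version your cluster supports):

# values.yaml
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80

# templates/hpa.yaml
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "mychart.fullname" . }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "mychart.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}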
You can try using Move2Kube. You will have to put all your yamls (if the source is kubernetes yamls) or other source artifacts in a directory (say src) and do move2kube translate -s src/.
In the wizard that comes up, you can choose helm instead of yamls and it will create a helm chart for you.

Is there a concept of inheritance for Kubernetes deployments?

Is there a way to create a tree of inheritance for Kubernetes deployments? I have a number of deployments which are similar but not identical. They share many ENV vars but not all. They all use the same image.
For example, I have a dev deployment which is configured almost identical to a production deployment but has env vars pointing to a different database backend. I have a celery deployment which is configured the same as the production deployment, however, it has a different run command.
Helm is what many people are using for this. It lets you create templates for kubernetes descriptors and pass parameters in to generate descriptors from the templates.
There are other tools out there which can be used to generate variations on kubernetes deployment descriptors by injecting parameters into templates. Ansible is also popular. But Helm is a CNCF project closely connected with the Kubernetes community, and there's a good selection of official charts available.
EDIT: If the aim is to enable different deployments (e.g. for dev and prod) using a single docker image then that's probably best handled with a single chart. You can create different values files for each deployment and supply the chosen values file to helm install with the --values parameter. If there are parts of the chart that are only sometimes applicable then they can be wrapped in if conditions to turn them on/off.
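As a sketch of that single-chart approach (file names, keys and values are illustrative):

# values-dev.yaml
databaseUrl: postgres://dev-db.internal:5432/app
# values-prod.yaml
databaseUrl: postgres://prod-db.internal:5432/app
# values-celery.yaml - same image, different run command
command: ["celery", "-A", "app", "worker"]

$ helm install myapp-dev ./myapp --values values-dev.yaml
$ helm install myapp-prod ./myapp --values values-prod.yaml
$ helm install myapp-celery ./myapp --values values-prod.yaml --values values-celery.yaml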
On the subject of inheritance specifically, there's an example in the helm documentation of how to take another chart as a parent/dependency and override its values, and I created a chart earlier that you can see in github that includes several other charts and overrides parts of all of them via the values.yml. It also shares some config between the included charts with globals. If you're looking to use a parent to reduce duplication rather than join multiple apps, then it is possible to create a base/wrapper chart, but it may turn out to be better to just duplicate config.
EDIT (180119): The alternative of Kustomize may soon become available in kubectl
You may also want to check Kustomize. It provides some support to write your yaml manifests in hierarchical form, so that you don't have to repeat yourself.
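For completeness, a rough sketch of how the Kustomize approach looks (paths and patch contents are illustrative): a shared base plus a thin overlay per environment, where the overlay only patches what differs.

# base/kustomization.yaml
resources:
  - deployment.yaml

# overlays/dev/kustomization.yaml
resources:
  - ../../base
patchesStrategicMerge:
  - env-patch.yaml

# overlays/dev/env-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - name: myapp
          env:
            - name: DATABASE_URL
              value: postgres://dev-db.internal:5432/app

$ kubectl apply -k overlays/dev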