I am trying to set up a helmfile deployment for my local Kubernetes cluster, which is running on 'kind' (a lightweight alternative to minikube). I have charts set up for my app which are all deploying correctly; however, I require an nginx-ingress controller. Luckily 'kind' provides one, which I currently apply with the command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
It seems perverse that I should have everything else set up to deploy at the touch of a button, but still have to 'remember' (and also train my colleagues to remember...) to run this additional command.
I realise I could copy and paste and create my own version, but I would like to keep up to date with any changes made at source. Is it possible to create a chart that makes a reference to an external template?
I am looking at solutions using either helm or helmfile.
Your linked YAML file seems to have been generated from the ingress-nginx chart.
Subchart
You can include ingress-nginx as a subchart by adding it as a dependency of your own chart. In Helm 3, this is done with the dependencies field in Chart.yaml, e.g.:
apiVersion: v2
name: my-chart
version: 0.1.0
dependencies:
  - name: ingress-nginx
    version: "~4.0.6"
    repository: https://kubernetes.github.io/ingress-nginx
    condition: ingress-nginx.enabled
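After declaring the dependency, you'd vendor it into charts/ before installing. A minimal sketch (the release name and chart path are illustrative):

# fetch ingress-nginx into my-chart/charts/ as declared in Chart.yaml
helm dependency update ./my-chart
helm install my-release ./my-chart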
This may be problematic, however, if you need to install multiple releases of your own chart in the same cluster, since each install would bring its own copy of ingress-nginx. To handle this, you'd need to consider the implications of running multiple Ingress controllers.
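The condition field shown above gives you an escape hatch here: a release can opt out of the bundled controller through its values. A minimal sketch, assuming the dependency name from the Chart.yaml above:

# values override for a second release that should not install its own
# controller (the top-level key matches the dependency name)
ingress-nginx:
  enabled: false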
Chart
Ingress controllers are capable of handling ingresses from various releases across multiple namespaces. Therefore, I would recommend maintaining ingress-nginx separately from your own releases that depend on it. This would mean installing ingress-nginx like you already are or as a separate chart (guide).
If you go this route, there are tools that help make it easier for devs to take a hands-off approach for setting up their K8s environments. Some popular ones include Skaffold, DevSpace, Tilt, and Helmfile.
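For example, with Helmfile you could pin the controller as its own release next to your app's releases. A minimal sketch, reusing the chart repository from the Chart.yaml above (the release and namespace names are illustrative):

repositories:
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx

releases:
  - name: ingress-nginx
    namespace: ingress-nginx
    chart: ingress-nginx/ingress-nginx
    version: "~4.0.6"

A single helmfile sync then brings up the controller along with everything else, which addresses the original "remember to run one more command" problem.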
Related
I have a few services running on a Kubernetes cluster, and I use a Helm chart where I have placed all my services. However, I was asked to move the Helm charts into Helmfile.
If I use
helmfile import myrepo/mychart
helmfile sync
Will it redeploy and replace the existing running pods, or will it just deploy the additional services mentioned?
Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
Helmfile is a declarative spec for deploying helm charts. It lets you...
- Keep a directory of chart value files and maintain changes in version control.
- Apply CI/CD to configuration changes.
- Periodically sync to avoid skew in environments.
To avoid upgrades for each iteration of helm, the helmfile executable delegates to helm - as a result, helm must be installed.
Like @David Maze suggested, run the helmfile diff command first to determine the changes, then helmfile sync to apply them.
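A minimal sketch of that workflow (helmfile diff delegates to the helm-diff plugin, installed here from its public repo):

# one-time: install the diff plugin that helmfile diff relies on
helm plugin install https://github.com/databus23/helm-diff
# preview what would change in the cluster, without applying anything
helmfile diff
# run helm upgrade --install for every release listed in helmfile.yaml
helmfile sync

In general, releases whose rendered manifests come out unchanged won't have their pods replaced; only services whose definitions actually differ get rolled.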
We have several resources deployed as part of a Helm (v3) chart. Some time ago, I made changes to resources deployed by that Helm chart manually, via kubectl. This caused some drift between the values in the YAML resources deployed by the Helm release (as shown by helm get values <release>) and what is actually deployed in the cluster.
Example: kubectl describe deployment <deployment> shows an updated image that was manually applied via a kubectl re-apply, whereas helm get values <release> shows the original image used by Helm for said deployment.
I realize that I should have performed a helm upgrade with a modified values.yaml file to execute the image change, but I am wondering if there is a way for me to sync the state of the values I manually updated with the values in the helm release. The goal is to create a new default values.yaml that reflects the current state of the cluster resources.
Thanks!
This is a community wiki answer posted for better visibility. Feel free to expand it.
According to Helm issue 2730, this feature will not be added to Helm, as it is outside the scope of the project.
It looks like there is no existing tool from Helm itself that would help to port/adapt live Kubernetes resources back into existing or new Helm charts/releases.
Based on this, you can use one of the following options:
As suggested by @David Maze, the Helm Diff Plugin will show you the difference between the chart output and the cluster, but you then need to update values.yaml and the templates manually (see the sketch after this list).
The helm-adopt plugin is a Helm plugin that adopts existing Kubernetes resources into a newly generated Helm chart.
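For the first option, a minimal sketch of the diff plugin in use (the release name, chart path, and values file are placeholders):

# compare the live release against what the chart would now render
helm plugin install https://github.com/databus23/helm-diff
helm diff upgrade my-release ./my-chart -f values.yaml

The output shows exactly which fields drifted (such as the manually re-applied image), which you can then fold back into a new values.yaml.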
Currently, I have 2 Helm Charts - Chart A, and Chart B. Both Chart A and Chart B have the same dependency on a Redis instance, as defined in the Chart.yaml file:
dependencies:
  - name: redis
    version: 1.1.21
    repository: https://kubernetes-charts.storage.googleapis.com/
I have also overridden Redis's name, since applying the two charts consecutively results in two Redis instances, like so:
redis:
  fullnameOverride: "redis"
When I try to install Chart A and then Chart B I get the following error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: PersistentVolumeClaim, namespace: default, name: redis
I was left with the impression that 2 charts with identical dependencies would use the same instance if it's already present?
When you install a chart using Helm, it generally expects each release to have its own self-contained set of Kubernetes objects. In the basic example you show, I'd expect to see Kubernetes Service objects named something like
release-a-application-a
release-a-redis
release-b-application-b
release-b-redis
There is a general convention that objects are named starting with {{ .Release.Name }}, so the two Redises are separate.
This is actually an expected setup. A typical rule of building microservices is that each service contains its own isolated storage, and that services never share storage with each other. This Helm pattern supports that, and there's not really a disadvantage to having this setup.
If you really want the two charts to share a single Redis installation, you can write an "umbrella" chart that doesn't do anything on its own but depends on the two application charts. The chart would have a Chart.yaml file and (in Helm 2) a requirements.yaml file that references the two other charts, but not a templates directory of its own. That would cause Helm to conclude that a single Redis could support both applications, and you'd wind up with something like
umbrella-application-a
umbrella-application-b
umbrella-redis
(In my experience you usually don't want this – you do want a separate Redis per application – and so trying to manage multiple installations using an umbrella chart doesn't work especially well.)
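A minimal sketch of such an umbrella chart's Chart.yaml, with hypothetical names and local file:// repositories standing in for wherever the two application charts actually live:

# Helm 3 syntax; in Helm 2 the dependencies block lives in requirements.yaml
apiVersion: v2
name: umbrella
version: 0.1.0
dependencies:
  - name: application-a
    version: 0.1.0
    repository: file://../application-a
  - name: application-b
    version: 0.1.0
    repository: file://../application-b

Running helm install umbrella ./umbrella would then yield object names like the umbrella-* ones listed above.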
Unfortunately, Helm can't handle multiple resources with the same name; in other words, there isn't any shared-resource capability.
You can follow this issue.
I think you can use a kustomize template to share resources. There is a really good kustomize vs. helm article.
I am just wondering if anyone has figured out a declarative way to have Helm charts installed/configured as part of cluster initialization, in a form that could be checked into source control. Using Kubernetes, I have very much gotten used to the "everything as code" type of workflow, and I realize that installing and configuring Helm is based mostly on imperative workflows via the CLI.
The reason I am asking is because currently we have our cluster in development and will be recreating it in production. Most of our configuration has been done declaratively via the deployment.yaml file. However we have spent a significant amount of time installing and configuring certain helm charts (e.g. prometheus, grafana etc.)
There are tools like helmfile or helmsman which allow you to declare the Helm releases to be installed as code.
Here is an example from a helmfile.yaml doing so:
releases:
  # Published chart example
  - name: promnorbacxubuntu   # name of this release
    namespace: prometheus     # target namespace
    chart: stable/prometheus  # the chart being installed to create this release, referenced by `repository/chart` syntax
    set:                      # values (--set)
      - name: rbac.create
        value: false
Running helmfile sync will then ensure that all listed releases are installed.
My team had a similar kind of problem, and we solved it with Operators. The best part about Operators is that there are three kinds, and one of them is Helm-based.
So you could use a Helm-based Operator, create an associated CRD, and then declare your configuration there. That configuration is then ported directly to the Helm chart without you, as the user, having to do anything.
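A minimal sketch of what a custom resource for such an operator might look like; the group, kind, and spec fields here are entirely hypothetical, since a real Helm-based operator generates its own CRD and maps spec directly onto chart values:

apiVersion: charts.example.com/v1alpha1
kind: Grafana
metadata:
  name: grafana-sample
spec:
  # everything under spec is handed to the underlying Helm chart as values
  replicaCount: 1
  ingress:
    enabled: true

Checking a manifest like this into source control gives you the declarative, reviewable record the question asks for.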
So I am using the helm chart stable/traefik to deploy a reverse proxy to my cluster. I need to customise it beyond what is possible with the variables I can set for the template.
I want to enable the dashboard service while not creating an ingress for it (I set up OpenVPN to access the traefik dashboard only via VPN).
Both dashboard-ingress.yaml and dashboard-service.yaml conditionally include the ingress or the respective service based on the same variable {{- if .Values.dashboard.enabled }}
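A paraphrased sketch of that shared guard (the manifest bodies are elided; only the condition matters here):

{{- if .Values.dashboard.enabled }}
# dashboard Service here (and, in the other file, the dashboard Ingress)
{{- end }}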
From my experience I would fork the helm chart and push the customised version to my own repository.
Is there a way to add that customization but keep the original helm chart from the stable repository?
You don't necessarily have to push to your own repository, as you could take the source code and include the chart in your own as source. For example, if you dig into the gitlab chart's dependencies, you'll see they've included multiple other charts as source there, not as packaged .tgz files. That enables you to make changes to the chart within your own source (much as the gitlab maintainers have). You could get the source using helm fetch stable/traefik --untar, as sketched below.
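A minimal sketch of vendoring the chart that way (the target directory is just a convention):

# pull the chart source, unpacked, into your own chart's charts/ directory
helm fetch stable/traefik --untar --untardir charts/
# edit charts/traefik/templates/dashboard-ingress.yaml as needed, then
# commit the directory alongside the rest of your chart's source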
However, including the chart as source is still quite close to forking. If you want to upgrade to get fixes then you still have to reapply your changes. I believe your only other option is to raise the issue on the official chart repo. Perhaps for your case you could suggest to the maintainers that the ingress be included only when .Values.dashboard.enabled and a separate ingress condition is met.
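If you do raise it upstream, the proposed change to dashboard-ingress.yaml might look like this; the dashboard.ingress.enabled flag is hypothetical and not in the current chart:

{{- if and .Values.dashboard.enabled .Values.dashboard.ingress.enabled }}
# ... the chart's existing Ingress manifest, unchanged ...
{{- end }}

That would let you enable the dashboard Service for VPN access while leaving its Ingress off.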