Why does Helm need a cluster-side component (Tiller)?

I understand that Helm consists of a client-side component (the helm CLI) and a cluster-side component (Tiller). The docs say that Tiller is responsible for building and managing releases. But why does this need to be done from the cluster? Why can't Helm build and manage releases from the client, and then simply push resources to Kubernetes?

Tiller can also be run on the client side, as described in the Helm documentation under "Running Tiller Locally".
But, as the same documentation notes, this is mainly intended for development. I had been thinking about it and am not exactly sure why it is suitable only for development and not for production.

There were a lot of limitations with running client-side only, as discussed in this thread: https://github.com/helm/helm/issues/2722.
But Helm v3 will be a complete rewrite with no server-side component.


Nginx Ingress Controller for long-term support

I see many different Nginx implementations.
I see some posts saying the `stable/nginx-ingress` chart is deprecated and to move to the `ingress-nginx/nginx-ingress` chart.
This project, https://github.com/kubernetes/ingress-nginx/releases, has two Nginx images, NGINX: 0.34.1 and ingress-nginx-2.16.0. What is the difference between these two images?
Which Nginx Helm chart should I use for long-term support?
Thanks
SR
The ingress-nginx-2.x.x helm chart uses the nginx-x.x.x container. You don't normally need to reference the container image directly when using the helm chart, as that is set in the default values.
Helm itself moved a major version recently, from 2 -> 3, which caused a lot of changes to how helm repos are structured; that is why you see the "deprecated" message in the old Helm 2 stable repo.
I don't believe the ingress-nginx project has an LTS release strategy. Just use the latest 2.x release, or n-1 if you want to protect yourself from the unexpected changes that get thrown in occasionally.
NGINX (the company) does provide its own alternative NGINX kubernetes-ingress project if you are looking for commercial support.

How to access helm programmatically

I'd like to access cluster-deployed Helm charts programmatically to make a web interface that will allow manual chart manipulation.
I found pyhelm, but it supports only Helm 2. I looked on npm, but there is nothing there. I wrote a bash script, but its output is really just a string, so it's not very useful.
Helm 3 is different from previous versions in that it is a client-only tool, similar to e.g. Kustomize. This means that Helm charts only exist on the client (and in chart repositories) and are transformed into Kubernetes manifests during deployment, so only Kubernetes objects exist in the cluster.
The Kubernetes API is a REST API, so you can access and get Kubernetes objects using an HTTP client. Kubernetes object manifests are available in JSON and YAML formats.
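To make this concrete, here is a minimal Go sketch that fetches Deployment objects over plain HTTP. It assumes `kubectl proxy` is running on localhost:8001 to handle authentication against the cluster; in a real service you would more likely use a client library:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumes `kubectl proxy` is running on localhost:8001; the proxy
	// handles authentication against the cluster API server.
	resp, err := http.Get("http://localhost:8001/apis/apps/v1/namespaces/default/deployments")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // a JSON DeploymentList object
}
```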
If you are OK with using Go, then you can use the Helm 3 Go API.
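For example, a minimal sketch using the Helm 3 action package to list releases in the current namespace might look like this (assumes a working kubeconfig; error handling kept short):

```go
package main

import (
	"fmt"
	"log"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/cli"
)

func main() {
	settings := cli.New() // reads kubeconfig, namespace, etc. from the environment

	cfg := new(action.Configuration)
	// "secret" is Helm 3's default storage driver: release state is kept
	// in Kubernetes Secrets in the release namespace.
	if err := cfg.Init(settings.RESTClientGetter(), settings.Namespace(), "secret", log.Printf); err != nil {
		log.Fatal(err)
	}

	releases, err := action.NewList(cfg).Run()
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range releases {
		fmt.Printf("%s\t%s\tchart=%s-%s\n", r.Namespace, r.Name, r.Chart.Metadata.Name, r.Chart.Metadata.Version)
	}
}
```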
If you want to use Python, I guess you'll have to wait for Helm v3 support in pyhelm; there is already an issue tracking this.
Reached this as we also need an npm package to deploy Helm 3 charts programmatically (sort of a white-label app with a GUI to manage the instances).
The only thing I could find was an old, discontinued package from Microsoft for Helm v2: https://github.com/microsoft/helm-web-api/tree/master/on-demand-micro-services-deployment-k8s
I don't think using the k8s API directly would work, as some charts can get fairly complex in terms of k8s resources, so I took some inspiration and I think I will develop my own package as a wrapper around the helm CLI commands, using the -o json parameter for easier handling of the CLI output.
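For illustration, a minimal sketch of that wrapper approach (shown in Go here, but the same pattern works from Node): shell out to the helm binary and decode the -o json output. The struct fields below mirror the keys `helm list` emits; the helm binary and kubeconfig are assumed to be set up already:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Field names follow the keys emitted by `helm list -o json`.
type listedRelease struct {
	Name       string `json:"name"`
	Namespace  string `json:"namespace"`
	Status     string `json:"status"`
	Chart      string `json:"chart"`
	AppVersion string `json:"app_version"`
}

func main() {
	// Assumes the helm binary is on PATH and kubeconfig is already configured.
	out, err := exec.Command("helm", "list", "--all-namespaces", "-o", "json").Output()
	if err != nil {
		panic(err)
	}

	var releases []listedRelease
	if err := json.Unmarshal(out, &releases); err != nil {
		panic(err)
	}
	for _, r := range releases {
		fmt.Printf("%s/%s: %s (%s)\n", r.Namespace, r.Name, r.Status, r.Chart)
	}
}
```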

Helm charts vs Ansible playbooks vs a k8s operator for system installation

I have a big and fairly complex system to install into a k8s cluster:
60 microservices and 10 Helm charts installed across 5 namespaces.
Currently, we run 5 helm install/upgrade commands with a pause of 30 seconds between commands. However, this strategy puts a serious load on the nodes, because we pull Docker images and start applications all at once. The execution timeline is long and unpredictable, and it often results in timeouts of components such as Consul and Elasticsearch, and of the applications that depend on them.
I would like to hear opinions about ways to turn this situation around. Here are the options we have considered so far:
Write a script that controls the installation via Helm charts.
Write an Ansible playbook that runs the Helm charts and checks the installation status of the components.
Write an Ansible playbook that installs the components directly (using either Jinja2 templates or Golang templates).
Write a k8s operator that installs the components and monitors the system status.
To answer my own question: I built an installer that can be used as a quick solution for fairly complex installations.
The solution relies on Ansible as the installation orchestrator and Helm as the package manager.
You can browse my GitHub repo, which contains the code.
There are a lot of ways of doing this, but you can use the Kubernetes API directly. You can build a server in any stack, such as Spring Boot, Node.js, etc., that controls the creation of the Kubernetes objects you want.
This way you'll basically be building a customized Helm-like API, with the difference that you can tailor it to your own needs.
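As a minimal sketch of that idea, here is the official Go client (client-go) creating a Deployment directly, assuming a kubeconfig at the default location; the name and image are placeholders:

```go
package main

import (
	"context"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Assumes a kubeconfig at ~/.kube/config; a server running inside the
	// cluster would use rest.InClusterConfig() instead.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "demo"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "demo", Image: "nginx:1.25"}},
				},
			},
		},
	}

	if _, err := clientset.AppsV1().Deployments("default").Create(context.TODO(), deployment, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("deployment created")
}
```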

How to use Helm from a Golang application deployed on Kubernetes

I am writing a Golang application that essentially automates helm install, so I would like to know how to use Helm from a Kubernetes deployment, or any API that creates a Helm client object that can communicate with Tiller directly. Please describe the answer with a piece of code. Thanks.
I have been trying the package https://godoc.org/k8s.io/helm/pkg/helm but do not really know what parameters to pass when creating the Helm client.
Not to discourage you, but I thought I should point out that Helm is nearing a v3 release, which will entirely remove tiller, and hence the client will likely change also.
Here are some relevant links:
Helm v3.0.0-beta.3 release notes
Helm v3 Beta 1 Released blog post
Hope this helps.
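That said, if you need to target Helm 2 and Tiller in the meantime, a minimal sketch with the k8s.io/helm/pkg/helm package you mentioned might look like this (untested; assumes Tiller has been made reachable, e.g. via a port-forward):

```go
package main

import (
	"fmt"
	"log"

	"k8s.io/helm/pkg/helm"
)

func main() {
	// Assumes Tiller's gRPC port has been made reachable locally, e.g. with:
	//   kubectl -n kube-system port-forward svc/tiller-deploy 44134:44134
	// From inside the cluster you would point at the service DNS name instead,
	// e.g. "tiller-deploy.kube-system.svc:44134".
	client := helm.NewClient(helm.Host("127.0.0.1:44134"))

	resp, err := client.ListReleases()
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range resp.GetReleases() {
		fmt.Println(r.GetName(), r.GetInfo().GetStatus().GetCode())
	}
}
```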

Packaging a Kubernetes-based application

We have multiple (20+) services running inside Docker containers, managed using Kubernetes. These services include databases, streaming pipelines, and custom applications. We want to make this product available as an on-premises solution so that it can be easily installed, like a one-click installation sort of thing, hiding all the complexity of the infrastructure.
What would be the best way of doing this? Currently we have scripts managing this but as we move into production there will be frequent upgrades and it will become more and more complex to manage all the dependencies.
I am currently looking into helm and am wondering if I am exploring in the right direction. Any guidance will be really helpful to me. Thanks.
Helm seems like the way to go, but what you need to think about, in my opinion, is how you will deliver updates to your software. For example, will you provide a single 'version' of your whole stack that translates into a particular composition of infrastructure setup and microservice versions, or will you allow your customers to upgrade individual microservices as they are released? You can have one huge Helm chart for everything, or you can use, as I do in most cases, an "umbrella" chart. It contains subcharts for all the microservices.
My usual setup contains a subchart for every service; service names are then correctly namespaced, so they can be referenced as .Release.Name-subchart[-optional]. Also, when I need to upgrade, I just upgrade the whole chart with something like --reuse-values --set subchart.image.tag=v1.x.x, which gives granular control over each service version. I also gate each subchart's resources with if .Values.enabled so I can individually enable/disable each subchart's resources.
The ugly side of this is that if you do want to release a single-service upgrade, you still need to run the whole umbrella chart, leaving more surface for some kind of error; but on the other hand it gives you the ability to deploy the whole solution in one command (the default tags are :latest, so a clean install will always install the latest published versions, which then get updated with tagged releases).
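To make the granular-upgrade part concrete, here is a minimal sketch of the same operation done through the Helm 3 Go SDK rather than the CLI; the subchart names (servicea, serviceb) and the ./umbrella chart path are hypothetical:

```go
package main

import (
	"log"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart/loader"
	"helm.sh/helm/v3/pkg/cli"
)

func main() {
	settings := cli.New()
	cfg := new(action.Configuration)
	if err := cfg.Init(settings.RESTClientGetter(), settings.Namespace(), "secret", log.Printf); err != nil {
		log.Fatal(err)
	}

	// Equivalent of:
	//   helm upgrade my-product ./umbrella --reuse-values \
	//     --set servicea.image.tag=v1.2.3 --set serviceb.enabled=false
	upgrade := action.NewUpgrade(cfg)
	upgrade.Namespace = settings.Namespace()
	upgrade.ReuseValues = true

	umbrella, err := loader.Load("./umbrella") // hypothetical path to the umbrella chart
	if err != nil {
		log.Fatal(err)
	}

	// Overrides are namespaced by subchart name, mirroring the CLI flags above.
	vals := map[string]interface{}{
		"servicea": map[string]interface{}{
			"image": map[string]interface{}{"tag": "v1.2.3"},
		},
		"serviceb": map[string]interface{}{"enabled": false},
	}

	rel, err := upgrade.Run("my-product", umbrella, vals)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("upgraded %s to revision %d", rel.Name, rel.Version)
}
```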