I'd like to access cluster-deployed Helm charts programmatically, to build a web interface that allows manual chart manipulation.
I found pyhelm, but it supports only Helm 2. I looked on npm, but found nothing there either. I wrote a bash script, but its output is just a string, so it's not really useful.
Helm 3 is different from previous versions in that it is a client-only tool, similar to e.g. Kustomize. This means that Helm charts only exist on the client (and in chart repositories) and are transformed into Kubernetes manifests during deployment, so only Kubernetes objects exist in the cluster.
The Kubernetes API is a REST API, so you can fetch Kubernetes objects with any HTTP client. Kubernetes object manifests are available in JSON and YAML formats.
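For illustration, a minimal sketch of hitting that REST API from Go with a plain HTTP client; it assumes you run inside a pod with the default service account, so the API server address and token path below are the usual in-cluster defaults and may differ in your setup:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Default in-cluster service account token; adjust if running outside a pod.
	token, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		panic(err)
	}

	// TLS verification is skipped only to keep the sketch short;
	// real code should trust the service account CA bundle instead.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	req, _ := http.NewRequest("GET",
		"https://kubernetes.default.svc/apis/apps/v1/namespaces/default/deployments", nil)
	req.Header.Set("Authorization", "Bearer "+string(token))
	req.Header.Set("Accept", "application/json")

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // raw JSON DeploymentList
}
```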
If you are OK with using Go, then you can use the Helm 3 Go API.
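For example, a small sketch using the helm.sh/helm/v3 packages to list releases in a namespace; error handling is trimmed, and "secret" is assumed as the storage driver since that is Helm 3's default:

```go
package main

import (
	"fmt"
	"log"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/cli"
)

func main() {
	settings := cli.New() // picks up KUBECONFIG, HELM_NAMESPACE, etc.

	cfg := new(action.Configuration)
	// "secret" is the default driver Helm 3 uses to store release state.
	if err := cfg.Init(settings.RESTClientGetter(), "default", "secret", log.Printf); err != nil {
		log.Fatal(err)
	}

	releases, err := action.NewList(cfg).Run()
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range releases {
		fmt.Printf("%s\t%s\t%s\n", r.Name, r.Namespace, r.Chart.Metadata.Version)
	}
}
```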
If you want to use Python, I guess you'll have to wait for Helm v3 support in pyhelm; there is already an issue addressing this.
Reached this question as we also need an npm package to deploy Helm 3 charts programmatically (sort of a white-label app with a GUI to manage the instances).
The only thing I could find was an old, discontinued package from Microsoft for Helm v2: https://github.com/microsoft/helm-web-api/tree/master/on-demand-micro-services-deployment-k8s
I don't think using the k8s API directly would work, as some charts can get fairly complex in terms of k8s resources, so I took some inspiration from that and I think I will develop my own package as a wrapper around the helm CLI commands, using the -o json parameter for easier handling of the CLI output (rough idea sketched below).
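The wrapper idea sketched in Go for illustration (an npm package would do the same thing with child_process.execFile and JSON.parse); the struct fields mirror what helm list -o json prints, but verify them against your Helm version:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Release mirrors a subset of the fields emitted by `helm list -o json`.
type Release struct {
	Name       string `json:"name"`
	Namespace  string `json:"namespace"`
	Status     string `json:"status"`
	Chart      string `json:"chart"`
	AppVersion string `json:"app_version"`
}

// listReleases shells out to the helm CLI and parses its JSON output.
func listReleases(namespace string) ([]Release, error) {
	out, err := exec.Command("helm", "list", "-n", namespace, "-o", "json").Output()
	if err != nil {
		return nil, err
	}
	var releases []Release
	if err := json.Unmarshal(out, &releases); err != nil {
		return nil, err
	}
	return releases, nil
}

func main() {
	releases, err := listReleases("default")
	if err != nil {
		panic(err)
	}
	for _, r := range releases {
		fmt.Printf("%s (%s): %s\n", r.Name, r.Chart, r.Status)
	}
}
```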
The only way I have found is to use a curl command, as per the docs: https://docs.spring.io/spring-cloud-dataflow/docs/2.7.1/reference/htmlsingle/#resources-app-registry-post
This uses a curl command to hit the API, which I could write a script for, but I would like to set this up within the Helm charts so that these tasks and applications are created when the Helm chart is deployed. Any ideas?
Please check the Spring Cloud Data Flow docs, Helm Installation, Register prebuilt applications; it says:
Applications can be registered individually using the app register functionality or as a group using the app import functionality.
So, I guess you always need to start the server using the Helm chart first, and only later register the applications using the app register/app import functionality or the REST endpoint (a rough sketch of the REST call is below).
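For illustration only, a minimal Go sketch of that registration call. The endpoint shape (POST /apps/{type}/{name} with a uri form field) follows the linked docs, but the server address and application URI here are placeholders to verify against your SCDF version:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// Placeholder application URI; use whatever artifact you need to register.
	form := url.Values{}
	form.Set("uri", "docker:springcloudstream/http-source-rabbit:3.1.1")

	// Placeholder SCDF server address; adjust host/port for your deployment.
	resp, err := http.Post(
		"http://scdf-server:9393/apps/source/http",
		"application/x-www-form-urlencoded",
		strings.NewReader(form.Encode()),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("registration status:", resp.Status)
}
```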
I am interested in generating docker-compose.yaml files from Helm charts. Is there a good way or tool to do this?
I realize that this is in the opposite direction from what most people are doing. Why I want to do this:
Our production systems run Kubernetes via Helm charts. We've got a full-blown k8s and Helm setup already; no need to use a tool like Kompose to get us there. The question is how to convert Helm to docker-compose, not the other way around.
We want our Helm charts to be the single authoritative source of container configuration. They are able to describe a superset of what docker-compose can.
Running a set of services using Helm on a development machine is more time- and resource-consuming than running the same set of services via docker-compose. We do not want to slow development down by having engineers run everything via Helm/k8s.
We do not want to maintain two sets of configurations.
Can anybody recommend how to do this, or suggest a different solution to the time/resources issue encountered on development machines?
I am writing a Golang application that essentially automates helm install, so I would like to know how to expose Helm to my Kubernetes deployment, or whether there is an API that creates a Helm client object which can communicate with Tiller directly. Please describe the answer with a piece of code, thanks.
I have been trying the package https://godoc.org/k8s.io/helm/pkg/helm, but I don't really know what parameters we need to pass when creating the Helm client.
Not to discourage you, but I thought I should point out that Helm is nearing a v3 release, which will entirely remove tiller, and hence the client will likely change also.
Here are some relevant links:
Helm v3.0.0-beta.3 release notes
Helm v3 Beta 1 Released blog post
Hope this helps.
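That said, if you do need to talk to Tiller from Go today, here is a minimal sketch of the Helm 2 client from k8s.io/helm/pkg/helm. The main parameter you pass is the Tiller address; the sketch assumes Tiller is reachable on 127.0.0.1:44134, e.g. via a kubectl port-forward into kube-system:

```go
package main

import (
	"fmt"

	"k8s.io/helm/pkg/helm"
)

func main() {
	// helm.Host sets the Tiller gRPC address; assumed reachable via e.g.
	//   kubectl -n kube-system port-forward svc/tiller-deploy 44134:44134
	client := helm.NewClient(helm.Host("127.0.0.1:44134"))

	resp, err := client.ListReleases()
	if err != nil {
		panic(err)
	}
	for _, r := range resp.GetReleases() {
		fmt.Println(r.GetName(), r.GetChart().GetMetadata().GetVersion())
	}
}
```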
I’m currently using Kubernetes and I came across Helm.
Let’s say I don’t like the idea of “infecting” my Kubernetes cluster with a process that is not related to my applications, but I would gladly accept it if it were beneficial.
So I did some research, but I still can’t find anything I can’t easily do with my YAML descriptors and kubectl, so for now I can’t find a use for it except, maybe, environment-specific templating.
For example (taking these from guides I read):
you can easily install an application, e.g. helm install nginx -> I add an nginx image to my deployment descriptor, done
repositories -> I have Docker ones (where I pull my images from)
you can easily helm rollback in case of a release failure -> I just change the image version back to the previous one in my Kubernetes descriptor, easy
What bothers me is that, at the level of commands, it's pretty much the same effort (helm upgrade -> kubectl apply).
In exchange, I have a lot of boilerplate because of the directory structure Helm wants, and I feel like I'm losing the control I have with plain deployment descriptors... what am I missing?
Your question is totally understandable. For small and simple deployments the benefit is not actually that great, but when a deployment is very complex, Helm helps a lot.
Imagine you have a couple of squads developing microservices for some company. If you can make a chart that works for most of them, the deployment of each microservice would differ only by the image and the resources required. This way you get a standardized deployment that is easier for all developers.
Another use case is deploying applications that require a lot of moving parts. For example, if you want to deploy a Grafana server on Kubernetes, you're probably going to need at least a Deployment and a ConfigMap, then a Service that matches the Deployment, and if you want to expose it to the internet, an Ingress too.
So one relatively simple application would require four different YAMLs that you would have to configure manually and verify; instead, you could do a simple helm install and reuse the configuration that someone has already made, sometimes even by the company that created the application.
There are a lot of other use cases, but these two are the ones that I would say are the most common.
Here are three suggestions of ways Helm can be useful:
Your continuous deployment system somewhat routinely produces new builds and wants to send them to the Kubernetes cluster. You can use templating to specify the image name and tag in a deployment, and then run helm upgrade ... --set tag=201907211931 to request a specific tag.
You might have various service-specific controls like the log level or external database hostnames. The Helm values mechanism gives a uniform way to specify them, without having to know the details of the Kubernetes YAML files.
There is a repository of pre-packaged application charts, so if you want replicated PostgreSQL with in-cluster persistent storage, that's already built for you and you can just depend on it, rather than figuring out the right combination of StatefulSets and PersistentVolumeClaims yourself.
You can combine these in interesting (and potentially complex) ways: use an in-cluster database for developer testing but a cloud-hosted, backed-up database for production, for example, and compute the database host name based on what combination of settings is provided.
There are, of course, alternative ways to do all of these things. Kustomize in particular can change the image value fairly straightforwardly, and is notable for having been included in the kubectl tool since Kubernetes 1.14 (see also Declarative Management of Kubernetes Objects Using Kustomize in the Kubernetes documentation). The "operator" pattern gives an alternate path to install software in your cluster, but even more so than Helm you're trusting an arbitrary program with API access.
I understand that helm consists of a client-side component (the helm CLI) and a cluster-side component (tiller). The docs say that tiller is responsible for building and managing releases. But why does this need to be done from the cluster? Why can't helm build and manage releases from the client, and then simply push resources to kubernetes?
Tiller can also be run on the client side as mentioned in the Helm documentation here. The documentation refers to it as Running Tiller Locally.
But, as mentioned in the same documentation, it's mainly for the sake of development. I had been thinking about it and am not exactly sure why it's only for development and not for production.
There were a lot of limitations with running client-side only, as mentioned in this thread: https://github.com/helm/helm/issues/2722.
But Helm v3 will be a complete rewrite with no server-side component.