I have a chart that installs a pod to Kubernetes. Since Helm allows us to set values in a single chart, I decided to create a reusable chart that lets me create multiple pods with the same chart configuration.
I'm trying to create about 10,000 pods, and using helm install is the easiest way since I'm reusing the chart config. I was wondering how I can improve the performance of helm install.
I tried scaling tiller-deploy to about 4 replicas, but only one of the pods actually processes the Helm requests.
Example script to create 10,000 pods:
# has_created is assumed to be defined elsewhere; $name, $start, $until and $key are globals set per release
created = has_created(`helm status #{$name} 2>&1`)
unless created
  `helm install --name=#{$name} --set start=#{$start} --set end=#{$until} --set key=#{$key} ./chart`
  p "deployed #{$name} release"
end
Thanks
Your bottleneck is not Tiller, it's the way you're starting the processes: the script shells out to helm install synchronously, one release at a time. What about running these installs in the background, or using a language that lets you run them in threads?
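For example, here is a minimal sketch of parallelising the installs from the shell; the releases.txt file, the concurrency of 8, and the omission of the per-release --set flags are all assumptions for illustration:
# run up to 8 helm installs concurrently; releases.txt holds one release name per line
xargs -P 8 -I {} helm install --name={} ./chart < releases.txt
Each worker still talks to the same Tiller, but you stop paying the cost of waiting for every install to finish before starting the next one.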
You could try to have a single umbrella chart that you install, with a long requirements list that pulls in your pod chart 10,000 times with different values passed to each entry. That way helm sends a single install command and Tiller takes care of the rest. This might be a bit faster, as it limits the communication between helm and Tiller.
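As a rough sketch of that direction, assuming Helm 2 with a requirements.yaml that aliases the same subchart many times (the pod-chart name, version, and file:// path are hypothetical, and you would generate the file rather than write 10,000 entries by hand):
# generate a requirements.yaml that pulls in the same subchart once per pod, under different aliases
{
  echo "dependencies:"
  for i in $(seq 1 10000); do
    printf -- "- name: pod-chart\n  version: 0.1.0\n  repository: \"file://../pod-chart\"\n  alias: pod-%d\n" "$i"
  done
} > requirements.yaml
Per-alias values then go in the parent chart's values.yaml under keys matching each alias, and a single helm install of the parent chart covers all of them.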
Related
I thought this would be a simple task, but I can't find a solution. The goal is to get the full output of the values of a Helm chart as actually installed and used in a k8s cluster. Helm is installed locally, so it's easy to do a helm get values, but what if I want these values extracted periodically and sent somewhere else by a pod running in the same cluster? For example, exporting them as JSON and saving them in a DB or something.
Can I write a Go/Python script that runs in a pod with Helm installed? Is there an API for this?
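As a starting point, here is a minimal sketch of the extraction step itself, runnable from any pod that has the helm binary and access to the cluster; the release name, file path, and collector URL are placeholders:
# dump the values actually used by an installed release as JSON (--all includes computed defaults, not just overrides)
helm get values my-release --all -o json > /tmp/my-release-values.json
# ship the file somewhere else, e.g. POST it to a collector endpoint
curl -X POST -H "Content-Type: application/json" --data @/tmp/my-release-values.json https://collector.example.com/values
A k8s CronJob could run something like this on a schedule to cover the periodic part.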
I have a Helm chart containing a Kubernetes Job, but unfortunately helm upgrade won't work because the image name is immutable, so logically I need to do a delete and an install. But then I will lose my values.yaml settings if they were customised in the first place.
How can I keep the values before deleting the chart and reuse them for the new install, to simulate an upgrade? I couldn't find anything in the documentation or here.
Thanks
First you need to get your previous values with helm get values <release-name>
So you could redirect the values to a file with:
helm get values <release-name> -o yaml > values.yaml
And then do a helm install again
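Putting that together, a sketch of the whole delete-and-reinstall flow (the release and chart names are placeholders; the delete/uninstall spelling depends on your Helm version):
# save the values you customised on the existing release
helm get values my-release -o yaml > values.yaml
# remove the old release (helm delete --purge on Helm 2, helm uninstall on Helm 3)
helm uninstall my-release
# reinstall the chart with the saved values to simulate the upgrade
helm install my-release -f values.yaml ./chart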
I have Helm charts created for a microservice that I have built, and everything is working as expected. Now I have created a new k8s namespace, and I want to deploy the same Helm charts as in my old namespace, except that I need one value to be different while everything else remains the same.
Do I have to create another values.yaml for the new namespace, copy everything over, and update the one field I want changed? Or is there another way? I do not want to use --set to pass the override on the command line.
David suggested the right way. You can use a different values.yaml where you specify the values for the namespace you want to deploy the chart into:
$ helm install -f another-namespace-values.yaml <my-release> .
It's also entirely possible to install a Helm chart with multiple values files.
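For example, a sketch where a shared base file and a small per-namespace override are layered; later -f files take precedence, so the override file only needs the one value that differs (file and namespace names are placeholders):
helm install -f values.yaml -f another-namespace-values.yaml <my-release> . --namespace <new-namespace>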
For more reading, please check the values section of the Helm docs.
I am new to Helm and Kubernetes. I am currently using a list of bash commands to create a local Minikube cluster with many containers installed. To alleviate the manual burden, we were thinking of creating an (umbrella) Helm chart to execute the whole list of commands.
Among the commands that I would need to run in the chart, there are a few (cleanup) kubectl deletes, e.g.:
kubectl delete all,configmap --all -n system --force --grace-period=0
and also some helm installs, e.g.:
helm repo add bitnami https://charts.bitnami.com/bitnami && \
helm install postgres bitnami/postgresql --set postgresqlPassword=test,postgresqlDatabase=test && \
Question1: is it possible to include kubectl commands in my Helm chart?
Question2: is it possible to add a dependency on a chart that is only available remotely, e.g. the postgres dependency above?
Question3: If you think Helm is not the correct tool for doing this, what would you suggest instead?
Thank you
You can't embed imperative kubectl commands in a Helm chart. An installed Helm chart keeps track of a specific set of Kubernetes resources it owns; you can helm delete the release, and that will delete that specific set of things. Similarly, if you have an installed Helm chart, you can helm upgrade it, and the new chart contents will replace the old ones.
For the workflow you describe (you're maintaining a developer environment based on Minikube, and you want to be able to start clean), there are two good approaches to take, both sketched below:
helm delete the release(s) that are already there, which will uninstall their managed Kubernetes resources; or
minikube delete the whole "cluster" (as a single container or VM), and then minikube start a new empty "cluster".
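For example, a minimal sketch of both reset paths, assuming Helm 3 (where helm uninstall replaces Helm 2's helm delete --purge):
# option 1: uninstall a release Helm owns, which removes the resources it manages (e.g. the postgres release above)
helm uninstall postgres
# option 2: throw away the whole local cluster and start a new empty one
minikube delete
minikube start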
I have a helm chart which pulls a lot of images from various registries and deploys a lot of pods. It runs a lot of k8s jobs before getting the pods up.
Overall, the helm install command takes a huge amount of time, so my helm install usually also has --timeout 3600.
For example: helm install --name ... --timeout 3600
Can I embed this --timeout 1800 in the Helm chart itself, rather than providing it as a command-line parameter?