I have a helm chart which pulls a lot of images from various registries and deploys a lot of pods. It runs a lot of k8s jobs before getting the pods up.
Overall, the helm install command takes a long time, so I usually run it with --timeout 3600.
Like: helm install --name <release> <chart> --timeout 3600
Can I embed this --timeout in the Helm chart itself, rather than providing it as a command-line parameter?
helm install airflow . --namespace airflow -f my_values.yaml -f my_other_values.yaml
I executed the command above but had to interrupt it, and now I cannot re-execute it because it gives me the error:
Error: cannot re-use a name that is still in use
How can I fix it?
Thank you
Either helm uninstall the existing release
helm uninstall airflow
helm install airflow . -n airflow -f values.dev.yaml ...
or use helm upgrade to replace it with a new one
helm upgrade airflow . -n airflow -f values.dev.yaml ...
Both will have almost the same effect. You can helm rollback the upgrade but the uninstall discards that history.
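For example, to see the revision history and roll back (the revision number here is just a placeholder for whatever helm history reports):
helm history airflow -n airflow
helm rollback airflow 1 -n airflow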
Mechanically, helm install and helm upgrade just send Kubernetes manifests to the cluster, and from there the cluster takes responsibility for actually doing the work. Unless the chart has time-consuming hook jobs, it's actually possible that your current installation is fine and you don't need to do any of this (even if helm install --wait didn't report the Deployments were ready yet).
(The commands above assume you're using the current version 3 of Helm. Helm 2 has slightly different syntax and commands, but at this point is unsupported and end-of-lifed.)
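If you suspect the interrupted install actually left everything in a good state, a quick way to check before deleting anything (release name and namespace taken from the question; the exact resource names depend on the chart):
helm status airflow -n airflow
kubectl get pods,deployments -n airflow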
If I run
helm upgrade --cleanup-on-fail \
$RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
It fails with this error: Error: UPGRADE FAILED: pre-upgrade hooks failed: timed out waiting for the condition. It just hangs for a bit and ultimately times out. It seems like too small of a change to cause a true timeout. I found this command in the Zero to JupyterHub docs, where it describes how to apply changes to the configuration file.
I've tried several permutations, including leaving out cleanup, leaving out version, etc. The only thing I could get to work was helm upgrade jhub jupyterhub/jupyterhub, but I don't think it's producing the desired effect.
For example, when I add a line in my config.yaml to change the default to Jupyter Lab, it doesn't work if I run helm upgrade jhub jupyterhub/jupyterhub. I believe I need to specify config.yaml using --values or -f.
My overall project is to set up JupyterHub on a cloud Kubernetes environment. I'm using GKE and the online terminal.
Thanks
Solved: I had specified the tag incorrectly in config.yaml; I put the image digest rather than the actual tag (the images are on DockerHub).
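For anyone hitting the same thing, here's a minimal sketch of the relevant config.yaml fragment, assuming the standard Zero to JupyterHub singleuser.image layout; the image name and tag are placeholders you'd replace with the real values from DockerHub:
singleuser:
  image:
    name: jupyter/datascience-notebook   # placeholder image name
    tag: "<tag-from-dockerhub>"          # must be an image tag, not a sha256 digest
  defaultUrl: "/lab"                     # optional: make JupyterLab the default interface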
I am new to Helm and Kubernetes. I am currently using a list of bash commands to create a local Minikube cluster with many containers installed. To alleviate the manual burden, we were thinking of creating an (umbrella) Helm chart to execute the whole list of commands.
Among the commands I would need to run in the chart there are a few (cleanup) kubectl deletes, e.g.:
kubectl delete all,configmap --all -n system --force --grace-period=0
and also some helm installs, e.g.:
helm repo add bitnami https://charts.bitnami.com/bitnami && \
helm install postgres bitnami/postgresql --set postgresqlPassword=test,postgresqlDatabase=test
Question 1: is it possible to include kubectl commands in my Helm chart?
Question 2: is it possible to add a dependency on a chart that is only available remotely, i.e. the postgres dependency above?
Question 3: if you think Helm is not the correct tool for doing this, what would you suggest instead?
Thank you
You can't embed imperative kubectl commands in a Helm chart. An installed Helm chart keeps track of a specific set of Kubernetes resources it owns; you can helm delete the release, and that will delete that specific set of things. Similarly, if you have an installed Helm chart, you can helm upgrade it, and the new chart contents will replace the old ones.
For the workflow you describe – you're maintaining a developer environment based on Minikube, and you want to be able to start clean – there are two good approaches to take:
helm delete the release(s) that are already there, which will uninstall their managed Kubernetes resources; or
minikube delete the whole "cluster" (as a single container or VM), and then minikube start a new empty "cluster".
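As a rough sketch of both options (the release name postgres is just the example from the commands above):
# Option 1: remove only the resources Helm manages for a given release
helm delete postgres            # "helm uninstall postgres" on Helm 3

# Option 2: throw the whole local cluster away and start a fresh, empty one
minikube delete
minikube start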
I used Helm to install the Prometheus operator and kube-prometheus into my Kubernetes cluster using the following commands:
helm install coreos/prometheus-operator --name prometheus-operator --namespace monitoring --set rbacEnable=false
helm install coreos/kube-prometheus --name kube-prometheus --set global.rbacEnable=false --namespace monitoring
Everything is running fine; however, I want to set up email alerts, and to do so I must configure the SMTP settings in the custom.ini file, according to the Grafana website. I am fairly new to Kubernetes and Helm charts, so I have no idea which command I would use to access this file or make updates to it. Is it possible to do so without having to redeploy?
Can anyone provide me with a command to update custom values?
You could pass the grafana.env value to add the SMTP-related settings:
GF_SMTP_ENABLED=true, GF_SMTP_HOST, GF_SMTP_USER and GF_SMTP_PASSWORD
should do the trick. The prometheus-operator chart relies on the upstream stable/grafana chart (although it still uses version 1.25).
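As a rough sketch, assuming Grafana is deployed by the kube-prometheus release installed above and that the grafana.env path mentioned is honored by that chart; the SMTP host, user, password and the smtp-values.yaml filename are placeholders:
grafana:
  env:
    GF_SMTP_ENABLED: "true"
    GF_SMTP_HOST: "smtp.example.com:587"
    GF_SMTP_USER: "alerts@example.com"
    GF_SMTP_PASSWORD: "changeme"
Then apply it to the existing release with an upgrade rather than a reinstall (Helm 2 syntax, to match the install commands above):
helm upgrade kube-prometheus coreos/kube-prometheus --namespace monitoring --set global.rbacEnable=false -f smtp-values.yaml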
I have a chart that installs a pod to Kubernetes. Since Helm allows us to set values in a single chart, I decided to create a reusable chart that lets me create multiple pods with the same chart configuration.
I'm trying to create about 10,000 pods, and using helm install is the easiest way since I'm reusing the chart config. I was wondering how I can improve the performance of helm install.
I tried to scale tiller-deploy to about 4 replicas, but only one of the pods processes the Helm requests.
Example script to create 10,000 pods:
# has_created is the author's helper that checks the `helm status` output;
# skip the install if a release with this name already exists
created = has_created(`helm status #{$name} 2>&1`)
if !created
  `helm install --name=#{$name} --set start=#{$start} --set end=#{$until} --set key=#{$key} ./chart`
  p "deployed #{$name} release"
end
Thanks
Your bottleneck is not Tiller, it's the way you're starting the processes. What about running the installs in the background, or using a more modern language to run them in parallel threads?
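For instance, a shell sketch of fanning the installs out in the background instead of running them strictly one at a time (the release names and the batch size of 50 are placeholders; the real script would pass its --set values as before):
for i in $(seq 1 10000); do
  helm install --name "release-$i" ./chart &
  # wait for each batch so Tiller and the API server aren't flooded
  if (( i % 50 == 0 )); then wait; fi
done
wait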
You could try a single chart that you install with a long requirements list declaring your 10,000 pods, each with different values passed to it; that way Helm sends a single install command and Tiller takes care of the rest. This might be a bit faster, as it limits the communication between Helm and Tiller.
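A sketch of what that could look like with Helm 2's requirements.yaml, assuming the existing chart lives next to the umbrella chart as pod-chart; the chart version, aliases and per-alias value overrides are placeholders (in practice you would generate these entries):
# requirements.yaml of the umbrella chart
dependencies:
  - name: pod-chart
    version: 0.1.0
    repository: "file://../pod-chart"
    alias: pod-1
  - name: pod-chart
    version: 0.1.0
    repository: "file://../pod-chart"
    alias: pod-2
  # ...one entry per pod, up to pod-10000

# values.yaml of the umbrella chart: per-alias overrides for the subchart values
pod-1:
  start: 0
  end: 99
  key: some-key
pod-2:
  start: 100
  end: 199
  key: some-key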