Kubernetes, Helm: helm upgrade fails when config is specified (JupyterHub)

If I run
helm upgrade --cleanup-on-fail \
$RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
it fails with this error:
Error: UPGRADE FAILED: pre-upgrade hooks failed: timed out waiting for the condition
It just hangs for a while and ultimately times out. The change seems far too small to cause a genuine timeout. I found this command in the Zero to JupyterHub docs, where it describes how to apply changes to the configuration file.
I've tried several permutations, including leaving out --cleanup-on-fail, leaving out --version, etc. The only thing I could get to work was helm upgrade jhub jupyterhub/jupyterhub, but I don't think it's producing the desired effect.
For example, when I add a line to my config.yaml to make JupyterLab the default interface, it doesn't take effect if I run helm upgrade jhub jupyterhub/jupyterhub. I believe I need to specify config.yaml using --values or -f.
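For what it's worth, the same command with the values file attached, assuming the release really is named jhub as in the command above, would look like:
helm upgrade jhub jupyterhub/jupyterhub \
  --version=0.9.0 \
  --values config.yaml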
My overall project is to set up JupyterHub on a cloud Kubernetes environment. I'm using GKE and the online terminal.
Thanks

Solved: I had specified the tag incorrectly in config.yaml: I put the image digest rather than the actual tag. The correct tags are listed alongside the images on DockerHub.
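For reference, the tag lives under singleuser.image in the Zero to JupyterHub config.yaml. A minimal sketch (the image name and tag here are illustrative, not from the original post, and a real config.yaml will contain more than this):
cat > config.yaml <<'EOF'
singleuser:
  image:
    name: jupyter/datascience-notebook  # whichever single-user image you deploy
    tag: latest                         # must be a DockerHub tag, not a sha256 image digest
EOF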

Related

Command to force new deployment in helm?

I use GitLab + Kubernetes.
I use this command:
helm secrets -d vault upgrade --install --atomic \
  --namespace NAMESPACE \
  --values VALUES.yml \
  --set image.tag="WHATEVER" \
  DEPLOYMENT_NAME FILE_TO_THIS_DEPLOYMENT
The moment the CI pipeline fails, I cannot restart it, because of this Kubernetes/Helm error:
another operation (install/upgrade/rollback) is in progress
I know I can fix this inside Kubernetes by hand and then rerun, but that's a miserable experience for people who don't know much about Kubernetes/Helm.
Is there a one-shot command that really just deploys a new version and, if the old version was somehow in a failing state, deletes/fixes it beforehand?
I really just want to execute the same command again and again and have it work, without manually fixing the Kubernetes state every time anything happens.
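There's no single built-in flag for this as far as I know, but one possible shape for such a wrapper is sketched below, assuming Helm 3 and jq are available (helm status -o json is a standard Helm 3 flag; the uppercase names are the placeholders from the command above):
STATUS=$(helm status DEPLOYMENT_NAME --namespace NAMESPACE -o json | jq -r '.info.status')
case "$STATUS" in
  pending-install|pending-upgrade|pending-rollback)
    # 0 = roll back to the previous revision; a first-ever pending-install has
    # no previous revision, so helm uninstall would be needed there instead
    helm rollback DEPLOYMENT_NAME 0 --namespace NAMESPACE
    ;;
esac
helm secrets -d vault upgrade --install --atomic \
  --namespace NAMESPACE --values VALUES.yml \
  --set image.tag="WHATEVER" DEPLOYMENT_NAME FILE_TO_THIS_DEPLOYMENT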

Error: UPGRADE FAILED: values don't meet the specifications of the schema(s) in the following chart(s)

I am trying to set up Airflow using the official Helm chart.
I want to write a custom values.yaml and am using the following YAML as a basis.
https://github.com/apache/airflow/blob/main/chart/values.yaml
I ran the following command.
helm upgrade --install airflow apache-airflow/airflow \
  --namespace airflow \
  --create-namespace \
  --values ./values.yaml
and got the error quoted in the title.
I know the setup works fine when I don't specify values.yaml, so am I using the wrong template, or is something else going on?
You should check whether your values.yaml file was corrupted during editing, i.e. whether the indentation is incorrect. If you add the values.yaml to your question it will be easier to analyze.
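A few local checks can narrow this down (a sketch; helm show values and helm template are standard Helm 3 subcommands, and the one-liner assumes Python with PyYAML installed):
# fetch the default values that match the chart you are installing, for comparison
helm show values apache-airflow/airflow > default-values.yaml
# catch plain YAML syntax/indentation mistakes before Helm sees the file
python3 -c 'import yaml; yaml.safe_load(open("values.yaml"))'
# render locally: this also runs the chart's values.schema.json validation, without touching the cluster
helm template airflow apache-airflow/airflow --values ./values.yaml > /dev/null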

How to delete values introduced in Airflow through Helm?

helm install airflow . --namespace airflow -f my_values.yaml -f my_other_values.yaml
I executed the command above but had to interrupt it; now I cannot re-execute it, because it gives me the error:
Error: cannot re-use a name that is still in use
How can I fix it?
Thank you
Either helm uninstall the existing release
helm uninstall airflow
helm install airflow . -n airflow -f values.dev.yaml ...
or use helm upgrade to replace it with a new one
helm upgrade airflow . -n airflow -f values.dev.yaml ...
Both will have almost the same effect. You can helm rollback the upgrade but the uninstall discards that history.
Mechanically, helm install and helm upgrade just send Kubernetes manifests to the cluster, and from there the cluster takes responsibility for actually doing the work. Unless the chart has time-consuming hook jobs, it's actually possible that your current installation is fine and you don't need to do any of this (even if helm install --wait didn't report the Deployments were ready yet).
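For example, to check whether the interrupted install actually landed (release and namespace names as in the question):
helm status airflow -n airflow       # the state Helm has recorded for the release
kubectl get deploy,pods -n airflow   # whether the workloads themselves came up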
(The commands above assume you're using the current version 3 of Helm. Helm 2 has slightly different syntax and commands, but at this point is unsupported and end-of-lifed.)

GitHub Action terminates Helm orphan process and release stuck in "pending-install"

I'm creating a GitHub Actions pipeline to deploy a backend application to AKS, following this tutorial: https://learn.microsoft.com/en-us/learn/modules/aks-deployment-pipeline-github-actions. First, I followed along with the tutorial using a demo project, which worked perfectly fine. Then I applied the same setup to an existing backend project, and something went wrong:
On deploy, the resources aren't upgraded.
The Helm release status is shown as "pending-install", while my demo project is in "deployed" status.
The GitHub Action terminates an orphan process on the "Complete job" step, but there is no orphan process in the demo project. [Please see the pipeline log images for reference]
Demo pipeline
Backend project pipeline
What I've done:
I've tried removing all Helm resources (including the Helm secret) manually and redoing everything, but I still hit the same error.
I've compared every configuration between the demo project and the backend project, but I cannot find any mismatch.
If I helm install the backend project from my laptop using the same command, but with the variables injected manually, it works (the release ends up in "deployed" status).
Other useful information
Helm version on pipeline: v3.3.1
Helm command that I currently use:
- name: Run Helm Deploy
  run: |
    helm upgrade \
      --debug \
      --install \
      --create-namespace \
      --atomic \
      --wait \
      --timeout 30m0s \
      --namespace dev \
      xxxx-release-dev \
      ./helm --set image.repository=${{ secrets.ACR_NAME }} --set mongo.url=${{ secrets.MONGO_URL_DEV }}
Thanks to the https://www.facebook.com/groups/devopsthailand/ admins for helping me find the answer. The GitHub Actions secret secrets.MONGO_URL_DEV contains special characters and needs to be wrapped in double quotes; unquoted, those characters cause odd behavior when the command is executed. After adding the double quotes, it works!
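In other words, the fix is to quote the values passed to --set so the shell does not interpret special characters inside the secrets. Sketched against the command above:
helm upgrade \
  --install \
  --create-namespace \
  --atomic \
  --wait \
  --timeout 30m0s \
  --namespace dev \
  xxxx-release-dev \
  ./helm \
  --set image.repository="${{ secrets.ACR_NAME }}" \
  --set mongo.url="${{ secrets.MONGO_URL_DEV }}"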

Zero to Kubernetes: Helm install of JupyterHub fails

I'm following the Zero to Kubernetes GKE installation guide.
I've successfully set up Kubernetes and Helm, but when it comes to doing the install:
helm upgrade --cleanup-on-fail \
--install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--create-namespace \
--version=0.9.0 \
--values config.yaml
The error I keep getting is:
Release "jhub" does not exist. Installing it now.
Error: failed pre-install: timed out waiting for the condition
I've tried adding the flag --timeout 1200s, as per the GKE troubleshooting instructions, but it still hangs and times out.
Any ideas on how to resolve this? If it continues to fail I'll probably just try deploying on AKS, but ideally GKE should work.
(For reference I've actually done this deploy twice on GKE successfully but had to delete those clusters so I'm really not sure what the issue is.)
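When a pre-install hook times out like this, it can help to look at the hook pods while the command hangs (a debugging sketch; hook-image-puller and hook-image-awaiter are the hook resources the JupyterHub chart creates, and POD_NAME is a placeholder for whichever of them is stuck):
kubectl get pods --namespace $NAMESPACE               # look for hook-image-puller / hook-image-awaiter pods
kubectl describe pod POD_NAME --namespace $NAMESPACE  # events usually show why an image pull is stuck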
EDIT: I resolved the issue.
Further down the config.yaml file I had somehow changed the tag from 1.0.7 (the version registered via Jupyter) to 1.1.0, and that conflict between my Jupyter registry and my JupyterHub config was the cause.
I'm still inexperienced with Jupyter and JupyterHub, so apologies if this seemed obvious to others.
As mentioned in the comments:
The issue was solved by cleaning old configurations from the cluster.
All the information in the posted guide works fine.