I get the following error. The cluster was built on a Jetson Xavier AGX with Kubespray.
helm install --wait --generate-name nvidia/gpu-operator
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "nodefeaturerules.nfd.k8s-sigs.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "gpu-operator-1661139963": current value is "gpu-operator-1661134243"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "default": current value is "gpu-operator"
Try running helm list --all --all-namespaces, and if you find any leftover releases, uninstall them with the following command:
helm uninstall <release-name> -n <namespace> --no-hooks
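If helm list shows nothing but the CRD from the error message is still left over, you can inspect its ownership annotations and, assuming nothing else in the cluster depends on it, delete it directly:
kubectl get crd nodefeaturerules.nfd.k8s-sigs.io -o jsonpath='{.metadata.annotations}'
kubectl delete crd nodefeaturerules.nfd.k8s-sigs.io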
To deploy the GPU Operator using Helm, first install Helm:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 \
&& chmod 700 get_helm.sh \
&& ./get_helm.sh
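You can verify the client installed correctly with:
helm version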
Now, add the NVIDIA Helm repository:
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia \
&& helm repo update
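You can confirm the chart is now visible, for example with:
helm search repo nvidia/gpu-operator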
Previously, this would install the operator in the default namespace while all operands were installed in the gpu-operator-resources namespace. With current chart versions, the command you mentioned (helm install --wait --generate-name nvidia/gpu-operator) installs both the operator and the operands in the same namespace.
Example:
To install the GPU Operator in the gpu-operator namespace:
helm install --wait --generate-name \
  -n gpu-operator --create-namespace \
  nvidia/gpu-operator
So create a suitable namespace for your case.
For reference, follow Install NVIDIA GPU Operator.
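Once installed, a quick sanity check (assuming the gpu-operator namespace from the example above):
helm list -n gpu-operator
kubectl get pods -n gpu-operator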
Related
I am following these instructions (https://docs.rocket.chat/installation/paas-deployments/eks), which use Helm.
However, Helm seems to have changed since those instructions were written, and a 'repository' is now required.
My question is:
What repo should I add for Helm v3 that is the equivalent of the Helm v2 default?
Here is what I have done
I tried the command helm init --service-account tiller
but received the error Error: unknown command "init" for "helm".
I read here that init is no longer required for helm.
So I tried the next command to install Traefik, helm install stable/traefik --name traefik --namespace kube-system --set rbac.enabled=true,
and that says Error: unknown flag: --name, which is also a change in v3.
So I adjusted the command to helm install traefik stable/traefik --namespace kube-system --set rbac.enabled=true.
And now I get Error: failed to download "stable/traefik" (hint: running helm repo update may help).
helm repo update returns Error: no repositories found. You must add one before updating
I tried helm repo add stable but got Usage: helm repo add [NAME] [URL] [flags].
In the updated documentation I am not finding anything about what NAME or URL should be.
So, another way to ask my question is:
What values should [NAME] and [URL] be for the helm repo add command?
The command is:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
- https://github.com/helm/charts
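Note that the Google Storage URL above has since been deprecated; the stable charts now live at charts.helm.sh, so the working equivalent today is:
helm repo add stable https://charts.helm.sh/stable
helm repo update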
I have done a Helm installation using the following command, and the installation completed.
helm install <micro-service-helm-chart> --set parm1=foo,parm2=bar
In the Helm chart, parm1 was set to a and parm2 was set to b. I have now overridden these values from the command line during helm install, to foo and bar respectively.
Now, is there a way to check the values of parm1 and parm2 using a helm or kubectl command?
I already tried:
helm ls --debug
helm status
kubectl describe pod <podname>
helm get values --all RELEASE_NAME
where you should fill in your release's name to identify the installation.
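For example, with the overrides from the question (the release name my-release is hypothetical), the output would look something like:
helm get values --all my-release
COMPUTED VALUES:
parm1: foo
parm2: bar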
I had jhub released in my cluster successfully. I then changed the config to pull another Docker image, as stated in the documentation.
This time, while running the same old command:
# Suggested values: advanced users of Kubernetes and Helm should feel
# free to use different values.
RELEASE=jhub
NAMESPACE=jhub
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
  --namespace $NAMESPACE \
  --version=0.8.2 \
  --values jupyter-hub-config.yaml
where the jupyter-hub-config.yaml file is:
proxy:
  secretToken: "<a secret token>"
singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    tag: 177037d09156
I get the following problem:
UPGRADE FAILED
ROLLING BACK
Error: "jhub" has no deployed releases
Error: UPGRADE FAILED: "jhub" has no deployed releases
I then deleted the namespace via kubectl delete ns/jhub and the release via helm delete --purge jhub, then tried the same command again, in vain: the same error.
I read a few GH issues and found that either the YAML file was invalid or the --force flag worked. However, in my case, neither of these applies.
I expect to get this release working and also to learn how to edit current releases.
Note: As you would find in the aforementioned documentation, there is a pvc created.
I had the same issue when I was trying to update my config.yaml file in GKE. What actually worked for me was to redo these steps:
run curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm init --service-account tiller --history-max 100 --wait
[OPTIONAL] run helm version to verify that your output is similar to the one in the documentation
Add repo
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
Run upgrade
RELEASE=jhub
NAMESPACE=jhub
helm upgrade $RELEASE jupyterhub/jupyterhub \
  --namespace $NAMESPACE \
  --version=0.9.0 \
  --values config.yaml
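If the upgrade still fails with "has no deployed releases", a commonly suggested workaround for Helm 2 (where Tiller stores release state as ConfigMaps in kube-system) is to clear the failed release records before retrying; use with care, since this wipes Helm's history for that release:
kubectl -n kube-system get configmap -l NAME=jhub,OWNER=TILLER
kubectl -n kube-system delete configmap -l NAME=jhub,OWNER=TILLER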
After changes to my kubeconfig, the following solution worked for me:
helm init --tiller-namespace=<ns> --upgrade
This works with kubectl 1.10.0 and helm 2.3.0. I guess this upgrades Tiller to a compatible Helm version.
Don't forget to set the KUBECONFIG variable before using this command; this step by itself may solve your issue if you didn't do it after changing your kubeconfig.
export KUBECONFIG=<*.kubeconfig>
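You can double-check which cluster Helm will talk to with:
kubectl config current-context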
In my case, the cluster.server field in the config had changed, but I left the context.name and current-context fields the same as in the previous config; I'm not sure whether that matters. I faced the same issue on the first try to deploy a new release with Helm, but after the first successful deploy it is enough to change the KUBECONFIG variable.
I hope this helps.
I added the following to my gcloud setup. I run it every time I update my config.yaml file. Make sure to be connected to the correct Kubernetes cluster before running it.
update.sh
# Installs Helm.
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
# Make Helm aware of the JupyterHub Helm chart repo.
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
# Re-installs the chart configured by your config.yaml.
RELEASE=jhub
JUPYTERHUB_VERSION=0.9.0
helm upgrade $RELEASE jupyterhub/jupyterhub \
  --version=${JUPYTERHUB_VERSION} \
  --values config.yaml
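To use it, assuming the script is saved as update.sh next to your config.yaml:
chmod +x update.sh
./update.sh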
I am getting an error in my Kubernetes cluster while upgrading my install of Kamus:
$ helm --debug upgrade --install soluto/kamus
[debug] Created tunnel using local port: '64252'
[debug] SERVER: "127.0.0.1:64252"
Error: This command needs 2 arguments: release name, chart path
Using helm version 2.13.1
This error is also known to be caused by not using --set correctly or as intended.
As an example, when upgrading my ingress-nginx/ingress-nginx installation as follows:
--set "controller.service.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz,"controller.service.annotations.service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL
This caused the same error as listed above.
When I removed the quotation marks, it worked as intended.
--set controller.service.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path=/healthz,controller.service.annotations.service\.beta\.kubernetes\.io/azure-dns-label-name=$DNS_LABEL
The error in this case had nothing to do with failing to set a release name and/or chart correctly. More explanation of --set issues and solutions is below.
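An alternative that sidesteps the escaping and quoting problems entirely is to move the annotations into a values file passed with -f; a minimal sketch, where the filename probe-values.yaml and the myapp DNS label are hypothetical:
# probe-values.yaml
controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
      service.beta.kubernetes.io/azure-dns-label-name: myapp
Then upgrade with helm upgrade ingress-nginx ingress-nginx/ingress-nginx -f probe-values.yaml.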
The helm upgrade command requires a release name and a chart path. In your case, you missed the release name.
helm upgrade [RELEASE] [CHART] [flags]
helm --debug upgrade --install kamus soluto/kamus should work.
I encountered this error when I wrote --set key value instead of --set key=value. The cause was as stupid as the error message.
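To illustrate (the release, chart, and key names are hypothetical): the missing = turns the value into an extra positional argument, which trips the two-arguments check:
helm upgrade myrelease mychart --set image.tag v2   # wrong: "v2" becomes a third argument
helm upgrade myrelease mychart --set image.tag=v2   # correct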
Helm upgrade requires both a release name and the chart it references. From the documentation:
Usage:
helm upgrade [RELEASE] [CHART] [flags]
According to the documentation for the --install flag, the command you referenced seems like it should work, but the failure may be due to differing Helm versions.
helm install soluto/kamus works for me.
I ran into this error (too) many times.
The first thing that should come to your mind is typos in the command.
For example:
If you're passing the location of values.yaml with -f <path-to-values.yaml>, make sure it's in the right order relative to the other flags passed.
If you're passing inline values with the --set flag, make sure there is no whitespace in the variable assignment, as in this case: --set someVar= $SomeValue (see the sketch after this list).
Run helm help upgrade or helm help install to get more information about each command.
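A sketch of the whitespace pitfall from the list above (names hypothetical): the space after = makes the shell pass the value as a separate argument, producing the same two-arguments error:
helm upgrade myrelease mychart --set someVar= $SomeValue   # wrong: $SomeValue becomes a third argument
helm upgrade myrelease mychart --set someVar=$SomeValue    # correct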
Maybe this detailed information will be helpful for someone new to this!
1. My helm files are here (I made changes in values.yaml to upgrade):
controlplane $ pwd
/root/existing2helm
controlplane $ ls
Chart.yaml charts templates values.yaml
2. Listing current releases
controlplane $ helm list
NAME             NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                APP VERSION
newdeploybyhelm  default    2         2021-02-01 00:39:11.596751325 +0000 UTC  deployed  existing2helm-0.1.0  1.16.0
3. Finally executing the upgrade command
controlplane $ helm upgrade newdeploybyhelm /root/existing2helm
Release "newdeploybyhelm" has been upgraded. Happy Helming!
NAME: newdeploybyhelm
LAST DEPLOYED: Mon Feb 1 00:48:30 2021
NAMESPACE: default
STATUS: deployed
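To confirm the new revision was recorded, you can also inspect the release history:
helm history newdeploybyhelm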
I got this error when I accidentally appended the first-line parameters
--install --create-namespace --wait --atomic
to the end of the parameter list a second time. You may want to check for duplicate parameters, or a duplicated --install flag, if you are using a parameter builder.
Running the command below results in an error, even if I manually delete the namespace beforehand.
$ helm-2.7.2 --kube-context nine-ml-dev install erst/jenkins-devops-package --namespace sud-devops --name sud-devops --version 1.11.0 --values nine-ml-dev/sud-jenkins-server.yaml
Error: release sud-devops failed: namespaces "sud-devops" already exists
But if I leave out --namespace, it goes to default. What's the proper way to specify a namespace for my package?