Error: This command needs 1 argument: chart name

I am following the official Install OneAgent on Kubernetes instructions, and while doing so I get the error mentioned in the title. When I add --name after helm install, I get:
Error: apiVersion 'v2' is not valid. The value must be "v1"
The helm instructions:
helm install dynatrace-oneagent-operator \
  dynatrace/dynatrace-oneagent-operator \
  -n dynatrace --values values.yaml

Well, if you're using this Helm chart, it's stated in its description that it requires Helm 3:
The Dynatrace OneAgent Operator Helm Chart which supports the rollout
and lifecycle of Dynatrace OneAgent in Kubernetes and OpenShift
clusters.
This Helm Chart requires Helm 3. 👈
and you use Helm 2:
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
As to your error message:
Error: apiVersion 'v2' is not valid. The value must be "v1"
it is expected on Helm 2 when running a chart that requires Helm 3, as the chart apiVersion was incremented from v1 to v2 only in Helm 3. In fact, this is one of the major differences between the two releases of Helm. You can read more about it here:
Chart apiVersion:
Helm decided to increment the chart API version to v2 in Helm 3:
# Chart.yaml
-apiVersion: v1 # Helm2
+apiVersion: v2 # Helm3
...
You can install Helm 3 easily by following this official guide.
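For reference, a minimal sketch of that installation following the helm.sh instructions (the script URL is the official one; still, verify it before piping anything into your shell):
# Download and run the official Helm 3 install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
# Confirm the v3 binary is now the one on your PATH
helm version --short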
Note that apart from using the Helm chart, you can also deploy OneAgent Operator on Kubernetes with kubectl, and as you can read in the official Dynatrace docs, this is actually the recommended way to install it:
We recommend installing OneAgent Operator on Kubernetes with kubectl.
If you prefer Helm, you can use the OneAgent Helm chart as a basic
alternative.
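If you go that route, the installation boiled down to something like the sketch below at the time of writing; the manifest URL is taken from the operator's GitHub releases and may have moved since, so double-check the Dynatrace docs:
# Create the namespace the operator expects
kubectl create namespace dynatrace
# Apply the operator manifest published with each release (URL may have changed)
kubectl apply -f https://github.com/Dynatrace/dynatrace-oneagent-operator/releases/latest/download/kubernetes.yaml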

Here is how these errors were resolved for me:
# This command needs 1 argument: chart name
# apiVersion 'v2' is not valid. The value must be "v1"
# release seq-charts failed: namespaces "seq" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "seq"
I started using local PowerShell for Azure Kubernetes. These errors started after I made some changes to my Windows environment, but my fix might work for you too.
PS C:\Users\{User}> Connect-AzAccount
PS C:\Users\{User}> Set-AzContext 'Subscription Name or ID'
PS C:\Users\{User}> az configure --defaults group=AKS
PS C:\Users\{User}> kubectl create namespace seq
PS C:\Users\{User}> kubectl create namespace prometheus-log
PS C:\Users\{User}> C:\ProgramData\chocolatey\choco upgrade chocolatey
PS C:\Users\{User}> C:\ProgramData\chocolatey\choco install kubernetes-helm
After that:
PS C:\Users\{User}> helm install --name prometheus prometheus-community/kube-prometheus-stack --namespace prometheus-log
Error: This command needs 1 argument: chart name
After that, I tried this.
PS C:\Users\{User}> C:\Users\{User}\.azure-helm\helm install --name prometheus prometheus-community/kube-prometheus-stack --namespace prometheus-log
Error: apiVersion 'v2' is not valid. The value must be "v1"
After that, I tried this.
PS C:\Users\{User}> helm install --name seq-charts --namespace seq --set persistence.existingClaim=seq-pvc stable/seq
Error: release seq-charts failed: namespaces "seq" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "seq"
After much trial and error, I discovered that there are two different versions of 'helm' on the system.
C:\Users\{User}\.azure-helm\helm => v2.x.x
C:\ProgramData\chocolatey\lib\kubernetes-helm\tools\windows-amd64\helm => v3.x.x
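If you suspect the same kind of clash, a quick way to check which binary a bare helm resolves to, and its version:
PS C:\Users\{User}> Get-Command helm | Select-Object -ExpandProperty Source
PS C:\Users\{User}> helm version --short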
Finally I tried the following and it worked great: using helm v3.x.x and dropping the '--name' parameter.
PS C:\Users\{User}> C:\ProgramData\chocolatey\lib\kubernetes-helm\tools\windows-amd64\helm repo update
PS C:\Users\{User}> C:\ProgramData\chocolatey\lib\kubernetes-helm\tools\windows-amd64\helm install seq-charts --namespace seq --set persistence.existingClaim=seq-pvc stable/seq
PS C:\Users\{User}> C:\ProgramData\chocolatey\lib\kubernetes-helm\tools\windows-amd64\helm install prometheus prometheus-community/kube-prometheus-stack --namespace prometheus-log --set persistence.existingClaim=prometheus-pvc
It worked great for me!

Please upgrade your Helm version to 3, unless you are using a tillerless version of Helm 2.

Related

I am getting this error while installing prometheus-operator with Helm

This chart is deprecated
Error: INSTALLATION FAILED: failed to install CRD crds/crd-alertmanager.yaml: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
helm install prometheus monitor/prometheus-operator --namespace prometheus
The chart prometheus-operator is deprecated!
Deprecation message:
DEPRECATED
This chart will be renamed, but first must be deprecated before the prometheus-community/helm-charts repo is indexed, so that it won't be listed in the hubs. See [this prometheus-community issue](https://github.com/prometheus-community/community/issues/28#issuecomment-670406329) for reasoning and next steps.
Try the latest one:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace prometheus
N.B.: The apiVersion for custom resource definitions (CRD) is apiextensions.k8s.io/v1 now.
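Two small follow-ups that may help. Helm 3.2+ can create the target namespace for you, and you can sanity-check that the operator's CRDs were registered; a sketch (the monitoring.coreos.com group is the one used by the prometheus-operator CRDs):
# Create the namespace during install (Helm >= 3.2)
$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace prometheus --create-namespace
# Verify the CRDs exist; on Kubernetes 1.16+ they are served via apiextensions.k8s.io/v1
$ kubectl get crds | grep monitoring.coreos.com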

Error: error installing: the server could not find the requested resource

What I Did:
I installed Helm with
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm init --history-max 200
Getting an error:
$HELM_HOME has been configured at /root/.helm.
Error: error installing: the server could not find the requested resource
What does that error mean? How should I install Helm and Tiller?
Ubuntu version: 18.04
Kubernetes version: 1.16
Helm version:
helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Error: could not find tiller
Update:
I tried @shawndodo's answer but Tiller is still not installed:
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
Update 2:
helm init --history-max 200 works on Kubernetes version 1.15.
I ran into the same problem, then found this reply here:
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
It works for me. You can see the details in this issue.
Unfortunately, Helm is not working with the current version of Kubernetes (1.16.0), as we can see in issue #6374.
For now, we can work around the incompatibility by selecting an older version of Kubernetes.
Starting minikube with a previous Kubernetes version
To solve this issue, simply start minikube, setting the version using the --kubernetes-version parameter (Ref.):
minikube delete
minikube start --kubernetes-version=1.15.4
Then initialize Helm again with the following command:
helm init
After that, you will be able to use Helm without problems.
So Tiller is the server-side component that your Helm client talks to (Tiller is due to be removed in Helm 3 because of various security issues). When running helm init, the Helm client installs Tiller on the cluster that your kubectl is currently set up to connect to (keep in mind that in order to install Tiller you need admin access to the cluster, as Tiller needs cluster-wide admin access). However, there are several different strategies for working with Tiller:
Tiller per namespace: install Tiller in a single namespace and only give it access to that namespace (vastly more secure than giving it cluster-wide admin); you can find an article on how to do this here.
Tillerless: run Tiller locally; you will need to export HELM_HOST to point to this Tiller, and Tiller will use the kubeconfig configured at KUBECONFIG. More information can be found here, and a sketch follows below.
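For the tillerless option, a minimal sketch using the third-party rimusz/helm-tiller plugin (an assumption on my part; it is not part of Helm itself):
# Install the plugin, then run tiller locally just for the duration of the command
helm plugin install https://github.com/rimusz/helm-tiller
helm tiller run helm list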
I ran into the same issue - exactly the same configuration as the initial question:
Ubuntu version: 18.04
Kubernetes version: 1.16
@shawndodo's answer didn't work for me. There were some issues with the tiller deployment and the tiller pod was not getting created at all!
I tried installing from the canary build as described in the Helm docs - https://helm.sh/docs/using_helm/#from-canary-builds
helm init --canary-image --upgrade
This didn't work a couple of days ago, but I tried again (with a newer canary build) and it worked today (2019-10-05).
Whether I will run into other issues using the canary build remains to be seen, but I got past the initialisation issue...
I tried all the suggestions about changing the API version manually to fix this issue; this got rid of the errors, but things didn't work properly afterwards. So in my case I removed my latest minikube installation and installed an old one on my Mac using the command below (change minikube-darwin-amd64 to minikube-linux-amd64 if needed):
curl -LO https://storage.googleapis.com/minikube/releases/v1.3.0/minikube-darwin-amd64 \
&& sudo install minikube-darwin-amd64 /usr/local/bin/minikube
This downgraded my Kubernetes to v1.15.2, which Helm currently supports.
kubectl version: v1.16.0
helm version: v2.14.3
minikube start --memory=16384 --cpus=4
helm init --service-account tiller --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | sed 's# replicas: 1# replicas: 1\n selector: {"matchLabels": {"app": "helm", "name": "tiller"}}#' | kubectl apply -f -
helm template istio-1.3.3/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
helm template istio-1.3.3/install/kubernetes/helm/istio --name istio --namespace istio-system | kubectl apply -f -
We need to have Tiller installed in the cluster before we start using Helm. The helm init command installs Tiller in the cluster, and we also need RBAC configured in the cluster for Tiller. Here you'll find the RBAC rules required, depending on your needs, for your k8s cluster; a minimal sketch follows.
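A commonly used (and deliberately broad) setup binds a tiller service account to cluster-admin; narrow it down if that is too much for your cluster:
# Create a service account for tiller and give it cluster-admin
kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# Initialize helm against that service account
helm init --service-account tiller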
Try apt-get upgrade helm; in my case it worked.

Kubernetes service deploying in default namespace instead of defined namespace using Helm

I am trying to deploy my microservice on a Kubernetes cluster in two different environments, dev and test, and I am using a Helm chart to deploy my Kubernetes service. I use a Jenkinsfile to deploy the chart, and inside the Jenkinsfile I added a helm command within a stage like the following:
stage ('helmchartinstall') {
    steps {
        sh 'helm upgrade --install kubekubedeploy --namespace test pipeline/spacestudychart'
    }
}
Here I am passing the --namespace test parameter. But when it deploys, the console output shows the default namespace. I have already created the namespaces test and prod.
When I checked the Helm version, I got a response like the following:
docker@mildevdcr01:~$ helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Have I made a mistake in how I define the namespace here?
The most likely issue here is that the chart already specifies default as metadata.namespace, which in Helm 2 is not overridden by the --namespace parameter.
If this is the cause, a solution would be to remove the namespace specified in metadata.namespace, or to make it a template parameter (aka release value), as sketched below.
Also see https://stackoverflow.com/a/51137448/1977182.
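If you take the template-parameter route, a minimal sketch of what a chart manifest could look like, deriving the namespace from the release (the file name is hypothetical):
# templates/deployment.yaml (sketch): take the namespace from the release instead of hardcoding it
metadata:
  name: {{ .Release.Name }}-spacestudychart
  namespace: {{ .Release.Namespace }}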
Approach 1:
export TILLER_NAMESPACE=your_namespace
helm upgrade -i release_name chart.tgz
Approach 2:
helm upgrade -i release_name --namespace your_namespace chart.tgz

Gitlab-installed Helm: Error: context deadline exceeded

I have a Kubernetes cluster installed in AWS with Kops. I've installed Helm Tiller with the GitLab UI. The Tiller service seems to be working via GitLab; for example, I've installed Ingress from the GitLab UI.
But when trying to use that same Tiller from my CLI, I can't manage to get it working. When I run helm init it says Tiller is already installed (which makes total sense):
helm init --tiller-namespace gitlab-managed-apps --service-account tiller
$HELM_HOME has been configured at C:\Users\danie\.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
But when trying to, for example, list the charts, it takes 5 minutes and then times out:
$ helm list --tiller-namespace gitlab-managed-apps --debug
[debug] Created tunnel using local port: '60471'
[debug] SERVER: "127.0.0.1:60471"
Error: context deadline exceeded
What am I missing, so I can use the GitLab-installed Tiller from my CLI?
Are you sure that your Tiller server is installed in the "gitlab-managed-apps" namespace? By default it's installed to the 'kube-system' one, as per the official installation instructions on the GitLab website, which would mean this is what causes your helm ls command to fail (just skip that flag).
The best way to verify it is via:
kubectl get deploy/tiller-deploy -n gitlab-managed-apps
Do you see any Tiller-related deployment object in that namespace?
Assuming you can operate your Kops cluster with the current kube context, you should have no problem running the Helm client locally. You can always explicitly use the --kube-context argument with the helm command.
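For instance (the context name my-kops-cluster is hypothetical; use whatever kubectl config get-contexts lists for your cluster):
helm list --kube-context my-kops-cluster --tiller-namespace gitlab-managed-apps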
Update:
I think I know what causes your problem: Helm, when installed via the GitLab UI, uses a secured connection (SSL) between helm and tiller (proof here).
Knowing that, you should retrieve the set of certificates from the Secret object that is mounted on the Tiller Pod:
#The CA
ca.cert.pem
ca.key.pem
#The Helm client files
helm.cert.pem
helm.key.pem
#The Tiller server files
tiller.cert.pem
tiller.key.pem
and then connect the helm client to the tiller server using the following command, as explained here:
helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
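The certificates live in a Secret in the Tiller namespace. A sketch of pulling them out (the secret name tiller-secret and the data keys ca.crt/tls.crt/tls.key match what GitLab-managed Tiller used at the time, but treat them as assumptions and inspect the secret first):
# Inspect the secret, then dump the cert material to local files
kubectl get secret tiller-secret -n gitlab-managed-apps -o yaml
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.cert.pem
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath='{.data.tls\.crt}' | base64 -d > helm.cert.pem
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath='{.data.tls\.key}' | base64 -d > helm.key.pem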
Here's the way I've been doing this.
First open a shell in the gitlab tiller pod:
# replace the pod name, tiller-deploy-5bb888969c-7bzpl with your own
kubectl exec -n gitlab-managed-apps tiller-deploy-5bb888969c-7bzpl -it -- sh
Then use the pod's native helm and certs... to connect to tiller
$ env | grep TILLER_TLS_CERTS
#cd to the result, in my case /etc/certs
$ cd /etc/certs
# connect to tiller with the certs using the native helm (/helm) in my case:
$ /helm ls --tls --tls-ca-cert ./ca.crt --tls-cert ./tls.crt --tls-key ./tls.key

Use Gitlab-installed Helm from CLI. Could not find tiller

I've created a kubernetes cluster on AWS using Kops, and I've correctly configured the cluster on Gitlab.
I've installed Helm Tiller and Ingress from Gitlab's panel, but I now wish to uninstall the Ingress chart.
I'm not sure how to uninstall the ingress chart. What I'm trying now is configuring my Helm CLI to delete the ingress release, but I can't get the Helm CLI correctly configured. The Tiller stuff is deployed in the gitlab-managed-apps namespace, so I'm trying the following command:
$ helm init --tiller-namespace gitlab-managed-apps --service-account tiller --upgrade
HELM_HOME has been configured at C:\Users\danie\.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
But then when I'm trying to issue the helm ls command I'm getting the following error:
$ helm ls
Error: could not find tiller
But the service account exists in the namespace:
$ kubectl get serviceAccounts -n gitlab-managed-apps
NAME                    SECRETS   AGE
default                 1         23h
ingress-nginx-ingress   1         23h
tiller                  1         23h
Any ideas how to get the CLI correctly configured?
You have installed Tiller in a namespace other than the default one. Out of the box, the Helm CLI assumes Tiller is installed in kube-system, and that this is the namespace you want to "get in touch with".
This can be fixed by using the --tiller-namespace flag, which for your example would be:
helm list --tiller-namespace gitlab-managed-apps
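Once the CLI can see Tiller, deleting the Ingress release should work the same way; the release name ingress here is a guess based on the chart, so check the helm list output for the actual name:
helm delete --purge ingress --tiller-namespace gitlab-managed-apps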
Try using Helm version 3 onward. Helm versions 1 and 2 are actually composed of two pieces: the Helm CLI, and Tiller, the Helm server-side component. It is important to note that Helm 3 removes the Tiller component, and is thus more secure.