Why does helm upgrade --install fail when the previous install failed? - kubernetes

This is the helm and tiller version:
> helm version --tiller-namespace data-devops
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
The previous helm installation failed:
helm ls --tiller-namespace data-devops
NAME REVISION UPDATED STATUS CHART NAMESPACE
java-maven-app 1 Thu Aug 9 13:51:44 2018 FAILED java-maven-app-1.0.0 data-devops
When I tried to install it again using this command, it failed:
helm --tiller-namespace data-devops upgrade java-maven-app helm-chart --install \
--namespace data-devops \
--values helm-chart/values/stg-stable.yaml
Error: UPGRADE FAILED: "java-maven-app" has no deployed releases
Is the helm upgrade --install command going to fail if the previous installation failed? I was expecting it to force the install. Any ideas?

This is, or has been, a Helm issue for a while. It only affects the situation where the first install of a chart fails; up to Helm 2.7 it required a manual delete of the failed release before correcting the issue and installing again. However, there is now a --force flag available to address this case - https://github.com/helm/helm/issues/4004
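Per the linked issue, on client versions that include the fix, adding --force to the upgrade from the question should handle this case - a sketch, since the behaviour depends on the exact 2.x client version:
helm --tiller-namespace data-devops upgrade java-maven-app helm-chart --install --force \
  --namespace data-devops \
  --values helm-chart/values/stg-stable.yaml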

This happens when a deployment fails unexpectedly.
First, check the status of the Helm release:
❯ helm ls -n $namespace
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
Most probably you will see nothing about the problematic release, so check again with the -a option:
❯ helm list -n $namespace -a
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
$release_name $namespace 7 $update_date pending-upgrade $chart_name $app_version
As you can see, the release is stuck in the pending-upgrade status.
Check the Helm release secrets:
❯ kubectl get secret -n $namespace
NAME TYPE DATA AGE
sh.helm.release.v1.$release_name.v1 helm.sh/release.v1 1 2d21h
sh.helm.release.v1.$release_name.v2 helm.sh/release.v1 1 21h
sh.helm.release.v1.$release_name.v3 helm.sh/release.v1 1 20h
sh.helm.release.v1.$release_name.v4 helm.sh/release.v1 1 19h
sh.helm.release.v1.$release_name.v5 helm.sh/release.v1 1 18h
sh.helm.release.v1.$release_name.v6 helm.sh/release.v1 1 17h
sh.helm.release.v1.$release_name.v7 helm.sh/release.v1 1 16h
and describe the last one:
❯ kubectl describe secret sh.helm.release.v1.$release_name.v7 -n $namespace
Name: sh.helm.release.v1.$release_name.v7
Namespace: $namespace
Labels: modifiedAt=1611503377
name=$release_name
owner=helm
status=pending-upgrade
version=7
Annotations: <none>
Type: helm.sh/release.v1
Data
====
release: 792744 bytes
You will see that the secret has the same status as the failed deployment. So delete the secret:
❯ kubectl delete secret sh.helm.release.v1.$release_name.v7 -n $namespace
Now you should be able to upgrade the Helm release. You can check its status after the upgrade:
❯ helm list -n $namespace -a
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
$release_name $namespace 7 $update_date deployed $chart_name $app_version
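As an alternative to deleting the release secret, rolling the release back to its last successfully deployed revision often clears a stuck pending-upgrade state (a sketch; use whichever revision helm history reports as deployed):
❯ helm history $release_name -n $namespace
❯ helm rollback $release_name <last-deployed-revision> -n $namespace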

Try:
helm delete --purge <release-name>
This will do the trick.
For Helm 3 onwards you need to uninstall instead, e.g.:
helm uninstall <release-name> -n <namespace>
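Applied to the release from the question (Helm 2, with its tiller namespace), that sequence would look roughly like this:
helm delete --purge java-maven-app --tiller-namespace data-devops
helm --tiller-namespace data-devops upgrade java-maven-app helm-chart --install \
  --namespace data-devops \
  --values helm-chart/values/stg-stable.yaml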

Just to add...
I have often seen the Error: UPGRADE FAILED: "my-app" has no deployed releases error in Helm 3.
Almost every time, the problem was in kubectl, aws-cli, or aws-iam-authenticator rather than Helm itself. A lot of unrelated problems seem to bubble up to this error, which is not ideal.
To diagnose the true issue, run a few simple commands in whichever of these tools you are using; that should quickly reveal the problem (see the combined check after the list).
For example:
aws-cli - aws --version to ensure you have the cli installed.
aws-iam-authenticator - aws-iam-authenticator version to check that this is correctly installed.
kubectl - kubectl version will show if the tool is installed.
kubectl - kubectl config current-context will show if you have provided a valid config that can connect to Kubernetes.
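If you are on EKS, a quick end-to-end check might look like this (a hedged sketch; the cluster name is a placeholder and the aws CLI is assumed to be configured):
aws sts get-caller-identity                      # do my AWS credentials resolve?
aws eks update-kubeconfig --name <cluster-name>  # refresh the kubeconfig entry
kubectl get nodes                                # can kubectl reach the cluster?
helm ls -A                                       # only now exercise Helm itself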

Related

uninstall: Release not loaded: new: release: not found, chart deployed using helm 3

I have both Helm 2 and Helm 3 installed on my localhost. I have created a new chart using Helm 2:
sanket@Admins-MacBook-Pro poc % helm create new
Creating new
This created a chart 'new' using Helm version 2. Now I have deployed the chart using Helm version 3:
sanket@Admins-MacBook-Pro poc % helm3 install new new --namespace test
NAME: new
LAST DEPLOYED: Thu Apr 23 17:56:03 2020
NAMESPACE: test
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace test -l "app.kubernetes.io/name=new,app.kubernetes.io/instance=new" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
Now when I try to delete the 'new' release it shows:
sanket@Admins-MacBook-Pro poc % helm3 delete new
Error: uninstall: Release not loaded: new: release: not found
Any idea how to resolve this issue?
By default, helm3 only shows releases in the default namespace.
Do the following to find your release and delete it.
# Get all releases
helm ls --all-namespaces
# OR
helm ls -A
# Delete release
helm uninstall release_name -n release_namespace
You need to pass --namespace to the delete command as well:
helm3 ls --namespace test
helm3 delete new --namespace test
You can check all your Helm releases and charts:
1. All Helm releases:
helm ls -A
2. Helm releases in a specific namespace:
helm ls -n {releaseNamespace}
And if the release is there:
3. Uninstall it:
helm uninstall {releaseName} -n {releaseNamespace}

Install helm 2.13.0 on Minikube server (1.6.2) could not find tiller

Hey, I'm installing a fresh minikube and trying to init Helm on it, not 3.x.x but version 2.13.0.
$ minikube start
😄 minikube v1.6.2 on Darwin 10.14.6
✨ Automatically selected the 'hyperkit' driver (alternates: [virtualbox])
🔥 Creating hyperkit VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
🚜 Pulling images ...
🚀 Launching Kubernetes ...
⌛ Waiting for cluster to come online ...
🏄 Done! kubectl is now configured to use "minikube"
$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/<user>/.helm.
Error: error installing: the server could not find the requested resource
$ helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
deployment.apps/tiller-deploy created
service/tiller-deploy created
$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/<user>/.helm.
Error: error installing: the server could not find the requested resource
$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find tiller
I tried to do the same in some other random namespace, with no result:
$ kubectl create ns deployment-stuff
namespace/deployment-stuff created
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin \
--user=$(gcloud config get-value account)
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
$ kubectl create serviceaccount tiller --namespace deployment-stuff
kubectl create clusterrolebinding tiller-admin-binding --clusterrole=cluster-admin \
--serviceaccount=deployment-stuff:tiller
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller-admin-binding created
$ helm init --service-account=tiller --tiller-namespace=deployment-stuff
Creating /Users/<user>/.helm
Creating /Users/<user>/.helm/repository
Creating /Users/<user>/.helm/repository/cache
Creating /Users/<user>/.helm/repository/local
Creating /Users/<user>/.helm/plugins
Creating /Users/<user>/.helm/starters
Creating /Users/<user>/.helm/cache/archive
Creating /Users/<user>/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /Users/<user>/.helm.
Error: error installing: the server could not find the requested resource
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm list
Error: could not find tiller
$ helm list --tiller-namespace=kube-system
Error: could not find tiller
$ helm list --tiller-namespace=deployment-stuff
Error: could not find tiller
The same error everywhere: Error: error installing: the server could not find the requested resource. Any ideas how to approach it?
I installed Helm with the commands below and it works fine with my GCP clusters; helm list returns the full list of releases.
wget -c https://get.helm.sh/helm-v2.13.0-darwin-amd64.tar.gz
tar -zxvf helm-v2.13.0-darwin-amd64.tar.gz
mv darwin-amd64/helm /usr/local/bin/helm
To be honest I have no idea what's going on; sometimes it works fine on minikube, sometimes I get these errors.
This can be fixed by deleting the Tiller deployment and service and rerunning the helm init --override command after the first helm init.
So after running the commands you listed:
$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$ helm init --service-account tiller
and then finding out that Tiller could not be found:
$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find tiller
Run the following commands:
1.
$ kubectl delete service tiller-deploy -n kube-system
2.
$ kubectl delete deployment tiller-deploy -n kube-system
3.
helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
After that You can verify if it worked with:
$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find a ready tiller pod
This one needs a little more time; give it a few seconds.
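While waiting, you can watch the pod come up (a quick check, assuming the standard tiller labels app=helm and name=tiller):
$ kubectl get pods -n kube-system -l app=helm,name=tiller -w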
$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Tell me if it worked.
Check why the Tiller pod is failing:
kubectl -n kube-system describe pod tiller-deploy-*
You'll see the following error:
Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.15.1": rpc error: code = Unknown desc = Error response from daemon: Head "https://gcr.io/v2/kubernetes-helm/tiller/manifests/v2.15.1": unknown: Project 'project:kubernetes-helm' not found or deleted.
The reason is that the image location changed, so the old Helm version couldn't pull it.
Pull the image manually:
docker pull ghcr.io/helm/tiller:v2.15.1
Tag the pulled image with the name Helm expected in the first place:
docker tag ghcr.io/helm/tiller:v2.15.1 gcr.io/kubernetes-helm/tiller:v2.15.1
Re-init tiller (helm server):
helm init
and you'll see the tiller deployment running.
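Note that the image has to be present on the Docker daemon the cluster nodes actually use. On minikube that usually means pulling and tagging inside the minikube Docker daemon first (a hedged sketch, assuming the docker-env integration is available):
eval $(minikube docker-env)           # point the docker CLI at minikube's daemon
docker pull ghcr.io/helm/tiller:v2.15.1
docker tag ghcr.io/helm/tiller:v2.15.1 gcr.io/kubernetes-helm/tiller:v2.15.1
eval $(minikube docker-env --unset)   # switch the docker CLI back to the host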

Error: error installing: the server could not find the requested resource HELM Kubernetes

What I Did:
I installed Helm with
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm init --history-max 200
Getting an error:
$HELM_HOME has been configured at /root/.helm.
Error: error installing: the server could not find the requested resource
What does that error mean?
How should I install Helm and Tiller?
Ubuntu version: 18.04
Kubernetes version: 1.16
Helm version:
helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Error: could not find tiller
Update:
I tried @shawndodo's answer but Tiller is still not installed:
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
Update 2:
helm init --history-max 200 works on Kubernetes version 1.15.
I ran into the same problem, then I found this reply here:
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
It works for me. You can see the detail in this issue.
Unfortunately, Helm does not work with the current version of Kubernetes (1.16.0), as we can see in issue #6374.
For now, we can work around the incompatibility by selecting an older version of Kubernetes.
Starting minikube with a previous Kubernetes version
To solve this issue, simply start minikube with the --kubernetes-version param (Ref.):
minikube delete
minikube start --kubernetes-version=1.15.4
Then re-initialize Helm with the following command:
helm init
After that, you will be able to use Helm without problems.
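To confirm the cluster really is on the older version before re-running helm init, a quick sanity check:
kubectl version --short
helm version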
Tiller is the server-side component that your Helm client talks to (Tiller is due to be removed in Helm 3 because of various security issues). When you run helm init, the Helm client installs Tiller on the cluster that your kubectl is currently set up to connect to. Keep in mind that in order to install Tiller you need admin access to the cluster, as Tiller needs cluster-wide admin access. However, there are several different strategies for working with Tiller:
tiller per namespace: This is when you install Tiller in a single namespace and only give it access to that namespace (vastly more secure than giving it cluster-wide admin); you can find an article on how to do this here
tillerless: This is when you run Tiller locally. You will need to export HELM_HOST to point to this Tiller, and Tiller will use the kube config configured at KUBECONFIG; more information can be found here (see the sketch below)
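A minimal sketch of the tillerless approach, assuming the tiller binary is available locally (44134 is Tiller's default listen port):
tiller --storage=secret &           # run Tiller locally, storing release data as cluster secrets
export HELM_HOST=localhost:44134    # point the Helm client at the local Tiller
helm ls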
I ran into the same issue - exactly the same configuration as initial question:
Ubuntu version: 18.04
Kubernetes version: 1.16
@shawndodo's answer didn't work for me. There were some issues with the tiller deployment and the tiller pod was not getting created at all!
I tried installing from the canary build as described in the Helm docs - https://helm.sh/docs/using_helm/#from-canary-builds
helm init --canary-image --upgrade
This didn't work a couple of days ago, but I tried again (with a newer canary build) and it worked today (2019-10-05).
Whether I run into other issues using the canary build remains to be seen, but I got past the initialisation issue...
I tried all the suggestions about changing the API version manually to fix this issue; this got rid of the errors, but things didn't work properly afterwards. So in my case I removed my latest minikube installation and installed an old one on my Mac using the command below (change minikube-darwin-amd64 to minikube-linux-amd64 if needed):
curl -LO https://storage.googleapis.com/minikube/releases/v1.3.0/minikube-darwin-amd64 \
&& sudo install minikube-darwin-amd64 /usr/local/bin/minikube
This downgraded my Kubernetes to v1.15.2, which Helm currently supports.
kubectl version: v1.16.0
helm version: v2.14.3
minikube start --memory=16384 --cpus=4
helm init --service-account tiller --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | sed 's# replicas: 1# replicas: 1\n selector: {"matchLabels": {"app": "helm", "name": "tiller"}}#' | kubectl apply -f -
helm template istio-1.3.3/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
helm template istio-1.3.3/install/kubernetes/helm/istio --name istio --namespace istio-system | kubectl apply -f -
We need to have Tiller installed in the cluster before we start using Helm. The helm init command installs Tiller in the cluster, and we also need RBAC configured in the cluster for Tiller. Here you'll find the RBAC rules required, as per your needs, for your k8s cluster.
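A minimal setup looks like the service-account approach used in other answers on this page (the cluster-admin binding is the simple but broad option):
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller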
Try:
apt-get upgrade helm
In my case it worked.
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -

Tiller is installed but not found by Helm

Background
I have kubernetes installed in clustered mode.
All nodes are up and running
I want to use jenkins-x to get ease of deployment.
Now jenkins-x uses Helm to do this job; Helm comes up with client and server architecture.
Helm setup can be achieved in the following two ways:
Using jenkins-x
jx install --username <username>
Standalone Helm
helm init
This sets up its server (Tiller) by running it in a pod on Kubernetes.
What's the issue
The issue is that when I use the first approach, it installs Tiller and later fails, saying 'Tiller is available but not up and running'.
Created ClusterRoleBinding tiller
retrying after error:existing tiller deployment found but not running, please check the kube-system namespace and resolve any issues
The second approach fails in a similar way.
It also installs Tiller, but Tiller is not found when I try to list it.
helm ls
Error: could not find tiller
So the essence of the issue is:
it installs Tiller but then fails to find it.
helm init
Warning: Tiller is already installed in the cluster.
helm ls
Error: could not find tiller
I just went ahead and installed both Helm and Jx with no problem. So I don't know how to resolve your issue, but you can install it as below, and it should work.
Installing Helm:
$ wget https://kubernetes-helm.storage.googleapis.com/helm-v2.9.1-linux-amd64.tar.gz
$ tar xzvf helm-v2.9.1-linux-amd64.tar.gz
$ cd linux-amd64/
$ sudo cp helm /usr/local/bin/helm
$ helm init
Installing Jx
$ curl -L https://github.com/jenkins-x/jx/releases/download/v1.2.98/jx-linux-amd64.tar.gz | tar xzv
$ sudo mv jx /usr/local/bin
Making Tiller cluster-admin role:
$ kubectl create clusterrolebinding tiller-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
Checking it works:
$ helm install --name prometheus stable/prometheus
$ helm ls
prometheus 1 Sun Jun 3 09:47:12 2018 DEPLOYED prometheus-6.7.0 default
There may be a problem with the tiller pod starting, either due to resources or RBAC. Try these commands:
kubectl get deploy -n kube-system
kubectl get node
That might give more of a clue. If you can find a tiller pod that's failing, maybe try:
kubectl describe pod tiller-1234 -n kube-system

Helm: Error: no available release name found

I am getting a couple of errors with Helm that I can not find explanations for elsewhere. The two errors are below.
Error: no available release name found
Error: the server does not allow access to the requested resource (get configmaps)
Further details of the two errors are in the code block further below.
I have installed a Kubernetes cluster on Ubuntu 16.04. I have a Master (K8SMST01) and two nodes (K8SN01 & K8SN02).
This was created using kubeadm using Weave network for 1.6+.
Everything seems to run perfectly well as far as Deployments, Services, Pods, etc... DNS seems to work fine, meaning pods can access services using the DNS name (myservicename.default).
Using "helm create" and "helm search" work, but interacting with the tiller deployment do not seem to work. Tiller is installed and running according to the Helm install documentation.
root@K8SMST01:/home/blah/charts# helm version
Client: &version.Version{SemVer:"v2.3.0",
GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
root@K8SMST01:/home/blah/charts# helm install ./mychart
Error: no available release name found
root@K8SMST01:/home/blah/charts# helm ls
Error: the server does not allow access to the requested resource (get configmaps)
Here are the running pods:
root@K8SMST01:/home/blah/charts# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
etcd-k8smst01 1/1 Running 4 1d 10.139.75.19 k8smst01
kube-apiserver-k8smst01 1/1 Running 3 19h 10.139.75.19 k8smst01
kube-controller-manager-k8smst01 1/1 Running 2 1d 10.139.75.19 k8smst01
kube-dns-3913472980-dm661 3/3 Running 6 1d 10.32.0.2 k8smst01
kube-proxy-56nzd 1/1 Running 2 1d 10.139.75.19 k8smst01
kube-proxy-7hflb 1/1 Running 1 1d 10.139.75.20 k8sn01
kube-proxy-nbc4c 1/1 Running 1 1d 10.139.75.21 k8sn02
kube-scheduler-k8smst01 1/1 Running 3 1d 10.139.75.19 k8smst01
tiller-deploy-1172528075-x3d82 1/1 Running 0 22m 10.44.0.3 k8sn01
weave-net-45335 2/2 Running 2 1d 10.139.75.21 k8sn02
weave-net-7j45p 2/2 Running 2 1d 10.139.75.20 k8sn01
weave-net-h279l 2/2 Running 5 1d 10.139.75.19 k8smst01
The solution given by kujenga from the GitHub issue worked without any other modifications:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
I think it's an RBAC issue. It seems that Helm isn't ready for 1.6.1's RBAC.
There is an issue open for this on Helm's GitHub:
https://github.com/kubernetes/helm/issues/2224
"When installing a cluster for the first time using kubeadm v1.6.1,
the initialization defaults to setting up RBAC controlled access,
which messes with permissions needed by Tiller to do installations,
scan for installed components, and so on. helm init works without
issue, but helm list, helm install, and so on all do not work, citing
some missing permission or another."
A temporary workaround has been suggested:
"We "disable" RBAC using the command kubectl create clusterrolebinding
permissive-binding --clusterrole=cluster-admin --user=admin
--user=kubelet --group=system:serviceaccounts;"
But I cannot speak for its validity. The good news is that this is a known issue and work is being done to fix it. Hope this helps.
I had the same issue with the kubeadm setup on CentOS 7.
Helm doesn't create a service account when you run "helm init", and the default one doesn't have permission to read from the configmaps - so it fails when it checks whether the deployment name it wants to use is unique.
This got me past it:
kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
But that is giving the default account tons of power; I just did this so I could get on with my work. Helm needs to add the creation of its own service account to the "helm init" code.
All addons in Kubernetes use the "default" service account.
So Helm also runs with the "default" service account. You should grant permissions to it by assigning rolebindings.
For read-only permissions:
kubectl create rolebinding default-view --clusterrole=view --serviceaccount=kube-system:default --namespace=kube-system
For admin access (e.g. to install packages):
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
You can also install the Tiller server in a different namespace using the command below.
First create the namespace
Create the serviceaccount for that namespace
Install Tiller in that namespace using the command below (see the sketch after it)
helm init --tiller-namespace test-namespace
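A rough sketch of those steps, assuming the namespace is called test-namespace as in the command above (the namespace-scoped rolebinding shown is an assumption, adjust it to your needs):
kubectl create namespace test-namespace
kubectl create serviceaccount tiller --namespace test-namespace
kubectl create rolebinding tiller-binding --clusterrole=admin --serviceaccount=test-namespace:tiller --namespace=test-namespace
helm init --service-account tiller --tiller-namespace test-namespace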
This solution has worked for me: https://github.com/helm/helm/issues/3055#issuecomment-397296485
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller --upgrade
$ helm repo update
$ helm install stable/redis --version 3.3.5
But after that, something changed; I have to add the --insecure-skip-tls-verify=true flag to my kubectl commands! I don't know how to fix that, given that I am interacting with a gcloud container cluster.
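If you are on GKE, re-fetching the cluster credentials usually restores a working kubectl TLS configuration (a hedged suggestion; cluster name and zone are placeholders):
gcloud container clusters get-credentials <cluster-name> --zone <zone>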
Per https://github.com/kubernetes/helm/issues/2224#issuecomment-356344286, the following commands resolved the error for me too:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Per https://github.com/kubernetes/helm/issues/3055
helm init --service-account default
This worked for me when the RBAC (serviceaccount) commands didn't.
It's an RBAC issue. You need to have a service account with a cluster-admin role, and you should pass this service account during Helm initialization.
For example, if you have created a service account with the name tiller, your helm command would look like the following:
helm init --service-account=tiller
I followed this blog to resolve this issue. https://scriptcrunch.com/helm-error-no-available-release/
Check the logs of your tiller container:
kubectl logs tiller-deploy-XXXX --namespace=kube-system
If you find something like this:
Error: 'dial tcp 10.44.0.16:3000: connect: no route to host'
then it is probably a firewall/iptables issue as described here; the solution is to remove some rules:
sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited