I have deployed jupyterhub on my GKE cluster using helm. However, when I run helm list --all (or helm list --failed etc) I see no output.
I can confirm that tiller is running in my cluster:
$ helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
And I can see the tiller pod:
$ kubectl get pods -n kube-system | grep tiller
tiller-deploy-778f674bf5-5jksm 1/1 Running 0 132d
I can also see that my deployment of jupyterhub is running using kubectl get pods -n jhub.
How can I determine why the output of helm list is empty?
I had the same issue, where helm list was showing empty output.
In case anyone lands on this page looking for a solution, please check below.
Source (similar): https://github.com/helm/helm/issues/7146
In short: the issue is that you need to specify a namespace when listing releases, or use --all-namespaces:
helm list --all-namespaces
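Note that this applies to Helm 3, where releases are scoped to namespaces. For example, to list only the JupyterHub release from the question (installed into the jhub namespace):

helm list -n jhub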
I have a strong feeling you are missing some permissions. This is a GKE cluster, so RBAC is enabled.
The standard practice is to first create a dedicated service account in the appropriate namespace. For example's sake, let's say kube-system:
kubectl create serviceaccount tiller --namespace kube-system
Then you need to give appropriate permissions to this service account.
FOR TESTING / NON-SECURE !!!
Let's allow this service account to run with super-user privileges, i.e. run as cluster-admin:
kubectl create clusterrolebinding tiller-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
FOR PRODUCTION / SECURE
Create a Role that grants the minimum privileges Tiller needs to run, and associate it with the tiller service account using a RoleBinding.
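A minimal sketch of what that could look like (the names tiller-manager and tiller-binding are only illustrative, and you should narrow the rules to the API groups your charts actually need):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: kube-system
rules:
- apiGroups: ["", "apps", "batch", "extensions"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager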
Then go ahead and initialize tiller with the associated serviceAccount.
helm init --service-account tiller
helm install --name my-rabbitserver stable/rabbitmq --namespace rabbit
Error: release my-rabbitserver failed: namespaces "rabbit" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "rabbit"
I have tried with (and without) a rabbit namespace created before the install attempt.
I am using helm 2.16.9, so I need to qualify the name of my installation with --name.
I am using this against a Google Cloud kubernetes cluster
It looks as though the Helm tiller pod did not have sufficient privileges.
I found this similar issue:
https://support.sumologic.com/hc/en-us/articles/360037704393-Kubernetes-Helm-install-fails-with-Error-namespaces-sumologic-is-forbidden-User-system-serviceaccount-kube-system-default-cannot-get-resource-namespaces-in-API-group-in-the-namespace-sumologic-
Basically I had to delete the tiller deployment, set up a tiller ServiceAccount yaml with the right permissions, apply it to give Tiller access to kube-system, and then run helm init again with the new service account.
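Roughly, the fix looks something like the sketch below (the file name tiller-rbac.yaml is my own; the linked article has the exact manifest):

# tiller-rbac.yaml — service account for Tiller plus a cluster-admin binding
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

and then:

kubectl apply -f tiller-rbac.yaml
helm init --service-account tiller --upgrade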
The helm rabbitmq installs then appear to work as advertised.
I thought helm was supposed to make life easier, but it still has its own limitations and needs additional yaml files to get it working as advertised.
While installing Helm 3 (stable), I found that it no longer deploys Tiller for fetching cluster details; it works as a client-only utility. My question is: if it does not use the Tiller concept for fetching details, how does it connect to EKS?
I have already installed kubectl and it is running fine. Is it the case that the helm client depends on kubectl?
I performed the following steps:
1. helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}
2. kubectl create serviceaccount tiller --namespace kube-system
serviceaccount/tiller created
3. notepad rbac-config.yaml
4. kubectl apply -f rbac-config.yaml
clusterrolebinding.rbac.authorization.k8s.io/tiller-role-binding created
5. helm init --service-account tiller
Error: unknown flag: --service-account
I know steps 2, 3, and 4 are not required in Helm 3, but I am curious how Helm 3 interacts, as a client, with the EKS cluster.
Just like kubectl, helm also uses kubeconfig to communicate with the cluster.
So both kubectl and helm depend on the cluster's config file rather than depending on each other.
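For example, Helm 3 reads the same kubeconfig that kubectl does, and exposes --kubeconfig and --kube-context flags much like kubectl's --kubeconfig and --context. The context name below is only illustrative:

export KUBECONFIG=$HOME/.kube/config   # the same file kubectl uses
# for EKS, this file is typically populated with: aws eks update-kubeconfig --name <cluster-name>
helm ls --kube-context my-eks-context  # pick a specific context, if you have several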
I have followed the instructions from this blog post to set up a k3s cluster on a couple of Raspberry Pi 4s:
I'm now trying to get my hands dirty with Traefik as the front end, but I'm having issues with the way it has been deployed as a 'HelmChart', I think.
From the k3s docs
It is also possible to deploy Helm charts. k3s supports a CRD
controller for installing charts. A YAML file specification can look
as following (example taken from
/var/lib/rancher/k3s/server/manifests/traefik.yaml):
I have been starting k3s with the --no-deploy traefik option so that I can add Traefik manually with my own settings. I therefore apply a yaml like this:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
    dashboard:
      enabled: true
      domain: "traefik.k3s1.local"
But when trying to iterate over settings to get it working as I want, I'm having trouble tearing it down. If I try kubectl delete -f on this yaml it just hangs indefinitely. And I can't seem to find a clean way to delete all the resources manually either.
I've been resorting to reinstalling my entire cluster over and over because I can't seem to clean up properly.
Is there a way to delete all the resources created by a chart like this without the helm cli (which I don't even have)?
Are you sure that kubectl delete -f is hanging?
I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.
As far as I can tell, when you issue the kubectl delete -f, a pod in the kube-system namespace named helm-delete-* should spin up and try to delete the resources deployed via helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use that pod name to look at the logs with kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.
An example of what I did was:
kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
I see two options here (examples of both below):
Use the --now flag to delete the resources in your yaml file with minimal delay.
Use the --grace-period=0 --force flags to force delete the resources.
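For example, assuming the manifest from the question is saved as traefik.yaml:

kubectl delete -f traefik.yaml --now                      # short grace period
kubectl delete -f traefik.yaml --grace-period=0 --force   # force deletion; use with care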
There are other options, but you'll need the Helm CLI for them.
Please let me know if that helped.
I have installed Rancher 2 and created a Kubernetes cluster of internal VMs (no AWS / gcloud).
The cluster is up and running.
I logged into one of the nodes.
1) Installed kubectl and executed kubectl cluster-info. It listed my cluster information correctly.
2) Installed helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
root@lnmymachine # helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
3) Configured helm referencing Rancher Helm Init
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
--clusterrole cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account tiller
Tried installing Jenkins via helm
root@lnmymachine # helm ls
Error: Unauthorized
root@lnmymachine # helm install --name initial stable/jenkins
Error: the server has asked for the client to provide credentials
I browsed similar issues, and a few of them were due to multiple clusters. I have only one cluster, and kubectl reports all information correctly.
Any idea what's happening?
It seems there is a mistake while creating the ClusterRoleBinding:
Instead of --clusterrole cluster-admin, you should have --clusterrole=cluster-admin
You can check whether this is the case by verifying that the ServiceAccount and ClusterRoleBinding were created correctly:
kubectl describe -n kube-system sa tiller
kubectl describe clusterrolebinding tiller
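If you prefer the raw values over the describe output, a jsonpath query along these lines shows which role and subject the binding actually points at:

kubectl get clusterrolebinding tiller \
  -o jsonpath='{.roleRef.kind}/{.roleRef.name} {.subjects[0].namespace}/{.subjects[0].name}{"\n"}'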
It seems they have already fixed this on the Rancher Helm Init page.
I was facing the same issue, but the following steps worked for me.
root@node1:~# helm install --name prom-operator stable/prometheus-operator --namespace monitoring
Error: the server has asked for the client to provide credentials
Step 1: Delete the Service Account
root@node1:~# kubectl delete serviceaccount --namespace kube-system tiller
serviceaccount "tiller" deleted
Step 2: Delete the cluster role binding
root@node1:~# kubectl delete clusterrolebinding tiller-cluster-rule
clusterrolebinding.rbac.authorization.k8s.io "tiller-cluster-rule" deleted
Step 3: Remove the helm directory
root@node1:~# rm -rf .helm/
Step 4: Create the Service account again.
root@node1:~# kubectl create serviceaccount tiller --namespace kube-system
serviceaccount/tiller created
Step 5: Create the cluster role binding
root@node1:~# kubectl create clusterrolebinding tiller-cluster-rule \
> --clusterrole=cluster-admin \
> --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
Step 6: Run the helm init command:
helm init --service-account=tiller
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Step 7: Delete the tiller-deploy-xxx pod
kubectl delete pod -n kube-system tiller-deploy-5d58456765-xlns2
pod "tiller-deploy-5d58456765-xlns2" deleted
Wait till it is recreated.
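Rather than polling by hand, one way to wait for the replacement pod is to watch the deployment roll out:

kubectl -n kube-system rollout status deployment/tiller-deploy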
Step 8: Install the helm charts.
helm install --name prom-operator stable/prometheus-operator --namespace monitoring
I am getting a couple of errors with Helm that I cannot find explanations for elsewhere. The two errors are below.
Error: no available release name found
Error: the server does not allow access to the requested resource (get configmaps)
Further details of the two errors are in the code block further below.
I have installed a Kubernetes cluster on Ubuntu 16.04. I have a Master (K8SMST01) and two nodes (K8SN01 & K8SN02).
This was created using kubeadm, with the Weave network add-on for Kubernetes 1.6+.
Everything seems to run perfectly well as far as Deployments, Services, Pods, etc... DNS seems to work fine, meaning pods can access services using the DNS name (myservicename.default).
Using "helm create" and "helm search" work, but interacting with the tiller deployment do not seem to work. Tiller is installed and running according to the Helm install documentation.
root@K8SMST01:/home/blah/charts# helm version
Client: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
root@K8SMST01:/home/blah/charts# helm install ./mychart
Error: no available release name found
root@K8SMST01:/home/blah/charts# helm ls
Error: the server does not allow access to the requested resource (get configmaps)
Here are the running pods:
root@K8SMST01:/home/blah/charts# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
etcd-k8smst01 1/1 Running 4 1d 10.139.75.19 k8smst01
kube-apiserver-k8smst01 1/1 Running 3 19h 10.139.75.19 k8smst01
kube-controller-manager-k8smst01 1/1 Running 2 1d 10.139.75.19 k8smst01
kube-dns-3913472980-dm661 3/3 Running 6 1d 10.32.0.2 k8smst01
kube-proxy-56nzd 1/1 Running 2 1d 10.139.75.19 k8smst01
kube-proxy-7hflb 1/1 Running 1 1d 10.139.75.20 k8sn01
kube-proxy-nbc4c 1/1 Running 1 1d 10.139.75.21 k8sn02
kube-scheduler-k8smst01 1/1 Running 3 1d 10.139.75.19 k8smst01
tiller-deploy-1172528075-x3d82 1/1 Running 0 22m 10.44.0.3 k8sn01
weave-net-45335 2/2 Running 2 1d 10.139.75.21 k8sn02
weave-net-7j45p 2/2 Running 2 1d 10.139.75.20 k8sn01
weave-net-h279l 2/2 Running 5 1d 10.139.75.19 k8smst01
The solution given by kujenga from the GitHub issue worked without any other modifications:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
I think it's an RBAC issue. It seems that Helm isn't ready for Kubernetes 1.6.1's RBAC.
There is an issue open for this on Helm's GitHub:
https://github.com/kubernetes/helm/issues/2224
"When installing a cluster for the first time using kubeadm v1.6.1,
the initialization defaults to setting up RBAC controlled access,
which messes with permissions needed by Tiller to do installations,
scan for installed components, and so on. helm init works without
issue, but helm list, helm install, and so on all do not work, citing
some missing permission or another."
A temporary workaround has been suggested:
"We "disable" RBAC using the command kubectl create clusterrolebinding
permissive-binding --clusterrole=cluster-admin --user=admin
--user=kubelet --group=system:serviceaccounts;"
But I cannot speak for its validity. The good news is that this is a known issue and work is being done to fix it. Hope this helps.
I had the same issue with a kubeadm setup on CentOS 7.
Helm doesn't create a service account when you run "helm init", and the default one doesn't have permission to read the configmaps, so it fails when it checks whether the deployment name it wants to use is unique.
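You can confirm this is the problem by asking the API server whether the default service account is allowed to read configmaps in kube-system (where Tiller keeps its release data):

kubectl auth can-i get configmaps --namespace kube-system \
  --as system:serviceaccount:kube-system:default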
This got me past it:
kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
But that gives the default account tons of power; I just did this so I could get on with my work. Helm needs to add the creation of its own service account to the "helm init" code.
All add-ons in Kubernetes use the "default" service account.
So Helm also runs with the "default" service account. You should grant permissions to it by assigning role bindings.
For read-only permissions:
kubectl create rolebinding default-view --clusterrole=view \
  --serviceaccount=kube-system:default --namespace=kube-system
For admin access (e.g. to install packages):
kubectl create clusterrolebinding add-on-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default
You can also install the tiller server in a different namespace:
First, create the namespace.
Then create the serviceaccount for that namespace.
Then install tiller into that namespace using the command below (a fuller sketch of these steps follows).
helm init --tiller-namespace test-namespace
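Putting those steps together, a rough sketch (test-namespace is just the example name from the command above, and the service account still needs appropriate RBAC as discussed earlier):

kubectl create namespace test-namespace
kubectl --namespace test-namespace create serviceaccount tiller
helm init --service-account tiller --tiller-namespace test-namespace
# later helm commands must also point at that tiller, e.g.:
helm ls --tiller-namespace test-namespace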
This solution has worked for me: https://github.com/helm/helm/issues/3055#issuecomment-397296485
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller --upgrade
$ helm repo update
$ helm install stable/redis --version 3.3.5
But after that, something changed; I have to add the --insecure-skip-tls-verify=true flag to my kubectl commands! I don't know how to fix that, given that I am interacting with a gcloud container cluster.
Per https://github.com/kubernetes/helm/issues/2224#issuecomment-356344286, the following commands resolved the error for me too:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Per https://github.com/kubernetes/helm/issues/3055
helm init --service-account default
This worked for me when the RBAC (serviceaccount) commands didn't.
It's an RBAC issue. You need to have a service account with the cluster-admin role, and you should pass this service account during Helm initialization.
For example, if you have created a service account with the name tiller, your helm command would look like the following:
helm init --service-account=tiller
I followed this blog post to resolve the issue: https://scriptcrunch.com/helm-error-no-available-release/
Check the logs of your tiller pod:
kubectl logs tiller-deploy-XXXX --namespace=kube-system
If you find something like this:
Error: 'dial tcp 10.44.0.16:3000: connect: no route to host'
Then it is probably a firewall/iptables issue, as described here; the solution is to remove some rules:
sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited
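If you want to confirm those REJECT rules are actually present before removing them, something like this should show them:

sudo iptables -L INPUT -n --line-numbers | grep icmp-host-prohibited
sudo iptables -L FORWARD -n --line-numbers | grep icmp-host-prohibited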