I'm having trouble installing Helm on one of my GKE clusters through the gcloud shell.
When I run helm install --name mongo-rs-mongodb-replicaset -f 3-values.yaml stable/mongodb-replicaset --debug, this is what I get:
[debug] Created tunnel using local port: '39387'
[debug] SERVER: "127.0.0.1:39387"
[debug] Original chart version: ""
[debug] Fetched stable/mongodb-replicaset to /home/idan/.helm/cache/archive/mongodb-replicaset-3.9.6.tgz
[debug] CHART PATH: /home/idan/.helm/cache/archive/mongodb-replicaset-3.9.6.tgz
Error: the server has asked for the client to provide credentials
My service account is set properly:
kubectl describe serviceaccount tiller --namespace kube-system
Name: tiller
Namespace: kube-system
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: tiller-token-vbrrn
Tokens: tiller-token-vbrrn
Events: <none>
kubectl describe clusterrolebinding tiller
Name: tiller
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount tiller kube-system
I'm an owner in my project's IAM, and I'm not sure which credentials I should provide - I have never seen this error before. I tried initializing it with helm init --upgrade too.
Did you set up RBAC?
If not, set it up and run helm init --service-account tiller --upgrade; that should fix your problem.
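For reference, a minimal sketch of that RBAC setup using imperative kubectl commands (the tiller service account name and cluster-admin binding mirror what the question already shows; adjust to your own policy):
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade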
After every solution I found failed, I re-created my cluster, ran the same commands, and it simply worked...
I am trying to install PostgreSQL using Helm on my Kubernetes cluster.
I get an "error in transport" when I run the helm install command.
I have tried different solutions online; none worked.
helm install --name realtesting stable/postgresql --debug
The expected result is a deployed PostgreSQL on my Kubernetes cluster.
Please help!
It seems that you have not initialized helm with a service account.
In rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Step 1: kubectl apply -f rbac-config.yaml
Step 2: helm init --service-account tiller --history-max 200
Step 3: Test the setup with helm ls. There will be no output from running this command, and that is expected. Now you can run helm install --name realtesting stable/postgresql
You need to deploy the Tiller server first.
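For example, assuming a tiller service account with suitable RBAC permissions already exists (see the earlier answers), something like:
helm init --service-account tiller
kubectl -n kube-system rollout status deployment/tiller-deploy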
Then follow the steps below to verify and install:
master $ kubectl get po -n kube-system |grep tiller
tiller-deploy-5bcf6f5c7c-km8hn 1/1 Running 0 18s
master $ helm install --name realtesting stable/postgresql --debug
[debug] Created tunnel using local port: '32876'
[debug] SERVER: "127.0.0.1:32876"
[debug] Original chart version: ""
[debug] Fetched stable/postgresql to /root/.helm/cache/archive/postgresql-4.0.1.tgz
[debug] CHART PATH: /root/.helm/cache/archive/postgresql-4.0.1.tgz
NAME: realtesting
REVISION: 1
RELEASED: Fri May 10 08:52:11 2019
CHART: postgresql-4.0.1
I'm trying to install Prometheus on my K8s cluster.
When I run the command
kubectl get namespaces
I get the following namespaces:
default Active 26h
kube-public Active 26h
kube-system Active 26h
monitoring Active 153m
prod Active 5h49m
Now I want to install Prometheus via
helm install stable/prometheus --name prom -f k8s-values.yml
and I get this error:
Error: release prom-demo failed: namespaces "default" is forbidden:
User "system:serviceaccount:kube-system:default" cannot get resource
"namespaces" in API group "" in the namespace "default"
Even if I switch to the monitoring namespace, I get the same error.
The k8s-values.yml looks like the following:
rbac:
  create: false
server:
  name: server
  service:
    nodePort: 30002
    type: NodePort
Any idea what could be missing here?
You are getting this error because you are using RBAC without granting Tiller the right permissions.
Give Tiller the permissions it needs (taken from https://github.com/helm/helm/blob/master/docs/rbac.md):
Example: Service account with cluster-admin role
In rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Note: The cluster-admin role is created by default in a Kubernetes cluster, so you don't have to define it explicitly.
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
Create a service account for prometheus:
Change the value of rbac.create to true:
rbac:
  create: true
server:
  name: server
  service:
    nodePort: 30002
    type: NodePort
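With rbac.create enabled, the install from the question can then be retried, for example:
helm install stable/prometheus --name prom -f k8s-values.yml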
Look at the Prometheus Operator to spin up all the monitoring services from the Prometheus stack.
The link below is helpful:
https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus/manifests
All the manifests are listed there. Go through those files and deploy whatever you need to monitor in your K8s cluster.
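One rough way to apply them, assuming you have cloned the repository (the directory layout differs between releases, so check it locally first; CRD-backed resources may need a second pass once the CRDs register):
git clone https://github.com/coreos/prometheus-operator.git
cd prometheus-operator/contrib/kube-prometheus
kubectl apply -f manifests/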
I set up a K8s (1.11) cluster using the kubeadm tool. There is one master and one node in the cluster.
I deployed the dashboard UI there:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Created service account (followed this link: https://github.com/kubernetes/dashboard/wiki/Creating-sample-user)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
and
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Start kube proxy: kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
And access dashboard from remote host using this URL: http://<k8s master node IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
It's asking for a token to log in. I got the token using this command: kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
After copying and pasting the token in the browser, it's not logging in. It's not showing an authentication error either... Not sure what is wrong with this. Is my token wrong, or is my kube proxy command wrong?
I recreated all the steps in accordance with what you've posted.
It turns out the issue is in the <k8s master node IP>; you should use localhost in this case. So to access the proper dashboard, you have to use:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
When you start kubectl proxy, you create a tunnel to your apiserver on the master node. By default, the Dashboard starts with ServiceType: ClusterIP. The port on the master node in this mode is not open, and that is the reason you can't reach it on the master node IP. If you would like to use the master node IP, you have to change the ServiceType to NodePort.
You have to delete the old service and update the config by changing service type to NodePort as in the example below (note that ClusterIP is not there because it is assumed by default).
Create a new yaml file named newservice.yaml:
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Delete the old service
kubectl delete service kubernetes-dashboard -n kube-system
Apply the new service
kubectl apply -f newservice.yaml
Run describe service
kubectl describe svc kubernetes-dashboard -n kube-system | grep "NodePort"
and you can use that port with the IP address of the master node
Type: NodePort
NodePort: <unset> 30518/TCP
http://<k8s master node IP>:30518/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Note that the port number is generated randomly and yours will probably be different.
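If you would rather have a fixed port, you can also set nodePort explicitly in the Service spec; a minimal sketch, where 30443 is just an arbitrary value inside the default NodePort range (30000-32767):
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard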
I have installed a cluster on Google Kubernetes Engine.
And then, I created namespace "staging"
$ kubectl get namespaces
default Active 26m
kube-public Active 26m
kube-system Active 26m
staging Active 20m
Then, I switched to operate in the staging namespace
$ kubectl config use-context staging
$ kubectl config current-context
staging
And then, I installed postgresql using helm on staging namespace
helm install --name staging stable/postgresql
But I got:
Error: release staging failed: namespaces "staging" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "staging": Unknown user "system:serviceaccount:kube-system:default"
What does it mean? How do I get it working?
Thank you.
As your cluster is RBAC enabled, it seems your Tiller Pod does not have enough permissions.
You are using the default ServiceAccount, which lacks the RBAC permissions Tiller requires.
All you need is to create a ClusterRole, a ClusterRoleBinding and a ServiceAccount. With them you can provide the necessary permissions to your Pod.
Follow these steps:
1. Create ClusterRole tiller:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
Note: I have used full permission here.
2. Create ServiceAccount tiller in the kube-system namespace:
$ kubectl create sa tiller -n kube-system
3. Create ClusterRoleBinding tiller:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: tiller
  apiGroup: rbac.authorization.k8s.io
Now you need to use this ServiceAccount in your Tiller Deployment.
As you already have one, edit it:
$ kubectl edit deployment -n kube-system tiller-deploy
Set serviceAccountName to tiller under the PodSpec.
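For illustration, the relevant part of the edited Deployment would look roughly like this (all other fields omitted; the field path is spec.template.spec.serviceAccountName):
spec:
  template:
    spec:
      serviceAccountName: tiller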
Read more about RBAC
Try:
helm init --upgrade --service-account tiller
as suggested by Scott S in this comment.
I have installed Helm 2.6.2 on a Kubernetes 8 cluster. helm init worked fine, but when I run helm list it gives this error.
helm list
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
How do I fix this RBAC error message?
Once these commands:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade
were run, the issue was solved.
More Secure Answer
The accepted answer gives full admin access to Helm, which is not the best solution security-wise. With a little more work, we can restrict Helm's access to a particular namespace. More details in the Helm documentation.
$ kubectl create namespace tiller-world
namespace "tiller-world" created
$ kubectl create serviceaccount tiller --namespace tiller-world
serviceaccount "tiller" created
Define a Role that allows Tiller to manage all resources in tiller-world like in role-tiller.yaml:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
Then run:
$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
In rolebinding-tiller.yaml,
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
Then run:
$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created
Afterwards you can run helm init to install Tiller in the tiller-world namespace.
$ helm init --service-account tiller --tiller-namespace tiller-world
Now prefix all commands with --tiller-namespace tiller-world or set TILLER_NAMESPACE=tiller-world in your environment variables.
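For example (myapp and stable/postgresql are just placeholders for your own release and chart):
export TILLER_NAMESPACE=tiller-world
helm install --name myapp stable/postgresql
# or, equivalently, without the environment variable:
helm install --tiller-namespace tiller-world --name myapp stable/postgresql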
More Future-Proof Answer
Stop using Tiller. Helm 3 removes the need for Tiller completely. If you are using Helm 2, you can use helm template to generate the yaml from your Helm chart and then run kubectl apply to apply the objects to your Kubernetes cluster.
helm template --name foo --namespace bar --output-dir ./output ./chart-template
kubectl apply --namespace bar --recursive --filename ./output -o yaml
Helm runs with the "default" service account. You should provide permissions to it.
For read-only permissions:
kubectl create rolebinding default-view --clusterrole=view --serviceaccount=kube-system:default --namespace=kube-system
For admin access (e.g. to install packages):
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
The default serviceaccount does not have API permissions. Helm likely needs to be assigned a service account, and that service account given API permissions. See the RBAC documentation for granting permissions to service accounts: https://kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
kubectl apply -f your-config-file-name.yaml
and then update the Helm installation to use the serviceAccount:
helm init --service-account tiller --upgrade
I got this error while trying to install Tiller in offline mode. I thought the tiller service account didn't have enough rights, but it turns out that a network policy was blocking the communication between Tiller and the API server.
The solution was to create a network policy for Tiller allowing all egress communication from Tiller.
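A minimal sketch of such a policy, assuming Tiller runs in kube-system with the default labels that helm init applies (app: helm, name: tiller); verify the labels on your own tiller-deploy pod before applying:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tiller-allow-egress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app: helm
      name: tiller
  policyTypes:
  - Egress
  egress:
  - {}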
export TILLER_NAMESPACE=<your-tiller-namespace> solved it for me, if <your-tiller-namespace> is not kube-system. This points the Helm client to the right Tiller namespace.
If you are using an EKS cluster from AWS and are facing the forbidden issue (e.g. forbidden: User ... cannot list resource "jobs" in API group "batch" in the namespace "default"), then this worked for me:
Solution:
Ensure you have configured AWS.
Ensure that the configured user has permission to access the cluster (see the sketch below).
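A sketch of the usual checks, assuming the AWS CLI is installed and <cluster-name> / <region> are placeholders for your cluster:
aws sts get-caller-identity                                          # confirm which IAM identity you are calling with
aws eks update-kubeconfig --name <cluster-name> --region <region>    # refresh the kubeconfig for the cluster
kubectl describe configmap aws-auth -n kube-system                   # check that this identity is mapped in aws-auth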