Error in Transport for helm install package - kubernetes

I am trying to install postgresql using helm on my kubernetes cluster.
I get "Error in transport" when I run the helm install command.
I have tried different solutions online, but none worked.
helm install --name realtesting stable/postgresql --debug
The expected result is a deployed postgresql on my kubernetes cluster.
Please help!

It seems that you have not initialized helm with a service account.
In rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Step 1: kubectl apply -f rbac-config.yaml
Step 2: helm init --service-account tiller --history-max 200
Step 3: Test the setup with helm ls. There will be no output from this command, and that is expected. Now you can run helm install --name realtesting stable/postgresql
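If you want to double-check the setup before installing, you can verify that the service account, binding, and Tiller pod are in place (a quick sketch, assuming the resource names used above and the default Tiller labels):

```shell
# Verify the tiller ServiceAccount and its cluster-admin binding exist
kubectl -n kube-system get serviceaccount tiller
kubectl get clusterrolebinding tiller

# Confirm the tiller pod is running after helm init
kubectl -n kube-system get pods -l app=helm,name=tiller
```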

You need to deploy the Tiller server, then follow the steps below:
master $ kubectl get po -n kube-system |grep tiller
tiller-deploy-5bcf6f5c7c-km8hn 1/1 Running 0 18s
master $ helm install --name realtesting stable/postgresql --debug
[debug] Created tunnel using local port: '32876'
[debug] SERVER: "127.0.0.1:32876"
[debug] Original chart version: ""
[debug] Fetched stable/postgresql to /root/.helm/cache/archive/postgresql-4.0.1.tgz
[debug] CHART PATH: /root/.helm/cache/archive/postgresql-4.0.1.tgz
NAME: realtesting
REVISION: 1
RELEASED: Fri May 10 08:52:11 2019
CHART: postgresql-4.0.1

Related

Unable to initialize helm (tiller) on newly created GKE cluster

I have just created a GKE cluster on Google Cloud Platform and installed helm in the cloud console:
$ helm version
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}
I have also created the necessary serviceaccount and clusterrolebinding objects:
$ cat helm-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
$ kubectl apply -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
However trying to initialise tiller gives me the following error:
$ helm init --service-account tiller --history-max 300
Error: unknown flag: --service-account
Why is that?
Helm 3 is a major upgrade: the Tiller component is now obsolete.
There is no helm init command anymore, so the --service-account flag has been removed along with it.
The internal implementation of Helm 3 has changed considerably from Helm 2; the most apparent change is the removal of Tiller.
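With Helm 3 the same install works without any Tiller setup at all (a sketch; the repo URL is the archived location of the old stable charts, and in Helm 3 the release name is a positional argument rather than --name):

```shell
# Helm 3: no tiller, no helm init; release state is stored in the cluster directly
helm repo add stable https://charts.helm.sh/stable
helm repo update

# --name is gone in Helm 3; pass the release name positionally
helm install realtesting stable/postgresql --debug
```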

Can't Install Helm on GKE Cluster

I'm having trouble installing Helm on one of my GKE clusters through gcloud shell.
When I run helm install --name mongo-rs-mongodb-replicaset -f 3-values.yaml stable/mongodb-replicaset --debug, this is what I get:
[debug] Created tunnel using local port: '39387'
[debug] SERVER: "127.0.0.1:39387"
[debug] Original chart version: ""
[debug] Fetched stable/mongodb-replicaset to /home/idan/.helm/cache/archive/mongodb-replicaset-3.9.6.tgz
[debug] CHART PATH: /home/idan/.helm/cache/archive/mongodb-replicaset-3.9.6.tgz
Error: the server has asked for the client to provide credentials
My service account is set properly:
kubectl describe serviceaccount tiller --namespace kube-system
Name: tiller
Namespace: kube-system
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: tiller-token-vbrrn
Tokens: tiller-token-vbrrn
Events: <none>
kubectl describe clusterrolebinding tiller
Name: tiller
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount tiller kube-system
I'm the owner in my project's IAM, and I'm not sure which credentials I should provide - I have never seen this error before. Tried to initialize it with helm --upgrade too.
Did you set up RBAC?
If not, set it up and run helm init --service-account tiller --upgrade; that should fix your problem.
After every solution I found failed, I tried re-creating my cluster and running the same commands, and it simply worked...

I have problems running "helm install --name bluecompute ibmcase/bluecompute-ce" when trying to complete the IBM course

I was trying to follow this guide https://www.ibm.com/cloud/garage/tutorials/microservices-app-on-kubernetes?task=1 but in the Task 4, step 7 I get a problem like this:
I can't find a solution to this problem, and I don't know exactly what is happening or why it occurs. Thanks for the help.
You are getting this error because you have not initialized helm with a service account.
In rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Step 1: kubectl apply -f rbac-config.yaml
Step 2: helm init --service-account tiller --history-max 200
Step 3: Test the setup with helm ls. There will be no output from this command, and that is expected. Now you can run helm install --name bluecompute ibmcase/bluecompute-ce
This is documented for setting up helm on IBM Cloud here:
https://cloud.ibm.com/docs/containers?topic=containers-helm#helm

Unable to install kubernetes charts on specified namespace

I have installed a cluster on Google Kubernetes Engine.
And then, I created namespace "staging"
$ kubectl get namespaces
NAME          STATUS   AGE
default       Active   26m
kube-public   Active   26m
kube-system   Active   26m
staging       Active   20m
Then, I switched to operate in the staging namespace
$ kubectl config use-context staging
$ kubectl config current-context
staging
And then, I installed postgresql using helm on staging namespace
helm install --name staging stable/postgresql
But I got:
Error: release staging failed: namespaces "staging" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "staging": Unknown user "system:serviceaccount:kube-system:default"
What does it mean..?? How to get it working..??
Thank youu..
As your cluster is RBAC-enabled, it seems your tiller Pod does not have enough permissions.
You are using the default ServiceAccount, which lacks the RBAC permissions tiller requires.
You need to create a ClusterRole, a ClusterRoleBinding, and a ServiceAccount; with them you can give your Pod the necessary permissions.
Follow these steps:
1. Create ClusterRole tiller
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
Note: I have used full permission here.
2. Create ServiceAccount tiller in the kube-system namespace
$ kubectl create sa tiller -n kube-system
3. Create ClusterRoleBinding tiller
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
    apiGroup: ""
roleRef:
  kind: ClusterRole
  name: tiller
  apiGroup: rbac.authorization.k8s.io
Now you need to use this ServiceAccount in your tiller Deployment.
As you already have one, edit that
$ kubectl edit deployment -n kube-system tiller-deploy
Set serviceAccountName to tiller under PodSpec
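Instead of editing the Deployment interactively, the same change can be applied with a one-line patch (a sketch, assuming the deployment is named tiller-deploy as shown above):

```shell
# Point the tiller deployment's pod template at the tiller ServiceAccount
kubectl -n kube-system patch deployment tiller-deploy \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"tiller"}}}}'
```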
Read more about RBAC
Try:
helm init --upgrade --service-account tiller
as suggested by Scott S in this comment.

helm list : cannot list configmaps in the namespace "kube-system"

I have installed helm 2.6.2 on a Kubernetes 1.8 cluster. helm init worked fine, but when I run helm list it gives this error:
helm list
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
How do I fix this RBAC error message?
Running the following commands solved the issue:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade
More Secure Answer
The accepted answer gives Helm full admin access, which is not the best solution security-wise. With a little more work, we can restrict Helm's access to a particular namespace. More details in the Helm documentation.
$ kubectl create namespace tiller-world
namespace "tiller-world" created
$ kubectl create serviceaccount tiller --namespace tiller-world
serviceaccount "tiller" created
Define a Role that allows Tiller to manage all resources in tiller-world like in role-tiller.yaml:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
  - apiGroups: ["", "batch", "extensions", "apps"]
    resources: ["*"]
    verbs: ["*"]
Then run:
$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
In rolebinding-tiller.yaml,
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: tiller-world
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
Then run:
$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created
Afterwards you can run helm init to install Tiller in the tiller-world namespace.
$ helm init --service-account tiller --tiller-namespace tiller-world
Now prefix all commands with --tiller-namespace tiller-world or set TILLER_NAMESPACE=tiller-world in your environment variables.
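Either form works; a quick sketch of both (the release name is illustrative):

```shell
# Option 1: pass the flag on every helm command
helm install --name mychart stable/postgresql --tiller-namespace tiller-world

# Option 2: set it once in the environment, then run helm normally
export TILLER_NAMESPACE=tiller-world
helm install --name mychart stable/postgresql
```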
More Future Proof Answer
Stop using Tiller. Helm 3 removes the need for Tiller completely. If you are using Helm 2, you can use helm template to generate the yaml from your Helm chart and then run kubectl apply to apply the objects to your Kubernetes cluster.
helm template --name foo --namespace bar --output-dir ./output ./chart-template
kubectl apply --namespace bar --recursive --filename ./output -o yaml
Helm runs with the "default" service account, so you need to grant permissions to it.
For read-only permissions:
kubectl create rolebinding default-view --clusterrole=view --serviceaccount=kube-system:default --namespace=kube-system
For admin access: Eg: to install packages.
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
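You can check whether a binding took effect by impersonating the service account (a quick check, assuming the kube-system:default account from the commands above):

```shell
# Prints "yes" once the clusterrolebinding is in place, "no" otherwise
kubectl auth can-i list configmaps --namespace kube-system \
  --as system:serviceaccount:kube-system:default
```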
The default serviceaccount does not have API permissions. Helm likely needs to be assigned a service account, and that service account given API permissions. See the RBAC documentation for granting permissions to service accounts: https://kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
kubectl apply -f your-config-file-name.yaml
and then update the helm installation to use the ServiceAccount:
helm init --service-account tiller --upgrade
I got this error while trying to install tiller in offline mode. I thought the tiller service account didn't have enough rights, but as it turns out, a network policy was blocking communication between tiller and the api-server.
The solution was to create a network policy for tiller allowing all egress traffic from tiller.
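Such a policy might look like the following (a sketch; the app: helm, name: tiller labels match the default tiller deployment, but verify the labels and namespace in your own cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tiller-allow-egress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app: helm
      name: tiller
  policyTypes:
    - Egress
  egress:
    - {}   # empty rule: allow all egress traffic from tiller
```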
export TILLER_NAMESPACE=<your-tiller-namespace> solved it for me, if <your-tiller-namespace> is not kube-system. This points the Helm client to the right Tiller namespace.
If you are using an EKS cluster on AWS and are facing the forbidden issue (e.g. forbidden: User ... cannot list resource "jobs" in API group "batch" in the namespace "default"), then this worked for me:
Solution:
Ensure you have configured AWS credentials.
Ensure that the configured user has permission to access the cluster.