Gitlab-installed Helm: Error: context deadline exceeded - kubernetes

I have a Kubernetes cluster installed in AWS with Kops. I've installed Helm Tiller via the Gitlab UI. The Tiller service seems to be working through Gitlab; for example, I've installed Ingress from the Gitlab UI.
But when I try to use that same Tiller from my CLI, I can't get it to work. When I run helm init it says Tiller is already installed (which makes total sense):
helm init --tiller-namespace gitlab-managed-apps --service-account tiller
$HELM_HOME has been configured at C:\Users\danie\.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
But when trying to, for example, list the charts, it takes 5 minutes and then times out:
$ helm list --tiller-namespace gitlab-managed-apps --debug
[debug] Created tunnel using local port: '60471'
[debug] SERVER: "127.0.0.1:60471"
Error: context deadline exceeded
What am I missing so that I can use the Gitlab-installed Tiller from my CLI?

Are you sure that your Tiller server is installed in the "gitlab-managed-apps" namespace? By default it's installed into the 'kube-system' namespace, as per the official installation instructions on the GitLab website, which would mean this is what causes your helm ls command to fail (just drop the --tiller-namespace flag).
The best way to verify it is via:
kubectl get deploy/tiller-deploy -n gitlab-managed-apps
Do you see any tiller-related deployment object in that namespace?
Assuming you can operate your Kops cluster with the current kube context, you should have no problem running the helm client locally. You can always pass the --kube-context argument to helm explicitly, as shown below.
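For example (the context name here is only a placeholder; check kubectl config get-contexts for your actual Kops context):
# hypothetical context name -- replace with your own
helm list --kube-context my-kops-cluster.example.com --tiller-namespace gitlab-managed-apps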
Update:
I think I know what causes your problem: Helm, when installed via the Gitlab UI, uses a secured (SSL/TLS) connection between helm and tiller (proof here).
Knowing that, you should retrieve the set of certificates from the Secret object that is mounted on the Tiller Pod (a kubectl-based sketch for pulling them out follows after the command below):
#The CA
ca.cert.pem
ca.key.pem
#The Helm client files
helm.cert.pem
helm.key.pem
#The Tiller server files
tiller.cert.pem
tiller.key.pem
and then connect helm client to tiller server using following command, as explained here:
helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
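One possible way to copy that material out of the cluster is to decode it from the Secret with kubectl. This is only a sketch: the Secret name tiller-secret and the ca.crt/tls.crt/tls.key key names are assumptions (they match what the next answer finds mounted at /etc/certs inside the pod), so verify them first, and adapt the base64 decoding if you are not on Linux/macOS:
# confirm the actual secret name and its keys first
kubectl get secrets -n gitlab-managed-apps
# decode the assumed keys into local PEM files
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.cert.pem
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath='{.data.tls\.crt}' | base64 -d > helm.cert.pem
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath='{.data.tls\.key}' | base64 -d > helm.key.pem
helm ls --tiller-namespace gitlab-managed-apps --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem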

Here's the way I've been doing this.
First open a shell in the gitlab tiller pod:
# replace the pod name, tiller-deploy-5bb888969c-7bzpl with your own
kubectl exec -n gitlab-managed-apps tiller-deploy-5bb888969c-7bzpl -it -- sh
Then use the pod's native helm binary and certs to connect to tiller:
$ env | grep TILLER_TLS_CERTS
#cd to the result, in my case /etc/certs
$ cd /etc/certs
# connect to tiller with the certs using the native helm (/helm) in my case:
$ /helm ls --tls --tls-ca-cert ./ca.crt --tls-cert ./tls.crt --tls-key ./tls.key

Related

Helm 'Error: error installing: namespaces "{username}" not found'

I'm using Minikube to tinker with Helm.
I understand Helm installs tiller in the kube-system namespace by default:
The easiest way to install tiller into the cluster is simply to run
helm init...
Once it connects, it will install tiller into the kube-system
namespace.
But instead it's trying to install tiller in a namespace named after me:
$ ~/bin/minikube start
* minikube v1.4.0 on Ubuntu 18.04
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing virtualbox VM for "minikube" ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
* Relaunching Kubernetes using kubeadm ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
$ helm init
$HELM_HOME has been configured at /home/mcrenshaw/.helm.
Error: error installing: namespaces "mcrenshaw" not found
$
I can specify the tiller namespace, but then I have to specify it in every subsequent use of helm.
$ helm init --tiller-namespace=kube-system
$HELM_HOME has been configured at /home/mcrenshaw/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
$ helm upgrade --install some-thing .
Error: could not find tiller
$ helm upgrade --install some-thing . --tiller-namespace=kube-system
Release "some-thing" does not exist. Installing it now.
I suppose specifying the namespace in each command is fine. But it feels incorrect. Have I done something to corrupt my Helm config?
Update:
Per Eduardo's request, here's my helm version:
$ helm version --tiller-namespace=kube-system
Client: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
There are two ways of setting the Tiller default namespace:
Using the --tiller-namespace flag (as you are already using).
By setting the $TILLER_NAMESPACE environment variable.
The flag takes precedence over the environment variable. You probably have this environment variable set (you can check with printenv TILLER_NAMESPACE). If so, unset it and subsequent helm commands will point to the kube-system namespace as expected.
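For instance, in the shell you run helm from:
printenv TILLER_NAMESPACE   # shows the overriding value, if any
unset TILLER_NAMESPACE      # clear it for the current shell session
helm version                # should now reach Tiller in kube-system without the flag
If the variable comes back in new shells, remove the export from your shell startup files as well.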

Error: error installing: the server could not find the requested resource HELM Kubernetes

What I Did:
I installed Helm with
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm init --history-max 200
Getting an error:
$HELM_HOME has been configured at /root/.helm.
Error: error installing: the server could not find the requested resource
What does that error mean?
How should I install Helm and tiller?
Ubuntu version: 18.04
Kubernetes version: 1.16
Helm version:
helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Error: could not find tiller
Update:
I tried #shawndodo's answer but Tiller is still not installed:
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
Update 2:
helm init --history-max 200 works on Kubernetes version 1.15.
I ran into the same problem, then I found this reply here:
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
It works for me. You can see the details in this issue.
Unfortunately, Helm does not work with the current version of Kubernetes (1.16.0), as we can see in issue #6374.
For now, we can work around the incompatibility by selecting an older version of Kubernetes.
Starting minikube with a previous Kubernetes version
To solve this issue, simply start minikube with the version set via the --kubernetes-version param (Ref.):
minikube delete
minikube start --kubernetes-version=1.15.4
Then re-initialize Helm with the following command:
helm init
After that, you will be able to use Helm without problems.
Tiller is the server-side component that your helm client talks to (Tiller is due to be removed in Helm 3 because of various security issues). When you run helm init, the helm client installs Tiller on the cluster that your kubectl is currently set up to connect to (keep in mind that in order to install Tiller you need admin access to the cluster, as Tiller needs cluster-wide admin access). However, there are several different strategies for working with Tiller:
tiller per namespace: This is when you install tiller in a single namespace and only give it access to that namespace (vastly more secure than giving it cluster-wide admin); you can find an article on how to do that here
tillerless: This is when you run tiller locally. You will need to export HELM_HOST to point to this local tiller, and tiller will use the kube config configured at KUBECONFIG; more information can be found here, and a rough local sketch follows below
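A minimal sketch of the tillerless approach, assuming the tiller binary is available on your machine (the rimusz/helm-tiller plugin wraps roughly these steps):
# run tiller on your own machine; release data is still stored in the cluster pointed to by $KUBECONFIG
export KUBECONFIG=~/.kube/config
tiller --storage=secret &
# point the helm client at the local tiller (44134 is tiller's default gRPC port)
export HELM_HOST=localhost:44134
helm ls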
I ran into the same issue - exactly the same configuration as the initial question:
Ubuntu version: 18.04
Kubernetes version: 1.16
#shawndodo's answer didn't work for me. There were some issues with the tiller deployment and the tiller pod was not getting created at all!
I tried installing from the canary build as described in the Helm docs - https://helm.sh/docs/using_helm/#from-canary-builds
helm init --canary-image --upgrade
This didn't work a couple of days ago, but I tried again (with a newer canary build) and it worked today (20191005).
Whether I run into other issues now using canary build remains to be seen, but I got past the initialisation issue...
I tried all the suggestions about changing the API version manually to fix this issue; that got rid of the errors, but things didn't work properly afterwards. So in my case I removed my latest minikube installation and installed an old one on my Mac using the command below (change minikube-darwin-amd64 to minikube-linux-amd64 if needed):
curl -LO https://storage.googleapis.com/minikube/releases/v1.3.0/minikube-darwin-amd64 \
&& sudo install minikube-darwin-amd64 /usr/local/bin/minikube
This downgraded my Kubernetes to v1.15.2, which Helm currently supports.
kubectl version: v1.16.0
helm version: v2.14.3
minikube start --memory=16384 --cpus=4
helm init --service-account tiller --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | sed 's# replicas: 1# replicas: 1\n selector: {"matchLabels": {"app": "helm", "name": "tiller"}}#' | kubectl apply -f -
helm template istio-1.3.3/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
helm template istio-1.3.3/install/kubernetes/helm/istio --name istio --namespace istio-system | kubectl apply -f -
We need to have Tiller installed in the cluster before we can start using Helm. The helm init command installs Tiller in the cluster, and we also need RBAC configured in the cluster for Tiller. Here you'll find the RBAC rules required, as per your needs, for your k8s cluster; a commonly used minimal setup is sketched below.
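As a sketch (broad by design; narrow the role for anything beyond experimentation), create a dedicated service account for Tiller and bind it to cluster-admin:
# service account for tiller in kube-system
kubectl -n kube-system create serviceaccount tiller
# bind it to cluster-admin (convenient, but very permissive)
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# install tiller using that service account
helm init --service-account tiller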
Try apt-get upgrade helm; in my case it worked.

Use Gitlab-installed Helm from CLI. Could not find tiller

I've created a kubernetes cluster on AWS using Kops, and I've correctly configured the cluster on Gitlab.
I've installed Helm Tiller and Ingress from Gitlab's panel, but I now wish to uninstall the Ingress chart.
I'm not sure how to uninstall the Ingress chart. What I'm trying now is configuring my Helm CLI to delete the Ingress release, but I can't get the Helm CLI correctly configured. The Tiller components are deployed in the gitlab-managed-apps namespace, so I'm trying the following command:
$ helm init --tiller-namespace gitlab-managed-apps --service-account tiller --upgrade
HELM_HOME has been configured at C:\Users\danie\.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
But then when I'm trying to issue the helm ls command I'm getting the following error:
$ helm ls
Error: could not find tiller
But the service account exists on the namespace:
$ kubectl get serviceAccounts -n gitlab-managed-apps
NAME                    SECRETS   AGE
default                 1         23h
ingress-nginx-ingress   1         23h
tiller                  1         23h
Any ideas how to get the CLI correctly configured?
You have installed Tiller in a namespace that is not the default one.
By default the Helm CLI assumes Tiller is installed in the kube-system namespace, and that this is the namespace it should talk to.
This can be fixed by using the --tiller-namespace flag; for your example that would be:
helm list --tiller-namespace gitlab-managed-apps
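If you'd rather not repeat the flag on every command, you can export the equivalent environment variable instead (the flag takes precedence if both are set):
export TILLER_NAMESPACE=gitlab-managed-apps
helm ls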
Try using Helm version 3 or later. Helm versions 1 and 2 are actually composed of two pieces: the Helm CLI, and Tiller, the Helm server-side component. Helm 3 removes the Tiller component entirely, and is thus more secure.
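For context, with Helm 3 listing releases needs no Tiller setup at all, though releases created by Helm 2/Tiller are not visible to Helm 3 until they are migrated (for example with the helm-2to3 plugin):
helm ls -n gitlab-managed-apps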

helm install stable/gocd returns an error

After installing Helm, I'm trying to install GoCD for containerization.
Command
helm install stable/gocd --name gocd --namespace gocd is throwing the following error:
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout
Please help in resolving this issue. What may be the error? How can I correct it so that gocd is installed through helm?
Install the GoCD Helm chart
Helm is a package manager for Kubernetes. Kubernetes packages are called charts. Charts are curated applications for Kubernetes.
Install the GoCD Helm chart with these commands:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install stable/gocd --name gocd --namespace gocd
Access the GoCD server
After you’ve installed the GoCD helm chart, you should be able to access the GoCD server from the Ingress IP.
The Ingress IP address can be obtained as specified below:
Minikube
minikube ip
Others
ip=$(kubectl get ingress --namespace gocd gocd-server -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
echo "http://$ip"
It might take a few minutes for the GoCD server to come up for the first time. You can check if the GoCD server is up with this command:
kubectl get deployments --namespace gocd
The column Available should show 1 for gocd-server.
Now that you have accessed the GoCD server successfully, you will need to configure the Kubernetes elastic agent plugin.

Tiller is installed but not found by Helm

Background
I have kubernetes installed in clustered mode.
All nodes are up and running
I want to use Jenkins X for ease of deployment.
Now Jenkins X uses Helm to do this job; Helm has a client/server architecture.
Helm can be set up in either of two ways:
Using jenkins-x
jx install --username <username>
Standalone Helm
helm init
This sets up its server (Tiller) by putting it in a pod in the Kubernetes cluster.
What's the issue
The issue is that when I use the first approach, it installs Tiller but later fails, saying 'Tiller is available but not up and running'.
Created ClusterRoleBinding tiller
retrying after error:existing tiller deployment found but not running, please check the kube-system namespace and resolve any issues
The second approach fails in a similar way.
It also installs Tiller, but then Helm cannot find Tiller when I try to list releases.
helm ls
Error: could not find tiller
So the essence of the issue is:
Tiller gets installed, but Helm fails to find it afterwards.
helm init
Warning: Tiller is already installed in the cluster.
helm ls
Error: could not find tiller
I just went ahead and installed both Helm and jx with no problem. So I don't know how to resolve your issue, but you can install them as below, and it should work.
Installing Helm:
$ wget https://kubernetes-helm.storage.googleapis.com/helm-v2.9.1-linux-amd64.tar.gz
$ tar xzvf helm-v2.9.1-linux-amd64.tar.gz
$ cd linux-amd64/
$ sudo cp helm /usr/local/bin/helm
$ helm init
Installing Jx
$ curl -L https://github.com/jenkins-x/jx/releases/download/v1.2.98/jx-linux-amd64.tar.gz | tar xzv
$ sudo mv jx /usr/local/bin
Making Tiller cluster-admin role:
$ kubectl create clusterrolebinding tiller-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
Checking it works:
$ helm install --name prometheus stable/prometheus
$ helm ls
prometheus 1 Sun Jun 3 09:47:12 2018 DEPLOYED prometheus-6.7.0 default
There may be a problem with the tiller pod starting, either due to resources or RBAC. Try these commands:
kubectl get deploy -n kube-system
kubectl get pods -n kube-system
That might give more of a clue. If you can find a tiller pod that's failing, maybe run:
kubectl describe pod tiller-1234 -n kube-system