I've created a Kubernetes cluster on AWS using Kops, and I've correctly configured the cluster on GitLab.
I've installed Helm Tiller and Ingress from GitLab's panel, but I now wish to uninstall the Ingress chart.
I'm not sure how to uninstall the Ingress chart. What I'm trying now is configuring my Helm CLI to delete the Ingress release, but I can't get the Helm CLI correctly configured. Tiller is deployed in the gitlab-managed-apps namespace, so I'm trying the following command:
$ helm init --tiller-namespace gitlab-managed-apps --service-account tiller --upgrade
HELM_HOME has been configured at C:\Users\danie\.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
But then when I'm trying to issue the helm ls command I'm getting the following error:
$ helm ls
Error: could not find tiller
But the service account exists in the namespace:
$ kubectl get serviceAccounts -n gitlab-managed-apps
NAME                    SECRETS   AGE
default                 1         23h
ingress-nginx-ingress   1         23h
tiller                  1         23h
Any ideas how to get the CLI correctly configured?
You have installed Tiller in a namespace other than the default one.
By default, the Helm CLI assumes Tiller is installed in the default namespace and that this is the namespace you want to talk to.
This can be fixed by using the --tiller-namespace flag; for your example that would be:
helm list --tiller-namespace gitlab-managed-apps
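If you don't want to repeat the flag on every command, the Helm v2 client also reads the TILLER_NAMESPACE environment variable (mentioned again further below), so a minimal alternative is:
export TILLER_NAMESPACE=gitlab-managed-apps
helm ls    # now targets the Tiller in gitlab-managed-apps without the flag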
Consider using Helm version 3 onward. Helm versions 1 and 2 are composed of two pieces: the Helm CLI, and Tiller, the Helm server-side component. Helm 3 removes the Tiller component entirely and is therefore more secure.
Failed to create NodePort error, after deploying ingress
I have an Ingress defined as in the screenshot:
[screenshot of the Ingress definition]
The 2 replicas of the Ingress server are not spinning up due to the Failed to create NodePort error. Please advise.
Just like the error says, you are missing the NodePortPods CRD. It looks like that CRD existed at some point in time, but I don't see it in the repo anymore. You didn't specify how you deployed the ingress operator, but you can make sure you install the latest version:
helm repo add appscode https://charts.appscode.com/stable/
helm repo update
helm search repo appscode/voyager --version v13.0.0
# Generate the template to check or use helm install
helm template voyager-operator appscode/voyager --version v13.0.0 --namespace kube-system --no-hooks --set cloudProvider=baremetal   # use the right cloud provider here
While installing Helm 3 stable, I found that it does not deploy Tiller for fetching cluster details; it works as a client-side utility only. My question is: if it does not implement the Tiller concept for fetching details, how does it connect to EKS?
I have already installed kubectl and it is running fine. Is the Helm client somehow dependent on the kubectl service?
I performed the following steps:
1. helm version
   version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}
2. kubectl create serviceaccount tiller --namespace kube-system
   serviceaccount/tiller created
3. notepad rbac-config.yaml
4. kubectl apply -f rbac-config.yaml
   clusterrolebinding.rbac.authorization.k8s.io/tiller-role-binding created
5. helm init --service-account tiller
   Error: unknown flag: --service-account
I know steps 2, 3, and 4 are not required in Helm 3, but I'm curious how Helm 3 interacts as a client with the EKS cluster. (A sketch of the rbac-config.yaml from step 3 follows below for reference.)
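The rbac-config.yaml itself isn't shown above; a sketch that matches the clusterrolebinding name in the step 4 output might look like the following (the cluster-admin role is an assumption; it is simply the usual permissive choice for Tiller):
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF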
Just like kubectl, helm uses kubeconfig to communicate with the cluster.
So both kubectl and helm depend on the cluster's config file rather than on each other.
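As a minimal sketch (the cluster name and region are placeholders): once a valid kubeconfig entry for the EKS cluster exists, both tools just work side by side:
# write/update a kubeconfig entry for the EKS cluster
aws eks update-kubeconfig --name my-cluster --region us-east-1
# both commands read the same kubeconfig; no Tiller involved in Helm 3
kubectl get nodes
helm ls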
I'm using Minikube to tinker with Helm.
I understand Helm installs tiller in the kube-system namespace by default:
The easiest way to install tiller into the cluster is simply to run
helm init...
Once it connects, it will install tiller into the kube-system
namespace.
But instead it's trying to install tiller in a namespace named after me:
$ ~/bin/minikube start
* minikube v1.4.0 on Ubuntu 18.04
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing virtualbox VM for "minikube" ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
* Relaunching Kubernetes using kubeadm ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
$ helm init
$HELM_HOME has been configured at /home/mcrenshaw/.helm.
Error: error installing: namespaces "mcrenshaw" not found
$
I can specify the tiller namespace, but then I have to specify it in every subsequent use of helm.
$ helm init --tiller-namespace=kube-system
$HELM_HOME has been configured at /home/mcrenshaw/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
$ helm upgrade --install some-thing .
Error: could not find tiller
$ helm upgrade --install some-thing . --tiller-namespace=kube-system
Release "some-thing" does not exist. Installing it now.
I suppose specifying the namespace in each command is fine. But it feels incorrect. Have I done something to corrupt my Helm config?
Update:
Per Eduardo's request, here's my helm version:
$ helm version --tiller-namespace=kube-system
Client: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
There are two ways of setting the Tiller default namespace:
1. Using the --tiller-namespace flag (as you are already doing).
2. Setting the $TILLER_NAMESPACE environment variable.
The flag takes precedence over the environment variable. You probably have this environment variable set (you can check with printenv TILLER_NAMESPACE). If so, unset it and further helm commands will point to the kube-system namespace as expected.
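A quick check-and-fix, assuming a bash-like shell:
printenv TILLER_NAMESPACE    # shows the overriding namespace, if set
unset TILLER_NAMESPACE       # drop the override for this shell session
helm ls                      # defaults to kube-system again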
What I Did:
I installed Helm with
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm init --history-max 200
Getting an error:
$HELM_HOME has been configured at /root/.helm.
Error: error installing: the server could not find the requested resource
What does that error mean?
How should I install Helm and Tiller?
Ubuntu version: 18.04
Kubernetes version: 1.16
Helm version:
helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Error: could not find tiller
Update:
I tried #shawndodo's answer but Tiller still isn't installed:
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
Update 2:
helm init --history-max 200 works on Kubernetes version 1.15.
I ran into the same problem, then found this reply here.
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
It works for me. You can see the details in this issue.
Unfortunately, Helm does not work with the current version of Kubernetes (1.16.0), as we can see in issue #6374.
For now, we can work around the incompatibility by selecting an older version of Kubernetes.
Starting minikube with a previous Kubernetes version
To solve this issue, simply start minikube with the version set via the --kubernetes-version param (Ref.):
minikube delete
minikube start --kubernetes-version=1.15.4
Then reinitialize Helm with the following command:
helm init
After that, you will be able to use Helm without problems.
So Tiller is the server-side component that your Helm client talks to (Tiller is due to be removed in Helm 3 because of various security issues). When you run helm init, the Helm client installs Tiller on the cluster that your kubectl is currently set up to connect to. Keep in mind that installing Tiller requires admin access, since Tiller needs cluster-wide admin rights. There are, however, several strategies for working with Tiller:
tiller per namespace: install Tiller in a single namespace and give it access only to that namespace (vastly more secure than giving it cluster-wide admin); you can find an article on how to do this here.
tillerless: run Tiller locally. You will need to export HELM_HOST to point to this Tiller, and Tiller will use the kubeconfig configured at KUBECONFIG; more information can be found here, and a sketch follows below.
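A minimal sketch of the tillerless approach, assuming the tiller binary is installed on your machine (44134 is Tiller's default gRPC port):
# run tiller locally against your current cluster config
export KUBECONFIG=~/.kube/config
tiller &                            # listens on localhost:44134 by default
export HELM_HOST=localhost:44134    # point the helm client at the local tiller
helm ls                             # no in-cluster Tiller needed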
I ran into the same issue - exactly the same configuration as the initial question:
Ubuntu version: 18.04
Kubernetes version: 1.16
#shawndodo's answer didn't work for me. There were some issues with the tiller deployment and the tiller pod was not getting created at all!
I tried installing from the canary build as described in the Helm docs - https://helm.sh/docs/using_helm/#from-canary-builds
helm init --canary-image --upgrade
This didn't work a couple of days ago, but I tried again (with a newer canary build) and it worked today (20191005).
Whether I run into other issues using the canary build remains to be seen, but I got past the initialisation issue...
I tried all the suggestions about changing the API version manually to fix this issue; they got rid of the errors, but things didn't work properly afterwards. So in my case I removed my latest minikube installation and installed an old one on my Mac using the command below (change minikube-darwin-amd64 to minikube-linux-amd64 if needed):
curl -LO https://storage.googleapis.com/minikube/releases/v1.3.0/minikube-darwin-amd64 \
&& sudo install minikube-darwin-amd64 /usr/local/bin/minikube
This downgraded my Kubernetes to v1.15.2, which Helm currently supports.
kubectl version: v1.16.0
helm version: v2.14.3
# start minikube with generous resources
minikube start --memory=16384 --cpus=4
# render the Tiller manifest, move it to the apps/v1 API, add the now-required selector, and apply it
helm init --service-account tiller --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | sed 's# replicas: 1# replicas: 1\n selector: {"matchLabels": {"app": "helm", "name": "tiller"}}#' | kubectl apply -f -
# render and apply the Istio charts without going through Tiller
helm template istio-1.3.3/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
helm template istio-1.3.3/install/kubernetes/helm/istio --name istio --namespace istio-system | kubectl apply -f -
We need Tiller installed in the cluster before we start using Helm. The helm init command installs Tiller in the cluster, and we also need RBAC configured for Tiller. Here you'll find the RBAC rules required, depending on your needs, for your k8s cluster; a common minimal setup is sketched below.
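A sketch based on the standard Helm v2 RBAC example (the cluster-admin binding is the broad option; scope it down for production):
# create a service account for tiller in kube-system
kubectl -n kube-system create serviceaccount tiller
# bind it to cluster-admin (fine for a test cluster, too wide for prod)
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
# install tiller using that service account
helm init --service-account tiller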
Try
apt-get upgrade helm
In my case it worked.
I have a Kubernetes cluster installed on AWS with Kops. I've installed Helm Tiller with the GitLab UI. The Tiller service seems to be working via GitLab; for example, I've installed Ingress from the GitLab UI.
But when trying to use that same Tiller from my CLI, I can't get it working. When I run helm init it says Tiller is already installed (which makes total sense):
helm init --tiller-namespace gitlab-managed-apps --service-account tiller
$HELM_HOME has been configured at C:\Users\danie\.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
But when trying to, for example, list the charts, it takes 5 minutes and then times out:
$ helm list --tiller-namespace gitlab-managed-apps --debug
[debug] Created tunnel using local port: '60471'
[debug] SERVER: "127.0.0.1:60471"
Error: context deadline exceeded
What am I missing so I can use the GitLab-installed Tiller from my CLI?
Are you sure that your Tiller server is installed in the "gitlab-managed-apps" namespace? By default it's installed to the 'kube-system' one, as per the official installation instructions on the GitLab website, which would explain why your helm ls command fails (in that case just drop the --tiller-namespace flag).
The best way to verify it is via:
kubectl get deploy/tiller-deploy -n gitlab-managed-apps
Do you see any Tiller-related deployment object in that namespace?
Assuming you can operate your Kops cluster with the current kube context, you should have no problem running the Helm client locally. You can always pass the --kube-context argument explicitly to the helm command, as in the example below.
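For instance (the context name is a placeholder):
kubectl config current-context    # check which context kubectl is using
helm ls --tiller-namespace gitlab-managed-apps --kube-context <your-kops-context>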
Update:
I think I know what causes your problem: Helm, when installed via the GitLab UI, uses a secured connection (SSL) between helm and tiller (proof here).
Knowing that, you should retrieve the set of certificates from the Secret object that is mounted on the Tiller Pod:
#The CA
ca.cert.pem
ca.key.pem
#The Helm client files
helm.cert.pem
helm.key.pem
#The Tiller server files
tiller.cert.pem
tiller.key.pem
and then connect the helm client to the tiller server using the following command, as explained here:
helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
Here's the way I've been doing this.
First, open a shell in the GitLab Tiller pod:
# replace the pod name, tiller-deploy-5bb888969c-7bzpl with your own
kubectl exec -n gitlab-managed-apps tiller-deploy-5bb888969c-7bzpl -it -- sh
Then use the pod's native helm binary and certs to connect to tiller:
$ env | grep TILLER_TLS_CERTS
#cd to the result, in my case /etc/certs
$ cd /etc/certs
# connect to tiller with the certs using the native helm (/helm) in my case:
$ /helm ls --tls --tls-ca-cert ./ca.crt --tls-cert ./tls.crt --tls-key ./tls.key
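Alternatively, you can pull the same TLS material out of the cluster and run the helm client from your own machine. This is a hedged sketch: the secret name tiller-secret and its key names are assumptions based on how Helm v2 typically stores Tiller TLS data, so check kubectl get secrets -n gitlab-managed-apps for the actual names:
# dump the certs from the (assumed) tiller-secret into local files
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath="{.data['ca\.crt']}" | base64 -d > ca.cert.pem
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath="{.data['tls\.crt']}" | base64 -d > helm.cert.pem
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath="{.data['tls\.key']}" | base64 -d > helm.key.pem
# then connect from your local client
helm ls --tiller-namespace gitlab-managed-apps --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem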