$ helm version gives "Cannot connect to tiller" - kubernetes

I created a Kubernetes cluster from 3 Vagrant machines and installed Helm. But when checking the Helm version, it prints the client version and says "cannot connect to Tiller".
I can't install any chart using Helm due to an error related to forwarding ports.
vagrant@master:~$ helm init
$HELM_HOME has been configured at /home/vagrant/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
vagrant@master:~$ helm version
Client: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
Error: cannot connect to Tiller
vagrant@master:~$ helm install nginx
Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist
I found a solution here:
https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/
This is caused by the API load balancer not forwarding ports in the context of the helm client-server relationship.
But the procedure to solve the error is not clear to me.
For example, the first step says to expose the Kubernetes Master service:
juju expose kubernetes-master
But I use kubectl instead of juju. So how can I find the name of the k8s master service, and how should I do this step using kubectl?
In short, I want to do the steps using kubectl instead of juju, and I don't understand the difference between the two tools.
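(For reference, a rough sketch of the kubectl side: juju expose operates on Juju-managed charms, so there is no direct kubectl equivalent, but the API server endpoint can at least be inspected with:)
kubectl cluster-info                     # prints the address where the Kubernetes master is running
kubectl get svc kubernetes -n default    # the in-cluster Service that fronts the API server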
Can anyone help me?

Related

Using helm and a Kubernetes cluster with MicroK8s on one or two local physical Ubuntu servers

I installed MicroK8s on a local physical Ubuntu 20.04 server (without a GUI):
microk8s status --wait-ready
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
ha-cluster # Configure high availability on the current node
helm # Helm 2 - the package manager for Kubernetes
disabled:
When I try to install something with helm it says:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp 127.0.0.1:8080: connect: connection refused
What configuration has to be done to use the MicroK8s Kubernetes cluster for helm installations?
Do I have to enable more MicroK8s services for that?
Can I run a Kubernetes cluster on one or two single local physical Ubuntu server with MicroK8s?
While searching for a solution to your issue, I found this one. Try running:
microk8s kubectl config view --raw > ~/.kube/config
Helm interacts directly with the Kubernetes API server, so it needs to be able to connect to a Kubernetes cluster. Helm reads the same configuration files used by kubectl to find that cluster automatically.
Based on Learning Helm by O'Reilly Media:
Helm will try to find this information by reading the environment variable $KUBECONFIG. If that is not set, it will look in the same default locations that kubectl looks in.
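A minimal sketch of that idea, assuming you are happy to overwrite ~/.kube/config (back it up first otherwise):
mkdir -p ~/.kube                                      # make sure the directory exists
microk8s kubectl config view --raw > ~/.kube/config   # export the MicroK8s kubeconfig
helm list --all-namespaces                            # Helm should now reach the cluster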
See also:
This discussion about a similar issue on GitHub
This similar issue

Helm 3 installation error through proxy server

Does anyone know why a Helm 3 install fails with an error when it goes through a proxy server?
Environment:
OS: Ubuntu 18.04
Kubernetes: v1.19.0
Helm: v3.3.4
root@ecs-k8s-master:~# kubectl cluster-info
Kubernetes master is running at https://192.168.30.5:6443
KubeDNS is running at https://192.168.30.5:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
root@ecs-k8s-master:~# https_proxy=http://172.19.1.222:3128 helm install ingress-nginx ingress-nginx/ingress-nginx
Error: Kubernetes cluster unreachable: Get "https://192.168.30.5:6443/version?timeout=32s": Forbidden
It seems like your proxy server either requires authentication or does not allow the connection to your cluster, since it answers with "Forbidden".
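If the proxy must simply be bypassed for the in-cluster endpoint, a common workaround is the no_proxy convention; a sketch, assuming your shell and tools honour it:
export https_proxy=http://172.19.1.222:3128
export no_proxy=192.168.30.5,localhost,127.0.0.1   # do not send API server traffic through the proxy
helm install ingress-nginx ingress-nginx/ingress-nginx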

Helm 'Error: error installing: namespaces "{username}" not found'

I'm using Minikube to tinker with Helm.
I understand Helm installs tiller in the kube-system namespace by default:
The easiest way to install tiller into the cluster is simply to run
helm init...
Once it connects, it will install tiller into the kube-system
namespace.
But instead it's trying to install tiller in a namespace named after me:
$ ~/bin/minikube start
* minikube v1.4.0 on Ubuntu 18.04
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing virtualbox VM for "minikube" ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
* Relaunching Kubernetes using kubeadm ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
$ helm init
$HELM_HOME has been configured at /home/mcrenshaw/.helm.
Error: error installing: namespaces "mcrenshaw" not found
$
I can specify the tiller namespace, but then I have to specify it in every subsequent use of helm.
$ helm init --tiller-namespace=kube-system
$HELM_HOME has been configured at /home/mcrenshaw/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
$ helm upgrade --install some-thing .
Error: could not find tiller
$ helm upgrade --install some-thing . --tiller-namespace=kube-system
Release "some-thing" does not exist. Installing it now.
I suppose specifying the namespace in each command is fine. But it feels incorrect. Have I done something to corrupt my Helm config?
Update:
Per Eduardo's request, here's my helm version:
$ helm version --tiller-namespace=kube-system
Client: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
There are two ways of setting the Tiller default namespace:
Using the --tiller-namespace flag (as you are already using).
By setting the $TILLER_NAMESPACE environment variable.
The flag configuration takes precedence over the environment config. You probably have this environment variable set (you can check with printenv TILLER_NAMESPACE). If so, unset it, and further helm commands should point properly to the kube-system namespace.
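For example, a minimal check-and-fix in the current shell:
printenv TILLER_NAMESPACE   # shows the override, if any
unset TILLER_NAMESPACE      # drop it for this session
helm version                # should now reach Tiller in kube-system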

Gitlab-installed Helm: Error: context deadline exceeded

I have a Kubernetes cluster installed in AWS with Kops, and I've installed Helm Tiller via the GitLab UI. The Tiller service seems to be working through GitLab; for example, I've installed Ingress from the GitLab UI.
But when trying to use that same Tiller from my CLI, I can't manage to get it working. When I run helm init it says Tiller is already installed (which makes total sense):
helm init --tiller-namespace gitlab-managed-apps --service-account tiller
$HELM_HOME has been configured at C:\Users\danie\.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
But when trying to, for example, list the charts, it takes 5 minutes and then times out:
$ helm list --tiller-namespace gitlab-managed-apps --debug
[debug] Created tunnel using local port: '60471'
[debug] SERVER: "127.0.0.1:60471"
Error: context deadline exceeded
What am I missing so I can use the GitLab-installed Tiller from my CLI?
Are you sure that your Tiller server is installed in the "gitlab-managed-apps" namespace? By default it is installed into the 'kube-system' namespace, as per the official installation instructions on the GitLab website, which would mean this is what causes your helm ls command to fail (just omit the --tiller-namespace flag in that case).
The best way to verify it is via:
kubectl get deploy/tiller-deploy -n gitlab-managed-apps
Do you see any Tiller-related Deployment object in that namespace?
Assuming you can operate your Kops cluster with the current kube context, you should have no problem running the helm client locally. You can always explicitly pass the --kube-context argument to helm commands.
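A quick sketch of that, with my-kops-context standing in for whatever kubectl config get-contexts reports:
kubectl config get-contexts   # list the contexts available to helm
helm list --kube-context my-kops-context --tiller-namespace gitlab-managed-apps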
Update:
I think I know what causes your problem: Helm, when installed via the GitLab UI, uses a secured connection (SSL) between helm and tiller (proof here).
Knowing that, you should retrieve the set of certificates from the Secret object that is mounted on the Tiller Pod:
#The CA
ca.cert.pem
ca.key.pem
#The Helm client files
helm.cert.pem
helm.key.pem
#The Tiller server files
tiller.cert.pem
tiller.key.pem
and then connect the helm client to the tiller server using the following command, as explained here:
helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
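The files themselves can be extracted from the Secret with kubectl; a sketch, where the secret name tiller-secret is an assumption (verify with kubectl get secrets -n gitlab-managed-apps) and the key names match the ones used in the next answer:
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.cert.pem
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath='{.data.tls\.crt}' | base64 -d > helm.cert.pem
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath='{.data.tls\.key}' | base64 -d > helm.key.pem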
Here's the way I've been doing this.
First, open a shell in the GitLab Tiller pod:
# replace the pod name, tiller-deploy-5bb888969c-7bzpl with your own
kubectl exec -n gitlab-managed-apps tiller-deploy-5bb888969c-7bzpl -it -- sh
Then use the pod's native helm and certs to connect to tiller:
$ env | grep TILLER_TLS_CERTS
#cd to the result, in my case /etc/certs
$ cd /etc/certs
# connect to tiller with the certs using the native helm (/helm) in my case:
$ /helm ls --tls --tls-ca-cert ./ca.crt --tls-cert ./tls.crt --tls-key ./tls.key
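To avoid copying the pod name by hand, something like this might locate it first (the name=tiller label is an assumption; check with kubectl get pods -n gitlab-managed-apps --show-labels):
TILLER_POD=$(kubectl get pods -n gitlab-managed-apps -l name=tiller -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n gitlab-managed-apps "$TILLER_POD" -it -- sh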

helm install stable/gocd returns an error

After installing Helm, I'm trying to install GoCD for containerization.
The following command:
helm install stable/gocd --name gocd --namespace gocd
throws this error:
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout
Please help me resolve this issue. What may be causing the error, and how can I correct it so that GoCD is installed through Helm?
Install the GoCD Helm chart
Helm is a package manager for Kubernetes. Kubernetes packages are called charts. Charts are curated applications for Kubernetes.
Install the GoCD Helm chart with these commands:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install stable/gocd --name gocd --namespace gocd
Access the GoCD server
After you’ve installed the GoCD helm chart, you should be able to access the GoCD server from the Ingress IP.
The Ingress IP address can be obtained as specified below:
Minikube
minikube ip
Others
ip=$(kubectl get ingress --namespace gocd gocd-server -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
echo "http://$ip"
It might take a few minutes for the GoCD server to come up for the first time. You can check if the GoCD server is up with this command:
kubectl get deployments --namespace gocd
The column Available should show 1 for gocd-server.
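Illustrative output from a recent kubectl client (older clients print DESIRED/CURRENT columns instead of READY):
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
gocd-server   1/1     1            1           5m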
Now that you have accessed the GoCD server successfully, you will need to configure the Kubernetes elastic agent plugin.