Error creating service account using Helm on Kubernetes

I am trying to create a service account using helm on Kubernetes as described here:
https://tutorials.kevashcraft.com/k8s/install-helm/
When I execute the following line:
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
I get an error:
Error from server (BadRequest): invalid character 's' looking for beginning of object key string
Can someone give me some guidance as to what is wrong?
Thanks!

Try kubectl patch deploy --namespace kube-system tiller-deploy -p "{\"spec\":{\"template\":{\"spec\":{\"serviceAccount\":\"tiller\"}}}}", i.e. using outer double quotes and escaping the inner double quotes. There's a GitHub issue where somebody hit the same error in a different context and was able to resolve it like this.
Edit: MrTouya determined that in this case what worked was kubectl patch deploy --namespace kube-system tiller-deploy -p '{\"spec\":{\"template\":{\"spec\":{\"serviceAccount\":\"tiller\"}}}}'
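If your shell keeps mangling the quotes (PowerShell and cmd.exe treat single quotes differently from bash), one way to sidestep shell quoting entirely is to put the patch into a file and substitute it in. A minimal sketch, assuming a hypothetical file named patch.json that contains just the JSON payload above:
# patch.json contains: {"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}
kubectl patch deploy --namespace kube-system tiller-deploy -p "$(cat patch.json)"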

Rancher helm chart, cannot find secret bootstrap-secret

So I am trying to deploy Rancher on my K3s cluster.
I installed it using the documentation and Helm: Rancher documentation
While I can reach it through my load balancer, I cannot find the secret to enter during the setup.
They describe the following command for getting the token:
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
When I run this, I get the following error:
Error from server (NotFound): secrets "bootstrap-secret" not found
I also cannot find the bootstrap-secret inside the cattle-system namespace.
Can somebody point me to where I need to look?
I had the same problem and figured it out with the following commands.
I installed the Helm chart with "--set bootstrapPassword=Changeme123!", for example:
helm upgrade --install \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set replicas=3 \
  --set bootstrapPassword=Changeme123! \
  rancher rancher-stable/rancher
I forced a hard reset because, even though I had set the bootstrap password in the Helm install command, I was not able to log in. So I used the following command to hard reset:
kubectl -n cattle-system exec $(kubectl -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- reset-password
I hope that helps you.
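If you want to double-check that the bootstrapPassword value actually made it into the release before resorting to a reset, helm get values should list it (a sketch, assuming Helm 3 and the release name rancher used above):
helm get values rancher --namespace cattle-system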

HELM admission is constantly creating Pod in status "Container Creating"

I am using Kubernetes version 1.19.
I tried to install a second nginx-ingress controller on my server (I already have one for Linux, so I tried to install one for Windows as well):
helm install nginx-ingress-win ingress-nginx/ingress-nginx \
  -f internal-ingress.yaml \
  --set controller.nodeSelector."beta\.kubernetes\.io/os"=windows \
  --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=windows \
  --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=windows \
  --set tcp.9000="default/frontarena-ads-win-test:9000"
This failed with "Error: failed pre-install: timed out waiting for the condition".
So I ran helm uninstall to remove that chart:
helm uninstall nginx-ingress-win
release "nginx-ingress-win" uninstalled
But the validation webhook Pod keeps getting created:
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ingress-win-ingress-nginx-admission-create-f2qcx 0/1 ContainerCreating 0 41m
I delete the pod with kubectl delete pod, but it gets created again and again.
I also tried
kubectl delete -A ValidatingWebhookConfiguration nginx-ingress-win-ingress-nginx-admission, but I get a "not found" message for all combinations. How can I resolve this and get rid of the pod?
Thank you!!!
If this Pod is managed by a Deployment, StatefulSet, DaemonSet, etc., it will be automatically recreated every time you delete it, so deleting the Pod directly rarely makes sense.
If you want to check what controls this Pod, run:
kubectl describe pod nginx-ingress-win-ingress-nginx-admission-create-f2qcx | grep Controlled
You would probably see some ReplicaSet, which is in turn managed by a Deployment or another object. Suppose I want to check what I should delete to get rid of my nginx-deployment-574b87c764-kjpf6 Pod. I can do this as follows:
$ kubectl describe pod nginx-deployment-574b87c764-kjpf6 | grep -i controlled
Controlled By: ReplicaSet/nginx-deployment-574b87c764
Then I run kubectl describe again on the ReplicaSet we found:
$ kubectl describe rs nginx-deployment-574b87c764 | grep -i controlled
Controlled By: Deployment/nginx-deployment
Finally, we can see that it is managed by a Deployment named nginx-deployment, and that is the resource we need to delete to get rid of the nginx-deployment-574b87c764-kjpf6 Pod.
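For the ingress-nginx admission hook in the question above, the -admission-create Pod is typically owned by a Job left behind by the failed pre-install hook rather than by a Deployment, so the cleanup looks roughly like this (a sketch; the Job and webhook names are inferred from the release name nginx-ingress-win, and the namespace is assumed to be default, i.e. wherever the release was installed):
kubectl get jobs --all-namespaces | grep admission
kubectl delete job nginx-ingress-win-ingress-nginx-admission-create --namespace default
kubectl get validatingwebhookconfigurations | grep nginx-ingress-win
kubectl delete validatingwebhookconfiguration nginx-ingress-win-ingress-nginx-admission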

How to recover pods with UnexpectedAdmissionError

My pods terminated automatically, and I eventually found that disk usage was at 100% and they had been evicted by Kubernetes (v1.15.2). Now I have freed up the disk. How can I restart the pods showing UnexpectedAdmissionError?
I already tried this:
$ kubectl rollout restart deployment kubernetes-dashboard-6466b68b-z6z78
Error from server (NotFound): deployments.extensions "kubernetes-dashboard-6466b68b-z6z78" not found
That did not work for me. Any suggestions?
This worked for me:
$ kubectl get pod kubernetes-dashboard-6466b68b-z6z78 -n kube-system -o yaml | kubectl replace --force -f -
pod "kubernetes-dashboard-6466b68b-z6z78" deleted
pod/kubernetes-dashboard-6466b68b-z6z78 replaced
From the documentation:
Replace a resource by filename or stdin.
JSON and YAML formats are accepted. If replacing an existing resource, the complete resource spec must be provided.
This can be obtained by
$ kubectl get TYPE NAME -o yaml
It is worth checking kubectl replace --help as well.
Hope this helps.
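If several pods are stuck in UnexpectedAdmissionError at once, deleting them by phase can be quicker than replacing them one by one. A sketch, assuming such pods end up in the Failed phase:
kubectl delete pods --namespace kube-system --field-selector=status.phase=Failed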

Output of "helm list --all" is empty

I have deployed jupyterhub on my GKE cluster using helm. However, when I run helm list --all (or helm list --failed etc) I see no output.
I can confirm that tiller is running in my cluster:
$ helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
And I can see the tiller pod:
$ kubectl get pods -n kube-system | grep tiller
tiller-deploy-778f674bf5-5jksm 1/1 Running 0 132d
I can also see that my deployment of jupyterhub is running using kubectl get pods -n jhub.
How can I determine why the output of helm list is empty?
I had the same issue where helm list showed empty output.
In case anyone lands on this page looking for a solution, check below.
Source (similar): https://github.com/helm/helm/issues/7146
In short: you need to specify the namespace when listing, or --all-namespaces would work as well:
helm list --all-namespaces
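For example, scoped to the namespace the chart was installed into (jhub in the question), a listing along these lines should show the release if it is there, assuming the namespaced listing behaviour the linked issue describes:
helm list --namespace jhub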
I have a strong feeling you are missing some permissions. This is a GKE cluster, so RBAC is enabled.
The standard practice is to first create a dedicated service account in the appropriate namespace. For example's sake, let's say kube-system:
kubectl create serviceaccount tiller --namespace kube-system
Then you need to give appropriate permissions to this service account.
FOR TESTING / NON-SECURE!!!
Let's allow this service account to run with super-user privileges, i.e. as cluster-admin:
kubectl create clusterrolebinding tiller-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
FOR PRODUCTION / SECURE
Create a Role that gives the minimum privileges Tiller needs to run and associate it with the tiller service account using a RoleBinding.
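A rough sketch of that scoped setup with kubectl create (the names tiller-manager and tiller-binding, and the resource list, are only examples; the exact resources depend on what your charts install):
kubectl create role tiller-manager --namespace kube-system \
  --verb=get,list,watch,create,update,patch,delete \
  --resource=deployments,replicasets,pods,services,configmaps,secrets
kubectl create rolebinding tiller-binding --namespace kube-system \
  --role=tiller-manager --serviceaccount=kube-system:tiller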
Then go ahead and initialize tiller with the associated serviceAccount.
helm init --service-account tiller

Helm: Error: no available release name found

I am getting a couple of errors with Helm that I cannot find explanations for elsewhere. The two errors are below.
Error: no available release name found
Error: the server does not allow access to the requested resource (get configmaps)
Further details of the two errors are in the code block further below.
I have installed a Kubernetes cluster on Ubuntu 16.04. I have a Master (K8SMST01) and two nodes (K8SN01 & K8SN02).
This was created using kubeadm using Weave network for 1.6+.
Everything seems to run perfectly well as far as Deployments, Services, Pods, etc... DNS seems to work fine, meaning pods can access services using the DNS name (myservicename.default).
Using "helm create" and "helm search" work, but interacting with the tiller deployment do not seem to work. Tiller is installed and running according to the Helm install documentation.
root@K8SMST01:/home/blah/charts# helm version
Client: &version.Version{SemVer:"v2.3.0",
GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
root@K8SMST01:/home/blah/charts# helm install ./mychart
Error: no available release name found
root@K8SMST01:/home/blah/charts# helm ls
Error: the server does not allow access to the requested resource (get configmaps)
Here are the running pods:
root@K8SMST01:/home/blah/charts# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
etcd-k8smst01 1/1 Running 4 1d 10.139.75.19 k8smst01
kube-apiserver-k8smst01 1/1 Running 3 19h 10.139.75.19 k8smst01
kube-controller-manager-k8smst01 1/1 Running 2 1d 10.139.75.19 k8smst01
kube-dns-3913472980-dm661 3/3 Running 6 1d 10.32.0.2 k8smst01
kube-proxy-56nzd 1/1 Running 2 1d 10.139.75.19 k8smst01
kube-proxy-7hflb 1/1 Running 1 1d 10.139.75.20 k8sn01
kube-proxy-nbc4c 1/1 Running 1 1d 10.139.75.21 k8sn02
kube-scheduler-k8smst01 1/1 Running 3 1d 10.139.75.19 k8smst01
tiller-deploy-1172528075-x3d82 1/1 Running 0 22m 10.44.0.3 k8sn01
weave-net-45335 2/2 Running 2 1d 10.139.75.21 k8sn02
weave-net-7j45p 2/2 Running 2 1d 10.139.75.20 k8sn01
weave-net-h279l 2/2 Running 5 1d 10.139.75.19 k8smst01
The solution given by kujenga from the GitHub issue worked without any other modifications:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
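To verify that the patch took effect, you can check the service account on the Tiller deployment's pod template (a quick check; you should see serviceAccount: tiller in the output):
kubectl get deploy tiller-deploy --namespace kube-system -o yaml | grep -i serviceaccount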
I think it's an RBAC issue. It seems that Helm isn't ready for 1.6.1's RBAC.
There is an issue open for this on Helm's GitHub:
https://github.com/kubernetes/helm/issues/2224
"When installing a cluster for the first time using kubeadm v1.6.1,
the initialization defaults to setting up RBAC controlled access,
which messes with permissions needed by Tiller to do installations,
scan for installed components, and so on. helm init works without
issue, but helm list, helm install, and so on all do not work, citing
some missing permission or another."
A temporary workaround has been suggested:
"We "disable" RBAC using the command kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts"
But I cannot speak for its validity. The good news is that this is a known issue and work is being done to fix it. Hope this helps.
I had the same issue with a kubeadm setup on CentOS 7.
Helm doesn't create a service account when you run "helm init", and the default one doesn't have permission to read the ConfigMaps, so it fails when checking whether the deployment name it wants to use is unique.
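You can confirm that permission gap directly with kubectl auth can-i before applying the binding below; a quick sketch that impersonates the default service account in kube-system (expect it to print "no" until a suitable binding exists):
kubectl auth can-i get configmaps --namespace kube-system --as=system:serviceaccount:kube-system:default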
This got me past it:
kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
But that gives the default account a lot of power; I just did this so I could get on with my work. Helm needs to add the creation of its own service account to the "helm init" code.
All add-ons in Kubernetes use the "default" service account, so Helm also runs with the "default" service account. You should grant it permissions by assigning role bindings to it.
For read-only permissions:
kubectl create rolebinding default-view --clusterrole=view --serviceaccount=kube-system:default --namespace=kube-system
For admin access, e.g. to install packages:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
You can also install the Tiller server in a different namespace using the command below.
First create the namespace.
Create the service account for that namespace.
Install Tiller in that namespace using the following command:
helm init --tiller-namespace test-namespace
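Put together, a sketch of those steps with the example namespace test-namespace used above (the service account name tiller here is just an example):
kubectl create namespace test-namespace
kubectl create serviceaccount tiller --namespace test-namespace
helm init --service-account tiller --tiller-namespace test-namespace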
This solution has worked for me: https://github.com/helm/helm/issues/3055#issuecomment-397296485
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller --upgrade
$ helm repo update
$ helm install stable/redis --version 3.3.5
But after that, something changed; I have to add the --insecure-skip-tls-verify=true flag to my kubectl commands! I don't know how to fix that, given that I am interacting with a gcloud container cluster.
Per https://github.com/kubernetes/helm/issues/2224#issuecomment-356344286, the following commands resolved the error for me too:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Per https://github.com/kubernetes/helm/issues/3055
helm init --service-account default
This worked for me when the RBAC (serviceaccount) commands didn't.
It's an RBAC issue. You need a service account with the cluster-admin role, and you should pass this service account during Helm initialization.
For example, if you have created a service account with the name tiller, your helm command would look like the following:
helm init --service-account=tiller
I followed this blog post to resolve the issue: https://scriptcrunch.com/helm-error-no-available-release/
Check the logs of your Tiller container:
kubectl logs tiller-deploy-XXXX --namespace=kube-system
If you find something like this:
Error: 'dial tcp 10.44.0.16:3000: connect: no route to host'
then it is probably a firewall/iptables issue, as described here; the solution is to remove some rules:
sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited