Authentication problem when installing something - kubernetes-helm

The command fails due to credentials problems, but when you test with kubectl get nodes everything looks fine.
Output of helm install:
⋊> ~/t/mtltech on master ⨯ helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true 00:31:41
Error: the server has asked for the client to provide credentials
Output of kubectl get nodes:
⋊> ~/t/mtltech on master ⨯ kubectl get nodes 00:37:41
NAME STATUS ROLES AGE VERSION
gke-mtltech-default-pool-977ee0b2-5lmi Ready <none> 7h v1.11.7-gke.4
gke-mtltech-default-pool-977ee0b2-hi4v Ready <none> 7h v1.11.7-gke.4
gke-mtltech-default-pool-977ee0b2-mjiv Ready <none> 7h v1.11.7-gke.4
Output of helm version:
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.7-gke.4", GitCommit:"618716cbb236fb7ca9cabd822b5947e298ad09f7", GitTreeState:"clean", BuildDate:"2019-02-05T19:22:29Z", GoVersion:"go1.10.7b4", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider: Google Cloud
I've tried to reset it several times with rm -rf ~/.helm && helm init --service-account tiller but it doesn't change anything.
Any idea?
Thanks.

The problem here is the Tiller. I do not know how you deployed Helm and Tiller, but the mistake was there.
I used this chart and everything worked correctly; then I deleted my service account and cluster role binding and got the same error. Deleting only the cluster role binding gives this error:
Error: release nginx-ingress failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:tiller" cannot get namespaces in the namespace "default"
So the error is due to a missing Service Account, or both.
Solution for this:
rm -rf ~/.helm
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account=tiller
kubectl get pods -n kube-system
Check the full name of the tiller pod:
kubectl delete pod -n kube-system tiller-deploy-xxx
Wait until the tiller pod is redeployed, then install your helm chart:
helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true

Related

Kubernetes deployment Error: unknown flag: --replicas issue

While creating a deployment using command
kubectl create deploy nginx --image=nginx:1.7.8 --replicas=2 --port=80
I am getting error Error: unknown flag: --replicas
controlplane $ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
controlplane $ kubectl create deploy nginx --image=nginx:1.7.8 --replicas=2 --port=80
Error: unknown flag: --replicas
See 'kubectl create deployment --help' for usage.
Could anyone please help me with the reason for this as this command is working on other Kubernetes clusters?
You may try to put a blank character between -- and the commands
For example
kubectl create deploy nginx --image=nginx:1.7.8 -- replicas=2
It works for me.
It looks like the --replicas and --port flags were added in version 1.19, based on the v1-19 release notes, and that's why you are seeing the error.
So, you need at least version 1.19 to be able to use the replicas and port flags as part of the kubectl create deployment command.
You can however use the kubectl scale/expose command after creating the deployment.
Relevant PR links for replicas and port.
If you are trying to update the replica parameter in an Azure release pipeline inside the helm upgrade command, then refer to the following link:
Overriding Helm chart values
Here it explains that you can override the parameters inside the values.yaml file with the set command, like this:
helm upgrade $(RELEASE_ENV) --install \
infravc/manifests/helm/web \
--set namespace=$(NAMESPACE) \
--set replicas=$(replicas) \
--set replicasMax=$(replicasMax) \
--set ingress.envSuffix=$(envSuffix) \
--set ENV.SECRET=$(appSecretNonprod) \
--set ENV.CLIENT_ID=$(clientIdNonprod)

Helm 3: x509 error when connecting to local Kubernetes

I'm a perfect noob with K8s. I installed microk8s and Helm using snap to experiment locally. I wonder whether my current issue comes from the use of snap (purpose of which is encapsulation, from what I understood)
Environment
Ubuntu 20.04LTS
helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
kubectl version
Client Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.4-1+6f17be3f1fd54a", GitCommit:"6f17be3f1fd54a88681869d1cf8bedd5a2174504", GitTreeState:"clean", BuildDate:"2020-06-23T21:16:24Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.4-1+6f17be3f1fd54a", GitCommit:"6f17be3f1fd54a88681869d1cf8bedd5a2174504", GitTreeState:"clean", BuildDate:"2020-06-23T21:17:52Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"linux/amd64"}
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* microk8s microk8s-cluster admin
Post install set up
microk8s enable helm3
Kubernetes is up and running
kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:16443
CoreDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Problem while connecting helm to microk8s
helm ls --kube-token ~/token --kube-apiserver https://127.0.0.1:16443
Error: Kubernetes cluster unreachable: Get https://127.0.0.1:16443/version?timeout=32s: x509: certificate signed by unknown authority
How can I tell helm
to trust microk8s certs or
to ignore this verification step
From what I read, I may overcome this issue by pointing to kube's config using --kubeconfig.
helm ls --kube-token ~/token --kube-apiserver https://127.0.0.1:16443 --kubeconfig /path/to/kubernetes/config
In the context of microk8s installed with snap, I am not quite sure what this conf file is nor where to find it.
/snap/microk8s/1503 ?
/var/snap/microk8s/1503 ?
Helm looks for kubeconfig at this path $HOME/.kube/config.
Please run this command
microk8s.kubectl config view --raw > $HOME/.kube/config
This will save the config at the required path in your directory and shall work.
Reference Link here
Please try exporting the kubeconfig file using the following command:
export KUBECONFIG=/var/snap/microk8s/current/credentials/client.config
If you happen to be using WSL with docker desktop with k8s running in docker desktop but helm running in WSL a very similar command as provided by Tarun will also work.
Assuming you are running the Windows version of kubectl
➜ which kubectl.exe
➜ /mnt/c/Program Files/Docker/Docker/resources/bin/kubectl.exe
➜ which kubectl
➜ kubectl: aliased to /mnt/c/Program\ Files/Docker/Docker/resources/bin/kubectl.exe
➜ kubectl config view --raw > $HOME/.kube/config

Install helm 2.13.0 on Minikube server (1.6.2) could not find tiller

Hey, I'm installing a fresh minikube and trying to init helm on it, not in 3.x.x but in version 2.13.0.
$ minikube start
😄 minikube v1.6.2 on Darwin 10.14.6
✨ Automatically selected the 'hyperkit' driver (alternates: [virtualbox])
🔥 Creating hyperkit VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
🚜 Pulling images ...
🚀 Launching Kubernetes ...
⌛ Waiting for cluster to come online ...
🏄 Done! kubectl is now configured to use "minikube"
$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/<user>/.helm.
Error: error installing: the server could not find the requested resource
$ helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
deployment.apps/tiller-deploy created
service/tiller-deploy created
$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/<user>/.helm.
Error: error installing: the server could not find the requested resource
$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find tiller
I tried to do the same in some other random ns, with no result:
$ kubectl create ns deployment-stuff
namespace/deployment-stuff created
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin \
--user=$(gcloud config get-value account)
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
$ kubectl create serviceaccount tiller --namespace deployment-stuff
kubectl create clusterrolebinding tiller-admin-binding --clusterrole=cluster-admin \
--serviceaccount=deployment-stuff:tiller
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller-admin-binding created
$ helm init --service-account=tiller --tiller-namespace=deployment-stuff
Creating /Users/<user>/.helm
Creating /Users/<user>/.helm/repository
Creating /Users/<user>/.helm/repository/cache
Creating /Users/<user>/.helm/repository/local
Creating /Users/<user>/.helm/plugins
Creating /Users/<user>/.helm/starters
Creating /Users/<user>/.helm/cache/archive
Creating /Users/<user>/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /Users/<user>/.helm.
Error: error installing: the server could not find the requested resource
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm list
Error: could not find tiller
$ helm list --tiller-namespace=kube-system
Error: could not find tiller
$ helm list --tiller-namespace=deployment-stuff
Error: could not find tiller
Same error everywhere: Error: error installing: the server could not find the requested resource. Any ideas how to approach it?
I installed helm with those commands and it works fine with my gcp clusters; helm list returns the full list of helms.
wget -c https://get.helm.sh/helm-v2.13.0-darwin-amd64.tar.gz
tar -zxvf helm-v2.13.0-darwin-amd64.tar.gz
mv darwin-amd64/helm /usr/local/bin/helm
tbh I have no idea what's going on, sometimes it works fine on minikube sometimes I get these errors.
This can be fixed by deleting the tiller deployment and service and rerunning the helm init --override command after the first helm init.
So after running the commands you listed:
$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$ helm init --service-account tiller
And then finding out that tiller could not be found.
$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find tiller
Run the following commands:
1. $ kubectl delete service tiller-deploy -n kube-system
2. $ kubectl delete deployment tiller-deploy -n kube-system
3. $ helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
After that you can verify if it worked with:
$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find a ready tiller pod
This one needs a little more time; give it a few seconds.
$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Tell me if it worked.
Check the logs of the failing tiller pod with:
kubectl -n kube-system describe pod tiller-deploy
You'll see the following error:
Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.15.1": rpc error: code = Unknown desc = Error response from daemon: Head "https://gcr.io/v2/kubernetes-helm/tiller/manifests/v2.15.1": unknown: Project 'project:kubernetes-helm' not found or deleted.
The reason is they changed the image location, so the old version of helm couldn't pull it.
Pull the image manually by:
docker pull ghcr.io/helm/tiller:v2.15.1
Tag the pulled image with the version that helm needed in the first place:
docker tag ghcr.io/helm/tiller:v2.15.1 gcr.io/kubernetes-helm/tiller:v2.15.1
Re-init tiller (helm server):
helm init
and you'll see the tiller deploy is running.

Deprecation errors using run to test an nginx container in Kubernetes v1.3.6

I'm getting deprecation errors while trying to run a couple of nginx pods
bash-4.4$ kubectl run nginx --image=nginx --port=80 --replicas=3
WARNING: New generator "deployment/apps.v1beta1" specified, but it isn't available. Falling back to "deployment/v1beta1".
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
error: no matches for kind "Deployment" in version "apps/v1beta1"
While attempting to add the apps generator I also hit a snag...
bash-4.4$ kubectl run nginx --image=nginx --port=80 --replicas=3 --generator=deployment/apps.v1beta1
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
error: no matches for kind "Deployment" in version "apps/v1beta1"
Not really certain what is going on, I'm just trying a simple hello world, here's the playbook that deploys and exposes
---
#######################################
# Deploy and expose Nginx service
#######################################
# Expects kubectl being configured on the local machine
# using kubectl.yml playbook
- hosts: localhost
  connection: local
  tasks:
    - name: Launch 3 nginx pods
      command: "kubectl run nginx --image=nginx --port=80 --replicas=3"
      # command: "kubectl create deployment nginx --image=nginx --generator=deployment-basic/v1beta1"
    - name: Expose nginx
      command: "kubectl expose deployment nginx --type NodePort"
    - name: Get exposed port
      command: "kubectl get svc nginx --output=jsonpath='{range .spec.ports[0]}{.nodePort}'"
      register: result
    - set_fact:
        node_port: "{{ result.stdout }}"
    - debug: msg="Exposed port {{ node_port }}"
And here is some background on the cluster and versions etc
bash-4.4$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-30T21:39:38Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.6", GitCommit:"ae4550cc9c89a593bcda6678df201db1b208133b", GitTreeState:"clean", BuildDate:"2016-08-26T18:06:06Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
bash-4.4$ kubectl get componentstatus
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
Help would be greatly appreciated, I'm holding up some guys at a hackathon ;)
You have a very large mismatch between the kubectl and Kubernetes cluster versions (1.12.2 vs 1.3.6). I recommend you download kubectl 1.3.6. If you are using Linux:
$ wget https://dl.k8s.io/v1.3.6/kubernetes-client-linux-amd64.tar.gz
$ tar zxvf kubernetes-client-linux-amd64.tar.gz
Or MacOS:
$ wget https://dl.k8s.io/v1.3.6/kubernetes-client-darwin-amd64.tar.gz
$ tar zxvf kubernetes-client-darwin-amd64.tar.gz

Run kubectl inside a cluster

I have a Kubernetes 1.10 cluster up and running. Using the following command, I create a container running bash inside the cluster:
kubectl run tmp-shell --rm -i --tty --image centos -- /bin/bash
I download the correct version of kubectl inside the running container, make it executable and try to run
./kubectl get pods
but get the following error:
Error from server (Forbidden): pods is forbidden:
User "system:serviceaccount:default:default" cannot
list pods in the namespace "default"
Does this mean, that kubectl detected it is running inside a cluster and is automatically connecting to that one? How do I allow the serviceaccount to list the pods? My final goal will be to run helm inside the container. According to the docs I found, this should work fine as soon as kubectl is working fine.
Does this mean, that kubectl detected it is running inside a cluster and is automatically connecting to that one?
Yes, it used the KUBERNETES_SERVICE_PORT and KUBERNETES_SERVICE_HOST envvars to locate the API server, and the credential in the auto-injected /var/run/secrets/kubernetes.io/serviceaccount/token file to authenticate itself.
How do I allow the serviceaccount to list the pods?
That depends on the authorization mode you are using. If you are using RBAC (which is typical), you can grant permissions to that service account by creating RoleBinding or ClusterRoleBinding objects.
See https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions for more information.
I believe helm requires extensive permissions (essentially superuser on the cluster). The first step would be to determine what service account helm was running with (check the serviceAccountName in the helm pods). Then, to grant superuser permissions to that service account, run:
kubectl create clusterrolebinding helm-superuser \
--clusterrole=cluster-admin \
--serviceaccount=$SERVICEACCOUNT_NAMESPACE:$SERVICEACCOUNT_NAME
True, kubectl will try to get everything it needs to authenticate with the master.
But with ClusterRole and "cluster-admin" you'll give unlimited permissions across all namespaces for that pod, and that sounds a bit risky.
For me, it was a bit annoying adding an extra 43MB for the kubectl client in my Kubernetes container, but the alternative was to use one of the SDKs to implement a more basic client. kubectl is easier to authenticate because the client will get the token it needs from /var/run/secrets/kubernetes.io/serviceaccount, plus we can use manifest files if we want. I think for most common Kubernetes setups you shouldn't add any additional environment variables or attach any volume secret; it will just work if you have the right ServiceAccount.
Then you can test if it is working with something like:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n <YOUR_NAMESPACE>
NAME. READY STATUS RESTARTS AGE
pod1-0 1/1 Running 0 6d17h
pod2-0 1/1 Running 0 6d16h
pod3-0 1/1 Running 0 6d17h
pod3-2 1/1 Running 0 67s
or permission denied:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:spinupcontainers" cannot list resource "pods" in API group "" in the namespace "kube-system"
command terminated with exit code 1
Tested on:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
You can check my answer at How to run kubectl commands inside a container? for RoleBinding and RBAC.
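
Added example, not part of the original answers: a shell helper that reads an RBAC "Forbidden" message of the kind quoted above and prints kubectl commands for a namespaced Role and RoleBinding granting only the denied verb on the denied resource, as a narrower alternative to the cluster-admin binding. The function names, the generated role names and the sample error lines are constructed for illustration.

```shell
#!/bin/sh
# Added example (not from the original posts): derive a namespaced Role and
# RoleBinding from an RBAC denial instead of binding cluster-admin.

# Constructed sample inputs in the two message formats shown on this page.
ERR_OLD='Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:default" cannot list pods in the namespace "default"'
ERR_NEW='Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:spinupcontainers" cannot list resource "pods" in API group "" in the namespace "kube-system"'

# field ERROR_LINE SED_PATTERN: print the first capture group, or nothing.
field() { printf '%s\n' "$1" | sed -n "s/$2/\\1/p"; }

# least_priv_rbac ERROR_LINE
# Prints kubectl commands granting only the denied verb on the denied resource
# in the denied namespace. Returns 1 for unparsable input, 2 for cluster scope.
least_priv_rbac() (
  err=$1
  user=$(field "$err" '.*User "system:serviceaccount:\([^"]*\)".*')
  verb=$(field "$err" '.*" cannot \([a-z]*\) .*')
  ns=$(field "$err" '.* in the namespace "\([^"]*\)".*')
  res=$(field "$err" '.* resource "\([^"]*\)".*')        # newer message format
  [ -n "$res" ] || res=$(field "$err" '.*" cannot [a-z]* \([a-z]*\) in the namespace.*')
  group=$(field "$err" '.* in API group "\([^"]*\)".*')  # "" is the core group

  case $user in
    ?*:?*) ;;
    *) echo "no service account subject found" >&2; return 1 ;;
  esac
  sa_ns=${user%%:*}
  sa_name=${user#*:}
  [ -n "$verb" ] && [ -n "$res" ] || { echo "cannot parse verb/resource" >&2; return 1; }

  # The fields end up in shell commands, so accept only Kubernetes name characters.
  case "$sa_ns$sa_name$verb$res$group$ns" in
    *[!a-z0-9.-]*) echo "unexpected characters in parsed fields" >&2; return 1 ;;
  esac
  # A denial without a namespace is cluster-scoped; a namespaced Role cannot
  # grant it, and this helper refuses to fall back to a ClusterRole.
  [ -n "$ns" ] || { echo "cluster-scoped denial: review manually" >&2; return 2; }

  name="${sa_name}-${verb}-${res}"
  [ -z "$group" ] || res="${res}.${group}"
  echo "kubectl create role $name --namespace=$ns --verb=$verb --resource=$res"
  echo "kubectl create rolebinding $name --namespace=$ns --role=$name --serviceaccount=$sa_ns:$sa_name"
  # Verification step: ask the API server whether the subject is now allowed.
  echo "kubectl auth can-i $verb $res --namespace=$ns --as=system:serviceaccount:$sa_ns:$sa_name"
)

least_priv_rbac "$ERR_NEW"
```

Review the printed commands before running them; the last line checks the grant by impersonating the service account, which itself requires impersonation rights for the caller.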