When I run any kubectl command, I get the following WARNING:
W0517 14:33:54.147340 46871 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
I have followed the instructions in the link several times, but the WARNING keeps appearing, making kubectl output uncomfortable to read.
OS:
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04 LTS"
kubectl version:
Client Version: v1.24.0
Kustomize Version: v4.5.4
gke-gcloud-auth-plugin:
Kubernetes v1.23.0-alpha+66064c62c6c23110c7a93faca5fba668018df732
gcloud version:
Google Cloud SDK 385.0.0
alpha 2022.05.06
beta 2022.05.06
bq 2.0.74
bundled-python3-unix 3.9.12
core 2022.05.06
gsutil 5.10
I "log in" with:
gcloud init
and then:
gcloud container clusters get-credentials cluster_name --region my-region
finally:
myuser@mymachine:/$ k get pods -n madeupns
W0517 14:50:10.570103 50345 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
No resources found in madeupns namespace.
How can I remove the WARNING or fix the problem?
Removing my .kube/config and re-running get-credentials didn't work.
I fixed this problem by adding the correct export to .bashrc:
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
After sourcing .bashrc with . ~/.bashrc and reloading the cluster config with:
gcloud container clusters get-credentials clustername
the warning disappeared:
user@laptop:/$ k get svc -A
NAMESPACE     NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP
kube-system   default-http-backend   NodePort    10.10.13.157   <none>
kube-system   kube-dns               ClusterIP   10.10.0.10     <none>
kube-system   kube-dns-upstream      ClusterIP   10.10.13.92    <none>
kube-system   metrics-server         ClusterIP   10.10.2.191    <none>
I got a similar issue while connecting to a fresh Kubernetes cluster running version v1.22.10-gke.600:
gcloud container clusters get-credentials my-cluster --zone europe-west6-b --project project
and got the error below; it seems this has now become a hard error for newer versions:
Fetching cluster endpoint and auth data.
CRITICAL: ACTION REQUIRED: gke-gcloud-auth-plugin, which is needed for continued use of kubectl, was not found or is not executable. Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
The fix that worked for me:
gcloud components install gke-gcloud-auth-plugin
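To confirm the plugin is actually installed and on your PATH before re-fetching credentials, a quick check like the sketch below should work (the --version flag is what the GKE migration docs suggest for verification; treat the exact output as version-dependent):
# verify the plugin binary is found, then refresh the kubeconfig entry
gke-gcloud-auth-plugin --version
gcloud container clusters get-credentials my-cluster --zone europe-west6-b --project project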
You need to do the following things to avoid this warning message now and to avoid errors in the future.
Add the correct export to .bashrc. I am using .zshrc instead of .bashrc, so I added the export to .zshrc:
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
Reload .bashrc
source ~/.bashrc
Update gcloud to the latest version.
gcloud components update
Run the following command, replacing CLUSTER_NAME with the name of your cluster. This forces the kubeconfig for this cluster to be updated to the client-go credential plugin configuration.
gcloud container clusters get-credentials CLUSTER_NAME
Check the kubeconfig file by entering the following command. You should now see the change (gke-gcloud-auth-plugin) in the users section of the kubeconfig file in your home directory:
cat ~/.kube/config
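For reference, after this change the users entry in that file should contain an exec block that invokes the plugin. A rough way to spot it (the fields shown in the comments are based on the current client-go credential plugin format and may differ slightly between gcloud versions):
# look for the exec-based credential plugin in the kubeconfig
grep -A5 'exec:' ~/.kube/config
# expected, roughly:
#   exec:
#     apiVersion: client.authentication.k8s.io/v1beta1
#     command: gke-gcloud-auth-plugin
#     provideClusterInfo: true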
The reason behind this:
Starting with v1.26, kubectl will no longer ship a built-in authentication mechanism for GKE, so GKE users will need to download and use a separate authentication plugin to generate GKE-specific tokens. To get more details, please read here.
Related
I installed minikube on Windows 10. I am able to start minikube:
C:\WINDOWS\system32>minikube start
* minikube v1.15.1 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
* Using the hyperv driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing hyperv VM for "minikube" ...
* Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
But there is a warning in the above output (second-to-last line) that says:
kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
After that I also executed this command: minikube kubectl -- get pods -A
I still get the error below when trying kubectl:
C:\WINDOWS\system32>kubectl
'kubectl' is not recognized as an internal or external command,
operable program or batch file.
Minikube installs kubectl inside of itself.
So to use the kubectl which you installed via minikube, you have to prepend the command arguments with minikube kubectl --. For example:
# the same as `kubectl version --client`
minikube kubectl -- version --client
For convenience, you may want to add an alias in your shell configuration, as shown below.
Source: https://minikube.sigs.k8s.io/docs/handbook/kubectl/
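For example, a minimal alias sketch for a bash-like shell (this follows the pattern in the minikube handbook; zsh users would do the same in ~/.zshrc, and the doskey approach further below covers Windows cmd):
# in ~/.bashrc; the quotes keep the trailing -- intact
alias kubectl="minikube kubectl --"
# afterwards, plain kubectl invocations are forwarded to minikube's bundled kubectl
kubectl get pods -A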
kubectl is wrapped by minikube. Don't forget to add -- after minikube kubectl:
minikube kubectl -- describe pod kube-scheduler-minikube --namespace kube-system
minikube kubectl -- get pods --namespace kube-system
You have installed minikube, but kubectl is not part of the minikube package.
When you run minikube start, it tells you that kubectl is not present and that, if you need it, you can use minikube kubectl instead.
This is also mentioned here
If you already have kubectl installed, you can now use it to access your shiny new cluster
It means that kubectl might not be present on your machine, or that it is not added to your PATH.
You can follow these instructions to install it, either by downloading the executable or by using curl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.0/bin/windows/amd64/kubectl.exe
After that, add the binary to your PATH.
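One hedged way to do that from a Command Prompt, assuming you placed kubectl.exe in C:\kubectl (the folder name is just an example; editing the environment variables via the GUI works as well and avoids setx's length truncation):
rem appends the folder to the current user's PATH; open a new terminal afterwards
setx PATH "%PATH%;C:\kubectl"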
You can run kubectl version --client to ensure the correct version is downloaded.
Use doskey.exe to create an alias for kubectl.
Example:
doskey kubectl="%PROGRAMFILES%\Kubernetes\Minikube\minikube.exe" kubectl -- $*
You might need to update the path if you've installed minikube somewhere else.
I am trying to find the kube-proxy logs on minikube, but I can't seem to locate them.
sudo cat: /var/log/kubeproxy.log: No such file or directory
A more generic way (besides what hoque described), which you can use on any Kubernetes cluster, is to check the logs using kubectl:
kubectl logs kube-proxy-s8lcb -n kube-system
This solution allows you to check logs on any K8s cluster, even if you don't have access to the nodes.
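Since the kube-proxy pod name differs per cluster, you can look it up by label first; a sketch assuming the standard k8s-app=kube-proxy label that kubeadm and minikube apply:
# find the kube-proxy pod(s), then fetch their logs by the same label
kubectl get pods -n kube-system -l k8s-app=kube-proxy
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=50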
Pod logs are located in /var/log/pods/.
Run
$ minikube ssh
$ ls /var/log/pods/
default_dapi-test-pod_1566526c-1051-4102-a23b-13b73b1dd904
kube-system_coredns-5d4dd4b4db-7ttnf_59d7b01c-4d7d-40f9-8d6a-ac62b1fa018e
kube-system_coredns-5d4dd4b4db-n8d5t_6aa36b9a-6539-4ef2-b163-c7e713861fa2
kube-system_etcd-minikube_188c8af9ff66b5060895a385b1bb50c2
kube-system_kube-addon-manager-minikube_f7d3bd9bbbbdd48d97a3437e231fff24
kube-system_kube-apiserver-minikube_b15fea5ed20174140af5049ecdd1c59e
kube-system_kube-controller-manager-minikube_d8cdb4170ab1aac172022591866bd7eb
kube-system_kube-proxy-qc4xl_30a6100a-db70-42c1-bbd5-4a818379a004
kube-system_kube-scheduler-minikube_14ff2730e74c595cd255e47190f474fd
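To actually read the kube-proxy log from inside the VM, something like the sketch below should work; the path layout (namespace_pod_uid/container/N.log) is inferred from the listing above, so adjust the glob if your layout differs:
$ minikube ssh
$ sudo tail -n 50 /var/log/pods/kube-system_kube-proxy-*/kube-proxy/*.log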
I want to spin up a single installer pod with helm install that, once running, will apply some logic and install other applications into my cluster using helm install.
I'm aware of Helm dependencies, but I want to run some business logic with the installations, and I'd rather do it in the installer pod than on the host triggering the whole installation process.
I found suggestions on using the Kubernetes REST API from inside a pod, but Helm requires kubectl to be installed and configured.
Any ideas?
It seems this was a lot easier than I thought...
On a simple pod running Debian, I just installed kubectl, and with the default service account's secret already mounted, kubectl was already configured to talk to the cluster's API.
Note that the configured default namespace is the one that my installer pod is deployed to.
Verified with
$ kubectl cluster-info
$ kubectl get ns
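The in-cluster configuration that kubectl picks up here comes from the service account credentials mounted into every pod at a well-known path; a quick check from inside the pod (the paths below are the Kubernetes defaults):
# the token, CA certificate and namespace come from the mounted service account secret
ls /var/run/secrets/kubernetes.io/serviceaccount/
# -> ca.crt  namespace  token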
I then installed Helm, which was already using kubectl to access the cluster to install Tiller.
Verified with
$ helm version
$ helm init
I installed a test chart
$ helm install --name my-release stable/wordpress
It works!!
I hope this helps
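For reference, a rough sketch of the steps described above, run inside the Debian pod (the kubectl version and download URL are only examples; pin whatever matches your cluster):
# install curl and a kubectl binary; the mounted service account is picked up automatically
apt-get update && apt-get install -y curl
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.0/bin/linux/amd64/kubectl
chmod +x kubectl && mv kubectl /usr/local/bin/
kubectl get ns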
You could add kubectl to your installer pod.
In-cluster credentials can be provided via the service account's "default-token" secret: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
I have deployed a Kubernetes cluster. The issue I have is that the dashboard is not accessible from an external desktop system.
Following is my setup.
Two VMs with the cluster deployed: one master, one node.
The dashboard is running without any issue, and kube-dns is also working as expected.
The Kubernetes version is 1.7.
Issue: when trying to access the dashboard externally through kubectl proxy, I get an unauthorized response.
This is with RBAC roles and rolebindings enabled.
How do I configure the cluster for HTTP browser access to the dashboard from an external system?
Any hint/suggestions are most welcome.
kubectl proxy is not working on versions > 1.7.
Try this:
Copy the ~/.kube/config file to your desktop, then run kubectl like this:
export POD_NAME=$(kubectl --kubeconfig=config get pods -n kube-system -l "app=kubernetes-dashboard,release=kubernetes-dashboard" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:9090/
kubectl --kubeconfig=config -n kube-system port-forward $POD_NAME 9090:9090
Then access the ui like this: http://127.0.0.1:9090
See if this helps.
If kubectl proxy gives the Unauthorized error, there can be two reasons:
Your user cert doesn't have the appropriate permissions. This is unlikely since you successfully deployed kube-dns and the dashboard.
kubelet authn/authz is enabled and it's not set up correctly. See the answer to my question.
I am getting a couple of errors with Helm that I cannot find explanations for elsewhere. The two errors are below.
Error: no available release name found
Error: the server does not allow access to the requested resource (get configmaps)
Further details of the two errors are in the code block further below.
I have installed a Kubernetes cluster on Ubuntu 16.04. I have a Master (K8SMST01) and two nodes (K8SN01 & K8SN02).
This was created using kubeadm with the Weave network for 1.6+.
Everything seems to run perfectly well as far as Deployments, Services, Pods, etc... DNS seems to work fine, meaning pods can access services using the DNS name (myservicename.default).
"helm create" and "helm search" work, but interacting with the Tiller deployment does not seem to work. Tiller is installed and running according to the Helm install documentation.
root@K8SMST01:/home/blah/charts# helm version
Client: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
root@K8SMST01:/home/blah/charts# helm install ./mychart
Error: no available release name found
root@K8SMST01:/home/blah/charts# helm ls
Error: the server does not allow access to the requested resource (get configmaps)
Here are the running pods:
root@K8SMST01:/home/blah/charts# kubectl get pods -n kube-system -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE
etcd-k8smst01                      1/1     Running   4          1d    10.139.75.19   k8smst01
kube-apiserver-k8smst01            1/1     Running   3          19h   10.139.75.19   k8smst01
kube-controller-manager-k8smst01   1/1     Running   2          1d    10.139.75.19   k8smst01
kube-dns-3913472980-dm661          3/3     Running   6          1d    10.32.0.2      k8smst01
kube-proxy-56nzd                   1/1     Running   2          1d    10.139.75.19   k8smst01
kube-proxy-7hflb                   1/1     Running   1          1d    10.139.75.20   k8sn01
kube-proxy-nbc4c                   1/1     Running   1          1d    10.139.75.21   k8sn02
kube-scheduler-k8smst01            1/1     Running   3          1d    10.139.75.19   k8smst01
tiller-deploy-1172528075-x3d82     1/1     Running   0          22m   10.44.0.3      k8sn01
weave-net-45335                    2/2     Running   2          1d    10.139.75.21   k8sn02
weave-net-7j45p                    2/2     Running   2          1d    10.139.75.20   k8sn01
weave-net-h279l                    2/2     Running   5          1d    10.139.75.19   k8smst01
The solution given by kujenga from the GitHub issue worked without any other modifications:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
I think it's an RBAC issue. It seems that helm isn't ready for 1.6.1's RBAC.
There is an issue open for this on Helm's GitHub:
https://github.com/kubernetes/helm/issues/2224
"When installing a cluster for the first time using kubeadm v1.6.1, the initialization defaults to setting up RBAC controlled access, which messes with permissions needed by Tiller to do installations, scan for installed components, and so on. helm init works without issue, but helm list, helm install, and so on all do not work, citing some missing permission or another."
A temporary workaround has been suggested:
"We "disable" RBAC using the command kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts;"
But I cannot speak for its validity. The good news is that this is a known issue and work is being done to fix it. Hope this helps.
I had the same issue with a kubeadm setup on CentOS 7.
Helm doesn't create a service account when you run "helm init", and the default one doesn't have permission to read from the configmaps, so it fails when checking whether the deployment name it wants to use is unique.
This got me past it:
kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
But that gives the default account tons of power; I just did this so I could get on with my work. Helm needs to add the creation of its own service account to the "helm init" code.
All addons in Kubernetes use the "default" service account, so Helm also runs with the "default" service account. You should grant it permissions by assigning rolebindings to it.
For read-only permissions:
kubectl create rolebinding default-view --clusterrole=view --serviceaccount=kube-system:default --namespace=kube-system
For admin access (e.g. to install packages):
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
You can also install the Tiller server in a different namespace using the commands below:
First create the namespace.
Then create the service account for that namespace.
Finally, install Tiller into that namespace using the command below (a sketch of these steps follows).
helm init --tiller-namespace test-namespace
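A rough sketch of those steps, using test-namespace and a tiller service account purely as example names (you will still need to grant that account the RBAC permissions described in the other answers):
kubectl create namespace test-namespace
kubectl create serviceaccount tiller --namespace test-namespace
helm init --tiller-namespace test-namespace --service-account tiller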
This solution has worked for me: https://github.com/helm/helm/issues/3055#issuecomment-397296485
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller --upgrade
$ helm repo update
$ helm install stable/redis --version 3.3.5
But after that, something changed; I have to add the --insecure-skip-tls-verify=true flag to my kubectl commands! I don't know how to fix that, given that I am interacting with a gcloud container cluster.
Per https://github.com/kubernetes/helm/issues/2224#issuecomment-356344286, the following commands resolved the error for me too:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Per https://github.com/kubernetes/helm/issues/3055:
helm init --service-account default
This worked for me when the RBAC (serviceaccount) commands didn't.
It's an RBAC issue. You need to have a service account with the cluster-admin role, and you should pass this service account during Helm initialization.
For example, if you have created a service account with the name tiller, your helm command would look like the following:
helm init --service-account=tiller
I followed this blog post to resolve the issue: https://scriptcrunch.com/helm-error-no-available-release/
Check the logs for your Tiller container:
kubectl logs tiller-deploy-XXXX --namespace=kube-system
If you find something like this:
Error: 'dial tcp 10.44.0.16:3000: connect: no route to host'
Then it's probably a firewall/iptables issue, as described here; the solution is to remove some rules:
sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited