I'm trying to connect to a cluster and I'm getting the following error:
gcloud container clusters get-credentials cluster1 --region europe-west2 --project my-project
Fetching cluster endpoint and auth data.
CRITICAL: ACTION REQUIRED: gke-gcloud-auth-plugin, which is needed for continued use of kubectl, was not found or is not executable.
Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
kubeconfig entry generated for dbcell-cluster.
I have installed Google Cloud SDK 400, kubectl 1.22.12, and gke-gcloud-auth-plugin 0.3.0, and also set up ~/.bashrc with
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gke-gcloud-auth-plugin --version
Kubernetes v1.24.0-alpha+f42d1572e39979f6f7de03bd163f8ec04bc7950d
but when I try to connect to the cluster I always get the same error. Any ideas?
Thanks
The cluster exists in that region, and I also verified the env variable
with
echo $USE_GKE_GCLOUD_AUTH_PLUGIN
True
I installed the gke-gcloud-auth-plugin using gcloud components install... I do not know what more I can check
gcloud components list
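In case it matters, this is roughly how the plugin was installed and how I check that it is on the PATH (a minimal sketch, assuming a bash shell):
gcloud components install gke-gcloud-auth-plugin
which gke-gcloud-auth-plugin
source ~/.bashrc && echo $USE_GKE_GCLOUD_AUTH_PLUGIN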
I solved the same problem by removing my current kubeconfig context for GCP.
Get your context name by running:
kubectl config get-contexts
Delete the context:
kubectl config delete-context CONTEXT_NAME
Reconfigure the credentials:
gcloud container clusters get-credentials CLUSTER_NAME --region REGION --project PROJECT
The warning message should be gone by now.
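If a stale entry still lingers, you can also clear the matching cluster and user entries before re-running get-credentials (a hedged sketch; the entry name below just follows the usual gke_PROJECT_LOCATION_CLUSTER pattern, substitute your own):
kubectl config delete-cluster gke_my-project_europe-west2_cluster1
kubectl config unset users.gke_my-project_europe-west2_cluster1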
We create the cluster using the following command:
kops create cluster --node-count=3 --node-size=c5.2xlarge --master-count=3 --master-size=c5.xlarge --zones=eu-west-1a --name=${KOPS_CLUSTER_NAME} --yes
We are using a kops cluster. We export the kubeconfig using this command:
$ kops export kubecfg --admin --kubeconfig ~/workspace/kubeconfig --state=s3://YOUR-S3-BUCKET-NAME
It works fine for some time, but after a while we start getting the same error again as the TTL for the kubeconfig expires:
error: You must be logged in to the server (Unauthorized)
Is there any way we can get rid of this annoying TTL?
After going through the docs, I found that we can actually pass the validity of the kubeconfig as an argument:
$ kops export kubecfg --admin=87600h0m0s --kubeconfig ~/workspace/kubeconfig --state=s3://<bucket-name> --name=<cluster-name>
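To verify that the freshly exported kubeconfig works, you can point kubectl at it explicitly, for example:
$ export KUBECONFIG=~/workspace/kubeconfig
$ kubectl get nodes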
I'm trying to follow this step-by-step guide to deploy Airflow on Kubernetes (https://github.com/EamonKeane/airflow-GKE-k8sExecutor-helm), but in this part of the execution I run into problems:
Researching the topic, I have not found anything that solves my problem so far. Does anyone have suggestions on what to do?
SQL_ALCHEMY_CONN=postgresql+psycopg2://$AIRFLOW_DB_USER:$AIRFLOW_DB_USER_PASSWORD@$KUBERNETES_POSTGRES_CLOUDSQLPROXY_SERVICE:$KUBERNETES_POSTGRES_CLOUDSQLPROXY_PORT/$AIRFLOW_DB_NAME
echo $SQL_ALCHEMY_CONN > /secrets/airflow/sql_alchemy_conn
# Create the fernet key which is needed to decrypt the database
FERNET_KEY=$(dd if=/dev/urandom bs=32 count=1 2>/dev/null | openssl base64)
echo $FERNET_KEY > /secrets/airflow/fernet-key
kubectl create secret generic airflow \
--from-file=fernet-key=/secrets/airflow/fernet-key \
--from-file=sql_alchemy_conn=/secrets/airflow/sql_alchemy_conn
Unable to connect to the server: error executing access token command
"/google/google-cloud-sdk/bin/gcloud config config-helper --format=json":
err=exit status 1 output= stderr=ERROR: gcloud crashed (BadStatusLine): ''
If you would like to report this issue, please run the following command: gcloud feedback
To check gcloud for common problems, please run the following command: gcloud info --run-diagnostics
I solved this by creating a new cloud shell tab to connect to the cluster:
gcloud container clusters get-credentials testcluster1 --zone=your_zone
Example:
Get the name and location of your cluster:
gcloud container clusters list
then
gcloud container clusters get-credentials demo --zone=us-west1-a
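Once the credentials are fetched in the new tab, a quick sanity check is, for example:
kubectl cluster-info
kubectl get nodes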
I am using Google Cloud Composer and have created a Composer environment. The environment is ready (has a green tick), and now I am trying to set variables used in the DAG Python code using Google Cloud Shell.
command to set variables:
gcloud composer environments run test-environment \
--location us-central1 variables -- \
--set gcp_project xxx-gcp
Exact error message:
ERROR: (gcloud.composer.environments.run) Desired GKE pod not found. If the environment was recently started, please wait and retry.
I tried the following things as part of my investigation, but got the same error each time.
I created a new environment using the UI instead of Cloud Shell commands.
I checked the pods in Kubernetes Engine and all are green; I did not see any issue.
I verified that the Composer API, Billing, Kubernetes, and all other required APIs are enabled.
I have the 'Editor' role assigned.
I added a screenshot; the first time I saw some failures:
Error with exit code 1
The Google troubleshooting guide describes: if the exit code is 1, the container crashed because the application crashed.
This is a side effect of Composer version 1.6.0 if you are using a google-cloud-sdk that is too old, because it now launches pods in namespaces other than default. The error you see is a result of looking for Kubernetes pods in the default namespace and failing to find them.
To fix this, run gcloud components update. If you cannot yet update, a workaround to execute Airflow commands is to manually SSH to a pod yourself and run airflow. To start, obtain GKE cluster credentials:
$ gcloud container clusters get-credentials $COMPOSER_GKE_CLUSTER_NAME
Once you have the credentials, you should find which namespace the pods are running in (which you can also find using Cloud Console):
$ kubectl get namespaces
NAME STATUS AGE
composer-1-6-0-airflow-1-9-0-6f89fdb7 Active 17h
default Active 17h
kube-public Active 17h
kube-system Active 17h
You can then SSH into any scheduler/worker pod, and run commands:
$ kubectl exec \
--namespace=$NAMESPACE \
-it airflow-worker-569bc59df5-x6jhl airflow list_dags -r
You can also open a shell if you prefer:
$ kubectl exec \
--namespace=$NAMESPACE \
-it airflow-worker-569bc59df5-x6jhl bash
airflow@airflow-worker-569bc59df5-x6jhl:~$ airflow list_dags -r
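The worker pod name above is only an example; you can list the pods in that namespace to find yours, e.g.:
$ kubectl get pods --namespace=$NAMESPACE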
The failed airflow-database-init-job jobs are unrelated and will not cause problems in your Composer environment.
The following link https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters talks about setting up a private GKE cluster in a separate custom VPC. The Terraform code that creates the cluster and VPCs is available at https://github.com/rajtmana/gcp-terraform/blob/master/k8s-cluster/main.tf. Cluster creation completed, and I wanted to use some kubectl commands from Google Cloud Shell. I used the following commands:
$ gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2
$ gcloud container clusters update mservice-dev-cluster \
> --region europe-west2 \
> --enable-master-authorized-networks \
> --master-authorized-networks "35.241.216.229/32"
Updating mservice-dev-cluster...done.
ERROR: (gcloud.container.clusters.update) Operation [<Operation
clusterConditions: []
detail: u'Patch failed'
$ gcloud container clusters update mservice-dev-cluster \
> --region europe-west2 \
> --enable-master-authorized-networks \
> --master-authorized-networks "172.17.0.2/32"
Updating mservice-dev-cluster...done.
Updated [https://container.googleapis.com/v1/projects/protean-
XXXX/zones/europe-west2/clusters/mservice-dev-cluster].
To inspect the contents of your cluster, go to:
https://console.cloud.google.com/kubernetes/workload_/gcloud/europe-
west2/mservice-dev-cluster?project=protean-XXXX
$ kubectl config current-context
gke_protean-XXXX_europe-west2_mservice-dev-cluster
$ kubectl get services
Unable to connect to the server: dial tcp 172.16.0.2:443: i/o timeout
When I allow the public IP of the Cloud Shell, it says that the public IP is not allowed, with the error message given above. If I allow the internal IP of the Cloud Shell starting with 172, the connection times out as well. Any thoughts? I appreciate the help.
Google suggests creating a VM within the same network as the cluster, SSHing into it from Cloud Shell, and running kubectl commands from there:
https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies
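Roughly, that approach looks like this (a sketch only; the VM name, zone, network and subnet are placeholders to replace with your own values):
# create a small jump VM in the cluster's VPC
gcloud compute instances create k8s-jump-vm --zone europe-west2-a --network YOUR_VPC --subnet YOUR_SUBNET
# SSH into it from Cloud Shell
gcloud compute ssh k8s-jump-vm --zone europe-west2-a
# then, on the VM, fetch credentials and run kubectl as usual
gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2
kubectl get services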
Try to perform the following:
gcloud container clusters get-credentials [CLUSTER_NAME]
And confirm that you are authenticated with the right credentials:
gcloud auth application-default login
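You can also check which account gcloud is currently authenticated as, for example:
gcloud auth list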
I'm running the following command and getting an error:
$ kubectl get nodes
error: You must be logged in to the server (the server has asked for the client to provide credentials)
What's going on?
You have to run:
$ gcloud container clusters get-credentials [cluster-name]
Docs here.
$ gcloud config set compute/zone [zone]
$ gcloud container clusters get-credentials [clustername]
Re-setting the compute/zone seems to do the trick.
Using this command
gcloud container clusters list
I got
NAME ZONE MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
k0 europe-west1-d 1.6.4 35.187.164.84 n1-standard-1 1.6.4 3 RUNNING
So the zone seemed to be configured, but it was only after re-executing
gcloud config set compute/zone europe-west1-d
that things started working again.
So the real question is: why is the compute zone config suddenly no longer valid?
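If it happens again, one way to inspect what gcloud currently has configured is, for example:
gcloud config get-value compute/zone
gcloud config list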
I got a similar issue in a Minikube environment. I restarted Minikube and it worked as expected. So if the issue occurs in a Minikube environment, please restart it.
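For example (assuming the minikube CLI is on your PATH):
minikube stop
minikube start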