What is wrong with the command?
gcloud container node-pools create abcxyz --zone europe-west1-b --cluster wordpress --machine-type g1-small --num-nodes 1 --node-taints=critical_assist=nfs_redis_heapster:NoExecute
ERROR: (gcloud.container.node-pools.create) unrecognized arguments: --node-taints=critical_assist=nfs_redis_heapster:NoExecute (did you mean '--node-labels'?)
Usage: gcloud container node-pools create NAME [optional flags]
optional flags may be --cluster | --disk-size | --enable-cloud-endpoints |
--help | --image-type | --machine-type |
--node-labels | --num-nodes | --scopes | --tags | -h
For detailed information on this command and its flags, run:
gcloud container node-pools create --help
I have found the command syntax here: https://cloud.google.com/sdk/gcloud/reference/beta/container/node-pools/create
Strangely, I don't see anything about taints in the help:
gcloud beta container node-pools create --help | grep taint
gcloud version
Google Cloud SDK 158.0.0
app-engine-python 1.9.54
beta 2017.03.24
bq 2.0.24
core 2017.06.02
gcloud
gsutil 4.26
kubectl
Updates are available for some Cloud SDK components. To install them,
please run:
$ gcloud components update
Once I ran gcloud components update, this was fixed.
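For reference, a minimal sketch of the whole fix, re-using the original command (depending on your SDK version the flag may only be available on the beta track, i.e. gcloud beta container node-pools create):
gcloud components update
gcloud container node-pools create abcxyz \
    --zone europe-west1-b --cluster wordpress \
    --machine-type g1-small --num-nodes 1 \
    --node-taints=critical_assist=nfs_redis_heapster:NoExecute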
I'm trying to connect to a cluster and I'm getting the following error:
gcloud container clusters get-credentials cluster1 --region europe-west2 --project my-project
Fetching cluster endpoint and auth data.
CRITICAL: ACTION REQUIRED: gke-gcloud-auth-plugin, which is needed for continued use of kubectl, was not found or is not executable.
Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
kubeconfig entry generated for dbcell-cluster.
I have installed Google Cloud SDK 400, kubectl 1.22.12, and gke-gcloud-auth-plugin 0.3.0, and I have also set up ~/.bashrc with
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gke-gcloud-auth-plugin --version
Kubernetes v1.24.0-alpha+f42d1572e39979f6f7de03bd163f8ec04bc7950d
but whenever I try to connect to the cluster I keep getting the same error. Any ideas?
Thanks
The cluster exists in that region, and I also verified the environment variable
with
echo $USE_GKE_GCLOUD_AUTH_PLUGIN
True
I installed gke-gcloud-auth-plugin using gcloud components install... I do not know what else I can check.
gcloud components list
I solved the same problem by removing my current kubeconfig context for GCP.
Get your context name by running:
kubectl config get-contexts
Delete the context:
kubectl config delete-context CONTEXT_NAME
Reconfigure the credentials:
gcloud container clusters get-credentials CLUSTER_NAME --region REGION --project PROJECT
The warning message should be gone by now.
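For example, using the names from the question above (the context name is illustrative; GKE contexts usually follow the gke_PROJECT_LOCATION_CLUSTER pattern shown by kubectl config get-contexts):
kubectl config get-contexts
kubectl config delete-context gke_my-project_europe-west2_cluster1
gcloud container clusters get-credentials cluster1 --region europe-west2 --project my-project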
I want to get the flags that were used when creating a Dataproc cluster, from within a Spark job.
For example, I created my cluster using this command line:
gcloud dataproc clusters create cluster-name \
--region=region \
--bucket=bucket-name \
--temp-bucket=bucket-name \
other args ...
In my Scala Spark job I want to get the bucket name and the other arguments. How can I do that? I know that if I want to get the arguments of my job, I can do this:
val sc = sparkSession.sparkContext
val conf_context=sc.getConf.getAll
conf_context.foreach(println)
Any help, please?
Thanks
Dataproc also publishes some attributes, including the bucket name, to GCE instance Metadata. You can also specify your own Metadata. See https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/metadata.
These will be available to you through the Metadata server. For example, if you want to read the bucket name, you can run
curl -s -H Metadata-Flavor:Google http://metadata.google.internal/computeMetadata/v1/instance/attributes/dataproc-bucket
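For instance, here is a minimal Scala sketch (not from the original answer, just an illustration) that reads the same dataproc-bucket attribute from inside the Spark driver running on the cluster:
import java.net.URL
import scala.io.Source

// Query the GCE metadata server; the Metadata-Flavor header is required,
// exactly as in the curl example above.
val conn = new URL("http://metadata.google.internal/computeMetadata/v1/instance/attributes/dataproc-bucket").openConnection()
conn.setRequestProperty("Metadata-Flavor", "Google")
val bucketName = Source.fromInputStream(conn.getInputStream).mkString.trim
println(s"dataproc-bucket = $bucketName")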
You can use the gcloud dataproc clusters describe shell command to get details about the cluster:
gcloud dataproc clusters describe $clusterName --region $clusterRegion
To get the bucket name from this command, you can use grep:
BUCKET_NAME=$(gcloud dataproc clusters describe $clusterName \
--region $clusterRegion \
| grep 'configBucket:' \
| sed 's/.* //')
You should be able to execute this from Scala; see this post for how to do so.
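As a rough sketch (the cluster name and region below are placeholders to replace with your own values), you could shell out to that same pipeline from Scala with scala.sys.process:
import scala.sys.process._

val clusterName = "cluster-name"   // placeholder
val clusterRegion = "region"       // placeholder
// Run the gcloud | grep | sed pipeline through bash and capture its stdout.
val bucketName = Seq(
  "bash", "-c",
  s"gcloud dataproc clusters describe $clusterName --region $clusterRegion | grep 'configBucket:' | sed 's/.* //'"
).!!.trim
println(s"configBucket = $bucketName")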
I have a Rancher installation using the Docker image, version v2.2.1.
Lately it started to log “Failed to update lock: etcdserver: mvcc: database space exceeded”.
Checking etcd for the cluster we have, everything looks OK.
etcd status
So I noticed that the etcd db inside the Rancher Docker container looks like this:
inside directory /var/lib/rancher/management-state/etcd/member/snap
2.1G Jul 17 22:29 db
but I cannot compact it or interact with it.
Why does the Rancher Docker image have an etcd db of its own? Isn't the cluster one enough?
And how can we keep it small in order to solve the problem?
Thanks in advance
Same here with a single-node installation of rancher/rancher:stable (997af25b7b54).
You can run etcdctl in a service container on the same Docker host as your Rancher container:
docker run --net=container:<NAME_OF_RANCHER_CONTAINER> -id --name etcd-utility rancher/rke-tools:v0.1.40
And then, because you use the network of the Rancher container, localhost in the output here refers to the Rancher container.
docker exec etcd-utility etcdctl member list
8e9e05c52164694d: name=default peerURLs=http://localhost:2380
clientURLs=http://localhost:2379 isLeader=true
Now, when attaching to the etcd-utility container, you can fix the issue with your etcd like this (no output pasted):
host# docker exec -it etcd-utility bash
bash-4.4# export ETCDCTL_API=3
bash-4.4# etcdctl endpoint status --endpoints=$(etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') --write-out table
bash-4.4# etcdctl compact `etcdctl endpoint status --write-out json | egrep -o '"revision":[0-9]*' | egrep -o '[0-9]*'`
bash-4.4# etcdctl defrag
bash-4.4# etcdctl alarm list
bash-4.4# etcdctl alarm disarm
The latter follows the etcd troubleshooting guide for cluster etcd, which is explained in detail in the Rancher docs on etcd-space-errors. For single node, see the links in this comment.
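To confirm it helped, you can for example re-check the alarms and the on-disk size of the db file mentioned in the question (illustrative commands, container names as above):
docker exec -e ETCDCTL_API=3 etcd-utility etcdctl alarm list
docker exec <NAME_OF_RANCHER_CONTAINER> du -h /var/lib/rancher/management-state/etcd/member/snap/db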
I'm trying to follow this step-by-step guide to deploy Airflow on Kubernetes (https://github.com/EamonKeane/airflow-GKE-k8sExecutor-helm), but at this part of the execution I run into problems. Researching the topic, I have not found anything that solves my problem so far. Does anyone have any suggestions on what to do?
SQL_ALCHEMY_CONN=postgresql+psycopg2://$AIRFLOW_DB_USER:$AIRFLOW_DB_USER_PASSWORD@$KUBERNETES_POSTGRES_CLOUDSQLPROXY_SERVICE:$KUBERNETES_POSTGRES_CLOUDSQLPROXY_PORT/$AIRFLOW_DB_NAME
echo $SQL_ALCHEMY_CONN > /secrets/airflow/sql_alchemy_conn
# Create the fernet key which is needed to decrypt the database
FERNET_KEY=$(dd if=/dev/urandom bs=32 count=1 2>/dev/null | openssl base64)
echo $FERNET_KEY > /secrets/airflow/fernet-key
kubectl create secret generic airflow \
--from-file=fernet-key=/secrets/airflow/fernet-key \
--from-file=sql_alchemy_conn=/secrets/airflow/sql_alchemy_conn
Unable to connect to the server: error executing access token command
"/google/google-cloud-sdk/bin/gcloud config config-helper --format=json":
err=exit status 1 output= stderr=ERROR: gcloud crashed (BadStatusLine): ''
If you would like to report this issue, please run the following command: gcloud feedback
To check gcloud for common problems, please run the following command: gcloud info --run-diagnostics
I solved this by opening a new Cloud Shell tab and connecting to the cluster:
gcloud container clusters get-credentials testcluster1 --zone=your_zone
Example:
Get the name and location of your cluster:
gcloud container clusters list
then
gcloud container clusters get-credentials demo --zone=us-west1-a
The following link https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters talks about setting up a private GKE cluster in a separate custom VPC. The Terraform code that creates the cluster and VPCs is available from https://github.com/rajtmana/gcp-terraform/blob/master/k8s-cluster/main.tf. Cluster creation completed, and I wanted to use some kubectl commands from Google Cloud Shell. I used the following commands:
$ gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2
$ gcloud container clusters update mservice-dev-cluster \
> --region europe-west2 \
> --enable-master-authorized-networks \
> --master-authorized-networks "35.241.216.229/32"
Updating mservice-dev-cluster...done.
ERROR: (gcloud.container.clusters.update) Operation [<Operation
clusterConditions: []
detail: u'Patch failed'
$ gcloud container clusters update mservice-dev-cluster \
> --region europe-west2 \
> --enable-master-authorized-networks \
> --master-authorized-networks "172.17.0.2/32"
Updating mservice-dev-cluster...done.
Updated [https://container.googleapis.com/v1/projects/protean-XXXX/zones/europe-west2/clusters/mservice-dev-cluster].
To inspect the contents of your cluster, go to:
https://console.cloud.google.com/kubernetes/workload_/gcloud/europe-west2/mservice-dev-cluster?project=protean-XXXX
$ kubectl config current-context
gke_protean-XXXX_europe-west2_mservice-dev-cluster
$ kubectl get services
Unable to connect to the server: dial tcp 172.16.0.2:443: i/o timeout
When I give the public IP of the Cloud Shell, it says that the public IP is not allowed, with the error message given above. If I give the internal IP of the Cloud Shell starting with 172, the connection times out as well. Any thoughts? I appreciate the help.
Google suggests creating a VM within the same network as the cluster, SSHing into it from Cloud Shell, and running kubectl commands from there:
https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies
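A rough sketch of that approach (the VM name, zone, and network/subnet below are placeholders; you may also need to install kubectl on the VM and add its internal IP range to --master-authorized-networks):
gcloud compute instances create k8s-proxy-vm \
    --zone europe-west2-a \
    --network YOUR_CLUSTER_VPC \
    --subnet YOUR_CLUSTER_SUBNET \
    --scopes=cloud-platform
gcloud compute ssh k8s-proxy-vm --zone europe-west2-a
# then, on the VM:
gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2
kubectl get services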
Try performing the following:
gcloud container clusters get-credentials [CLUSTER_NAME]
And confirm that kubectl is using the right credentials:
gcloud auth application-default login
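Afterwards, a quick sanity check of which cluster kubectl is actually pointed at, for example:
kubectl config current-context
kubectl get nodes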