How to check which gcloud project is active

To see all your gcloud projects, you use the command gcloud projects list. To switch to a specific project, you use gcloud config set project PROJECT_ID.
But what command can you use when you want to check which project is currently active? That is, which project was the set command last called on?

gcloud config get-value project
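For example (the project ID shown is a hypothetical placeholder):
$ gcloud config get-value project
my-project-id

$ gcloud config list --format="value(core.project)"   # equivalent
my-project-id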
You can always type gcloud config --help
There's a very cool and well-hidden interactive tool: gcloud beta interactive that will help with gcloud command completion.
Personally, I recommend not using configurations to hold default values (e.g. the project) in order to (help) avoid "To which project did I just apply that command?" issues.
IMO, it's much better to be more explicit and I prefer:
gcloud ... --project=${PROJECT}
If, like me, you put the project value in a variable, you can still make mistakes but it is easier to avoid them.
You can also define sets of configurations and then use gcloud ... --configuration=${CONFIG}; this works too, as long as you don't set values in the default config. A minimal sketch of both styles follows (the project ID and configuration name are hypothetical):
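# Explicit project on every call:
PROJECT="my-project-id"
gcloud compute instances list --project="${PROJECT}"

# Or a named configuration, leaving the default config empty:
gcloud config configurations create dev
gcloud config set project "${PROJECT}" --configuration=dev
gcloud compute instances list --configuration=dev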

You can use gcloud projects list --filter='lifecycleState:ACTIVE' to get all active projects.
Or you can list them all, showing lifecycleState, and filter with grep or other bash tools:
$ gcloud projects list --format="table(projectNumber,projectId,createTime.date(tz=LOCAL),lifecycleState)" --limit 10
PROJECT_NUMBER  PROJECT_ID                 CREATE_TIME          LIFECYCLE_STATE
310270846648    again-testing-no-notebook  2022-12-11T07:03:03  ACTIVE
[...]
Hope this helps.

Related

Multiple active projects under single config? Or, multiple active configurations?

I have a set of clusters split between two projects, 1 and 2. Currently, I need to use gcloud init to switch between the two projects. Is there any possibility of having both projects active under a single configuration? Or, is it possible to have two configurations simultaneously active? I would hate to have to use init every time to switch between the two. Thanks!
gcloud init should only be used to (re)initialize gcloud on a host. The only time I ever use it is when I install gcloud on a new machine.
gcloud uses a global config that can be manipulated with the gcloud config command. IMO (I've been using GCP for 9 years) the less you use gcloud config, the better for your experience.
I think you're much better placed specifying config explicitly with gcloud commands.
Every gcloud command can include e.g.:
--project=${PROJECT} to specify the project to use
--account=${ACCOUNT} to specify the gcloud auth'd account to use
--region=${REGION} or --zone=${ZONE} or --location=${LOCATION}
etc.
Using gcloud commands with explicit flags to specify the project, account, location, etc. makes it trivial to flip between these, and often (though not always) in a more intentional way. A sketch of flipping between two projects this way (all names are hypothetical):
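PROJECT_1="project-one-id"
PROJECT_2="project-two-id"

gcloud container clusters list --project="${PROJECT_1}"
gcloud container clusters list --project="${PROJECT_2}"

# Fetch kubectl credentials for a cluster without touching the global config:
gcloud container clusters get-credentials my-cluster \
  --project="${PROJECT_2}" \
  --zone=us-central1-a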

GCloud authentication race conditions

I'm trying to avoid race conditions with gcloud / gsutil authentication across different CI/CD jobs on the same system, a GitLab Runner on a Mac Mini.
I have tried setting the auth manually with
RUN gcloud auth activate-service-account --key-file="gitlab-runner.json"
RUN gcloud config set project $GCP_PROJECT_ID
in the Dockerfile (which performs a download operation from a Google Cloud Storage bucket).
In the bash script that runs the docker command, I authenticate by activating a configuration:
gcloud config configurations activate $TARGET
where I've previously run the two commands above to save them to that configuration.
The configurations work fine if I start the CI/CD jobs one after another. But I want to trigger them for all clients at the same time, which causes race conditions with gcloud authentication, and one of the jobs ends up downloading from the wrong project's bucket.
How can I avoid the race condition? I'm already authenticating before each gsutil command, but it still happens. Do I need something like Cloud Build to separate the runtime environments?
You can use Cloud Build to get separate execution environments, but this might be overkill for your use case: a Cloud Build worker uses an entire VM, which might just be too heavy. Linux containers / Docker can provide the necessary isolation as well.
You should make sure that each container you run has a unique config file placed in the path expected by gcloud. The issue may come from improper volume mounting (all the containers sharing the same location from the host OS). Possible fixes: mount a directory containing a configuration file unique to each bucket when running the image, or run gcloud config configurations activate in a Dockerfile step (thus creating image variants for different buckets, if that's feasible).
Alternatively, and I think this solution might be easier, you can switch from the Cloud SDK distribution to the standalone gsutil distribution. That way you can provide the path to a boto configuration file through an environment variable, which can be specified when running a Docker image.
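A sketch of both approaches (image names and host paths are hypothetical): CLOUDSDK_CONFIG points gcloud at a per-container config directory, and BOTO_CONFIG does the same for standalone gsutil:
# Per-container gcloud config directory:
docker run \
  -e CLOUDSDK_CONFIG=/config/gcloud \
  -v "$PWD/client-a/gcloud:/config/gcloud" \
  my-download-image

# Standalone gsutil with a per-container boto file:
docker run \
  -e BOTO_CONFIG=/config/boto.cfg \
  -v "$PWD/client-a/boto.cfg:/config/boto.cfg" \
  my-download-image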

gcloud list instances in managed group sorted by creation time

I need to get the oldest instance from an instance group. I am using the following command:
gcloud compute instance-groups managed list-instances "instance-group-name" --region "us-central1" --format="value(NAME,ZONE,CREATION_TIMESTAMP)" --sort-by='~CREATION_TIMESTAMP'
But it seems --sort-by is not working, or I am using it a bit wrong.
Could you please suggest the right way?
It's probably creationTimestamp not CREATION_TIMESTAMP.
See: instances.list and the response body for the underlying field names.
It's slightly confusing, but gcloud requires you to use the (field|property) names of the underlying request|response types, not the output names.
Another way to more readily determine this is to add --format=yaml or --format=json to gcloud compute instances list (or any gcloud command) to get an idea of what's being returned so that you can begin filtering and formatting it.
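For example (a sketch, assuming creationTimestamp appears in the response; inspect first to confirm):
# Inspect the raw response to see the real field names:
gcloud compute instance-groups managed list-instances "instance-group-name" \
  --region="us-central1" --limit=1 --format=yaml

# Then sort descending by the camelCase field name:
gcloud compute instance-groups managed list-instances "instance-group-name" \
  --region="us-central1" --sort-by="~creationTimestamp"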

Pass internalIpOnly to projects.regions.clusters.create through gcloud

The GceClusterConfig object has the property internalIpOnly, but there is no clear documentation on how to set that flag through the gcloud command. Is there a way to pass that property?
That feature was first released in gcloud beta dataproc clusters create, where you can use the --no-address flag to turn it on. The feature recently became Generally Available and should be making it into the main gcloud dataproc clusters create any moment now. (If you run gcloud components update, you may get the flag in the non-beta track even though the public documentation hasn't been updated to reflect it yet.)
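For example, a minimal sketch (the cluster name and region are placeholders):
gcloud beta dataproc clusters create my-cluster \
  --region=us-central1 \
  --no-address
# Note: instances without external IPs typically require Private Google
# Access to be enabled on the subnetwork.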

gcloud compute instances add-metadata set environment variable

I am trying to set an environment variable from a script added to an instance's metadata. I added the metadata from a file using the command:
gcloud compute instances add-metadata server-1 --metadata-from-file file=~/meta.sh
and the script is
#!/bin/sh
export SERVER="ide"
but it seems to do nothing when I reboot the server.
The --metadata-from-file flag reads the values for the specified metadata keys from the specified files. In your example, you are assigning the contents of ~/meta.sh as the value of the metadata key 'file'.
In order to do something with 'file', you need to read its value from the instance (server-1) and act on it. There are some special metadata keys that Compute Engine reads at certain points in the instance life-cycle. For example, 'startup-script' is a key whose value is read and executed during start-up. I think you intended to use this key, so try this:
gcloud compute instances add-metadata "server-1" --metadata-from-file startup-script=~/meta.sh
For more details on metadata usage, run:
gcloud compute instances add-metadata --help
or go here:
https://cloud.google.com/compute/docs/metadata
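And if you do want to act on a custom key like 'file' yourself, you can read its value from inside the instance via the metadata server, e.g.:
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/file"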
A six-year-old question, but for future reference for myself and others:
Setting environment variables in the startup-script doesn't seem to work, but what you can do is write them to your .bashrc. In my example, I set them like this:
gcloud compute instances add-metadata etl-pipelines --metadata startup-script='#! /bin/bash
echo "
export USER='${USER}'
export PASSWORD='${PASSWORD}'
" >> /home/USERNAME/.bashrc'
It would of course be better to check whether that string has already been inserted into the VM, but that wasn't relevant for me as I kill the VMs quite quickly anyway.
Alternatively, this SO answer describes how to use curl to get the env vars directly from the metadata, but I haven't looked further into it yet.
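A rough sketch of that curl approach (the key names mirror the example above): store the values as plain metadata rather than in a startup script, then read them at runtime from the metadata server:
gcloud compute instances add-metadata etl-pipelines \
  --metadata USER="${USER}",PASSWORD="${PASSWORD}"

# Inside the VM:
export USER=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/USER")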