Is there any difference between a key that I create using the gcloud iam commands below vs. going through the console to create a JSON key? Each results in a .json file that, apart from the obvious differences in the private_key_id and private_key values, is identical. Here are the gcloud commands I am using:
gcloud iam service-accounts create my-sa-name
gcloud projects add-iam-policy-binding my-project-id --member="serviceAccount:my-sa-name@my-project-id.iam.gserviceaccount.com" --role="roles/owner"
gcloud iam service-accounts keys create key.json --iam-account=serviceAccount:my-sa-name@my-project-id.iam.gserviceaccount.com
However, when I try to use the one pulled down through the command line, I get:
google.auth.exceptions.RefreshError: ('invalid_grant: Invalid JWT Signature.', '{"error":"invalid_grant","error_description":"Invalid JWT Signature."}')
Oddly, if I go to the console, create a key for the same service account, and put the file that downloads to my computer in place of the one from the CLI, everything works fine.
How am I using the key, you ask? I'm using the Functions Framework to locally run and debug a Cloud Function that will access a Cloud Storage bucket, so my code is using the google.cloud.storage client library (Python 3.8). I run the local process with the environment variable GOOGLE_APPLICATION_CREDENTIALS set to the location/filename of the JSON key file. I know this all works fine because the console-downloaded key works fine.
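Concretely, the local run looks something like this (the key path and target name here are placeholders, not my real values):
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/key.json"
functions-framework --target=my_function --debug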
I have also tried gcloud auth activate-service-account --key-file=key.json and this also gives me the Invalid JWT Signature error.
Fortunately I'm not blocked, because I can use the manually created key, but I would REALLY like to automate every possible step here...
So... can anyone explain this? Have you seen it, figured it out, and know how to fix it?
The --iam-account flag takes just the service account's email address; the serviceAccount: prefix belongs in --member values, not here:
gcloud iam service-accounts keys create key.json \
--iam-account=my-sa-name@my-project-id.iam.gserviceaccount.com
One way to clarify this is:
ACCOUNT="[[YOUR-ACCOUNT]]"
PROJECT="[[YOUR-PROJECT]]"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
gcloud projects add-iam-policy-binding ${PROJECT} \
--member="serviceAccount:${EMAIL}" \
--role="roles/owner"
gcloud iam service-accounts keys create ${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
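As a quick sanity check that a freshly minted key works (this mirrors the gcloud auth step from the question):
gcloud auth activate-service-account --key-file=${ACCOUNT}.json
gcloud auth list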
Related
I'm stumped trying to parse IAM members using gcloud (transforms):
gcloud projects get-iam-policy ${PROJECT} \
--flatten="bindings[].members[]" \
--format="csv[no-heading](bindings.members,bindings.role)"
yields:
serviceAccount:foo,roles/...
user:bar,roles/...
...
Is it possible, using gcloud, to extract e.g. the email address from the member property?
I have a solution:
gcloud projects get-iam-policy ${PROJECT} \
--flatten="bindings[].members[]" \
--format="csv[no-heading](bindings.members.split(':').slice(1:),bindings.role)"
yields, e.g.:
something@service.iam.gserviceaccount.com,roles/editor
someone@gmail.com,roles/owner
NOTE the same approach can be used on role too: bindings.role.split('/').slice(1:)
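Combining both transforms, for example:
gcloud projects get-iam-policy ${PROJECT} \
--flatten="bindings[].members[]" \
--format="csv[no-heading](bindings.members.split(':').slice(1:),bindings.role.split('/').slice(1:))"
should yield lines like something@service.iam.gserviceaccount.com,editor.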
I have the Owner role over the whole project. I accidentally deleted a default service account, and now I can't start any VM instance.
I learned that I can undelete the service account with the gcloud alpha iam service-accounts undelete <ID> command, or a similar curl command curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" "https://iam.googleapis.com/v1/projects/<project_id>/serviceAccounts/<ID>:undelete", but both give me the following error:
PERMISSION_DENIED: Permission iam.serviceAccounts.undelete is required to perform this operation on service account
I can't find the iam.serviceAccounts.undelete permission anywhere. The Owner role (that I have) has all other iam.serviceAccounts permissions (create, delete, update...) but not this one.
How can I run that command?
Edit: My bad, I was using the wrong serviceAccount ID the whole time; I missed the last digit when I did my copy/paste... Still, the error message is strange.
OK, the following worked for me:
gcloud projects create ${PROJECT}
gcloud iam service-accounts create ${ROBOT} --project=${PROJECT}
ID=$(\
gcloud iam service-accounts describe \
${ROBOT}@${PROJECT}.iam.gserviceaccount.com \
--project=${PROJECT} \
--format="value(uniqueId)") && echo ${ID}
gcloud iam service-accounts delete ${ROBOT}@${PROJECT}.iam.gserviceaccount.com \
--project=${PROJECT}
gcloud alpha iam service-accounts undelete ${ID} --project=${PROJECT}
yields:
restoredAccount:
email: ${ROBOT}@${PROJECT}.iam.gserviceaccount.com
etag: ...
name: projects/${PROJECT}/serviceAccounts/${ROBOT}@${PROJECT}.iam.gserviceaccount.com
oauth2ClientId: '${ID}'
projectId: ${PROJECT}
uniqueId: '${ID}'
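To double-check the restore took, describing the account again should now succeed:
gcloud iam service-accounts describe ${ROBOT}@${PROJECT}.iam.gserviceaccount.com \
--project=${PROJECT}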
I tried this on the default Compute Engine account too, thinking it might behave differently, but it undeletes too:
NUM=$(gcloud projects describe ${PROJECT} \
--format="value(projectNumber)")
ID=$(\
gcloud iam service-accounts describe \
${NUM}-compute@developer.gserviceaccount.com \
--project=${PROJECT} \
--format="value(uniqueId)") && echo ${ID}
gcloud iam service-accounts delete ${NUM}-compute@developer.gserviceaccount.com \
--project=${PROJECT}
gcloud alpha iam service-accounts undelete ${ID} --project=${PROJECT}
Filed an FR with Google's Issue Tracker, as it appears there's no way (!?) to enumerate deleted service accounts to find the uniqueId after deleting an account.
Expanding on @halfer's answer, you can potentially find the deleted unique id by going through the activity logs. Filtering by time should narrow things down. Look for "Delete service account" and it should show as ${IAM account} deleted ${unique id}.
Link: https://console.cloud.google.com/home/activity?project=${project}
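The same idea should work from the CLI; here's a sketch, assuming the project's Admin Activity audit logs are still within retention:
gcloud logging read \
'protoPayload.methodName="google.iam.admin.v1.DeleteServiceAccount"' \
--project=${PROJECT} \
--freshness=30d \
--format="table(timestamp,protoPayload.resourceName)"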
During the startup of my server I am getting the following error:
StorageException: server@<project>.iam.gserviceaccount.com does not have storage.buckets.create access to project <project-id>.
From the docs I understand that adding Storage Object Creator to the roles of my service account should be sufficient to get the storage.buckets.create permission.
However, the exception above tells me I am missing something.
The Object Creator role is for creating Storage objects, not buckets. To administer buckets you need the Storage Admin role.
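If you want to see exactly what each role grants, you can compare their included permissions:
gcloud iam roles describe roles/storage.objectCreator \
--format="value(includedPermissions)"
gcloud iam roles describe roles/storage.admin \
--format="value(includedPermissions)"
Only the latter includes storage.buckets.create.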
To create that Service Account from the Cloud Shell:
gcloud iam service-accounts create buckets-sa \
--description "Storage admin Service account" \
--display-name "buckets-sa"
# Main role to create Buckets in Google Storage
gcloud projects add-iam-policy-binding [PROJECT] \
--member serviceAccount:buckets-sa@[PROJECT].iam.gserviceaccount.com \
--role roles/storage.admin
# Role for testing: Service Account Token Creator
gcloud projects add-iam-policy-binding [PROJECT] \
--member serviceAccount:buckets-sa@[PROJECT].iam.gserviceaccount.com \
--role roles/iam.serviceAccountTokenCreator
# Create Key
gcloud iam service-accounts keys create key-file.json \
--iam-account buckets-sa@[PROJECT].iam.gserviceaccount.com
# Test: Impersonate service account
gcloud auth activate-service-account buckets-sa@[PROJECT].iam.gserviceaccount.com --key-file=key-file.json
gsutil -i "buckets-sa@[PROJECT].iam.gserviceaccount.com" mb gs://new-bucket
# To restore your account in Cloud Shell, uncomment and execute the following line.
# gcloud auth login
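If you're ever unsure which account gcloud is currently using, list the credentialed accounts:
gcloud auth list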
I'm trying to follow this step-by-step guide to deploy Airflow on Kubernetes (https://github.com/EamonKeane/airflow-GKE-k8sExecutor-helm), but I run into problems at this part of the execution:
Researching the topic hasn't turned up anything that solves the problem so far. Does anyone have any suggestions about what to do?
SQL_ALCHEMY_CONN=postgresql+psycopg2://$AIRFLOW_DB_USER:$AIRFLOW_DB_USER_PASSWORD@$KUBERNETES_POSTGRES_CLOUDSQLPROXY_SERVICE:$KUBERNETES_POSTGRES_CLOUDSQLPROXY_PORT/$AIRFLOW_DB_NAME
echo $SQL_ALCHEMY_CONN > /secrets/airflow/sql_alchemy_conn
# Create the Fernet key, which is needed to encrypt and decrypt data in the database
FERNET_KEY=$(dd if=/dev/urandom bs=32 count=1 2>/dev/null | openssl base64)
echo $FERNET_KEY > /secrets/airflow/fernet-key
kubectl create secret generic airflow \
--from-file=fernet-key=/secrets/airflow/fernet-key \
--from-file=sql_alchemy_conn=/secrets/airflow/sql_alchemy_conn
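To verify the secret landed correctly, you can read a field back out (Kubernetes stores the values base64-encoded):
kubectl get secret airflow -o jsonpath='{.data.fernet-key}' | base64 --decode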
Unable to connect to the server: error executing access token command "/google/google-cloud-sdk/bin/gcloud config config-helper --format=json": err=exit status 1 output= stderr=ERROR: gcloud crashed (BadStatusLine): ''
If you would like to report this issue, please run the following command: gcloud feedback
To check gcloud for common problems, please run the following command: gcloud info --run-diagnostics
I solved this by opening a new Cloud Shell tab and connecting to the cluster again:
gcloud container clusters get-credentials testcluster1 --zone=your_zone
Example:
get the name and location of your cluster
gcloud container clusters list
then
gcloud container clusters get-credentials demo --zone=us-west1-a
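Once get-credentials succeeds, a quick sanity check that kubectl now points at the cluster:
kubectl config current-context
kubectl get nodes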
I am trying to set up Datalab from my Chromebook using the following tutorial: https://cloud.google.com/dataproc/docs/tutorials/dataproc-datalab. However, when trying to set up an SSH tunnel using these guidelines (https://cloud.google.com/dataproc/docs/concepts/accessing/cluster-web-interfaces#create_an_ssh_tunnel), I keep receiving the following error.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
- Project 57800607318 is not found and cannot be used for API calls. If it is recently created, enable Compute Engine API by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=57800607318 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
The error message would lead me to believe my Compute Engine API is not enabled. However, I have double-checked, and the Compute Engine API is enabled.
Here is what I am entering into Cloud Shell:
gcloud compute ssh ${test-cluster-m} \
--project=${datalab-test-229519} --zone=${us-west1-b} -- \
-4 -N -L ${8080}:${test-cluster-m}:${8080}
The ${} syntax is for accessing local environment variables. You set them in the step before with:
export PROJECT=project;export HOSTNAME=hostname;export ZONE=zone;export PORT=number
In this case that would be:
export PROJECT=datalab-test-229519;export HOSTNAME=test-cluster-m;export ZONE=us-west1-b;export PORT=8080
Either try this:
gcloud compute ssh test-cluster-m \
--project datalab-test-229519 --zone us-west1-b -- \
-D 8080 -N
Or access the environment variables with:
gcloud compute ssh ${HOSTNAME} \
--project=${PROJECT} --zone=${ZONE} -- \
-D ${PORT} -N
Also, check that the VM you are trying to access is running.
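For example, with the names from the question, this should print RUNNING:
gcloud compute instances describe test-cluster-m \
--project=datalab-test-229519 --zone=us-west1-b \
--format="value(status)"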