Can I authenticate the gcloud CLI using both service account and user credentials?

Google API clients typically recognise the GOOGLE_APPLICATION_CREDENTIALS environment variable. If set, it's expected to point to a JSON file with credentials for either a service account or a user.
Service account credentials can be downloaded from the GCP web console and look like this:
{
  "type": "service_account",
  "project_id": "...",
  "private_key_id": "...",
  "private_key": "...",
  "client_email": "...",
  "client_id": "...",
  "auth_uri": "...",
  "token_uri": "...",
  "auth_provider_x509_cert_url": "...",
  "client_x509_cert_url": "..."
}
User credentials are often available in ~/.config/gcloud/application_default_credentials.json and look something like:
{
  "client_id": "...",
  "client_secret": "...",
  "refresh_token": "...",
  "type": "authorized_user"
}
Here's an example of the official Google Ruby gem detecting the type of credentials provided via the environment variable.
I'd like to authenticate an unconfigured gcloud install with either type of credential. In our case we happen to be passing the GOOGLE_APPLICATION_CREDENTIALS variable and path into a Docker container, but I think this is a valid question for clean installs outside Docker too.
If the credentials file is a service account type, I can do this:
gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
However I can't see any way to handle the case where the credentials belong to a real user.
Questions:
Why doesn't the official gcloud tool follow the convention used by other Google API clients and honour GOOGLE_APPLICATION_CREDENTIALS when it is available?
Is there a hidden method that will activate the user credentials case?

As you point out, the gcloud command-line tool (CLI) does not use application default credentials; it has a separate system for managing its own credentials.
GOOGLE_APPLICATION_CREDENTIALS is designed for client libraries, to simplify wiring in credentials, and the gcloud CLI is not a library. Even in client code, best practice is not to depend on this environment variable but to provide credentials explicitly.
To answer your second question: user credentials can be obtained via the
gcloud auth login
command. (NOTE: this is different from gcloud auth application-default login.) Besides saving the actual credentials, this also sets the account property in the current configuration:
gcloud config list
gcloud can have many configurations, each with different credentials. See
gcloud config configurations list
You can create multiple configurations, one with a user account and another with a service account, and use them simultaneously by passing the --configuration parameter, for example
gcloud compute instances list --configuration MY_USER_ACCOUNT_CONFIG
Similarly, you can switch which credentials are used via the --account flag, in which case gcloud keeps the same configuration and swaps out only the account, as in the sketch below.
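For example, a hypothetical two-configuration setup (the configuration names and the account are placeholders, not part of the original answer):

# one configuration authenticated as a user, another as a service account
gcloud config configurations create user-config
gcloud auth login auser@example.com
gcloud config configurations create sa-config
gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
# then select credentials per invocation, by configuration or by account
gcloud compute instances list --configuration=user-config
gcloud compute instances list --account=auser@example.com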

I've found a way to authenticate a fresh gcloud when GOOGLE_APPLICATION_CREDENTIALS points to a file with user credentials rather than service account credentials.
cat ${GOOGLE_APPLICATION_CREDENTIALS}
{
  "client_id": "aaa",
  "client_secret": "bbb",
  "refresh_token": "ccc",
  "type": "authorized_user"
}
gcloud config set auth/client_id aaa
gcloud config set auth/client_secret bbb
gcloud auth activate-refresh-token user ccc
This uses the undocumented auth activate-refresh-token subcommand, which isn't ideal, but it does work.
Paired with gcloud auth activate-service-account --key-file=credentials.json, this makes it possible to initialize gcloud regardless of the credential type available at $GOOGLE_APPLICATION_CREDENTIALS; see the sketch below.
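Putting the two together, a minimal bootstrap sketch that dispatches on the credential type (this script is my own illustration, assuming jq is installed):

#!/usr/bin/env bash
# Activate gcloud from whatever credential file GOOGLE_APPLICATION_CREDENTIALS points to.
TYPE=$(jq -r .type "${GOOGLE_APPLICATION_CREDENTIALS}")
if [ "${TYPE}" = "service_account" ]; then
  gcloud auth activate-service-account --key-file="${GOOGLE_APPLICATION_CREDENTIALS}"
else
  gcloud config set auth/client_id "$(jq -r .client_id "${GOOGLE_APPLICATION_CREDENTIALS}")"
  gcloud config set auth/client_secret "$(jq -r .client_secret "${GOOGLE_APPLICATION_CREDENTIALS}")"
  gcloud auth activate-refresh-token user "$(jq -r .refresh_token "${GOOGLE_APPLICATION_CREDENTIALS}")"
fi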

Related

Retrieval of secrets in Azure App Service from Hashicorp Vault using Managed Identity | Missing Role - Error

HashiCorp Vault is the standard product in our organization and the widely used and recommended approach for storing key-value pairs and other secrets. Any application deployed on Azure must also store/retrieve tokens from HashiCorp Vault and not from Azure Key Vault. I provide this only as background to the requirement.
Now, coming to the actual problem: I deployed the .NET application on Azure App Service, enabled the system-managed identity, and was able to successfully retrieve the JWT token.
As I understand the flow from the documentation: first retrieve the token of the application deployed on Azure with system-managed identity enabled; then pass this token to Vault for validation, which Vault performs via OIDC against AAD. On successful validation, Vault returns a Vault token that can be used to fetch secrets from Vault.
To perform these steps, configuration is required on the Vault side, for which I performed all the steps below on the Vault server installed on my local Windows machine:
Command-line operations:
Start the Vault server.
Open another command prompt and set the environment variables:
set VAULT_ADDR=http://127.0.0.1:8200
set VAULT_TOKEN=s.iDdVbLKPCzmqF2z0RiXPMxLk
Enable and configure the JWT auth method:
vault auth enable jwt
vault write auth/jwt/config oidc_discovery_url=https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/ bound_issuer=https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/
vault read auth/jwt/config
Policy associated with the sqlconnection:
Create a role (webapp-role) using the command:
curl --header "X-Vault-Token: %VAULT_TOKEN%" --insecure --request POST --data @C:\Users\48013\source\repos\HashVaultAzure\Vault-files\payload.json %VAULT_ADDR%/v1/auth/jwt/role/webapp-role

payload.json:
{
  "bound_audiences": "https://management.azure.com/",
  "bound_claims": {
    "idp": "https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/",
    "oid": "8d2b99fb-f4f4-4afb-9ee3-276891f40a65",
    "tid": "4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/"
  },
  "bound_subject": "8d2b99fb-f4f4-4afb-9ee3-276891f40a65",
  "claim_mappings": {
    "appid": "application_id",
    "xms_mirid": "resource_id"
  },
  "policies": ["sqlconnection"],
  "role_type": "jwt",
  "token_bound_cidrs": ["10.0.0.0/16"],
  "token_max_ttl": "24h",
  "user_claim": "sub"
}
vault read auth/jwt/role/webapp-role
Run the command below with the JWT token retrieved from the application deployed on Azure (with the managed identity enabled), passing it as "your_jwt". This command should return the Vault token, as described at https://www.vaultproject.io/docs/auth/jwt:
curl --request POST --data '{"jwt": "your_jwt", "role": "webapp-role"}' http://127.0.0.1:8200/v1/auth/jwt/login
At this point I receive an error, "missing role", and I am stuck here, unable to find any solution.
The expected response should be a Vault token/client_token.
JWT token decoded information:
{
  "aud": "https://management.azure.com",
  "iss": "https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/",
  "iat": 1631172032,
  "nbf": 1631172032,
  "exp": 1631258732,
  "aio": "E2ZgYNBN4JVfle92Tsl1b8m8pc9jAA==",
  "appid": "cf5c734c-a4fd-4d85-8049-53de46db4ec0",
  "appidacr": "2",
  "idp": "https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/",
  "oid": "8d2b99fb-f4f4-4afb-9ee3-276891f40a65",
  "rh": "0.AVMAb_GVSro1Ukqcs38wDNwMYExzXM_9pIVNgElT3kbbTsBTAAA.",
  "sub": "8d2b99fb-f4f4-4afb-9ee3-276891f40a65",
  "tid": "4a95f16f-35ba-4a52-9cb3-7f300cdc0c60",
  "uti": "LDjkUZdlKUS4paEleUUFAA",
  "ver": "1.0",
  "xms_mirid": "/subscriptions/0edeaa4a-d371-4fa8-acbd-3675861b0ac8/resourcegroups/AzureAADResource/providers/Microsoft.Web/sites/hashvault-test",
  "xms_tcdt": "1600006540"
}
The issue was missing configuration on both the Azure and the Vault side.
These were the additional steps needed to make it work:
Create an Azure SPN (equivalent to creating an app registration with a client secret):
az ad sp create-for-rbac --name "Hashicorp Vault Prod AzureSPN" --skip-assignment
Assign it the Reader role on the subscription.
Create the Vault config:
vault auth enable azure
vault write auth/jwt/config tenant_id=lg240e12-76g1-748b-cd9c-je6f29562476 resource=https://management.azure.com/ client_id=34906a49-9a8f-462b-9d68-33ae40hgf8ug client_secret=123456ABCDEF
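With that in place, the login flow can be re-tested end to end. A minimal sketch, not part of the original answer: on App Service the managed-identity token endpoint is exposed through the IDENTITY_ENDPOINT and IDENTITY_HEADER environment variables; extract access_token from the first response and pass it as the jwt:

curl "%IDENTITY_ENDPOINT%?resource=https://management.azure.com/&api-version=2019-08-01" --header "X-IDENTITY-HEADER: %IDENTITY_HEADER%"
curl --request POST --data "{\"jwt\": \"<access_token>\", \"role\": \"webapp-role\"}" %VAULT_ADDR%/v1/auth/jwt/login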

Pass specific service account key to Google Storage API

I am trying to authenticate with Google Storage by passing a service account key manually as a JSON object. I want it to use a specific service account rather than the default service account created by the project. However, even when I pass the key of my specific service account, the error I get still references the default service account. How do I tell the Google Storage API to use the key being passed to it?
var accessKey = {
  "type": "service_account",
  "project_id": "...",
  "private_key_id": "...",
  "private_key": "...",
  "client_email": "myserviceaccount@gcp-serviceaccounts-keys.iam.gserviceaccount.com",
  "client_id": "...",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "..."
};
const projectId = " ...";
const { Storage } = require('@google-cloud/storage');
const storage = new Storage({projectId, accessKey});
const bucket = storage.bucket('test-bucket-aa');
The error I get:
Error: gcp-myprojectid@appspot.gserviceaccount.com does not have storage.objects.get access to the Google Cloud Storage object.
It's strongly preferable to keep the key out of your source. The Storage constructor doesn't recognise an accessKey option, so the client silently falls back to application default credentials, which is why the error names the default service account. What the client wants is a reference to the key's filename.
See: Storage examples
Put the string from your source into a file, e.g. key.json, and then:
const keyFilename = "/path/to/key.json";
const storage = new Storage({
  projectId: projectId,
  keyFilename: keyFilename
});
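If the key isn't on disk yet, it can be downloaded with gcloud (a sketch; the path and the service account email are placeholders):

gcloud iam service-accounts keys create /path/to/key.json \
  --iam-account=myserviceaccount@gcp-serviceaccounts-keys.iam.gserviceaccount.com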
Using ADCs with Cloud Functions and Cloud Storage
PROJECT=... # Your Project ID
ACCOUNT=... # Your Service Account
FUNCTION=... # Your Cloud Function
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
--description="Used by Cloud Functions to Access Cloud Storage" \
--display-name="Cloud Functions Storage Accessor" \
--project=${PROJECT}
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
ROLE="roles/storage.objectAdmin"
# Grant the Service Account `objectAdmin` on the project
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=${ROLE}
# Deploy the Cloud Functions to run as the Service Account
# Cloud Functions uses ADCs to auth as the Service Account
gcloud functions deploy ${FUNCTION} \
... \
--service-account=${EMAIL} \
--project=${PROJECT}
NOTE With the above approach your code is simpler, just const storage = new Storage();, and the platform authenticates using the Service Account's credentials.
NOTE It's preferable to set the IAM policy on a specific bucket rather than on the project (which covers all its buckets); see: https://cloud.google.com/storage/docs/access-control/iam#project-level_roles_vs_bucket-level_roles
Perhaps, instead of gcloud projects add-iam-policy-binding, you could:
BUCKET=gs://[[YOUR-BUCKET]]
gsutil iam ch serviceAccount:${EMAIL}:${ROLE} ${BUCKET}
https://cloud.google.com/storage/docs/gsutil/commands/iam#ch

Restrict gcloud service account to specific bucket

I have 2 buckets, prod and staging, and I have a service account. I want to restrict this account to only have access to the staging bucket. I saw at https://cloud.google.com/iam/docs/conditions-overview that this should be possible. I created a policy.json like this:
{
  "bindings": [
    {
      "role": "roles/storage.objectCreator",
      "members": [
        "serviceAccount:staging-service-account@lalala-co.iam.gserviceaccount.com"
      ],
      "condition": {
        "title": "staging bucket only",
        "expression": "resource.name.startsWith(\"projects/_/buckets/uploads-staging\")"
      }
    }
  ]
}
But when I run gcloud projects set-iam-policy lalala policy.json I get:
The specified policy does not contain an "etag" field identifying a
specific version to replace. Changing a policy without an "etag" can
overwrite concurrent policy changes.
Replace existing policy (Y/n)?
ERROR: (gcloud.projects.set-iam-policy) INVALID_ARGUMENT: Can't set conditional policy on policy type: resourcemanager_projects and id: /lalala
I feel like I've misunderstood how roles, policies and service accounts are related. But in any case: is it possible to restrict a service account in that way?
Following the comments, I was able to solve my problem. Apparently bucket permissions are somewhat special, but I was able to set a policy on the bucket that allows access for my service account, using gsutil:
gsutil iam ch serviceAccount:staging-service-account@lalala.iam.gserviceaccount.com:objectCreator gs://lalala-uploads-staging
After running this, access works as expected. I found it a little confusing that this is not reflected in the service account's own policy:
% gcloud iam service-accounts get-iam-policy staging-service-account#lalala.iam.gserviceaccount.com
etag: ACAB
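That's expected: bucket-level bindings live in the bucket's IAM policy, not on the service account resource. To see them, query the bucket instead (using the bucket name from above):

gsutil iam get gs://lalala-uploads-staging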
Thanks everyone

Recovering access after initially provisioning wrong scopes for an instance

I recently created a VM, but mistakenly gave the default service account Storage: Read Only permissions instead of the intended Read Write under "Identity & API access", so GCS write operations from the VM are now failing.
I realized my mistake, so following the advice in this answer, I stopped the VM, changed the scope to Read Write and started the VM. However, when I SSH in, I'm still getting 403 errors when trying to create buckets.
$ gsutil mb gs://some-random-bucket
Creating gs://some-random-bucket/...
AccessDeniedException: 403 Insufficient OAuth2 scope to perform this operation.
Acceptable scopes: https://www.googleapis.com/auth/cloud-platform
How can I fix this? I'm using the default service account, and don't have the IAM permissions to be able to create new ones.
$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* (projectnum)-compute#developer.gserviceaccount.com
I suggest you try adding the "cloud-platform" scope to the instance by running the gcloud command below:
gcloud alpha compute instances set-scopes INSTANCE_NAME [--zone=ZONE] [--scopes=[SCOPE,...]] [--service-account=SERVICE_ACCOUNT]
As the scope, use "https://www.googleapis.com/auth/cloud-platform", since it gives full access to all Google Cloud Platform resources.
Here is the gcloud documentation.
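A hypothetical concrete invocation (the instance name and zone are placeholders; the instance typically needs to be stopped before its scopes can be changed):

gcloud compute instances stop my-instance --zone=us-central1-a
gcloud alpha compute instances set-scopes my-instance \
  --zone=us-central1-a \
  --scopes=https://www.googleapis.com/auth/cloud-platform
gcloud compute instances start my-instance --zone=us-central1-a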
Try creating the Google Cloud Storage bucket with your user account.
Run gcloud auth login and open the link you are given; once there, copy the code and paste it into the command line.
Then run gsutil mb gs://bucket-name.
The security model has two things at play, API scopes and IAM permissions, and access is determined by the logical AND of them: you need both an acceptable scope and sufficient IAM privileges to perform a given action.
API scopes are bound to the credentials. They are represented by a URL like https://www.googleapis.com/auth/cloud-platform.
IAM permissions are bound to the identity. These are set up in the Cloud Console's IAM & admin > IAM section.
This means you can have 2 VMs with the default service account but both have different levels of access.
For simplicity you generally want to just set the IAM permissions and use the cloud-platform API auth scope.
To check whether you have this set up, go to the VM in the Cloud Console and you'll see something like:
Cloud API access scopes
Allow full access to all Cloud APIs
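Equivalently, from the CLI (a sketch; the instance name and zone are placeholders):

gcloud compute instances describe INSTANCE_NAME --zone=ZONE \
  --format="value(serviceAccounts[].scopes)"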
When you SSH into the VM, gcloud is by default logged in as the VM's service account. I'd discourage logging in as yourself there, since doing so more or less overrides gcloud's configuration of using the default service account.
Once you have this set up, you should be able to use gsutil properly.

Gcloud auth for all users on a server

I am trying to set up gcloud auth login for an account on a server so that it covers all users.
That is, I log in using an administrator account and issue the command:
gcloud auth login auser@anemail.com
I go through the required steps, and when I issue the gcloud auth list command I get the right result.
But other users cannot see it.
For example, we use SAP Data Services, which runs under a proxy account on the server, e.g. proxyaccount@mail.com, but that user cannot see the user I authorized with the administrator account; it gets the error "you do not currently have an active account selected".
The "other" accounts do not have administrator access, nor do we want them to, and besides, I don't want to have to go through this process for each and every account that connects to the server.
Ian
Each user gets their own gcloud configuration folder. You can see which configuration folder gcloud uses by running gcloud info.
Note that if your server is a VM on GCP, you do not need to configure credentials at all; they are obtained from the VM's metadata server.
Sharing user credentials is not a good practice. If you must do this, your users can set the CLOUDSDK_CONFIG environment variable to point to one shared configuration folder. You should also at least use a service account for this purpose, activated via gcloud auth activate-service-account, instead of credentials obtained via gcloud auth login.
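A minimal sketch of that shared-configuration approach (the folder and key paths are hypothetical):

# every user points gcloud at the same configuration folder, e.g. via /etc/profile.d
export CLOUDSDK_CONFIG=/opt/gcloud-shared-config
# one-time setup by the administrator, using a service account rather than a user
gcloud auth activate-service-account --key-file=/secure/path/key.json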