Restrict gcloud service account to specific bucket - gcloud

I have 2 buckets, prod and staging, and I have a service account. I want to restrict this account to only have access to the staging bucket. I saw at https://cloud.google.com/iam/docs/conditions-overview that this should be possible. I created a policy.json like this:
{
  "bindings": [
    {
      "role": "roles/storage.objectCreator",
      "members": "serviceAccount:staging-service-account@lalala-co.iam.gserviceaccount.com",
      "condition": {
        "title": "staging bucket only",
        "expression": "resource.name.startsWith(\"projects/_/buckets/uploads-staging\")"
      }
    }
  ]
}
But when I run gcloud projects set-iam-policy lalala policy.json, I get:
The specified policy does not contain an "etag" field identifying a
specific version to replace. Changing a policy without an "etag" can
overwrite concurrent policy changes.
Replace existing policy (Y/n)?
ERROR: (gcloud.projects.set-iam-policy) INVALID_ARGUMENT: Can't set conditional policy on policy type: resourcemanager_projects and id: /lalala
I feel like I misunderstood how roles, policies and service-accounts are related. But in any case: is it possible to restrict a service account in that way?

Following the comments, I was able to solve my problem. Apparently bucket permissions are somewhat special, but I was able to set a policy on the bucket that allows access for my service account, using gsutil:
gsutil iam ch serviceAccount:staging-service-account@lalala.iam.gserviceaccount.com:objectCreator gs://lalala-uploads-staging
After running this, access works as expected. I found it a little confusing that this is not reflected in the service-account policy:
% gcloud iam service-accounts get-iam-policy staging-service-account@lalala.iam.gserviceaccount.com
etag: ACAB
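That's because the binding ends up on the bucket's own IAM policy rather than on the service account. If you want to verify it, gsutil iam get prints the bucket's policy; a rough sketch of what you should see after the ch command above, alongside the project-level default bindings (the exact etag and ordering will differ):
gsutil iam get gs://lalala-uploads-staging
{
  "bindings": [
    {
      "members": [
        "serviceAccount:staging-service-account@lalala.iam.gserviceaccount.com"
      ],
      "role": "roles/storage.objectCreator"
    }
  ],
  "etag": "CAE="
}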
Thanks everyone

Related

Service account not assigned to CloudSQL instance

I need to have a Cloud SQL instance created with a particular service account. I'm trying the API call instances.insert:
POST https://www.googleapis.com/sql/v1beta4/projects/{project}/instances
{
  "serviceAccountEmailAddress": "<my account>@managed-gcp.iam.gserviceaccount.com",
  "name": "pvtest20200611-3",
  "settings": {
    "tier": "db-n1-standard-1"
  },
  "databaseVersion": "MYSQL_5_7"
}
The instance is created, but it has a generated service account (e.g. p754990076948-kf1bsf@gcp-sa-cloud-sql.iam.gserviceaccount.com) instead of mine.
For my SA, I have the Storage Admin / Storage Object Admin roles assigned (this is what I need newly created instances to always have). I also added the Cloud SQL Admin role. I thought it was a role problem, so I even tried the Project Editor role, but that didn't work.
I have tried MySQL and Postgres db types.
Would you know why my account is not picked up, and why Cloud SQL always assigns its own?
What are the requirements/setup for a custom SA to work with a Cloud SQL instance?
When you create an instance in Cloud SQL, it uses the default service account during creation, so you won't be able to set a custom one at creation time.
It's possible, however, to give access and permissions to a Service Account after creation. As explained in the official documentation, Granting roles to a service account for specific resources, you can grant specific permissions to your Service Account. You can try using the gcloud command as follows:
gcloud projects add-iam-policy-binding my-project-123 \
  --member serviceAccount:my-sa-123@my-project-123.iam.gserviceaccount.com \
  --role roles/editor
Besides that, you can also check all your available Service Accounts in the Cloud Console (IAM & Admin > Service Accounts) to verify that your custom one is there, and even add the permissions via the UI if you prefer that route.
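If the goal is for the instance to read from and write to Cloud Storage (the storage roles you mention), a common pattern is to grant those roles to the instance's generated service account instead. A rough sketch with gsutil, using the generated account from your example (the bucket name is a placeholder):
gsutil iam ch serviceAccount:p754990076948-kf1bsf@gcp-sa-cloud-sql.iam.gserviceaccount.com:objectAdmin gs://my-import-export-bucket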
Let me know if the information helped you!

How To Use Service principal To Assign A Role To Another Service Principal

How can I give a service principal access to assign a role to a resource it created?
Here's the scenario.
I...
Created an Azure DevOps pipeline
Created a Service Connection (which creates a service principal and grants it Contributor access to the entire subscription).
Created a pipeline task AzureCLI#1 using the service connection
Executed az group create … - SUCCESS - made a resource group!
Executed az group deployment create … - SUCCESS - deployed some stuff!
^-- (Unless I do any role assignments as part of my ARM template)
Executed az role assignment create … - FAILURE
ERROR: Insufficient privileges to complete the operation.
I tried making the service principal Owner instead of Contributor. No difference.
This made me understand (kinda) why: Azure Service principal insufficient permissions to manage other service principals
Which lead me here: https://learn.microsoft.com/en-ca/azure/devops/pipelines/release/azure-rm-endpoint?view=azure-devops#failed-to-assign-contributor-role
But I'm a little stuck. I think I'm supposed to grant my service principal some sort of role within active directory so that it's allowed to manage role assignments.
I found this: https://learn.microsoft.com/en-us/azure/active-directory/users-groups-roles/roles-delegate-by-task#roles-and-administrators
Based on that, it seems I should give my service principal Privileged role administrator access. scary.
Then I found this: https://learn.microsoft.com/en-us/azure/role-based-access-control/custom-roles
Because I wanted to limit this service principal to only be able to flex the active directory powers within a single subscription, which seems to be possible in the AssignableScopes property.
But two things are giving me pause, which brings me here.
1) I'm relatively unfamiliar with what I'm doing, and I'm tossing around big scary terms like Administrator shudder. Time to consult some experts!
2) This seems complex. The task I'm performing seems like it should not be complex. I'm just trying to deploy AKS and a Container Registry in an Azure Pipeline and give AKS access to the registry. Which is what all the docs say to do (albeit at the commandline, not in a pipeline).
So, should I really be creating a custom role just for the subscription which gives Privileged role administrator type privileges assignable only to the subscription, then granting my service principal that role?
Or... How do I do this?
EDIT:
I did try creating a custom role with action Microsoft.Authorization/write. It failed with this error: 'Microsoft.Authorization/write' does not match any of the actions supported by the providers.
But I succeeded in creating one with the action Microsoft.Authorization/*/write, as well as Microsoft.Authorization/*.
My .json definition looks like:
{
  "Name": "...",
  "Description": "...",
  "IsCustom": true,
  "Actions": [ "Microsoft.Authorization/*" ],
  "AssignableScopes": [
    "/subscriptions/[subscriptionid]"
  ]
}
After assigning the role to the service principal, it still failed with insufficient access. I logged in locally via az login --service-principal, tried to use my new powers, and got this message:
The client '...' with object id '...' does not have authorization to perform action 'Microsoft.Resources/subscriptions/resourceGroups/.../Microsoft.Authorization/read' over scope '/subscriptions/.../resourceGroups/.../.../providers/Microsoft.Authorization/roleDefinitions' or the scope is invalid. If access was recently granted, please refresh your credentials.
EDIT: SOLUTION
{
  "Name": "...",
  "Description": "...",
  "IsCustom": true,
  "Actions": [
    "Microsoft.Authorization/roleAssignments/read",
    "Microsoft.Authorization/roleAssignments/write"
  ],
  "AssignableScopes": [
    "/subscriptions/[subscriptionid]"
  ]
}
This works with az role definition create.
The service principal also needs to be a Directory Reader, unless you specify the role assignment by object-id. Azure Active Directory: Add Service Principal to Directory Readers Role with PowerShell
It can be assigned to the service principal, and when executing az commands as that service principal, it succeeds in creating role assignments.
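For reference, a rough sketch of the commands to create the custom role and assign it to the service principal (file name, role name, and IDs are placeholders for your own values):
az role definition create --role-definition @role.json
az role assignment create \
  --assignee <service-principal-app-id> \
  --role "<custom role Name from the JSON>" \
  --scope "/subscriptions/<subscription-id>"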
You need to grant it the Microsoft.Authorization/roleAssignments/write custom permission or the built-in Owner role. The scope would be the subscription if you want to be able to do that for every resource group\resource in the subscription, or you can be more granular (say, specific resource groups or even resources). See the sketch below.
Your custom role link is the right way to create custom roles.
Edit: OP needed to add Microsoft.Authorization/roleAssignments/read as well; for me it works without it.
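For example, to stay granular, you could assign the built-in Owner role (or the custom role above) at a single resource group's scope instead of the whole subscription (all names and IDs below are placeholders):
az role assignment create \
  --assignee <service-principal-app-id> \
  --role Owner \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"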

Performing the association of an existing service account to a newly created subscription in gcloud

I have a service account in gcloud that I am using to create a new topic and a subscription to that topic, in that order.
However, I need to be able to assign the newly created subscription to the service account explicitly. In the UI this is done by going to
Pubsub > Subscription > Selecting the subscription and then clicking on "Search member" > Adding the service account.
However I want to automate this using the gcloud command.
So far I have been able to:
1) Activate a service account, serviceAccountA
2) Create a topic
3) Create a subscription to the topic
I am trying to use the following command to set an IAM policy on the service account, so as to give the pubsub.editor role to the service account:
gcloud iam service-accounts set-iam-policy serviceAccountA <json file path>
The JSON file content is as below:
{
  "bindings": [
    {
      "role": "roles/pubsub.editor",
      "members": ["serviceAccountA"]
    }
  ]
}
The above gcloud command results in the error:
ERROR: (serviceAccountA PERMISSION_DENIED: Not allowed to get project settings for project <id>
I am missing something. Is there an easy way to associate the subscription with a specific service account?
I suspect the problem is that the service account you've activated doesn't have permissions to give itself permissions. Try setting this with a gcloud account that has edit permissions for the project. You can set the current account with gcloud auth login or gcloud config configurations activate <your_config>.
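Once you are authenticated as an account with sufficient permissions, you can also bind the role directly on the subscription resource rather than on the service account, which is what the UI flow you describe does. A sketch (the subscription name and the service account email are placeholders):
gcloud pubsub subscriptions add-iam-policy-binding my-subscription \
  --member="serviceAccount:<serviceAccountA-email>" \
  --role="roles/pubsub.editor"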

Can I authenticate gcloud cli using both service account and user credentials?

Google API clients typically recognise the GOOGLE_APPLICATION_CREDENTIALS environment variable. If found, it's expected to point to a JSON file with credentials for either a service account or a user.
Service account credentials can be downloaded from the GCP web console and look like this:
{
  "type": "service_account",
  "project_id": "...",
  "private_key_id": "...",
  "private_key": "...",
  "client_email": "...",
  "client_id": "...",
  "auth_uri": "...",
  "token_uri": "...",
  "auth_provider_x509_cert_url": "...",
  "client_x509_cert_url": "..."
}
User credentials are often available in ~/.config/gcloud/application_default_credentials.json and look something like:
{
  "client_id": "...",
  "client_secret": "...",
  "refresh_token": "...",
  "type": "authorized_user"
}
Here's an example of the official google rubygem detecting the type of credentials provided via the environment var.
I'd like to authenticate an unconfigured gcloud install with both types of credential. In our case we happen to be passing the GOOGLE_APPLICATION_CREDENTIALS variable and path into a docker container, but I think this is a valid question for clean installs outside docker too.
If the credentials file is a service account type, I can do this:
gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
However I can't see any way to handle the case where the credentials belong to a real user.
Questions:
Why doesn't the official gcloud tool follow the conventions that other google API clients use and use GOOGLE_APPLICATION_CREDENTIALS when available?
Is there a hidden method that will activate the user credentials case?
As you point out, the gcloud command-line tool (CLI) does not use application default credentials. It has a separate system for managing its own credentials.
GOOGLE_APPLICATION_CREDENTIALS is designed for client libraries to simplify wiring in credentials, and the gcloud CLI is not a library. Even in client code, best practice is not to depend on this environment variable but instead to explicitly provide credentials.
To answer your second question, user credentials can be obtained via
gcloud auth login
command. (NOTE: this is different from gcloud auth application-default login.) Besides saving the actual credentials, this will also set the account property in the current configuration:
gcloud config list
gcloud can have many configurations, each with different credentials. See
gcloud config configurations list
You can create multiple configurations, one with a user account and another with a service account, and use them simultaneously by providing the --configuration parameter, for example
gcloud compute instances list --configuration MY_USER_ACCOUNT_CONFIG
Similarly, you can also switch which credentials are used via the --account flag, in which case it will use the same configuration and only swap out the account.
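For example, a rough sketch of setting up two configurations side by side (the configuration names and key path are placeholders):
gcloud config configurations create user-config       # creates and activates it
gcloud auth login                                      # user credentials are stored under user-config
gcloud config configurations create sa-config
gcloud auth activate-service-account --key-file=/path/to/key.json
gcloud compute instances list --configuration user-config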
I've found a way to authenticate a fresh gcloud when GOOGLE_APPLICATION_CREDENTIALS points to a file with user credentials rather than service account credentials.
cat ${GOOGLE_APPLICATION_CREDENTIALS}
{
  "client_id": "aaa",
  "client_secret": "bbb",
  "refresh_token": "ccc",
  "type": "authorized_user"
}
gcloud config set auth/client_id aaa
gcloud config set auth/client_secret bbb
gcloud auth activate-refresh-token user ccc
This uses the undocumented auth activate-refresh-token subcommand - which isn't ideal - but it does work.
Paired with gcloud auth activate-service-account --key-file=credentials.json, this makes it possible to initialize gcloud regardless of the credential type available at $GOOGLE_APPLICATION_CREDENTIALS
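Putting the two together, a minimal bootstrap sketch that picks the activation path based on the "type" field (assumes jq is available; the refresh-token route is the undocumented one above):
# Pick the activation command based on the credential type in the JSON file
CRED_TYPE=$(jq -r .type "${GOOGLE_APPLICATION_CREDENTIALS}")
if [ "${CRED_TYPE}" = "service_account" ]; then
  gcloud auth activate-service-account --key-file="${GOOGLE_APPLICATION_CREDENTIALS}"
else
  gcloud config set auth/client_id "$(jq -r .client_id "${GOOGLE_APPLICATION_CREDENTIALS}")"
  gcloud config set auth/client_secret "$(jq -r .client_secret "${GOOGLE_APPLICATION_CREDENTIALS}")"
  gcloud auth activate-refresh-token user "$(jq -r .refresh_token "${GOOGLE_APPLICATION_CREDENTIALS}")"
fi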

PUT Object to AWS S3 via HTTP through VPC Endpoint with proper ACL?

I am using an HTTPS client to PUT an object to Amazon S3 from an EC2 instance within a VPC that has an S3 VPC Endpoint configured. The target Bucket has a Bucket Policy that only allows access from specific VPCs, so authentication via IAM is impossible; I have to use HTTPS GET and PUT to read and write Objects.
This works fine as described, but I'm having trouble with the ACL that gets applied to the Object when I PUT it to the Bucket. I've played with setting a Canned ACL using HTTP headers like the following, but neither results in the correct behavior:
x-amz-acl: private
If I set this header, the Object is private but it can only be read by the root email account so this is no good. Others need to be able to access this Object via HTTPS.
x-amz-acl: bucket-owner-full-control
I totally thought this Canned ACL would do the trick; however, it resulted in unexpected behavior, namely that the Object became World Readable! I'm also not sure how the Owner of the Object was decided, since it was created via HTTPS; in the console the owner is listed as a seemingly random value. This is the documentation description:
Both the object owner and the bucket owner get FULL_CONTROL over the
object. If you specify this canned ACL when creating a bucket, Amazon
S3 ignores it.
This is totally baffling me, because according to the Bucket Policy, only network resources of approved VPCs should even be able to list the Object, let alone read it! Perhaps it has to do with the union of the ACL and the Bucket Policy and I just don't see something.
Either way, maybe I'm going about this all wrong anyway. How can I PUT an object to S3 via HTTPS and set the permissions on that object to match the Bucket Policy, or otherwise make the Bucket Policy authoritative over the ACL?
Here is the Bucket Policy for good measure:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:GetObjectTorrent",
        "s3:GetObjectVersion",
        "s3:GetObjectVersionTagging",
        "s3:GetObjectVersionTorrent",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:PutObject",
        "s3:PutObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:SourceVpc": "vpc-12345678"
        }
      }
    }
  ]
}
The way S3 ACLs and Bucket Policies work is based on the concept of "Least Privilege".
Your bucket policy only specifies ALLOW for the specified VPC. No one else is granted ALLOW access. This is NOT the same as denying access.
This means that your Bucket or object ACL is granting access.
In the S3 console double check who the file owner is after the PUT.
Double check the ACL for the bucket. What rights have you granted at the bucket level?
Double check the rights that you are using for the PUT operation. Unless you have granted public write access or the PUT is being ALLOWED by the bucket policy, the PUT must be using a signature. This signature will determine the permissions for the PUT operation and who owns the file after the PUT. This is determined by the ACCESS KEY used for the signature.
Your x-amz-acl should contain bucket-owner-full-control.
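If you stay with the policy-only approach, one common way to enforce that header is an explicit Deny in the bucket policy for PUTs that don't carry it. A sketch of such a statement, using the bucket name from your policy (illustrative only, not tested against your setup):
{
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::my-bucket/*",
  "Condition": {
    "StringNotEquals": {
      "s3:x-amz-acl": "bucket-owner-full-control"
    }
  }
}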
[EDIT after numerous comments below]
The problem that I see is that you are approaching security the wrong way in your example. I would not use the bucket policy. Instead, I would create an IAM role and assign that role to the EC2 instances that are writing to the bucket. This means that the PUTs are then signed with the IAM role's access keys, which preserves the ownership of the objects. You can then set the ACL to bucket-owner-full-control or public-read (or any supported ACL permission that you want).