I have created a bucket and files in it with Google Cloud Storage. I have also edited the permissions of the bucket to allow access to people in a Google Group.
Now, if they need to access the data, do they need to "sign up" for the Google Cloud Platform?
Is there any way they can copy all the files in the bucket using gsutil without a GCP account?
No, there isn't a way to allow access to a bucket in that manner, but Google accounts and GCP accounts are the same, so anyone with a Gmail account could access it.
The closest thing to your use case is Signed URLs, which can grant access to individual objects.
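If you go the Signed URL route, the URL can be generated with gsutil and a service account's private key. A minimal sketch, where the key file, bucket, and object names are placeholders:
# Generate a URL that grants read access to one object for 10 minutes
gsutil signurl -d 10m /path/to/service-account-key.json gs://your-bucket/your-object
Anyone holding the resulting URL can then download that object without a Google account, until the URL expires.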
I have an external identity provider that supports OpenID Connect (OIDC) and want to access Google Cloud Storage (GCS) directly, using a short-lived access token. So I'm using workload identity federation in order to provide a credential from my external identity provider and get a federated token in exchange.
I have created the workload identity pool and provider and connected a service account to it, which has write access to a certain bucket in GCS.
How can I differentiate access to specific folders in the bucket according to the token provided by my external identity provider? For example, userA should have access only to folderA in the bucket. Can I do this using one service account?
Any help would be highly appreciated.
Folders don't exist in Cloud Storage; it's blob storage, and all objects are stored at the bucket level. For human readability and representation, / is used as the folder separator, by convention.
Therefore, because directories don't exist, you can't grant any permission on them. The finest granularity is the bucket.
In your use case, you can't grant write access at the folder level, but you can create one bucket per user and grant the impersonated service account access to that bucket.
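As a rough sketch of that per-user-bucket approach (bucket name, project, and service account email are placeholders), the grant could look like this with gsutil:
# One bucket per user
gsutil mb gs://my-app-usera
# Grant the service account impersonated by userA write access on that bucket only
gsutil iam ch serviceAccount:usera-sa@my-project.iam.gserviceaccount.com:roles/storage.objectAdmin gs://my-app-usera
Each user's federated identity then impersonates its own service account, so access is separated at the bucket boundary rather than inside a single bucket.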
We've just moved files off of a 10-year-old FTP server and are now using Google Cloud Storage for our company's files. This was set up to use the web-hosting feature of GCS, and the access logging capability was also enabled.
The access logs are dumped into a bucket. I would like to deny our users access to this bucket, but allow them to continue using the main GCS bucket (we use Cyberduck for this).
We are presently allowing anybody with our company's email address to read/write to the buckets in this project, by giving the "Domain" the "Storage Admin" and "Storage Object Admin" roles. This was granted through IAM.
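For reference, a domain-wide grant like the one described above would look roughly like this in gcloud (project ID and domain are placeholders):
# Current setup: everyone in the domain gets Storage Admin across the whole project
gcloud projects add-iam-policy-binding my-project-id --member="domain:example.com" --role="roles/storage.admin"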
I want to know the right/best way of having one machine copy data to Google Storage.
I need one machine to be able to write to a bucket, but not be able to create or delete other buckets.
While researching, I found out that you should create a service account so this account can log in to Google Cloud and then use the storage.
But the problem is that when the machine is a GCE instance, there are scopes. With the "Default" scope it can read from Google Storage but cannot write to it, even after authenticating with a service account.
When the scope is devstorage.read_write, the machine can create and remove buckets from that storage without logging in. I find that too risky.
Does anyone have any recommendations?
Thanks
The core problem here is that the "write" scope covers both write and delete, and that the GCE service account is likely a member of project-editors, which can create and delete buckets. It sounds like what you want to do is restrict a service account to only being able to affect a single bucket. You should be able to do this with these steps:
Create a service account in your project (and save the private key file).
In the permissions page for the project, make sure that service account is not a project editor for your project.
Using an account that does have full permissions to your project, create the bucket, then grant the service account write access to the bucket. Example gsutil commands to do this:
gsutil mb gs://yourbucket
gsutil acl ch -u your-service-account-name@gserviceaccount.com:W gs://yourbucket
Create a VM that does not have a GCE service account enabled.
Push the service account's private key file to that VM.
On the VM, gcloud auth activate-service-account --key-file=your-key-file.json
Now gsutil commands run on the VM should be able to write to (and delete) objects in that bucket, but not any other buckets in your project.
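To sanity-check the setup, something along these lines should behave as described (the bucket and file names are placeholders):
# Should succeed: the service account has write access to this bucket
gsutil cp backup.tar.gz gs://yourbucket/
# Should fail with an access-denied error: the account is not a project editor,
# so it cannot create new buckets
gsutil mb gs://some-new-bucket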
I have a Django app running in my Google Compute Engine instance, and it needs to upload video files to my bucket in Google Cloud Storage. When searching for authentication methods, I found this doc. Under the "Setting the scope of service account access for instances" section, it says I need to enable Cloud Platform access in the settings when creating the VM. I wonder if that is a must, and whether there's any other way I can access my Cloud Storage bucket from my app on the Compute Engine instance, because creating a new VM and setting up the environment is very time-consuming. Any input would be greatly appreciated. Thanks in advance.
As documented on the page you linked to, to authenticate from Google Compute Engine to Google Cloud Storage, you have several options:
Use VM scopes: this must be set before creating the VM, because scopes are immutable once the VM is created. If you want read-only access, you need to add the scope devstorage.read_only (short form) or https://www.googleapis.com/auth/devstorage.read_only (full path). If you want read-write access, you should use the scope devstorage.read_write (short form) or https://www.googleapis.com/auth/devstorage.read_write (full path).
Note: there's also a feature gcloud beta compute instances set-scopes to update GCE VM scopes at runtime.
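A minimal sketch of the scopes option, attaching the read-write scope at creation time (instance name and zone are placeholders):
# Create the VM with read-write access to Cloud Storage
gcloud compute instances create my-vm --zone=us-central1-a --scopes=https://www.googleapis.com/auth/devstorage.read_write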
An alternative to using scopes is to use JSON authentication tokens, such as via Service accounts which can be used by Google API client libraries to connect to Google Cloud Storage.
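One hedged sketch of that service-account route, without touching the existing VM (the service account email and key path are placeholders): create a key for a service account that has access to the bucket, copy it to the VM, and point the client libraries at it via the standard GOOGLE_APPLICATION_CREDENTIALS environment variable, which Google API client libraries pick up automatically.
# Create and download a key for an existing service account
gcloud iam service-accounts keys create /path/to/key.json --iam-account=my-sa@my-project.iam.gserviceaccount.com
# On the VM, make the key visible to the Django app's client library
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
Credentials loaded from a key file are not constrained by the VM's scopes, so this should work without recreating the instance.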
I want to back up multiple servers using Google Cloud Storage. Each server needs access to the bucket containing its backups. A server should not have access to the bucket of any other server.
I have used Amazon S3 before and simply created one user per server in IAM and assigned a policy to the user that allows accessing a specific bucket.
On Google Cloud Storage it seems like authentication is based on Google accounts and OAuth 2.0, so every server uses the same Google account (mine), and the result is that every server has full access to all buckets.
How can I give each server its own access credentials (with access to its own bucket only) without the need to create a new Google account for each server?
You can create a service account for each server and use it to authenticate access to that server's storage bucket from your Google Compute Engine instance. You can limit access to the bucket by changing the bucket ACLs to allow access to that service account only.
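A minimal sketch of that per-server setup, assuming placeholder names for the server, project, bucket, and key file:
# Create a service account for this server and download its key
gcloud iam service-accounts create server1-backup
gcloud iam service-accounts keys create server1-key.json --iam-account=server1-backup@my-project.iam.gserviceaccount.com
# Create the server's bucket and grant only that service account write access on it
gsutil mb gs://my-backups-server1
gsutil acl ch -u server1-backup@my-project.iam.gserviceaccount.com:W gs://my-backups-server1
# On the server, authenticate as the service account and push backups
gcloud auth activate-service-account --key-file=server1-key.json
gsutil cp backup.tar.gz gs://my-backups-server1/
Since the service account is given no project-level roles, it should not be able to touch the other servers' buckets.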