Upload to Google Cloud Storage via signed URL - object not publicly readable - google-cloud-storage

I followed this tutorial to allow uploading files from a GWT frontend directly to Google Cloud Storage using signed URLs. I extended the Java example by specifying a content type, which worked just fine. Then I noticed that files uploaded this way weren't publicly readable. To get this working I tried:
I set up a default ACL for newly uploaded objects with gsutil defacl set public-read gs://<bucket>. Uploaded the file again - no luck, still not visible.
Then I tried to set the ACL on that object directly with gsutil acl set public-read gs://<bucket>/<file>, but it gave me AccessDeniedException: 403 Forbidden. That makes sense, since gsutil is authenticated with my Google account while the signed URL is created with the service account and its P12 key.
So I tried to set the ACL at upload time instead, adding the "x-goog-acl:public-read\n" canonicalized extension header and the appropriate query string parameter so the request passes the signature check. Still no luck!
My assumption is that maybe the extension header I'm using is wrong? According to the documentation, all authenticated requests to GCS apply a private ACL by default.
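For reference, the V2 string-to-sign I'm building looks roughly like this (bucket, object, and key are made up, and HMAC stands in for the real RSA-SHA256 service-account signature so the sketch is self-contained):

```python
import base64
import hashlib
import hmac
import time

# All names here are invented; a real signed URL is RSA-SHA256 signed with
# the service account's P12/PEM private key -- HMAC stands in only so this
# sketch runs without a key file.
bucket, obj = "my-bucket", "photo.jpg"
content_type = "image/jpeg"
expiration = int(time.time()) + 600  # Unix timestamp, 10 minutes out

# V2 string-to-sign for a PUT that also sets the ACL at upload time:
string_to_sign = "\n".join([
    "PUT",                       # HTTP verb
    "",                          # Content-MD5 (optional, left empty)
    content_type,                # must match the Content-Type the client sends
    str(expiration),             # expiry
    "x-goog-acl:public-read",    # canonicalized extension headers
    f"/{bucket}/{obj}",          # canonicalized resource
])

# Stand-in for the RSA signature that goes into the Signature= query param.
signature = base64.b64encode(
    hmac.new(b"not-a-real-key", string_to_sign.encode(), hashlib.sha256).digest()
).decode()
print(string_to_sign)
```

As far as I understand, the client then has to send x-goog-acl: public-read as an actual HTTP header on the PUT as well - if the header is in the string-to-sign but missing from the request (or vice versa), the signature check or the ACL will not behave as expected.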
Anyway - why can't I make these files publicly readable from the Google Console when I'm logged in as the project owner? I can do so for all files uploaded through the console (I know that in that case the owner is the project owner and not the service account).
What am I doing wrong? How can I make these files publicly readable by anyone?
Thanks in advance!

If you go through the linked docs, they clearly state that this method provides a signed URL, valid for a specific time, so a user can download the object without using a Google account. My assumption is that it may not be possible to make those objects publicly available because they are signed. If you still need that functionality, I would recommend looking at resumable upload or simple upload of the object.
Also try setting your project's service account as the owner under "Edit default permissions of objects" in the Developer Console, to the right of your bucket name.

Related

How do you set object access from shared publicly to only being able to access the file momentarily?

I've been playing around in Google Cloud Storage. I've been able to upload files without any signature mismatch errors by doing the following:
set CORS using gsutil from a JSON file.
get signed URLs with an additional 'x-goog-acl': 'project-private' key/value (or at least I think I have)
and PUT my object to storage with 'x-goog-acl': 'project-private' as an additional field in the body
It doesn't seem to do anything. When I look in the console my image file is still shared publicly.
What I'm trying to do is make the user who is authenticated in my web app the only person who can access that file. How can I do that? I thought it was ACL permissions, but I'm not sure anymore.
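For reference, a minimal version of the PUT I'm sending, except with x-goog-acl as a request header rather than part of the body, which is what I now suspect is required (the signed URL and bytes here are made up):

```python
import urllib.request

# Hypothetical signed URL -- the real one comes from the signing step (2) above.
signed_url = ("https://storage.googleapis.com/my-bucket/img.png"
              "?GoogleAccessId=svc%40example.iam.gserviceaccount.com"
              "&Expires=1700000000&Signature=abc123")

req = urllib.request.Request(
    signed_url,
    data=b"fake image bytes",                    # the object contents
    method="PUT",
    headers={"x-goog-acl": "project-private"},   # ACL goes in a header, not the body
)
# urllib.request.urlopen(req)  # uncomment to actually perform the upload
```

If x-goog-acl was included in the string-to-sign, the header sent on the request has to match it exactly, or GCS rejects the upload with a signature mismatch.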

GCS Signed Urls with subfolder in bucket

I have a bucket with a sub-folder structure for media,
e.g.
bucket/Org1/ ...
bucket/Org2/ ...
and I want to generate a signed URL for all the media inside each subfolder, so users that belong to organization 1 can only view their own files.
Of course I don't want to generate a signed URL for each file (there can be a lot of them), and ACLs don't work either, because my users log in with a non-Google account (and may not even have one).
So is there any way to allow something like bucket/Org1/* ?
Unfortunately, no. For retrieving objects, signed URLs need to be for exact objects. You'd need to generate one per object.
One way to accomplish this would be to write a small App Engine app that they attempt to download from instead of directly from GCS which would check authentication according to whatever mechanism you're using and then, if they pass, generate a signed URL for that resource and redirect the user.
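A rough sketch of that approach (all names are invented, and an HMAC placeholder stands in for the real service-account RSA signing):

```python
import base64
import hashlib
import hmac
import time

SIGNING_KEY = b"placeholder-key"  # stands in for the service account's private key

def signed_url_for(bucket, obj, lifetime_s=300):
    # V2-style signed GET URL; HMAC stands in for the real RSA-SHA256 signature.
    expires = int(time.time()) + lifetime_s
    to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, obj)
    sig = base64.urlsafe_b64encode(
        hmac.new(SIGNING_KEY, to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    return ("https://storage.googleapis.com/%s/%s"
            "?GoogleAccessId=svc@example.iam.gserviceaccount.com"
            "&Expires=%d&Signature=%s" % (bucket, obj, expires, sig))

def handle_download(user_org, requested_path):
    """Redirect handler: 302 to a short-lived signed URL if the user's
    organization matches the sub-folder prefix, otherwise 403."""
    org = requested_path.split("/", 1)[0]
    if org != user_org:
        return 403, ""
    return 302, signed_url_for("bucket", requested_path)
```

The authentication check here is just a prefix comparison; in the real app it would be whatever session or token mechanism your users already log in with.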

Using signed url for subfolder

I am evaluating Google Cloud Storage for the following use case. I need to restrict my users (they do not have Gmail accounts) so they can access only their own files.
I know that can be done using gsutil signurl, but it's going to be lots of small files, and generating a signed URL for every file is crazy. So I'm wondering if there is a trick to provide access to a subfolder using a signed URL?
The documentation mentions that wildcards can be used. Does that mean it will generate many URLs, or one URL that applies to all files matching the wildcard?
You should absolutely use per-object ACLs for this. Signed URLs can be more difficult to implement, and if you're already thinking of managing user accounts, you'll want to do this through OAuth 2.0 for Login anyway, so sending the user's Bearer token with any requests you make to the API comes as a bonus of handling user accounts this way. Read more about auth with Cloud Storage here.
Unlike the gsutil ls command, the signurl command does not support operations on sub-directories. For example, unless you have an object named some-directory/ stored inside the bucket some-bucket, the following command returns an error: gsutil signurl gs://some-bucket/some-directory/
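So for a whole prefix you end up signing each object individually. One way to do that (a sketch; the key file and object names are invented) is to take the output of gsutil ls for the prefix and build one signurl invocation per object:

```python
import shlex

# Stand-in for the output of: gsutil ls gs://some-bucket/some-directory/**
objects = [
    "gs://some-bucket/some-directory/a.jpg",
    "gs://some-bucket/some-directory/b.jpg",
]

# signurl has no recursive mode, so it's one command per object.
commands = ["gsutil signurl -d 10m key.json " + shlex.quote(o) for o in objects]
for cmd in commands:
    print(cmd)
```

This is workable for a batch job, but for many small files the App Engine redirect approach above scales better, since URLs are only signed on demand.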

How do you get or generate a URL to the object in a bucket?

I'm storing objects in buckets on Google Cloud Storage. I would like to provide an HTTP URL to each object for download. Is there a standard convention or way to expose files stored in Cloud Storage as HTTP URLs?
Yes. Assuming that the objects are publicly accessible:
http://BUCKET_NAME.storage.googleapis.com/OBJECT_NAME
You can also use:
http://storage.googleapis.com/BUCKET_NAME/OBJECT_NAME
Both HTTP and HTTPS work fine. Note that the object must be readable by anonymous users, or else the download will fail. More documentation is available at https://developers.google.com/storage/docs/reference-uris
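In code this is just string formatting (the bucket and object names here are made up):

```python
bucket = "my-bucket"
obj = "images/cat.jpeg"

# Subdomain form:
url_a = "https://%s.storage.googleapis.com/%s" % (bucket, obj)
# Path form:
url_b = "https://storage.googleapis.com/%s/%s" % (bucket, obj)

print(url_a)  # https://my-bucket.storage.googleapis.com/images/cat.jpeg
print(url_b)  # https://storage.googleapis.com/my-bucket/images/cat.jpeg
```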
If it is the case that the objects are NOT publicly accessible and you only want the one user to be able to access them, you can generate a signed URL that will allow only the holder of the URL to download the object, and even then only for a limited period of time. I recommend using one of the GCS client libraries for this, as it's easy to get the signing code slightly wrong: https://developers.google.com/storage/docs/accesscontrol#Signed-URLs
One way is to use https://storage.cloud.google.com/{bucket-name}/{object-name} - see more documentation at
https://developers.google.com/storage/docs/collaboration#browser
If the file is not public, you can use this link to the file and it will authenticate with your signed in Google account:
https://storage.cloud.google.com/{bucket-name}/{folder/filename}
Otherwise generate a signed URL:
gsutil signurl -d 10m Desktop/private-key.json gs://example-bucket/cat.jpeg

I am unable to use the Amazon Web Services S3 uploader in the iPhone SDK

I have developed an app that uses AWS services, but I could not get it working. I have an "access key" and a "secret key", but when I run the S3 uploader I get the error "No such bucket" and "you are not signed up". I think that when I created the account I did not complete the "payment method" process.
AWS does not seem to provide a test mode. I am confused - please suggest the right way to do this.
Thanks in advance
I help maintain the AWS SDK for iOS. Building off the suggestions from Brad:
Make sure you can access the S3 console from the AWS website. This will ensure you have an active and valid account.
Make sure you have copied the access and secret keys correctly into Constants.h in the S3_Uploader sample application.
The sample creates a unique bucket name based on your access key. If this is failing for some reason, you can update Constants.m in the sample to use your own custom name (or use a bucket that you've already created via the console).
It sounds like you don't have an active AWS account.
Do you have one? Can you access your bucket from a regular PC? I am guessing you don't. Make sure you can access your account and bucket from a regular desktop before trying it on your iPhone. You need to go into the management console and create an S3 bucket; if you don't, you will get that error. (Either that, or you are trying to access the wrong one.)