GCS Signed URLs with subfolder in bucket - google-cloud-storage

I have a bucket with a sub-folder structure for storing media,
e.g.
bucket/Org1/ ...
bucket/Org2/ ...
and I want to generate a signed URL for all the media inside each subfolder, so that users who belong to organization 1 can only view their own files.
Of course, I don't want to generate a signed URL for each file (there can be a lot of them), and ACLs don't work either, because my users are logged in with a non-Google account (and may not have one).
So, is there any way to allow something like bucket/Org1/*?

Unfortunately, no. For retrieving objects, signed URLs need to be for exact objects. You'd need to generate one per object.
One way to accomplish this would be to write a small App Engine app that users download from instead of hitting GCS directly. It would check authentication according to whatever mechanism you're using and then, if the check passes, generate a signed URL for that resource and redirect the user.
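A minimal sketch of that approach, assuming a Flask app on App Engine, the Python Cloud Storage client, and a placeholder user_belongs_to() standing in for whatever non-Google authentication you use:

# Hypothetical App Engine (Flask) proxy: authorize the user, then redirect
# to a short-lived signed URL for the requested object.
from datetime import timedelta

from flask import Flask, abort, redirect
from google.cloud import storage

app = Flask(__name__)
# Signing requires credentials that can sign, e.g. a service account key.
client = storage.Client()

def user_belongs_to(org):
    # Placeholder: replace with your real (non-Google) authentication check.
    return True

@app.route("/media/<org>/<path:filename>")
def media(org, filename):
    if not user_belongs_to(org):
        abort(403)
    blob = client.bucket("bucket").blob(f"{org}/{filename}")
    url = blob.generate_signed_url(
        version="v4", expiration=timedelta(minutes=15), method="GET"
    )
    return redirect(url)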

Related

Can I exchange a GitHub access token for a raw file token?

Does anyone know if there is any information about how the raw tokens are created?
TL;DR: I want to create links to files (specifically images) from private repos with the raw token attached. I need this to happen automatically; I do not want to "click the raw button" to get the token. That said, I do have access to the logged-in user's personal access token. Can I use this access token to automatically create a raw link with the raw token attached?
Further info:
GHE is a bit broken here, and fixing it doesn't seem to be high on the GitHub developers' list. Trying to access images from a different domain results in CORB issues. I can get the files I need using Octokit; as mentioned above, the users do need to log in to GHE, so I have access to their access token.
What I want to do is show markdown information. I get the markdown file through Octokit, but markdown can of course link to images. These images will often be stored alongside the markdown file in GitHub, resulting in either relative or direct URLs in the markdown file. I want to render this markdown file along with whatever images it specifies, but as I mentioned earlier, rendering it directly results in CORB issues.
The idea I had was that I could instead swap these GHE URLs for URLs with the raw token attached. Using a URL like that for an image would definitely work, and it does not matter that it isn't a permanent URL. On the contrary, a temporary token is more secure, and the URLs would be recreated every time the user hits the page anyway, so there is no need for permanent links.
If I could use the user's auth token to create a link to a raw image, it would solve my issues. Is this possible? If not, do you have any suggestions for an alternative way to do this?
The only other way I can think of is to create a proxy that authenticates and fetches the files through Octokit and returns them. This would, however, need to use a service account instead of the currently logged-in user, which opens up a security hole where users who shouldn't have access to certain files can suddenly use the proxy instead.
Am I missing something?
Thankful for any help!
No, personal access tokens and other similar tokens can't be used there. If you want to use a personal access token, you have two options:
Use the /repos/OWNER/NAME/contents/ endpoints with Accept: application/vnd.github.raw and pass the token in the Authorization header. This returns the raw file, but without the correct content type, so it probably won't render in the browser; it can, however, be downloaded programmatically.
Use the same endpoint without that Accept header but with the Authorization header, and you'll get a JSON response with a download_url field, which contains the correct token for that URL.
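A rough sketch of both options with Python's requests library; the repository path and token placeholder are assumptions (on GHE, the API base URL would differ):

import requests

TOKEN = "<personal-access-token>"  # placeholder for the user's PAT
API = "https://api.github.com/repos/OWNER/NAME/contents/docs/image.png"

# Option 1: raw bytes directly (generic content type, fine for programmatic use).
raw = requests.get(API, headers={
    "Accept": "application/vnd.github.raw",
    "Authorization": f"token {TOKEN}",
})
image_bytes = raw.content

# Option 2: JSON metadata; download_url carries a temporary raw token.
meta = requests.get(API, headers={"Authorization": f"token {TOKEN}"})
download_url = meta.json()["download_url"]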
Note that all tokens in raw file URLs for private repositories are temporary and expire after a while, or when the user changes their password.
For your purpose, I'd recommend deploying these documents and images to some sort of static server on a periodic basis (say, with your CI system) and hosting them there. That's going to be a lot easier than trying to write a proxy.

How do you change an object's access from publicly shared to only momentarily accessible?

I've been playing around with Google Cloud Storage. I've been able to upload files without any signature mismatch errors by doing the following:
set CORS using gsutil from a JSON file,
get signed URLs with an additional 'x-goog-acl': 'project-private' key/value pair (or at least I think I have),
and PUT my object to storage with an additional 'x-goog-acl': 'project-private' field in the request body.
It doesn't seem to do anything. When I look in the console, my image file is still shared publicly.
What I'm trying to do is make the user who is authenticated in my web app the only person who can access that file. How can I do that? I thought it was ACL permissions, but I'm not sure anymore.
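For reference, the canned ACL has to be covered by the signature and then sent as a request header on the PUT - a header, not a body field. A minimal sketch with the Python Cloud Storage client, using hypothetical bucket and object names:

from datetime import timedelta
import requests
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("uploads/photo.png")

# The header is baked into the V4 signature here...
url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),
    method="PUT",
    headers={"x-goog-acl": "project-private"},
)

# ...and must be sent verbatim on the actual upload, or GCS rejects it
# with a signature mismatch.
with open("photo.png", "rb") as f:
    resp = requests.put(url, data=f, headers={"x-goog-acl": "project-private"})
resp.raise_for_status()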

Upload to Google Cloud Storage via signed URL - object not publicly readable

I followed this tutorial to allow uploading files from a GWT frontend directly to Google Cloud Storage using signed URLs. I extended the Java example by specifying the content type, which worked just fine. Then I saw that files uploaded this way weren't publicly readable. To get this working I tried the following:
I set up a default ACL for newly uploaded objects: gsutil defacl set public-read gs://<bucket>. I uploaded the file again - no luck, still not visible.
Then I tried to set the ACL on that object directly: gsutil acl set public-read gs://<bucket>/<file>, but it gave me AccessDeniedException: 403 Forbidden. That makes sense, since gsutil is authenticated with my Google account while the signed URL is created with the service account and its P12 key.
I tried to set the ACL at upload time, so I added the "x-goog-acl:public-read\n" canonicalized extension header and the appropriate query string parameter to pass the signature check. Damn, still no luck!
My assumption is that maybe the extension header I'm using is wrong, since according to the documentation all authenticated requests to GCS apply a private ACL by default.
Anyway - why can't I make these files publicly readable from the Google Console when I'm logged in as the project owner? I can do so for all files uploaded through the console (I know that in that case the owner is the project owner and not the service account).
What am I doing wrong? How can I make the files publicly readable by anyone?
Thanks in advance!
If you go through the given docs, they clearly mention that this method provides a signed URL, valid for a specific time, so that a user can download the object without using a Google account. I am assuming it might not be possible to make those objects publicly available since they are signed. If you still need that functionality, I would recommend looking into resumable upload or simple upload of the object.
Also try adding your project's service account as the owner under "Edit default permissions of objects" in the developer console, next to your bucket name.
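Since the service account that signed the upload URL owns the resulting object, its own credentials can also flip the ACL after the upload. A minimal sketch with the Python client, assuming a JSON key for that service account and hypothetical bucket and object names:

from google.cloud import storage
from google.oauth2 import service_account

# Authenticate as the service account that owns the uploaded object.
creds = service_account.Credentials.from_service_account_file("service-account.json")
client = storage.Client(credentials=creds, project="my-project")

blob = client.bucket("my-bucket").blob("uploaded-file.png")
blob.make_public()  # same effect as setting the public-read ACL
print(blob.public_url)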

Using a signed URL for a subfolder

I am evaluating Google Cloud Storage for the following use case. I need to restrict my users (they do not have Gmail accounts) so they can access only their own files.
I know that can be done using gsutil signurl, but it's going to be lots of small files, and generating a signed URL for every file is crazy. So I'm wondering: is there a trick to provide access to some subfolder using a signed URL?
The mentioned documentation says that wildcards can be used. Does that mean it will generate many URLs, or one URL that applies to all files matched by the wildcard?
You should absolutely use per-object ACLs for this. Signed URLs might be more difficult to implement, and if you're already thinking of managing user accounts, you'll want to do this through OAuth 2.0 for Login anyway, so sending the user's Bearer token with any requests you make to the API should come as a magical bonus of doing your user accounts this way. Read more about auth with Cloud Storage here.
Unlike the gsutil ls command, the signurl command does not support operations on sub-directories. For example, unless you have an object named some-directory/ stored inside the bucket some-bucket, the following command returns an error: gsutil signurl gs://some-bucket/some-directory/
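In other words, there is no single signed URL covering a prefix; a wildcard just expands to one URL per matching object. A sketch of the per-object approach with the Python client (bucket and prefix names are placeholders):

from datetime import timedelta
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("some-bucket")

# One signed URL per object under the "sub-directory" prefix.
urls = {
    blob.name: blob.generate_signed_url(
        version="v4", expiration=timedelta(hours=1), method="GET"
    )
    for blob in client.list_blobs(bucket, prefix="some-directory/")
}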

How do you get or generate a URL to the object in a bucket?

I'm storing objects in buckets on Google Cloud Storage. I would like to provide an HTTP URL to an object for download. Is there a standard convention or way to expose files stored in Cloud Storage as HTTP URLs?
Yes. Assuming that the objects are publicly accessible:
http://BUCKET_NAME.storage.googleapis.com/OBJECT_NAME
You can also use:
http://storage.googleapis.com/BUCKET_NAME/OBJECT_NAME
Both HTTP and HTTPS work fine. Note that the object must be readable by anonymous users, or else the download will fail. More documentation is available at https://developers.google.com/storage/docs/reference-uris
If it is the case that the objects are NOT publicly accessible and you only want the one user to be able to access them, you can generate a signed URL that will allow only the holder of the URL to download the object, and even then only for a limited period of time. I recommend using one of the GCS client libraries for this, as it's easy to get the signing code slightly wrong: https://developers.google.com/storage/docs/accesscontrol#Signed-URLs
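For instance, with the Python client library, both cases look roughly like this (bucket and object names are placeholders):

from datetime import timedelta
from google.cloud import storage

blob = storage.Client().bucket("BUCKET_NAME").blob("OBJECT_NAME")

print(blob.public_url)  # only works if the object is readable by anonymous users

# For private objects, hand out a time-limited signed URL instead.
signed = blob.generate_signed_url(
    version="v4", expiration=timedelta(minutes=30), method="GET"
)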
One way is to use https://storage.cloud.google.com/{bucket-name}/{filename}; see more documentation at
https://developers.google.com/storage/docs/collaboration#browser
If the file is not public, you can use this link to the file and it will authenticate with your signed-in Google account:
https://storage.cloud.google.com/{bucket-name}/{folder/filename}
Otherwise generate a signed URL:
gsutil signurl -d 10m Desktop/private-key.json gs://example-bucket/cat.jpeg