Google Cloud Storage external authorization - google-cloud-storage

I need to store my service data in Google Cloud Storage and let my users download files depending on their access rights.
I've already built a service that connects to Google Cloud Storage using a server-centric mechanism and relays files to the client side, but I need the client side to go to Storage and download files directly, without going through my server.
I've tried using temporary links to files, but I can't tell whether a user has finished downloading a file, so I don't know when to delete the temporary link.
I've also looked for OAuth2 support, but it seems Google doesn't support OAuth in this way (where my service decides whether to allow access or not).
The ideal solution would be to generate tokens for my users and have Google Cloud Storage call my service before every file download.
How can I achieve that?
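
For context, GCS's temporary links are signed URLs, and a common way around not knowing when a download happened is to skip deletion entirely and just give the link a short expiry. A minimal sketch with the google-cloud-storage Python library (bucket and object names are placeholders):

from datetime import timedelta

from google.cloud import storage

def make_temporary_link(bucket_name, blob_name):
    # Generate a short-lived V4 signed URL for one object; the link
    # expires on its own, so nothing has to be deleted afterwards.
    client = storage.Client()  # authenticates as the service account
    blob = client.bucket(bucket_name).blob(blob_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=15),  # the "temporary" part
        method="GET",
    )

# Hypothetical usage:
# url = make_temporary_link("my-service-data", "exports/report.csv")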

Related

Cloud Storage - Disabled Public Access Prevention, but Failed

Okay, I was using Flutter and Firebase to upload data into Cloud Storage. I obtained the download URL, which anyone on the web can access if they know it. I had enabled Public Access Prevention in the Google Cloud Storage console based on this doc, and chose uniform access control based on this doc.
I had also added a security rule in Firebase Cloud Storage, so that only users with a certain custom token can use it. But it seems useless, as everyone can still use the download URL. My question is: why am I still able to access the file using the same URL that I stored in Firestore? You can test it on this URL.
Can a hacker get the download URL I fetched from Firestore?
Is there a secure way to download a song from Firebase Cloud Storage so that a hacker won't get its URL?
Thank you for helping me out.
Update v2:
I just found out that the current audio file has its own authenticated URL, as shown in the picture below. How can I get access to this URL?
Update v1:
I don't think I have activated Firebase App Check yet. Does this feature have the ability to prevent the file from being accessed publicly, or is there something else I have to do, besides all the things I described above?
Security rules only check whether a user can get the download URL; they do not restrict anyone from using it afterwards. You can use the getData() method instead. It doesn't return any URL, downloads the file directly, and is controlled by security rules, so a user must be authenticated to fetch the data.
As mentioned in this answer:
If you're using the FlutterFire Storage library in your app, you can call getData on a reference to the file to get its data. So with that you just need to know the path to the data, and you won't need the download URL in your application. Once you have the data locally, you can create an image out of it with: Converting a byte array to image in Flutter? Unlike download URLs, the call to getData() is checked by security rules, so you'll have to ensure that the user is permitted to access the file.
You can also refer to this answer:
For web apps: in the JavaScript/Web SDK, using a download URL is the only way to get at the data, while for the native mobile SDKs we also have getData() and getFile() methods, which are enforced through security rules. Until that time, if signed URLs fit your needs better, you can use those. Both signed URLs and download URLs are just URLs that provide read-only access to the data. Signed URLs just expire, while download URLs don't.
For more information, you can refer to this GitHub issue, where a similar problem has been discussed.

How to make Google Cloud Storage direct download links compliant with ACLs?

If a .txt file is saved to GCS and an authorized user clicks on it in the developer console browser, the contents are displayed in the web browser. That's fine, but that URL can then be sent to anyone, authorized or not, allowing them to view the contents of the file.
"Share publicly" is unchecked, and no changes have been made to the default ACLs. And this isn't specific to .txt files -- that's just the easiest way to replicate the behavior since they're displayed directly in the browser (so you can easily get to that URL).
How do I configure GCS to either disable direct download links or ensure they're compliant with ACLs?
EDIT: It appears that the link expires after a few minutes, which reduces the associated risk a little, but not entirely. I'm still extremely nervous about how easily an authorized user could inadvertently give an unauthorized user direct access to something they ought not to see...
Left vs. right-clicking on files
First, regarding the difference between left- and right-clicking: I could not establish any difference between left- and right-clicking on a filename in the Google Cloud Storage browser.
To verify this, I opened a Google Cloud project, opened a private object in a private bucket, and accessed it using both methods. I copied the URLs and opened them in a Chrome incognito window, where I was not logged in, to check whether my ACLs were applied.
I was able to see both of the URLs in the incognito window. After some time, my access to them expired. Interestingly enough, my access expired just as well in the window where I was logged in and authenticated to access Google Cloud Storage.
This is where things get interesting.
Security and ACLs for user data in Google Cloud Storage browser
TL;DR: I believe the behavior you observed, namely that the URL can be viewed by anyone, is working as intended and it cannot be changed beyond what Google Cloud Storage already does with automatic timeouts; let me explain why.
When you are browsing Google Cloud Storage via the Developers Console, you are using the storage browser on the domain console.developers.google.com which means that you are authenticated with Google and proper ACLs can be applied to allow/deny access.
However, the only things you can view on that domain are bucket names, object names, and metadata, not the file content itself.
If Google were to serve you file content on the google.com domain, it would create a security issue: an adversary could force your browser to execute JavaScript on your behalf with your Google credentials, allowing them to do anything you can do through the web UI. This is typically referred to as an XSS attack.
To disallow this from happening, Google Cloud Storage (and Google in general, e.g., cached web pages) serve user-originating data on a different domain, typically *.googleusercontent.com, where users can't take advantage of any sensitive cookies or credentials, since nothing that Google provides is served on the same domain.
However, as a result, since the data is being served from one domain (*.googleusercontent.com) but your authentication is on a different domain (*.google.com), there is no way to apply the standard Google Cloud Storage bucket or object ACLs to the file contents themselves, while protecting you from XSS attacks by malevolent users.
Thus, ALL users, even those that have direct access to the file, will have its content served via a time-limited signed URL on a different domain when they view it in their browser.
As a side-effect, this does allow users to copy-paste the URL and share it with others, who will have similar time-limited access to the file contents.

Correct way to handle user permissions with Google Cloud Storage?

I'm quite new to cloud storage solutions, and I'm currently researching options to upgrade our current setup (we currently just upload to an SVN server).
What I have is a native application running on client computers, which will upload data to the cloud storage. Afterwards, clients should be able to download and browse their data (how they access it is not set in stone; it could be a website or another application). They should not be able to access other users' data.
I'm not sure how I'm supposed to proceed. As far as I understand, the native application will upload using a native-application credential, using JSON.
Do I need multiple credentials to track multiple users? That seems wrong to me. Besides, when they come back as 'users' through the web interface, they wouldn't be using that authentication, would they?
Do I need to change the ACL of the uploaded files afterwards?
Should I just not give read/write access to any particular user, handle read requests through signed URLs, and deal with the permission details myself using something else on the side? (Not forcing users to have a Google account is probably a requirement.)
Sorry if this is too many questions, and thanks!
Benjamin
The "individual credentials per instance of an app" question has come up before, and unfortunately there's not a great answer. If you want every user to have different permissions, you need every user to be associated with a different account.
Like you point out, the best current answer, other than requiring users to have Google accounts, is to have a centralized service that vends signed URLs to the end applications. That service would be the only owner of all of the objects and would give out permission to read or upload as needed.
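As a rough sketch of what such a vending service could look like (Python with the google-cloud-storage library; the bucket name and the permission check are placeholder assumptions you'd replace with your own logic):

from datetime import timedelta

from google.cloud import storage

client = storage.Client()  # the service account that owns every object

def user_may_read(user_id, object_path):
    # Placeholder for your own permission model (e.g. a database lookup).
    return object_path.startswith(f"users/{user_id}/")

def vend_download_url(user_id, object_path):
    # Check app-level permissions first, then hand out a short-lived
    # signed URL; GCS itself never needs to know who the end user is.
    if not user_may_read(user_id, object_path):
        raise PermissionError("user may not read this object")
    blob = client.bucket("app-user-data").blob(object_path)  # hypothetical bucket
    return blob.generate_signed_url(
        version="v4", expiration=timedelta(minutes=10), method="GET"
    )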

How to upload files to Cloud Storage?

I have a Google Cloud Endpoints API which uses Cloud SQL to store data. I want to provide file upload for clients, and the files should be stored in Cloud Storage, but I also want to store the file metadata and the file's storage URL in Cloud SQL.
What's the best way to do this?
Can I upload files through Cloud Endpoints, or do I need an extra upload servlet?
How can I update my database entities that need a reference to the uploaded files?
Any examples of how to combine these three technologies?
Assuming your clients are not added to your Google Cloud project (which is typically the case), your users don't have write access to your GCS bucket. You can either submit files to your application and move them to GCS from there (not recommended, as it consumes more network and CPU), or, better, submit them to GCS directly.
To let the client write to your GCS bucket directly, you will need to either:
1. Put your access key on the client for write access (not recommended); this is only an option if the client is used by a limited number of trusted people.
2. Generate a time-bound token and put it on the client as a signed URL for uploading directly.
Endpoints APIs themselves cannot do this, but you can generate the signed GCS URL on the server and retrieve it through Endpoints on the client. Then set it as the form action (on a web client; other clients have similar mechanisms for signed uploads) and submit the form to upload the file:
<form action="SIGNED_URL_FROM_ENDPOINTS" method="post" enctype="multipart/form-data">
I don't see any open-source code out there doing exactly this, but the closest is this project, which does generate the signed URL with a timeout (the only unintuitive part).
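For illustration, here is how the server side of that could look today with the google-cloud-storage Python library, using a V4 POST policy whose url and fields feed a form like the one above (the bucket name, object name, and metadata field are made up):

from datetime import timedelta

from google.cloud import storage

def make_upload_form_params(bucket_name, blob_name):
    # Generate a time-bound POST policy that lets the client upload
    # exactly one object directly to GCS through an HTML form.
    client = storage.Client()
    policy = client.bucket(bucket_name).generate_signed_post_policy_v4(
        blob_name,
        expiration=timedelta(minutes=10),
        fields={"x-goog-meta-uploaded-by": "endpoints-client"},  # hypothetical metadata
    )
    # policy["url"] becomes the form's action attribute; each entry in
    # policy["fields"] becomes a hidden <input>, followed by the
    # <input type="file" name="file"> that carries the actual bytes.
    return policy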
The best way to update the metadata in your database is to watch the GCS bucket using Object Change Notifications. Another way is to send the metadata to your server from the client itself, which can be an Endpoints call. You can also use a mix of both, where the metadata goes to the server through Endpoints even before the file is uploaded, and the notification then updates the record to confirm that the file is available to serve.

Securing a web page and required resources with Google Cloud Storage and signed urls?

I am trying to secure access to premium content on my App Engine web application. The content is an Articulate module: a player.html with dozens of associated JavaScript, audio, and video resources. I am trying to use Google Cloud Storage to host the content.
My idea is that when a user who is authenticated with my app and has the appropriate access requests the premium content, my app could sign a URL to the player.html. What I am not sure about is how to handle all the associated resource files. Signed URLs are simple enough for securing single files, but what about groups of files? Do I have to sign URLs for all the content, or is it possible to have a single signed URL allow access to related files?
ACLs are not an option, because I rolled my own authentication rather than using OAuth and Google accounts.
Any thoughts or alternate strategies would be appreciated.
Update 8.7.13
After reflecting and researching some more, I am wondering about configuring the GCS bucket as a website as per the instructions here.
If I understand the docs correctly, I would create a CNAME to point requests from my custom domain content.example.com to c.storage.googleapis.com, and requests that arrive via this CNAME are served up as if they were a static website. Does anybody know what access controls are available (if any) for files served in this manner? Would files served this way also require signing/ACLs if they are not public?
Unfortunately, your options here are somewhat limited. Below are two ideas, but both of them have fairly significant drawbacks; neither is great.
Obfuscation
Make all associated resources publicly accessible and keep them under an unguessable subdirectory. The user would use a signed URL to load the secured root resource, which would include links to the publicly accessible secondary resources. Downside: the sub-resources are not secured and can be downloaded by any party that knows the URL. Possible mitigation: periodically rotate the sub-resource directory.
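A sketch of that layout under stated assumptions (Python, google-cloud-storage, a bucket with fine-grained ACLs rather than uniform access; all names are illustrative):

import secrets
from datetime import timedelta

from google.cloud import storage

bucket = storage.Client().bucket("premium-content")  # hypothetical bucket

def publish_module(files):
    # files: dict mapping relative names (e.g. "player.html") to bytes.
    # Everything lands under an unguessable prefix; sub-resources are
    # public but hidden, and only the root page needs a signed URL.
    prefix = secrets.token_urlsafe(32)  # rotate periodically to mitigate leaks
    for name, data in files.items():
        blob = bucket.blob(f"{prefix}/{name}")
        blob.upload_from_string(data)
        if name != "player.html":
            blob.make_public()  # security rests on the prefix's obscurity
    root = bucket.blob(f"{prefix}/player.html")
    return root.generate_signed_url(
        version="v4", expiration=timedelta(hours=1), method="GET"
    )

Relative links inside player.html keep working because the root page and its sub-resources share the same prefix.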
App Engine Redirects
Change the includes to route all resources through your App Engine app, which would authenticate using its own scheme and redirect to a new signed URL for each sub-resource. Downside: lots of extra hops through App Engine. For example, player.html would include a movie as "https://yourapp.appspot.com/resources/movie.mov"; that request would hit your App Engine app, which would authenticate it and then redirect to the signed "https://storage.googleapis.com/yourbucket/movie.mov?signaturestuff"
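A minimal sketch of such a redirect handler, with Flask standing in for the App Engine app and a placeholder auth check (the route, bucket name, and session key are assumptions):

from datetime import timedelta

from flask import Flask, abort, redirect, session
from google.cloud import storage

app = Flask(__name__)
app.secret_key = "replace-me"  # required for Flask sessions
client = storage.Client()

def is_authorized(sess):
    # Placeholder for the app's own authentication scheme.
    return sess.get("premium") is True

@app.route("/resources/<path:name>")
def resource(name):
    # Authenticate with the app's own scheme, then bounce the browser
    # to a freshly signed, short-lived GCS URL for the sub-resource.
    if not is_authorized(session):
        abort(403)
    blob = client.bucket("yourbucket").blob(name)
    signed = blob.generate_signed_url(
        version="v4", expiration=timedelta(minutes=5), method="GET"
    )
    return redirect(signed)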