I have two PWAs in individual repositories on my GitHub account (e.g., https://github.com/1John419/pwa01 and https://github.com/1John419/pwa02). The apps are installed from their respective GitHub Pages (e.g., https://1john419.github.io/pwa01/ and https://1john419.github.io/pwa02/).
The problem is that the Local Storage and Cache Storage for both apps are keyed to the domain URL (https://1john419.github.io/) rather than to each app URL (https://1john419.github.io/pwa01 and https://1john419.github.io/pwa02).
As a result, even though each app's sw.js only caches its respective data, DevTools shows each app as containing all caches from the domain URL. The service workers are scoped to the app URL, but the Local Storage and Cache Storage are keyed to the domain URL.
When either app is updated, files with common names appear to be overwritten (despite being in uniquely named caches).
Is there a way to make the storage URL point to the app URL rather than the domain URL? If not, what solution would you suggest to keep the apps' caches separate?
Update: PWA repos have now been deleted.
This is the solution I found. If you know of a better way, post another answer.
In short: no, there is no way to specify a PWA's storage URL. Browsers key storage to the origin (scheme + host + port), so everything hosted under https://1john419.github.io shares one storage area regardless of path.
To give each app its own storage origin, each app needs to live on its own domain or subdomain.
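You can see this directly in Node or a browser console: both app URLs normalize to the same origin, which is the key browsers actually use for Local Storage and Cache Storage.

```typescript
// Browsers partition Local Storage, Cache Storage, IndexedDB, etc. by
// origin = scheme + host + port. The path (/pwa01, /pwa02) is not part of the key.
const app1 = new URL("https://1john419.github.io/pwa01/");
const app2 = new URL("https://1john419.github.io/pwa02/");

console.log(app1.origin); // "https://1john419.github.io"
console.log(app2.origin); // "https://1john419.github.io"
console.log(app1.origin === app2.origin); // true — both apps share one storage area
```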
The best solution I found was to purchase a domain and set up a subdomain for each app. I found these resources very useful:
Mike Conrad
Trent Yang
GitHub Docs
Using this approach, each app has its own storage origin and the apps no longer step on each other.
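For anyone who can't move to separate subdomains right away, one defensive mitigation (my own sketch, not part of the setup above; the cache names are hypothetical) is to prefix every cache with an app name and have each service worker's activate step delete only caches bearing its own prefix. The filtering is a plain function you can drop into sw.js:

```typescript
// Hypothetical cache names: each app prefixes its caches, e.g. "pwa01-v1", "pwa02-v1".
// Given every cache name on the shared origin, return only the stale caches that
// THIS app owns, so activate-time cleanup never deletes the other app's caches.
function staleCachesFor(allNames: string[], appPrefix: string, currentCache: string): string[] {
  return allNames.filter((name) => name.startsWith(appPrefix) && name !== currentCache);
}

// In pwa01's sw.js activate handler (sketch):
//   const names = await caches.keys();
//   await Promise.all(staleCachesFor(names, "pwa01-", "pwa01-v2").map((n) => caches.delete(n)));

console.log(staleCachesFor(["pwa01-v1", "pwa01-v2", "pwa02-v1"], "pwa01-", "pwa01-v2"));
// → ["pwa01-v1"]  (pwa02's cache is left alone)
```

This doesn't fix the shared-origin problem, but it stops one app's cleanup from clobbering the other's caches.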
Okay, I was using Flutter and Firebase to upload data into Cloud Storage. I obtained the download URL, which anyone on the web can access if they know it. I had enabled Public Access Prevention in the Google Cloud Storage console based on this doc, and chose uniform access control as described in this doc.
I had also added a security rule in Firebase Cloud Storage so that only users with a certain custom token can use it. But it seems useless, as everyone can get the download URL. My question is: why am I still able to access the file using the same URL that I stored in Firestore? You can test it at this URL.
Can a hacker get the download URL I fetched from Firestore?
Is there a secure way to download a song from Firebase Cloud Storage so that a hacker can't get its URL?
Thank you for helping me out.
Updated v2:
I just found out that the current audio file has its own AuthenticatedUrl, as shown in the picture below. How can I get access to this URL?
Updated v1:
I don't think I have activated Firebase App Check yet. Does this feature have the ability to prevent the file from being accessed publicly, or is there something else I have to do to prevent public access, besides everything I described above?
Security rules only control whether a user can obtain a download URL; they do not restrict anyone from using a URL that already exists. You can use the getData() method instead. It doesn't return any URL; it downloads the file directly and is controlled by security rules, so a user must be authenticated to fetch the data.
As mentioned in this answer:
If you're using the FlutterFire Storage library in your app, you can call getData on a reference to the file to get its data. So with that you just need to know the path to the data, and you won't need the download URL in your application. Once you have the data locally, you can create an image out of it with: Converting a byte array to image in Flutter? Unlike download URLs, the call to getData() is checked by security rules, so you'll have to ensure that the user is permitted to access the file.
You can also refer to this answer:
For web apps: in the JavaScript/Web SDK, using a download URL is the only way to get at the data, while for the native mobile SDKs we also have getData() and getFile() methods, which are enforced through security rules. Until that time, if signed URLs fit your needs better, you can use those. Both signed URLs and download URLs are just URLs that provide read-only access to the data. Signed URLs expire, while download URLs don't.
For more information, you can refer to this GitHub issue, where a similar problem has been discussed.
I need to store my service data in Google Storage and let my users download files depending on their (users) access rights.
I've already built a service that connects to Google Storage using a server-centric mechanism and relays files to the client side, but I need the client side to go to Storage and download files without involving the server.
I've tried using temporary links for files, but I can't tell whether the user has actually downloaded the file, so I can't know when to delete the temporary link.
I've looked for OAuth2 support, but it seems Google doesn't support OAuth in this way (where my service decides whether or not to allow access).
The ideal solution would be to generate tokens for users and have Google Storage call my service before every file download.
How can I achieve that?
If a .txt file is saved to GCS and clicked on through the developer console browser by an authorized user, the contents are displayed in the web browser. That's fine, but that URL can be sent to anyone, authorized or not, allowing them to view the contents of the file.
"Share publicly" is unchecked, and no changes have been made to the default ACLs. And this isn't specific to .txt files -- that's just the easiest way to replicate the behavior since they're displayed directly in the browser (so you can easily get to that URL).
How do I configure GCS to either disable direct download links or ensure they're compliant with ACLs?
EDIT: It appears that the link expires after a few minutes, which reduces the associated risk a little, but not entirely. I'm still extremely nervous about how easily an authorized user could use this to inadvertently provide an unauthorized user direct access to something they ought not...
Left vs. right-clicking on files
First, regarding the difference between left- and right-clicking: I could not establish any difference between left- or right-clicking a filename in the Google Cloud Storage storage browser.
To verify this, I opened a Google Cloud Project and opened a private object in a private bucket and opened it using both methods. I copied the URLs and opened them in a Chrome incognito window, where I was not logged in, to verify that my ACLs were not applied.
I was able to see both of the URLs in the incognito window. After some time, my access to them expired. However, interestingly enough, my access to them expired just as well in the window where I was logged-in and authenticated to access Google Cloud Storage.
This is where things get interesting.
Security and ACLs for user data in Google Cloud Storage browser
TL;DR: I believe the behavior you observed, namely that the URL can be viewed by anyone, is working as intended and it cannot be changed beyond what Google Cloud Storage already does with automatic timeouts; let me explain why.
When you are browsing Google Cloud Storage via the Developers Console, you are using the storage browser on the domain console.developers.google.com which means that you are authenticated with Google and proper ACLs can be applied to allow/deny access.
However, the only things you can view on that domain are bucket names, object names, and metadata, not the file content itself.
If Google were to serve you file content on the google.com domain, it would create a security issue by allowing an adversary to force your browser to execute JavaScript on your behalf with your Google credentials, thus allowing them to do anything you can do through the web UI. This is typically referred to as an XSS attack.
To disallow this from happening, Google Cloud Storage (and Google in general, e.g., cached web pages) serve user-originating data on a different domain, typically *.googleusercontent.com, where users can't take advantage of any sensitive cookies or credentials, since nothing that Google provides is served on the same domain.
However, as a result, since the data is being served from one domain (*.googleusercontent.com) but your authentication is on a different domain (*.google.com), there is no way to apply the standard Google Cloud Storage bucket or object ACLs to the file contents themselves, while protecting you from XSS attacks by malevolent users.
Thus, ALL users, even those that have direct access to the file, upon viewing them in their browser, will have the content served with a time-limited signed URL on a different domain.
As a side-effect, this does allow users to copy-paste the URL and share it with others, who will have similar time-limited access to the file contents.
I'm quite new to Cloud Storage solutions, and I'm currently researching options to upgrade our current solution (we currently just upload on a SVN server).
What I have is a native application running on client computers, which will upload data to the Cloud Storage. Afterwards, client should be able to download and browse their data (source is not set in stone, could be a website or from other applications). They should not be able to access other user's data.
I'm not sure how I'm supposed to proceed. As far as I understand, the native application will upload using a native application credential (a JSON key).
Do I need separate credentials to track each user? That seems wrong to me. Besides, when users come back through the web interface, they wouldn't be using that authentication, would they?
Do I need to change the ACL of the uploaded files afterwards?
Should I just not give write/read access to any particular users and handle read requests through Signed URLs, dealing with permission details by myself using something else on the side? (not forcing a Google Account is probably a requirement)
Sorry if this is too many questions, and thanks!
Benjamin
The "individual credentials per instance of an app" question has come up before, and unfortunately there's not a great answer. If you want every user to have different permissions, you need every user to be associated with a different account.
Like you point out, the best current answer, other than requiring users to have Google accounts, is to have a centralized service that vends signed URLs to the end applications. That service would be the only owner of all of the objects and would give out permission to read or upload as needed.
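To make the mechanics of "vending" concrete, here is a minimal sketch of how a service can mint expiring, tamper-proof URLs. This is illustrative only, not GCS's actual V4 signing scheme; in production you would generate real GCS signed URLs with the client library. All names below (the secret, the object paths) are hypothetical.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = "replace-with-a-real-secret"; // hypothetical; must stay server-side

// Mint a URL that is valid until `expires` (unix seconds) for exactly one object path.
// The HMAC covers both the path and the expiry, so neither can be tampered with.
function signPath(path: string, expires: number): string {
  const sig = createHmac("sha256", SECRET).update(`${path}:${expires}`).digest("hex");
  return `${path}?expires=${expires}&sig=${sig}`;
}

// Verify before serving (or before redirecting to the real object).
function verifyPath(path: string, expires: number, sig: string, now: number): boolean {
  if (now > expires) return false; // link has expired
  const expected = createHmac("sha256", SECRET).update(`${path}:${expires}`).digest("hex");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  return a.length === b.length && timingSafeEqual(a, b); // constant-time comparison
}
```

A real GCS V4 signed URL works on the same principle: the signature covers the object and an expiry, so the URL grants read access only until it lapses, and only the service holding the key can mint one.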
I am trying to secure access to premium content on my app engine web application. The content is an Articulate module, it is a player.html with dozens of associated javascript, audio, and video resources. I am trying to use Google Cloud Storage to host the content.
My idea is that when a user who is authenticated with my app, and who has the appropriate access, requests the premium content, my app could sign a URL to player.html. What I am not sure about is how to handle all the associated resource files. Signed URLs are simple enough for securing single files, but what about groups of files? Do I have to sign URLs for all the content, or is it possible to have a single signed URL allow access to related files?
ACLs are not an option, because I rolled my own authentication rather than using OAuth and Google accounts.
Any thoughts or alternate strategies would be appreciated.
Update 8.7.13
After reflecting and researching some more, I am wondering about configuring the GCS bucket as a website as per the instructions here.
If I understand the docs correctly, I would create a CNAME to point requests from my custom domain content.example.com to c.storage.googleapis.com, and requests that arrive via this CNAME are served as if they were a static website. Does anybody know what access controls are available (if any) for files served this way? Would files served this way also require signing / ACLs if they are not public?
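For concreteness, the DNS record those instructions describe would look something like this (hostname from my example above; the target is the one the GCS docs specify):

```
content.example.com.  3600  IN  CNAME  c.storage.googleapis.com.
```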
Unfortunately, your options here are somewhat limited. Below are two ideas, but both have fairly significant drawbacks; neither is great, unfortunately.
Obfuscation
Make all associated resources publicly accessible and keep them under an unguessable subdirectory. The user would use a signed URL to load the secured root resource, which would include links to the publicly accessible secondary resources. Downside: the sub-resources are not secured and could be downloaded by any party that knows the URL. Possible mitigation: periodically rotate the sub-resource directory.
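A sketch of generating such an unguessable subdirectory name (the bucket name and object path below are hypothetical):

```typescript
import { randomBytes } from "node:crypto";

// 16 random bytes = 128 bits of entropy; hex-encoded this is a 32-character
// directory name that is infeasible to guess by brute force.
function unguessableDir(): string {
  return randomBytes(16).toString("hex");
}

const dir = unguessableDir();
console.log(`https://storage.googleapis.com/yourbucket/${dir}/movie.mov`);
```

Rotating the directory then means copying the sub-resources under a fresh name and re-signing the root document so its links point at the new location.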
App Engine Redirects
Change the includes to route all resources through your App Engine app, which would authenticate using its own scheme and redirect to a new signed URL for each sub-resource. Downside: lots of extra hops through App Engine. For example, player.html would include a movie as "https://yourapp.appspot.com/resources/movie.mov"; the request would hit your App Engine app, which would authenticate and then redirect to the signed "https://storage.googleapis.com/yourbucket/movie.mov?signaturestuff".
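The redirect step itself is small; here's a sketch (all names hypothetical), with the signer passed in as a parameter so the same logic works with whatever signed-URL library you use:

```typescript
// Map an app-relative resource path to a 302 redirect at a signed GCS URL.
// `sign` is whatever produces your signed URL (e.g. the GCS client library).
function redirectFor(
  resource: string,
  bucket: string,
  sign: (objectUrl: string) => string,
): { status: number; location: string } {
  const objectUrl = `https://storage.googleapis.com/${bucket}/${resource}`;
  return { status: 302, location: sign(objectUrl) };
}

// Example with a dummy signer that just appends a placeholder signature:
const r = redirectFor("movie.mov", "yourbucket", (u) => `${u}?sig=placeholder`);
console.log(r.location);
// → "https://storage.googleapis.com/yourbucket/movie.mov?sig=placeholder"
```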