Static website served from Google Cloud Storage in a Google Apps domain - google-apps

It seems like this should be really, really easy, but I can't get it to work. All I need is to serve files from Google Cloud Storage while restricting access to my Google Apps domain. I did this easily before with Google App Engine, simply by choosing to limit access to my domain and setting app.yaml appropriately. I can't find anything that tells me what I might be missing. I've tried using gsutil to set the ACL to restrict to my domain, which completes successfully on the command line, but when I then look at the bucket or object permissions in the Cloud web console, I get "unexpected ACL entity type: domain".
I'm trying to access the content at storage.googleapis.com/bucket/object (with my actual bucket and object names, of course), and I always get a 403 error even though I'm definitely logged in to Gmail. As the administrator of the domain, I'd expect it to work for me at least, even if the ACLs were otherwise wrong (and I've tried it both with and without the domain restriction). The only way I can serve content from that URL is to make it public, which is obviously NOT what I want to do.
I'm sure I'm missing something completely stupid, or some fundamental principle about how this should work - can anyone give me any ideas?

I'm not 100% sure what your use case is, but I'm guessing that your users are attempting to access the objects directly from a web browser. storage.cloud.google.com accepts Google authorization cookies, which means that if a user is logged in to an appropriate Google account, they can access resources restricted to certain users, groups, or domains. However, the other endpoints do not accept cookies as authorization, and so this use case won't work.
These users have permission to access objects using storage.googleapis.com, but doing so requires explicitly authorizing requests.
In other words, a simple <img src="http://storage.cloud.google.com/bucket/object" /> link will work fine for signed-in users, but using storage.googleapis.com requires explicitly authorizing requests via OAuth 2.0.
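To make that concrete, here is a minimal sketch of an explicitly authorized request using the Python google-auth library; the bucket and object names are placeholders, and it assumes credentials that have read access to the object:

    import google.auth
    from google.auth.transport.requests import AuthorizedSession

    # Obtain default credentials (a user or service account with read access).
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/devstorage.read_only"]
    )

    # AuthorizedSession attaches an OAuth 2.0 access token to each request.
    session = AuthorizedSession(credentials)
    response = session.get("https://storage.googleapis.com/my-bucket/my-object")
    response.raise_for_status()
    print(response.content)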

Related

How to store app tokens and secrets for Ionic apps

I'm using Adjust and Firebase in my Ionic app, but the app secrets for these and other integrations all show up in my app's JS code if I extract the APK/IPA.
How do I keep credentials secure and package them with the app's APK/IPA for such hybrid apps?
This is an interesting question and it's good that you are asking it :)
For the Firebase settings: they are secret, but not secret-secret. They are just a starting point that identifies your project to Firebase. Nothing can be done with those values alone unless the user also logs in with their credentials, which are sent over a secure connection and verified by Firebase.
This proves that the person knows enough to identify themselves as a user.
Then on the server side, you have your rules that say "the person who has identified themselves as user X has permission to do Y".
If somebody has your password, then you are exposed, just the same as you are always exposed.
You can also restrict your Firebase API keys by app package ID, hostname, or IP address in the Google Cloud admin panels.
As for your other things, like Adjust, they have their own solutions along the same lines. Either the API key is only enough for you to read information, or, if it grants a more powerful level of access, there is normally some kind of authentication/account-linking process so you can prove yourself to the other API.
If not, then you cannot just put the key out there; you need to create your own proxy. Firebase supports Cloud Functions (aka serverless), so you can run snippets of code that are accessible only to users who have logged in, and return the information back to the client as a proxy.
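As a rough sketch of that proxy idea (the endpoint and the analytics API below are made up for illustration; in practice the same logic would live in a Cloud Function), the server verifies the caller's Firebase ID token before using the secret, which never ships inside the APK/IPA:

    import firebase_admin
    import requests
    from firebase_admin import auth
    from flask import Flask, abort, jsonify, request

    firebase_admin.initialize_app()
    app = Flask(__name__)

    # Loaded from server-side config; never bundled into the app package.
    THIRD_PARTY_API_KEY = "server-side-secret"

    @app.route("/proxy/stats")
    def proxy_stats():
        # The client sends its Firebase ID token; only signed-in users pass.
        id_token = request.headers.get("Authorization", "").removeprefix("Bearer ")
        try:
            auth.verify_id_token(id_token)
        except Exception:
            abort(401)

        # Call the third-party API with the secret and relay the result.
        upstream = requests.get(
            "https://api.example-analytics.com/stats",  # hypothetical endpoint
            headers={"X-Api-Key": THIRD_PARTY_API_KEY},
        )
        return jsonify(upstream.json())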

How to make Google Cloud Storage direct download links compliant with ACLs?

If a .txt file is saved to GCS and clicked on through the developer console browser by an authorized user, the contents are displayed in the web browser. That's fine, but that URL can be sent to anyone, authorized or not, allowing them to view the contents of the file.
"Share publicly" is unchecked, and no changes have been made to the default ACLs. And this isn't specific to .txt files -- that's just the easiest way to replicate the behavior since they're displayed directly in the browser (so you can easily get to that URL).
How do I configure GCS to either disable direct download links or ensure they're compliant with ACLs?
EDIT: It appears that the link expires after a few minutes, which reduces the associated risk a little, but not entirely. I'm still extremely nervous about how easily an authorized user could inadvertently give an unauthorized user direct access to something they ought not to see...
Left vs. right-clicking on files
First, regarding the difference between left- and right-clicking: I could not establish any difference between left- and right-clicking on a filename in the Google Cloud Storage storage browser.
To verify this, I opened a private object in a private bucket in a Google Cloud project using both methods, copied the URLs, and opened them in a Chrome incognito window, where I was not logged in, to check whether my ACLs were applied.
I was able to view both URLs in the incognito window. After some time, my access to them expired. Interestingly enough, my access expired just the same in the window where I was logged in and authenticated to access Google Cloud Storage.
This is where things get interesting.
Security and ACLs for user data in Google Cloud Storage browser
TL;DR: I believe the behavior you observed, namely that the URL can be viewed by anyone, is working as intended, and it cannot be changed beyond what Google Cloud Storage already does with automatic timeouts. Let me explain why.
When you are browsing Google Cloud Storage via the Developers Console, you are using the storage browser on the domain console.developers.google.com which means that you are authenticated with Google and proper ACLs can be applied to allow/deny access.
However, the only things you can view on that domain are bucket names, object names, and metadata, not the file content itself.
If Google were to serve you file content on the google.com domain, it would create a security issue: an adversary could force your browser to execute JavaScript on your behalf with your Google credentials, allowing them to do anything you can do through the web UI. This is typically referred to as an XSS attack.
To prevent this, Google Cloud Storage (and Google in general, e.g., for cached web pages) serves user-originated data on a different domain, typically *.googleusercontent.com, where attackers can't take advantage of any sensitive cookies or credentials, since nothing that Google provides is served on the same domain.
However, as a result, since the data is being served from one domain (*.googleusercontent.com) but your authentication is on a different domain (*.google.com), there is no way to apply the standard Google Cloud Storage bucket or object ACLs to the file contents themselves, while protecting you from XSS attacks by malevolent users.
Thus, ALL users, even those with direct access to the file, will have the content served via a time-limited signed URL on a different domain when they view it in their browser.
As a side effect, this does allow users to copy-paste the URL and share it with others, who will have similar time-limited access to the file contents.

Correct way to handle user permissions with Google Cloud Storage?

I'm quite new to Cloud Storage solutions, and I'm currently researching options to upgrade our current solution (we currently just upload to an SVN server).
What I have is a native application running on client computers, which will upload data to Cloud Storage. Afterwards, clients should be able to download and browse their data (the access point is not set in stone; it could be a website or another application). They should not be able to access other users' data.
I'm not sure how I'm supposed to proceed. As far as I understand, the native application will upload using a Native Application Credential in JSON form.
Do I need multiple credentials to track multiple users? That seems wrong to me. Besides, when they come back as 'users' through the web interface, they wouldn't be using that authentication, would they?
Do I need to change the ACL of the uploaded files afterwards?
Should I just not give write/read access to any particular user and handle read requests through signed URLs, dealing with the permission details myself using something else on the side? (Not forcing a Google account is probably a requirement.)
Sorry if this is too many questions, and thanks!
Benjamin
The "individual credentials per instance of an app" question has come up before, and unfortunately there's not a great answer. If you want every user to have different permissions, you need every user to be associated with a different account.
As you point out, the best current answer, other than requiring users to have Google accounts, is to have a centralized service that vends signed URLs to the end applications. That service would be the only owner of all of the objects and would give out permission to read or upload as needed.
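A minimal sketch of such a vending service, assuming the service's own service account owns the bucket (the bucket name and per-user object layout are placeholders):

    from datetime import timedelta

    from google.cloud import storage

    # The client runs as the service account that owns all of the objects.
    client = storage.Client()

    def vend_download_url(user_id: str, object_name: str) -> str:
        """Return a short-lived download URL for one of the user's objects."""
        # Your own permission check goes here; one simple layout is to keep
        # each user's objects under a prefix named after the user.
        blob = client.bucket("app-user-data").blob(f"{user_id}/{object_name}")
        return blob.generate_signed_url(expiration=timedelta(minutes=15), method="GET")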

Securing a web page and required resources with Google Cloud Storage and signed urls?

I am trying to secure access to premium content on my app engine web application. The content is an Articulate module, it is a player.html with dozens of associated javascript, audio, and video resources. I am trying to use Google Cloud Storage to host the content.
My idea is that when a user who is authenticated with my app and has the appropriate access requests the premium content, my app could sign a URL to the player.html. What I am not sure about is how to handle all the associated resource files. Signed URLs are simple enough for securing single files, but what about groups of files? Do I have to sign URLs for all the content, or is it possible to have a single signed URL allow access to related files?
ACLs are not an option, because I rolled my own authentication rather than using OAuth and Google accounts.
Any thoughts or alternate strategies would be appreciated.
Update 8.7.13
After reflecting and researching some more, I am wondering about configuring the GCS bucket as a website, as per the instructions here.
If I understand the docs correctly, I would create a CNAME to point requests from my custom domain content.example.com to c.storage.googleapis.com, and requests that arrive via this CNAME would be served up as if the bucket were a static website. Does anybody know what access controls are available (if any) for files served in this manner? Would files served this way also require signing/ACLs if they are not public?
Unfortunately, your options here are somewhat limited. Below are two ideas, but both of them have fairly significant drawbacks; neither is super great.
Obfuscation
Make all associated resources publicly accessible and keep them under an unguessable subdirectory. The user would use a signed URL to load the secured root resource, which would include links to the publicly accessible secondary resources. Downside: the sub-resources are not secured and could be downloaded by any party that knows the URL. Possible mitigation: periodically rotate the sub-resource directory, as sketched below.
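A sketch of that rotation, assuming the Python google-cloud-storage client and a placeholder bucket name; it copies every sub-resource under a fresh unguessable prefix and deletes the old copies:

    import secrets

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("premium-content")  # placeholder bucket name

    def rotate_prefix(old_prefix: str) -> str:
        """Move sub-resources under a new unguessable prefix and return it."""
        new_prefix = secrets.token_urlsafe(32)
        for blob in client.list_blobs(bucket, prefix=old_prefix):
            new_name = blob.name.replace(old_prefix, new_prefix, 1)
            bucket.copy_blob(blob, bucket, new_name)
            blob.delete()
        # player.html must be re-signed with links pointing at new_prefix.
        return new_prefix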
App Engine Redirects
Change your includes to route all resources through your App Engine app, which would authenticate using its own scheme and redirect to a new signed URL for each sub-resource. Downside: lots of extra hops through App Engine. For example, player.html would include a movie as "https://yourapp.appspot.com/resources/movie.mov"; that request would hit your App Engine app, which would authenticate it and then redirect to the signed "https://storage.googleapis.com/yourbucket/movie.mov?signaturestuff".
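A hedged sketch of that redirect handler (Flask-style; is_authorized is a hypothetical stand-in for your own auth scheme, and the bucket name is a placeholder):

    from datetime import timedelta

    from flask import Flask, abort, redirect, request
    from google.cloud import storage

    app = Flask(__name__)
    client = storage.Client()

    def is_authorized(req) -> bool:
        """Hypothetical stand-in for your own session/auth check."""
        return "session" in req.cookies  # replace with real validation

    @app.route("/resources/<path:name>")
    def resource(name):
        if not is_authorized(request):
            abort(403)
        # Sign a short-lived URL for the sub-resource and bounce the browser to it.
        blob = client.bucket("yourbucket").blob(name)
        url = blob.generate_signed_url(expiration=timedelta(minutes=10), method="GET")
        return redirect(url)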

How to use the same facebook application for different websites

I'm developing a small CMS in PHP, and we're adding social integration.
The content is managed by a single administrator who has the rights to publish news, events, and so on...
I'd like to add this feature: when the admin publishes something, it is also posted to the Facebook wall. I'm not very familiar with the Facebook PHP SDK, and I'm a little bit confused about it.
If (to make an example) 10 different sites are using my CMS, do I have to create 10 different Facebook applications? (Let's assume the 10 websites are all on different domains and servers.)
Second, is there a way to authenticate with just PHP (something like sending username & password directly) so that the user does not need to be logged in to Facebook?
thanks
You might want to break up your question into smaller, understandable units; it's very difficult to understand what you are driving at.
My understanding of your problem may be minimal, but here goes...
1. No, you do not have to create 10 different Facebook applications. Create a single Facebook application and make it a service entry point, so that all your CMS sites talk to this one application to interact with Facebook (a REST service layer).
2. The Facebook API does not support username-and-password authentication; it only supports OAuth 2.0. Although OAuth is not trivial, they provide a library for it, so implementing authentication is pretty straightforward.
Please read up on http://developers.facebook.com/docs/.
It's really easy, straightforward, and well explained.
Your question is so vague and extensive that it cannot be answered well here.
If you experience any specific implementation problems, this is the right place.
However, to answer at least a part of your question:
The most powerful tool when working with Facebook applications is the Graph API.
Its principle is very simple: you can perform almost any action on behalf of any user or application. You first have to generate a token that identifies the user and the proper permissions. Those tokens can be made "permanent" so you can run background tasks; usually they are only active for a short time, so you can perform actions while interacting with the user. The process of generating tokens involves the user, who has to confirm the privileges you are asking for.
For websites that publish something automatically, you would probably generate a permanent token one time, which stays active until you remove the app in your privacy settings.
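To illustrate, once such a token exists, the publish step itself is a single Graph API POST; the page ID and token below are placeholders, and the snippet uses plain Python requests rather than the PHP SDK:

    import requests

    PAGE_ID = "1234567890"         # placeholder page id
    PAGE_ACCESS_TOKEN = "EAAB..."  # placeholder long-lived token from the OAuth flow

    def publish_to_wall(message: str) -> dict:
        """Post a message to the page feed using the stored token."""
        response = requests.post(
            f"https://graph.facebook.com/{PAGE_ID}/feed",
            data={"message": message, "access_token": PAGE_ACCESS_TOKEN},
        )
        return response.json()

    print(publish_to_wall("New article published on the CMS"))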
Basically, you can work with any application on any website; there is no limitation. However, there are two ways of generating tokens: one involves an additional request, and one is done client-side and is bound to the one domain you specified in your app's settings.
Addendum:
@ArtoAle
You are right that every app is assigned to exactly one domain. However, once you have obtained a valid token, it doesn't matter from where, or by whom, it is used within the Graph API.
Let me explain this a little:
Restricting by origin would make no sense, since it is you making the request; there is no such thing as "where the request is coming from". Of course there is the "Referer" header, but it can be freely specified and is not used in this context.
The domain you enter in your app's settings only restricts where Facebook redirects the user.
Why?
This ensures that some bad guy cannot set up a website on an arbitrary domain, let the user authorize an app there, and get an access token with YOUR application.
So this setting ensures that the user and the access token are redirected back to YOUR site and not to another, bad site.
But there is an alternative. If you use the control flow for desktop applications, you don't get an access token right after the user has been redirected back; you get a temporary SESSION TOKEN that you can EXCHANGE for an access token. This exchange is done server-side over the REST API and requires your application secret, so at this point it is ensured that it is YOU who gets the token.
This method can be done on any domain, or, in the case of desktop applications, on no domain at all.
This is a quote from the Facebook docs:
To convert sessions, send a POST request to https://graph.facebook.com/oauth/exchange_sessions with a comma-separated list of sessions you want to convert:

    curl -F client_id=your_app_id \
         -F client_secret=your_app_secret \
         -F sessions=2.DbavCpzL6Yc_XGEI0Ip9GA__.3600.1271649600-12345,2.aBdC... \
         https://graph.facebook.com/oauth/exchange_sessions
The response from the request is a JSON array of OAuth access tokens in the same order as the sessions given:

    [
      {
        "access_token": "...",
        "expires": 1271649600
      },
      ...
    ]
However, you don't need this method, as it is a bit more complex. For your use case I would suggest using a central point of authorization.
You would specify your ONE domain as the redirect URL. This domain is then SHARED between your websites: there you can obtain the fully valid access token and seamlessly redirect the user back to the specific project website, passing the access token along.
This way you can use the traditional, easy authentication flow, which is probably also more future-proof.
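A sketch of that central endpoint, assuming a Flask app on the ONE registered domain; the app credentials and site URLs are placeholders, and the token-response format has varied across Graph API versions:

    import requests
    from flask import Flask, redirect, request

    app = Flask(__name__)

    APP_ID = "your_app_id"          # placeholder
    APP_SECRET = "your_app_secret"  # placeholder
    REDIRECT_URI = "https://auth.example.com/fb/callback"  # the one registered domain

    @app.route("/fb/callback")
    def fb_callback():
        # Facebook redirects here with ?code=...&state=<originating site>.
        code = request.args["code"]
        site = request.args.get("state", "https://site-a.example.com")

        token_response = requests.get(
            "https://graph.facebook.com/oauth/access_token",
            params={
                "client_id": APP_ID,
                "client_secret": APP_SECRET,
                "redirect_uri": REDIRECT_URI,
                "code": code,
            },
        )
        access_token = token_response.json()["access_token"]

        # Hand the token back to the specific project website (a sketch; in
        # production you would not pass tokens in a query string).
        return redirect(f"{site}/fb-auth?token={access_token}")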
The fact remains: once the access token is generated, you can perform any action from any domain; there is literally no "domain" a request is coming from (see above).
Apart from that, if you want some nice JavaScript features to work, like the comments box or the Like button, you need to set up the Open Graph tags correctly.
If you have specific implementation problems or, as you said, "domain errors", please describe them more clearly, including the steps you took and, if possible, an error message.