Google Cloud Storage static site hosting: Bucket names must be at least 3 characters in length, got 1: 'c'

I created a basic website for a school project and I am trying to host it somewhere so that my teacher can access it. I came across a tutorial for hosting a static website on Google Cloud Storage for free. https://web.archive.org/web/20211023134543/https://cloud.google.com/storage/docs/hosting-static-website-http
Steps from the tutorial that I followed (as requested by the comments):
1. In the Google Cloud Console, on the project selector page, select a Google Cloud project.
2. Make sure that billing is enabled for your Cloud project.
3. Have a domain that you own or manage.
4. Verify that you own or manage the domain that you will be using.
5. Make sure you are verifying the top-level domain.
6. Create a CNAME record (school.mydomain.com) that points to c.storage.googleapis.com.
7. In the Google Cloud Console, go to the Cloud Storage Browser page.
8. Click Create bucket to open the bucket creation form.
9. Enter your bucket information and click Continue to complete each step:
   - The Name of your bucket, which matches the hostname associated with your CNAME record.
   - The Location type and Location of your bucket. For example, Region and us-east1.
   - Standard Storage for the Storage class.
   - Uniform for Access control.
10. Click Create.
11. In the list of buckets, click on the name of the bucket that you created.
12. Click the Upload files button in the Objects tab.
13. In the file dialog, upload the HTML, JS, XML, and CSS files that make up the website.
14. Select the Permissions tab near the top of the page.
15. Click the + Add button. The Add principals dialog box appears.
16. In the New principals field, enter allUsers.
17. In the Select a role drop-down, select the Cloud Storage sub-menu and click the Storage Object Viewer option.
18. Click Save.
19. Click Allow public access.
20. In the Google Cloud Console, go to the Cloud Storage Browser page.
21. In the list of buckets, find the bucket you created.
22. Click the Bucket overflow menu (...) associated with the bucket and select Edit website configuration.
23. In the website configuration dialog, specify the main page.
24. Click Save.
25. Verify that content is served from the bucket by requesting the domain name in a browser. You can do this with a path to an object or with just the domain name, if you set the MainPageSuffix property. For example, if you have an object named test.html stored in a bucket named www.example.com, check that it's accessible by going to www.example.com/test.html in your browser.
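For reference, steps 7 through 24 can also be scripted; this is a rough gsutil sketch (the site file names are placeholders for my actual files):

# Create the bucket (its name must match the CNAME hostname), upload the
# site, make objects publicly readable, and set the main page:
gsutil mb -c standard -l us-east1 -b on gs://school.mydomain.com
gsutil cp index.html style.css script.js gs://school.mydomain.com
gsutil iam ch allUsers:objectViewer gs://school.mydomain.com
gsutil web set -m index.html gs://school.mydomain.com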
Everything completed without error messages until step 25.
I followed this tutorial and everything is accessible using direct links, but it doesn't work with domain forwarding. I set up a CNAME entry for school.mydomain.com and forwarded that to c.storage.googleapis.com, and I named the bucket school.mydomain.com as the tutorial states, but I keep getting an error and I am not sure why. I verified with Google that I own the domain and the subdomain I am trying to use.
<Error>
<Code>InvalidBucketName</Code>
<Message>The specified bucket is not valid.</Message>
<Details>Bucket names must be at least 3 characters in length, got 1: 'c'</Details>
</Error>
After getting the error after following the tutorial, I tried recreating the CNAME record with path forwarding on and off, as well as with both temporary and permanent redirects. The domain is managed by Google Domains on a separate Google account. I tried using their domain forwarding wizard and just creating a standard CNAME rule.
Going to school.mydomain.com.storage.googleapis.com works. But if I forward to this, it doesn't follow the forwarding rules set for the bucket; namely, the main page is at /5500/index.html.
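I can even reproduce the difference with curl (school.mydomain.com again standing in for my real subdomain):

# A real CNAME means the browser still sends the original hostname, which
# GCS uses to pick the bucket:
curl -s -H "Host: school.mydomain.com" http://c.storage.googleapis.com/

# Domain forwarding instead sends the browser to c.storage.googleapis.com
# itself, so GCS parses the bucket name as just "c" and returns the same
# InvalidBucketName error shown above:
curl -s http://c.storage.googleapis.com/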
I am totally new to this and I have no idea what to try next. I was able to turn in my assignment by providing a direct link to the file in the bucket, but I would like to get domain forwarding to work for any future projects.

Related

Google Cloud Services not giving me permission to view a bucket I've just created?

I am an organisation of one person, just me. I've been using GCS without problems for a few years. Today I created a new bucket, and am currently using gsutil to populate it, with no obvious issues.
In the GCS web app, I just tried to click into the bucket via the Storage browser to verify it was being populated, and was told:
Additional permissions required to list objects in this bucket: Ask a project or bucket owner to grant you 'storage.buckets.list' permissions (e.g. by giving your account the IAM Storage Object Viewer role).
Ok... but I created it? Whatever, I'll click on the menu button (three vertical dots) next to the bucket name and select Edit bucket permissions.
You do not have permission to view the permissions of the selected resource
Right...
Any ideas?!
You figured it out based on your comments. To reduce future guesswork, the Cloud Storage IAM roles documentation is a really good reference for figuring out which permissions each role grants.
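If this comes up again, you can sanity-check what your credentials can actually do with gsutil (the bucket name here is a placeholder):

# Both of these only need bucket-level grants:
gsutil ls gs://my-new-bucket        # requires storage.objects.list on the bucket
gsutil iam get gs://my-new-bucket   # requires storage.buckets.getIamPolicy

# The Console's bucket listing page is different: it calls storage.buckets.list,
# which is granted at the project level, not on an individual bucket.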

GSuite Permissions on Google Cloud Storage

Initial Question
I'm trying to do something that I think is somewhat simple, but I can't seem to get it nailed down correctly. I've been trying to create a bucket on GCS that is accessible to anyone in my GSuite organization, but not the larger internet.
I've created an org@mydomain.com group and added all users. I then granted that group permission to view the file in the bucket, but it always says access denied. If the file is marked public, then it's accessible without issue. How do I get this set up?
Additional Information
I have transferred the project and bucket to my organization
I have set up the index and 404 pages
If marked public, everything works as expected
When I check the permissions of individual files, I don't see anything inherited or more specific than the general project security settings.
I added the Storage Object Viewer permission to the bucket for my org@domain.com group
When trying to access a file, I get the following response:
<Error>
<Code>AccessDenied</Code>
<Message>Access denied.</Message>
<Details>
Anonymous caller does not have storage.objects.get access to compliance.microcimaging.com/test_xray.jpg.
</Details>
</Error>
So, thinking that it might be treating me as a different account, I opened an incognito window, logged in with my organization account, and attempted to access the file. That gave me the same message.
I tried adding the org@domain.com user to a single file, which resulted in the same error. I then attempted to add my personal username to the file, which also resulted in the same error.
Permission errors have got to be the MOST BORING errors!
Seeing that you already created a Google group, you can accomplish this quite easily.
In the Google Cloud Platform Console, go to "Storage -> Browser", and on your bucket's menu on the right select "Edit bucket permissions".
Under "Add members", put org@mydomain.com and give it the "Storage -> Storage Object Viewer" role to grant the whole group read-only access when authenticated, or any other permission combination you need.
Alternatively, see the documentation on setting IAM policies for a GSuite domain; that way you can skip the group entirely and set access control policies on Google Cloud products for your domain as a whole.
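If you prefer the command line, the same grants look roughly like this with gsutil (the bucket name is a placeholder; the group address is the one from your question):

gsutil iam ch group:org@mydomain.com:objectViewer gs://my-bucket

# Or skip the group and grant read access to everyone in the domain:
gsutil iam ch domain:mydomain.com:objectViewer gs://my-bucket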

Google Cloud Storage doesn't show the bucket in the browser for a user who has access to it

In our project, we have a group of people who should have full access to ONLY one bucket; they should not see the other buckets or the objects in them.
So, I changed the permissions on the bucket and added the users as Storage Admin for that specific bucket (not for the whole project).
In this case, when they use the Console's Storage browser, they see a permission-denied message.
But when they open Cloud Shell and use gsutil, they can access the bucket's objects (and have no access to other buckets).
Is this a bug in the Console's Storage interface?
This is not a bug, but it is a subtlety of the Console. In order to access a bucket from the Console, you typically navigate to it using the Browser, which is what it appears you attempted. This fails, though, because to do so you need permission to list buckets for a project, even if you otherwise have free rein to work within the bucket.
There are three ways to deal with this:
1) Give your users the Viewer permission for the project that contains the bucket. There are pros and cons to this. I'd say it's probably not worth going this route; not so much because your users will see other buckets (the bucket namespace is publicly viewable anyway), but because doing so brings up some additional permission nuances you probably don't want to deal with.
2) Link directly to the desired bucket, thus avoiding the "listing buckets" portion of the Console. The URL for a bucket has the form: console.cloud.google.com/storage/browser/[BUCKET_NAME]. I believe this will work without any additional modifications to your permissions.
3) Create a custom role that only contains the storage.buckets.list permission, and use that role on the project for affected users.
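For option 3, the role creation might look like this with gcloud (the project ID, role ID, and user are placeholders):

gcloud iam roles create bucketLister --project=my-project \
    --title="Bucket Lister" --permissions=storage.buckets.list
gcloud projects add-iam-policy-binding my-project \
    --member=user:jane@example.com \
    --role=projects/my-project/roles/bucketLister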

How to make Google Cloud Storage direct download links compliant with ACLs?

If a .txt file is saved to GCS and clicked on through the developer console browser by an authorized user, the contents are displayed in the web browser. That's fine, but that URL can be sent to anyone, authorized or not, allowing them to view the contents of the file.
"Share publicly" is unchecked, and no changes have been made to the default ACLs. And this isn't specific to .txt files -- that's just the easiest way to replicate the behavior since they're displayed directly in the browser (so you can easily get to that URL).
How do I configure GCS to either disable direct download links or ensure they're compliant with ACLs?
EDIT: It appears that the link expires after a few minutes, which reduces the associated risk a little, but not entirely. I'm still extremely nervous about how easily an authorized user could use this to inadvertently provide an unauthorized user direct access to something they ought not...
Left vs. right-clicking on files
First, regarding left- vs. right-clicking: I could not establish a difference between left- and right-clicking on a filename in the Google Cloud Storage browser.
To verify this, I went into a Google Cloud project, took a private object in a private bucket, and opened it using both methods. I copied the URLs and opened them in a Chrome incognito window, where I was not logged in, to check whether my ACLs were applied.
I was able to view both URLs in the incognito window. After some time, my access to them expired. Interestingly enough, access expired just the same in the window where I was logged in and authenticated to Google Cloud Storage.
This is where things get interesting.
Security and ACLs for user data in Google Cloud Storage browser
TL;DR: I believe the behavior you observed, namely that the URL can be viewed by anyone, is working as intended and it cannot be changed beyond what Google Cloud Storage already does with automatic timeouts; let me explain why.
When you are browsing Google Cloud Storage via the Developers Console, you are using the storage browser on the domain console.developers.google.com which means that you are authenticated with Google and proper ACLs can be applied to allow/deny access.
However, the only things you can view on that domain are bucket names, object names, and metadata, not the file content itself.
If Google were to serve you file content on the google.com domain, it would create a security issue: an adversary could force your browser to execute JavaScript on your behalf with your Google credentials, allowing them to do anything you can do through the web UI. This is typically referred to as an XSS attack.
To prevent this, Google Cloud Storage (and Google in general, e.g., for cached web pages) serves user-originating data on a different domain, typically *.googleusercontent.com, where users can't take advantage of any sensitive cookies or credentials, since nothing that Google provides is served on the same domain.
However, as a result, since the data is being served from one domain (*.googleusercontent.com) but your authentication is on a different domain (*.google.com), there is no way to apply the standard Google Cloud Storage bucket or object ACLs to the file contents themselves, while protecting you from XSS attacks by malevolent users.
Thus, ALL users, even those with direct access to the file, will have its contents served via a time-limited signed URL on a different domain when they view it in the browser.
As a side-effect, this does allow users to copy-paste the URL and share it with others, who will have similar time-limited access to the file contents.
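Incidentally, if you ever want to hand out a deliberately time-limited link yourself, the same signed-URL mechanism is exposed by gsutil; a sketch, assuming you have a service account key file and using placeholder names:

gsutil signurl -d 10m service-account-key.json gs://my-bucket/test.html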

S3 Bucket Region - www. subdomain issue

I created a bucket for my root domain in the US Standard region, but when I created the www. subdomain bucket that redirects to the root bucket, I placed it in the Oregon region.
The redirect from the address bar is failing (I set it up using buckets > properties > redirect). AWS doesn't seem to allow this swapping between regions, so I deleted the www. subdomain bucket and tried to recreate it, this time in the US Standard region, but it now gives the error, "A conflicting conditional operation is currently in progress against this resource. Please try again."
In short, is there a way to change the region, as AWS is apparently not allowing multiple buckets with the same name (even in separate regions)? I am planning to redirect from the domain name I registered using Route 53 anyway, so does this issue even matter? (I won't use 'http://example.com.s3-website-us-east-1.amazonaws.com' or 'http://www.example.com.s3-website-us-east-1.amazonaws.com' because I will hopefully be using 'example.com' or 'www.example.com'.)
Thank you all for the help; I hope this post is specific enough. Cheers from a first post.
AWS doesn't seem to allow this swapping between regions,
That's not correct. A bucket configured for redirection does not care where it's redirecting to -- it can redirect to any web site, and the destination doesn't have to be another bucket...so this is a misdiagnosis of the problem you were/are experiencing.
AWS is apparently not allowing multiple buckets with the same name (even in separate regions)?
Well... no:
“The bucket namespace is global - just like domain names”
— http://aws.amazon.com/articles/1109#02
Only one bucket of a given name can exist within S3 at any point in time. Because S3 is a massive distributed global system, it can take time (though it should typically only take a few minutes) before you can create the bucket again. That's your conflict -- the deletion hasn't globally propagated.
“After a bucket is deleted, the name becomes available to reuse, but the name might not be available for you to reuse for various reasons. For example, some other account could create a bucket with that name. Note, too, that it might take some time before the name can be reused.”
— http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
Once you get it created, focus on fixing the redirect. If you haven't yet configured the DNS in Route 53, then that would be the reason the redirect didn't work -- the browser is being redirected to a hostname that doesn't resolve yet. S3 accomplishes this magic by sending a browser redirect -- which is why you can redirect anywhere -- it doesn't resolve the new bucket destination internally.
You should be able to redirect using "Redirect all requests to another host name" as long as you have Static Website enabled on the bucket you are redirecting to.
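That redirect-all setting can also be applied from the AWS CLI; a rough sketch with placeholder bucket names:

aws s3api put-bucket-website --bucket www.example.com \
    --website-configuration '{"RedirectAllRequestsTo":{"HostName":"example.com"}}'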
No, there's no way to change the region other than deleting the bucket and recreating it in another region. Bucket names are unique across all of S3.
You can use Route 53 to create an alias for any bucket by adding a CNAME record, so that www.yoursite.com maps to something like www.example.com.s3-website-us-east-1.amazonaws.com.
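A sketch of that record via the CLI (the hosted zone ID and names are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
    --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
      "Name":"www.example.com","Type":"CNAME","TTL":300,
      "ResourceRecords":[{"Value":"www.example.com.s3-website-us-east-1.amazonaws.com"}]}}]}'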
Hope this helps.