I am trying to change the cache control for my CDN by doing:
gsutil setmeta -h "Cache-Control: max-age=0, s-maxage=86400" gs://<BUCKET>/*
However, when I do, I get the error: AccessDeniedException: 403 Forbidden
I've also tried changing the ACL on the bucket but get the same error:
gsutil acl set -R public-read gs://<BUCKET>/
Ideally, I'd like this cache control to be on every bucket I create and have a default ACL to allow this.
Does anyone know how I can make this AccessDenied error go away? I am signed in as the owner of the project.
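For reference, the "default ACL" part of this question presumably maps onto gsutil's defacl command; a minimal sketch (the bucket name is a placeholder, and this only affects objects created after the command runs):
gsutil defacl set public-read gs://<BUCKET>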
Related
I'm trying to download files from Firebase Storage through an XMLHttpRequest, but Access-Control-Allow-Origin is not set on the resource, so it's not possible. Is there any way to set this header on the storage server?
(let [xhr (js/XMLHttpRequest.)]
  (.open xhr "GET" url)
  (aset xhr "responseType" "arraybuffer")
  (aset xhr "onload" #(js/console.log "bin" (.-response xhr)))
  (.send xhr))
Chrome error message:
XMLHttpRequest cannot load
https://firebasestorage.googleapis.com/[EDITED]
No 'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://localhost:3449' is therefore not allowed
access.
From this post on the firebase-talk group/list:
The easiest way to configure your data for CORS is with the gsutil command line tool.
The installation instructions for gsutil are available at https://cloud.google.com/storage/docs/gsutil_install.
Once you've installed gsutil and authenticated with it, you can use it to configure CORS.
For example, if you just want to allow object downloads from your custom domain, put this data in a file named cors.json (replacing "https://example.com" with your domain):
[
  {
    "origin": ["https://example.com"],
    "method": ["GET"],
    "maxAgeSeconds": 3600
  }
]
Then, run this command (replacing "exampleproject.appspot.com" with the name of your bucket):
gsutil cors set cors.json gs://exampleproject.appspot.com
and you should be set.
If you need a more complicated CORS configuration, check out the docs at https://cloud.google.com/storage/docs/cross-origin#Configuring-CORS-on-a-Bucket.
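Once the configuration is applied, you can read it back from the bucket to confirm it took effect (the bucket name is a placeholder):
gsutil cors get gs://exampleproject.appspot.com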
The above is now also included in the Firebase documentation on CORS Configuration
Google Cloud now has an inline editor to make this process even easier. No need to install anything on your local system.
Open the GCP console and start a cloud terminal session by clicking the >_ icon button in the top navbar. Or search for "cloud shell editor" in the search bar.
Click the pencil icon to open the editor, then create the cors.json file.
Run gsutil cors set cors.json gs://your-bucket
Just to add to the answer: go to your project in the Google Cloud console (console.cloud.google.com/home) and select your project. There, open the terminal, create the cors.json file (touch cors.json), and edit it (vim cors.json) as suggested by @frank-van-puffelen.
This worked for me. Cheers!
I am working on a project using Firebase Storage and the end user needs a way to download the file they uploaded. I was getting a CORS error when the user tried to download the file, but after some research I solved the issue.
Here is how I figured it out:
Download Google Cloud CLI
Log in using the CLI
Create cors.json file in the project directory and type the code below.
[
  {
    "origin": ["*"],
    "method": ["GET"],
    "maxAgeSeconds": 3600
  }
]
Navigate to the directory containing cors.json with the Google Cloud CLI
In the CLI type: gsutil cors set cors.json gs://<app_name>.appspot.com
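One way to sanity-check the new configuration (this isn't part of the original steps, and the download URL below is a placeholder) is to request an object with an Origin header and look for Access-Control-Allow-Origin in the response headers:
curl -s -o /dev/null -D - \
  -H "Origin: http://localhost:3000" \
  "https://firebasestorage.googleapis.com/v0/b/<app_name>.appspot.com/o/<object-path>?alt=media"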
Another approach is to use the Google Cloud Storage JSON API.
Step 1: Get an access token to use with the JSON API
To get a token, go to https://developers.google.com/oauthplayground/
Then search for JSON API or Storage.
Select the required scopes, i.e. read, write, full_access (tick those which are required).
Follow the process to get an access token, which will be valid for an hour.
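If you already have the gcloud CLI installed and authenticated, an alternative (not part of the original steps) is to print a short-lived access token directly from your current credentials:
gcloud auth print-access-token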
Step 2: Use the token to call the Google JSON API and update the CORS configuration
Sample curl:
curl -X PATCH \
'https://www.googleapis.com/storage/v1/b/your_bucket_id?fields=cors' \
-H 'Accept: application/json' \
-H 'Accept-Encoding: gzip, deflate' \
-H 'Authorization: Bearer ya29.GltIB3rTqQ2tJgh0cMj1SEa1UgQNJnTMXUjMlMIRGG-mBCbiUO0wqdDuEpnPD6cbkcr1CuLItuhaNCTJYhv2ZKjK7yqyIHNgkCBup-T8Z1B1RiBrCgcgliHOGFDz' \
-H 'Content-Type: application/json' \
-H 'cache-control: no-cache' \
-d '{
      "location": "us",
      "storageClass": "Standard",
      "cors": [
        {
          "maxAgeSeconds": "360000000",
          "method": ["GET", "HEAD", "DELETE"],
          "origin": ["*"],
          "responseHeader": ["Content-Type"]
        }
      ]
    }'
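To verify the update, the same resource can be read back with a GET (replace the token and bucket id with your own):
curl -H 'Authorization: Bearer <ACCESS_TOKEN>' \
  'https://www.googleapis.com/storage/v1/b/your_bucket_id?fields=cors'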
I am attempting to change the ACL of a file (100KB.file) I have within the IBM COS bucket 'devtest1.ctl-internal.nasv.cos' and am receiving the following message:
An error occurred (AccessDenied) when calling the PutObjectAcl
operation: Access Denied
It seems like my AWS credentials (or call) do not have the correct permissions to allow the ACL update.
Command:
aws --endpoint-url=https://s3.us-south.objectstorage.softlayer.net \
  s3api put-object-acl --bucket devtest1.ctl-internal.nasv.cos \
  --key 100KB.file --acl public-read
Return:
An error occurred (AccessDenied) when calling the PutObjectAcl
operation: Access Denied
You haven't mentioned that you have configured HMAC credentials for your bucket, so I'll assume you haven't. I'm also assuming that operations other than PutObjectAcl do not work for you.
Try adding HMAC credentials:
Then ...
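As a rough sketch of those two steps (the instance and key names are placeholders, and your exact flags may differ), you would create a service credential with HMAC enabled and then feed its cos_hmac_keys values to the AWS CLI:
# Create a credential that includes an HMAC access key / secret key pair
ibmcloud resource service-key-create cos-hmac-key Writer \
  --instance-name <your-cos-instance> \
  --parameters '{"HMAC": true}'
# Configure the AWS CLI with the cos_hmac_keys values from that credential
aws configure set aws_access_key_id <access_key_id>
aws configure set aws_secret_access_key <secret_access_key>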
Source: https://console.bluemix.net/docs/services/cloud-object-storage/hmac/credentials.html#using-hmac-credentials
I am having the same issue using the AWS CLI. However, you can do the same operation using cURL and providing your IBM Cloud IAM token.
curl -X "PUT" "https://{endpoint}/{bucket-name}/{object-name}?acl" \
-H "x-amz-acl: public-read" \
-H "Authorization: Bearer {token}" \
-H "Content-Type: text/plain; charset=utf-8" \
I am getting "HTTP/1.1 403 Forbidden" returned when making cURL requests via soundcloud.com/oembed for SoundCloud set URLs, despite the sets being public. Individual tracks work fine, including those belonging to a set.
This seems to be a recent issue as some of the sets I've been testing were working up until a few weeks ago.
Example curl request via command line that is forbidden despite being public:
$ curl -v "http://soundcloud.com/oembed" -d 'format=json' -d 'url=https://soundcloud.com/edsheeran/sets/bloodstream' -L -v -s -o /dev/null
If you do the same for one of the tracks in this set (e.g. soundcloud.com/rudimentaluk/bloodstream) then you get an "HTTP/1.1 200 OK" response.
I know that you can run a command at upload time to set the Cache-Control of the image being uploaded:
gsutil -h "Cache-Control:public,max-age=2628000" cp -a public-read \
  -r html gs://bucket
But I'm using CarrierWave in Rails and don't think it's possible to set it up to run this command each time an image is uploaded.
I was looking around to see if you can change the default cache-control value but can't find any solutions. Currently I run gsutil -m setmeta -h "Cache-Control:public, max-age=2628000" gs://bucket/*.png every now and then to update new images, but this is a horrible solution.
Any ideas on how to set the default cache-control for files uploaded to a bucket?
There's no way to set a default Cache-Control header on newly uploaded files. It either needs to be set explicitly (by setting the header) at the time the object is written, or after the upload by updating the object's metadata using something like the gsutil command you noted.
I have static assets stored in GCS and I'd like to serve them gzipped (but they were uploaded without compression). Is there any way to set files to be compressed without downloading and re-uploading them in gzipped format?
I tried setting the content-encoding header with gsutil (i.e., gsutil setmeta -h 'Content-Encoding:gzip' <some_object_uri>), but it just led to a "Service Unavailable" on the file (which I assume is from the server attempting to ungzip the file and failing, or something like that).
There is no way to compress the objects without downloading them and re-uploading.
However, you can have gsutil do this for you, and if you run it from a Google Compute Engine (GCE) Virtual Machine (VM), you'll only be charged for operation counts, not for bandwidth.
Also, regarding setting the content-encoding header with setmeta, you're right in your interpretation of what happened. You set the metadata on the object to indicate that it contained gzip data, but the contents did not contain a valid gzip stream, so when you try to download it with Accept-Encoding: gzip, the GCS service tries to decompress the stream and fails.
I'd suggest downloading the bucket to the local disk on a GCE VM:
gsutil cp -r gs://bucket /path/to/local/disk
Then, use the -z option to indicate which file extensions to gzip:
gsutil cp -z js,css,html -r /path/to/local/disk gs://bucket
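Afterwards, you can confirm that the re-uploaded objects carry the expected metadata (the object path below is a placeholder); the output should show Content-Encoding: gzip alongside the Content-Type:
gsutil stat gs://bucket/path/to/file.js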