Getting a CORS error when uploading a file using a Google Cloud Storage signed URL - google-cloud-storage

I am using fetch on the client side to upload an image file:
const uploadResponse = await fetch(response.signedURL, {
  method: 'PUT',
  mode: 'cors',
  body: selectedFile,
  credentials: 'omit',
  headers: {
    'Access-Control-Allow-Origin': '*'
  }
});
I configured my storage bucket with the following CORS config:
[
  {
    "origin": ["https://zipsym.eu.loclx.io"],
    "responseHeader": ["Content-Type", "Access-Control-Allow-Origin"],
    "method": ["PUT"],
    "maxAgeSeconds": 3600
  }
]
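(For reference, a config like this is applied to the bucket with gsutil cors set or programmatically; a minimal sketch with the Node.js client library, assuming BUCKET_NAME is a placeholder for the actual bucket and default credentials are available:)

// Sketch: apply the CORS config above from Node.js.
// Assumes the @google-cloud/storage package is installed.
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

async function applyCors() {
  await storage.bucket('BUCKET_NAME').setCorsConfiguration([
    {
      origin: ['https://zipsym.eu.loclx.io'],
      responseHeader: ['Content-Type', 'Access-Control-Allow-Origin'],
      method: ['PUT'],
      maxAgeSeconds: 3600,
    },
  ]);
}

applyCors().catch(console.error);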
But I am still getting the following error, and these are my response headers:

When you configured the storage bucket with the listed config, did the request succeed?

Related

'Access-Control-Allow-Origin' header value not equal to the supplied origin, POST method

I get the following message in the Chrome dev tools console when submitting a contact form (making a POST request) on the /about.html section of my portfolio web site:
Access to XMLHttpRequest at 'https://123abc.execute-api.us-east-1.amazonaws.com/prod/contact' from origin 'https://example.net' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The 'Access-Control-Allow-Origin' header has a value 'https://example.net/' that is not equal to the supplied origin.
I don't know how to troubleshoot this properly; any help is appreciated. Essentially, this is what is happening (https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSAllowOriginNotMatchingOrigin), and I don't know where within my AWS assets to fix it. This person had the same problem, but I'm unsure how to apply their fix (CORS header 'Access-Control-Allow-Origin' does not match... but it does‼)
Here is a description of the AWS stack:
Context: I am using an S3 bucket as a static website with CloudFront and Route 53; this has worked fine for years. When I added the form, I did the following to allow the HTTP POST request:
CloudFront: on the site's distribution I added a behavior with all settings default except:
Path pattern: /contact (I am using this bc this is the API Gateway resource path ending)
Origin and origin groups: S3-Website-example.net.s3-website... (Selected correct origin)
Viewer protocol policy: HTTP and HTTPS
Allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
Cache HTTP methods (GET and HEAD are cached by default): checked the OPTIONS box
Origin request policy - optional: CORS-S3Origin
Response headers policy - optional: CORS-With-Preflight
API Gateway: created a REST API with all default settings except:
Created a resource: /contact
Created a method: POST
For /contact, Resource Actions > Enable CORS:
Methods: OPTIONS and POST both checked
Access-Control-Allow-Origin: 'https://example.net' (no ending slash)
Clicked "Enable CORS and Replace existing headers"
Results are all checked green:
✔ Add Access-Control-Allow-Headers, Access-Control-Allow-Methods, Access-Control-Allow-Origin Method Response Headers to OPTIONS method
✔ Add Access-Control-Allow-Headers, Access-Control-Allow-Methods, Access-Control-Allow-Origin Integration Response Header Mappings to OPTIONS method
✔ Add Access-Control-Allow-Origin Method Response Header to POST method
✔ Add Access-Control-Allow-Origin Integration Response Header Mapping to POST method
Created a stage called "prod", ensured it had the /contact resource, and deployed.
At the /contact - POST - Method Execution, the test works as expected (triggers the Lambda function that uses SES to send an email, which I do actually receive).
The only thing I feel unsure about with API Gateway is that after I enable CORS, I can't seem to find where that setting has been saved, and if I click Enable CORS again, it is back to the default form (with Access-Control-Allow-Origin: '*').
Amazon SES: set up two verified identities for sending/receiving emails via Lambda.
Lambda: set up a basic JavaScript function with default settings; the REST API is listed as a trigger and does actually work as previously mentioned. The function code is:
var AWS = require('aws-sdk');
var ses = new AWS.SES({ region: "us-east-1" });
var RECEIVER = 'myemail@email.com';
var SENDER = 'me@example.net';

var response = {
  "statusCode": 200,
  "headers": {
    "Content-Type": "application/json",
    "Access-Control-Allow-Origin": "*"
  },
  "isBase64Encoded": false,
  "body": "{ \"result\": \"Success\"\n}"
};

exports.handler = async function (event, context) {
  console.log('Received event:', event);
  var params = {
    Destination: {
      ToAddresses: [
        RECEIVER
      ]
    },
    Message: {
      Body: {
        Text: {
          Data: 'first name: ' + event.fname + '\nlast name: ' + event.lname + '\nemail: ' + event.email + '\nmessage: ' + event.message,
          Charset: 'UTF-8'
        }
      },
      Subject: {
        Data: 'Website Query Form: ' + event.name,
        Charset: 'UTF-8'
      }
    },
    Source: SENDER
  };
  return ses.sendEmail(params).promise();
};
The only thing I can think of here is to maybe update the response to have "headers": {"Access-Control-Allow-Origin": "https://example.net"}.
S3 bucket that holds the site contents: under Permissions > CORS, I have the following JSON to allow a POST of the contact form (notice no trailing slash):
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "POST"
    ],
    "AllowedOrigins": [
      "https://example.net"
    ],
    "ExposeHeaders": []
  }
]
Permissions/Roles: established roles and permissions per the AWS guide "Create dynamic contact forms for S3 static websites using AWS Lambda, Amazon API Gateway and Amazon SES" and the video titled "Webinar: Dynamic Contact Forms for S3 Static Websites Using AWS Lambda, API Gateway & Amazon SES".
Client code: this is a very milquetoast function being called to post the form on click.
function submitToAPI(event) {
  event.preventDefault();
  const URL = "https://123abc.execute-api.us-east-1.amazonaws.com/prod/contact";
  const namere = /[A-Za-z]{1}[A-Za-z]/;
  const emailre = /^([\w-\.]+@([\w-]+\.)+[\w-]{2,6})?$/;
  let fname = document.getElementById('first-name-input').value;
  let lname = document.getElementById('last-name-input').value;
  let email = document.getElementById('email-input').value;
  let message = document.getElementById('message-input').value;
  console.log(`first name: ${fname}, last name: ${lname}, email: ${email}\nmessage: ${message}`);
  if (!namere.test(fname) || !namere.test(lname)) {
    alert("Name can not be less than 2 characters");
    return;
  }
  if (email == "" || !emailre.test(email)) {
    alert("Please enter a valid email address");
    return;
  }
  if (message == "") {
    alert("Please enter a message");
    return;
  }
  let data = {
    fname: fname,
    lname: lname,
    email: email,
    message: message
  };
  $.ajax({
    type: "POST",
    url: URL,
    dataType: "json",
    crossDomain: "true",
    contentType: "application/json; charset=utf-8",
    data: JSON.stringify(data),
    success: function () {
      alert("Successful");
      document.getElementById("contact-form").reset();
      location.reload();
    },
    error: function () {
      alert("Unsuccessful");
    }
  });
}
The problem was that the response in the lambda function had "Access-Control-Allow-Origin" set to "*".
This should have been set to the exact origin (no trailing slash), so if the origin is 'https://example.net', then the response in the Lambda function should have "Access-Control-Allow-Origin" set to 'https://example.net', as shown below:
var response = {
  "statusCode": 200,
  "headers": {
    "Content-Type": "application/json",
    "Access-Control-Allow-Origin": "https://example.net"
  },
  "isBase64Encoded": false,
  "body": "{ \"result\": \"Success\"\n}"
}
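If more than one origin ever needs access, a common follow-up is to echo the request's Origin header back only when it is on an allow list. A minimal sketch, assuming a Lambda proxy integration where the browser's Origin arrives on event.headers (not the poster's setup, which maps the form fields straight onto the event):

// Sketch: choose the Access-Control-Allow-Origin value per request.
// The ALLOWED_ORIGINS entries are placeholders.
const ALLOWED_ORIGINS = ['https://example.net', 'https://www.example.net'];

function corsHeaders(event) {
  const origin = (event.headers && (event.headers.origin || event.headers.Origin)) || '';
  const allowOrigin = ALLOWED_ORIGINS.includes(origin) ? origin : ALLOWED_ORIGINS[0];
  return {
    'Content-Type': 'application/json',
    'Access-Control-Allow-Origin': allowOrigin
  };
}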

CORS error: Request header field authorization is not allowed by Access-Control-Allow-Headers in preflight response

I'm trying to fetch an image resource that's part of a conversation message.
I've tried both fetch and Axios, but I'm getting the same error message.
Here's an example of my fetch request:
const token = `${accountSid}:${authToken}`;
const encodedToken = Buffer.from(token).toString('base64');

let response = await fetch('https://mcs.us1.twilio.com/v1/Services/<SERVICE_SID>/Media/<MEDIA_SID>', {
  method: 'GET',
  headers: {
    'Authorization': `Basic ${encodedToken}`,
  }
});
let data = await response.json();
console.log(data);
And here's what the Axios version looked like:
let config = {
  method: 'get',
  crossdomain: true,
  url: 'https://mcs.us1.twilio.com/v1/Services/<SERVICE_SID>/Media/<MEDIA_SID>',
  headers: {
    'Authorization': `Basic ${encodedToken}`,
  },
};

try {
  const media = await axios(config);
  console.dir(media);
} catch (err) {
  console.error(err);
}
Both ways are NOT working.
After looking into it more, I found out that Chrome makes a preflight request and, as part of it, asks the server which headers are allowed.
In the response that came back, the "Response Headers" do not include Access-Control-Allow-Headers, which should have been set to Authorization.
What am I missing here?
I have made sure that my id/password as well as the URL I'm using are fine. In fact, I've run this request through Postman on my local machine and it returned the results just fine. The issue is ONLY happening when I do it in my code and run it in the browser.
I figured it out. I don't have to make an HTTP call to get the URL; it can be retrieved with simply:
media.getContentTemporaryUrl();
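For context, getContentTemporaryUrl() in the Twilio client-side SDK returns a promise that resolves to a short-lived URL for the media, so the browser never has to call mcs.us1.twilio.com with Basic auth itself. A minimal sketch, assuming media is the media object attached to the received message; the element id is hypothetical:

// Sketch: resolve the temporary URL client-side and use it directly.
const imageUrl = await media.getContentTemporaryUrl();
document.getElementById('attachment-preview').src = imageUrl; // hypothetical <img> element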

CORS headers missing when using Axios in NextJS

I'm using a NestJS backend with a NextJS frontend, both hosted separately.
NestJS Backend
I enabled CORS in the backend as follows:
app.enableCors({ credentials: true, origin: process.env.FRONTEND_URL });
When using cors-test.codehappy.dev to check the CORS headers, everything looks good. All headers are present and the access-control-allow-origin header points to the right domain that the front-end is hosted on.
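For completeness, enableCors takes the standard cors options object, so the allowed methods and request headers can also be spelled out explicitly. A sketch with placeholder values, not necessarily what this backend needs:

// Sketch: a more explicit CORS setup in main.ts.
app.enableCors({
  origin: process.env.FRONTEND_URL,
  credentials: true,
  methods: ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS'],
  allowedHeaders: ['Content-Type', 'Authorization'],
});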
NextJS Frontend
On the NextJS frontend I'm using Axios to make requests to the backend (the exact same URL as used above). However, when creating a request, the preflight request in Chrome is missing all CORS headers. The Axios instance below is imported whenever an HTTP request is needed.
import Axios from 'axios';

const api = Axios.create({
  baseURL: process.env.BACKENDURL,
  withCredentials: true
});

export default api;
The error in the console and the preflight request both show the CORS headers missing. In next.config.js I have:
module.exports = {
  // avoiding CORS error, more here: https://vercel.com/support/articles/how-to-enable-cors
  async headers() {
    return [
      {
        // matching all API routes
        source: "/api/:path*",
        headers: [
          { key: "Access-Control-Allow-Credentials", value: "true" },
          { key: "Access-Control-Allow-Origin", value: "*" },
          { key: "Access-Control-Allow-Methods", value: "GET,OPTIONS,PATCH,DELETE,POST,PUT" },
          { key: "Access-Control-Allow-Headers", value: "X-CSRF-Token, X-Requested-With, Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version" },
        ]
      }
    ]
  },
}
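Worth noting: headers() in next.config.js only adds headers to responses served by the Next app itself, so it cannot put CORS headers on a separately hosted NestJS backend. An approach sometimes used to sidestep CORS entirely is to proxy the API through Next with rewrites(); a sketch, assuming BACKENDURL points at the NestJS host and the path mapping is hypothetical:

// next.config.js sketch: the browser only ever talks to the Next.js origin,
// so no cross-origin request (and no preflight) is made.
module.exports = {
  async rewrites() {
    return [
      {
        source: '/api/:path*',
        destination: `${process.env.BACKENDURL}/:path*`,
      },
    ];
  },
};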

Google Cloud Storage Bucket CORS error although CORS policy set

I have a signed URL for a Google Cloud Storage bucket, and I want to use it with Axios to make a PUT request.
My putFileData function is called when the variable signedURL is not empty, via useEffect.
const putFileData = async () => {
  await axios
    .put(signedURL, "HELLO TXT!!!!")
    .then((response) => console.log(response))
    .catch((err) => { console.log("AXIOS ERROR: ", err); });
};
I set the CORS policy on the bucket with a JSON file, and when I query the bucket's CORS policy I get:
[{"maxAgeSeconds": 360, "method": ["PUT"], "origin": ["http://localhost:3000"], "responseHeader": ["Content-Type"]}]
The options on my signedURL are:
const options = {
  version: 'v4',
  action: 'write',
  expires: Date.now() + 15 * 60 * 1000, // 15 minutes
  contentType: 'application/octet-stream',
};
Yet I still can't do the PUT request from http://localhost:3000.
I get:
Access to XMLHttpRequest at 'mysignedurl' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
and:
PUT mysignedurl::ERR_FAILED
In the Network tab of Chrome Developer Tools the request is shown twice: first unsuccessfully with status 403, then successfully with status 200, yet nothing is uploaded to the bucket.
My signed URL is generated with this function in my Cloud Functions:
async function generateV4UploadSignedUrl(bucketName, fileName) {
  // These options will allow temporary uploading of the file with outgoing
  // Content-Type: application/octet-stream header.
  const options = {
    version: 'v4',
    action: 'write',
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    contentType: 'application/octet-stream',
  };

  // Get a v4 signed URL for uploading the file
  const [url] = await storage
    .bucket(bucketName)
    .file(fileName)
    .getSignedUrl(options);

  return url;
}
With the help of the Google Cloud Support team, I figured out the ContentType was incorrect. I changed the options to:
contentType: 'application/x-www-form-urlencoded',
and it worked!
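The general rule behind this: when a V4 signed URL is generated with a contentType option, the request that uses it must send exactly that Content-Type header, or the signature check fails with a 403. A sketch of the matching client call under that assumption:

// Sketch: the Content-Type sent must match the contentType the URL was signed with.
await axios.put(signedURL, "HELLO TXT!!!!", {
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
});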

Uploading to Google cloud storage signed url from client

I have generated a signed URL from Google Cloud Storage that works fine when I upload files via Postman, but it is impossible to upload a file from the client. I have tried everything: fetch, Axios, XMLHttpRequest, you name it. Please help me out.
const files = Array.from(event.target.files);
const file: any = files[0];

const { data: { signedUrl } } = await axios.get('https://localhost:3000/signedUrlUpload');

try {
  const resp = await fetch(signedUrl, {
    method: 'PUT',
    body: file,
    headers: {
      'Content-Type': 'audio/wave'
    }
  });
  console.log(resp);
} catch (e) {
  console.error(e);
}
Error: TypeError: Failed to fetch