I'm trying to do a HEAD Object request to the S3 REST API, but I keep getting a 403 Forbidden error, even though I have a policy set up with the necessary permissions on S3. The response body is empty, so I don't think it's a signature problem. I've tried several changes to the policy, but nothing seems to make it work. I'm able to PUT and DELETE objects normally; only HEAD doesn't work.
Here's my bucket policy:
{
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::999999999999:user/User"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-bucket"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
        },
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::999999999999:user/User"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:DeleteObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
Any ideas?
Update:
As Michael pointed out, it seems to be a problem with my signature, though I'm failing to see what it is.
def generate_url options={}
  options[:action]  = options[:action].to_s.upcase
  options[:expires] ||= Time.now.to_i + 100

  file_path = "/" + @bucket_name + "/" + options[:file_name]

  # Signature Version 2 string to sign for a presigned URL:
  # HTTP-Verb \n Content-MD5 \n Content-Type \n Expires \n CanonicalizedResource
  string_to_sign = ""
  string_to_sign += options[:action]
  string_to_sign += "\n\n#{options[:mime_type]}\n"
  string_to_sign += options[:expires].to_s
  string_to_sign += "\n"
  string_to_sign += file_path

  # HMAC-SHA1 over the string to sign, base64-encoded and URL-escaped
  signature = CGI::escape(
    Base64.strict_encode64(
      OpenSSL::HMAC.digest('sha1', SECRET_KEY, string_to_sign)
    )
  )

  url  = "https://s3.amazonaws.com"
  url += file_path
  url += "?AWSAccessKeyId=#{ACCESS_KEY}"
  url += "&Expires=#{options[:expires]}"
  url += "&Signature=#{signature}"
  url
end
The generated string to sign looks like this:
HEAD\n\n\n1418590715\n/video-thumbnails/1234.jpg
Solution:
It seems that at some point while developing the file PUT part I had actually broken GET and HEAD. I was passing an empty string as the body of the request instead of passing nothing, which made the mime type required in the signature and broke it, because I was providing no mime type. I simply removed the empty request body and it worked perfectly. Thanks, Michael, for pointing out the wrong direction I was headed in (I wasted so much time changing the bucket policy).
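To illustrate the mismatch (the exact default header depends on the HTTP client, so the value below is only a placeholder): the string that was signed had an empty Content-Type, while the request actually sent carried one, because a body, even an empty one, was present.
Signed: HEAD\n\n\n1418590715\n/video-thumbnails/1234.jpg
Sent:   HEAD\n\n<client default Content-Type>\n1418590715\n/video-thumbnails/1234.jpg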
It still could be your signature, and I suspect that it is, for the following reasons:
Your observation that the message body is empty is a good observation; however, it doesn't mean what you have concluded it means.
The lack of a response body does not give you any information at all about the nature of the error in this case, because a web server is not supposed to return a body along with a HEAD response, no matter what:
The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response
— http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html (RFC-2616)
Testing this on my side, I've confirmed that S3's response to an unsigned HEAD request and to an incorrectly-signed HEAD request is no different: it's always HTTP/1.1 403 Forbidden with no message body.
Note, also, that a signed URL for GET is not valid for HEAD, and vice versa.
In both S3 Signature Version 2 and S3 Signature Version 4, the "String to Sign" includes the "HTTP Verb," which would be GET or HEAD, meaning that a signature that's valid for GET would not be valid for HEAD, and vice versa... the request method must be known at the time of signing, because it's an element that's used in the signing process.
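To make that concrete, using the expiry and object path from the question, the two Version 2 strings to sign differ only in the verb, so the resulting HMACs, and therefore the signatures, cannot match:
HEAD\n\n\n1418590715\n/video-thumbnails/1234.jpg
GET\n\n\n1418590715\n/video-thumbnails/1234.jpg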
The s3:GetObject permission is the only documented permission required for using HEAD, which seems to eliminate permissions as the problem, if GET is working, which points back to the signature as the potential issue.
Confirmed that a HEAD request to a presigned URL will get 403 Forbidden.
If custom headers such as the object's content-type are set, the 403 response will not contain the custom header; the response content-type is still application/xml.
Additional comment on @Michael-sqlbot's answer above...
I faced identical symptoms, but I had a different root cause.
If you are trying to HEAD an object which does not exist, this will also return a 403 Forbidden error, UNLESS you have the s3:ListBucket permission.
In my case, I had the s3:GetObject, s3:PutObject, and s3:HeadBucket permissions, but it wasn't until I added s3:ListBucket that I got the correct 404 Not Found error.
This is also explained here: https://aws.amazon.com/premiumsupport/knowledge-center/s3-rest-api-cloudfront-error-403/
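For reference, a minimal sketch of the extra statement that turns the misleading 403 into a 404 for missing keys, reusing the bucket and user placeholders from the question above (shown here in bucket-policy form; in an identity-based IAM policy the statement would simply omit the Principal):
{
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::999999999999:user/User"
    },
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::my-bucket"
}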
Had the same issue but with a different root cause: I was trying to create a bucket and, instead of getting a 404, got a 403. Since S3 bucket names are globally namespaced, someone else had already created the bucket, so even though my permissions and setup were correct for my account, the HEAD request would still return 403. The solution was to check whether the bucket already exists globally first and, if so, try a different bucket name.
I was also getting this error as a red herring: during pytest runs using freezegun, I had frozen time to a point in the past and was getting a 403 error. So clock skew can also cause this.
I found this by trying another API call, where I received:
E botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the ListObjects operation: The difference between the request time and the current time is too large.
Related
I am trying to create an API that logs JSON request bodies in an SQS queue.
I have set up a basic queue in SQS in both the FIFO and non-FIFO layouts. I have the same problem each time. My policy for the SQS queue is as follows:
{
    "Version": "2012-10-17",
    "Id": "arn:aws:sqs:us-east-1:2222222222222:API-toSQS.fifo/SQSDefaultPolicy",
    "Statement": [
        {
            "Sid": "Sid22222222222",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "SQS:*",
            "Resource": "arn:aws:sqs:us-east-1:2222222222222:API-toSQS.fifo"
        }
    ]
}
I have created a policy that grants write access to SQS, and I have created a role for API Gateway to which I assign that policy. Here is the policy I have assigned to the role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "sqs:DeleteMessage",
                "sqs:ChangeMessageVisibility",
                "sqs:DeleteMessageBatch",
                "sqs:SendMessageBatch",
                "sqs:PurgeQueue",
                "sqs:DeleteQueue",
                "sqs:SendMessage",
                "sqs:CreateQueue",
                "sqs:ChangeMessageVisibilityBatch",
                "sqs:SetQueueAttributes"
            ],
            "Resource": "*"
        }
    ]
}
I have set up an API Gateway and created a POST method. I've tried enabling the CORS option (which creates an OPTIONS method) and I've done it without CORS enabled. The ARN for my security policy is correct; I have triple-checked it. I opt for the path override and have the full HTTPS URL of my SQS queue there; I have triple-checked this as well. My endpoint is SQS, of course.
For the integration request I have an HTTP header for Content-Type, with Mapped from set to 'application/x-www-form-urlencoded'.
In mapping templates I have passthrough set to Never and a Content-Type set to application/json, and I have included the template Action=SendMessage&MessageBody=$input.body to translate the body into the URL-encoded form, as per a walkthrough I found.
I am getting the following error in the API Gateway test area:
<AccessDeniedException>
<Message>Unable to determine service/operation name to be authorized</Message>
</AccessDeniedException>
Is there an AWS guru out there who can steer me in the right direction?
To clarify, my issue is that it should be adding my test body
{"peanutbutter":"jelly"}
to the SQS queue, but no luck.
I can send URL-encoded messages to SQS all day from Postman, but I want my business partners to be able to send a clean JSON object via HTTP (Postman, Node, etc., whatever).
Thank you!
i opt for the override path and have the full https URL of my SQS queue there
In Path override, type only the path part of the SQS queue URL: 2222222222222/API-toSQS.fifo.
Also, MessageGroupId is required for FIFO queues, and if ContentBasedDeduplication is not enabled, MessageDeduplicationId is required too.
Example of mapping template:
Action=SendMessage&MessageGroupId=$input.params('MessageGroupId')&MessageDeduplicationId=$input.params('MessageDeduplicationId')&MessageBody=$input.body
In this case you need to define MessageGroupId and MessageDeduplicationId as required query string parameters in Method Request and, obviously, pass them on requests to the API endpoint, as in the sketch below.
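For illustration only (the API id, stage, and resource path are made-up placeholders; the body is the test payload from the question), a request to such an endpoint might look like:
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"peanutbutter":"jelly"}' \
  "https://<api-id>.execute-api.us-east-1.amazonaws.com/<stage>/<resource>?MessageGroupId=api&MessageDeduplicationId=msg-001"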
For anyone having this same issue: removing all of the settings from the integration request in API Gateway and using Lambda as a "middleman" worked. Lambda is a great go-between for almost all of the AWS services. I would prefer to have an API Gateway -> SQS stack instead of API Gateway -> Lambda -> SQS, but for whatever reason the way Lambda handles the HTTP request, as opposed to trying to configure API Gateway to do the same, works without issue.
You will not need any external resources in Lambda, so no importing zip files. Just import AWS and SQS, use the basic structure to accept the event, then take the body (JSON in my case) and sqs.sendMessage it to your queue; a minimal sketch follows.
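A minimal sketch of that Lambda, assuming the Node.js runtime with its bundled AWS SDK v2, a Lambda proxy integration, and a QUEUE_URL environment variable pointing at the queue (all of these are assumptions, not details from the original answer):
// Minimal API Gateway -> Lambda -> SQS go-between (sketch, not production code).
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

exports.handler = async (event) => {
  // With a Lambda proxy integration, the raw JSON request body arrives as a string.
  const body = event.body;

  await sqs.sendMessage({
    QueueUrl: process.env.QUEUE_URL,  // assumed environment variable
    MessageBody: body,
    MessageGroupId: 'api',            // required for FIFO queues only
    // MessageDeduplicationId is also required for FIFO queues unless
    // ContentBasedDeduplication is enabled on the queue.
  }).promise();

  return { statusCode: 200, body: JSON.stringify({ queued: true }) };
};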
Hope this helps anyone with the same issue.
I'm having a hard time implementing requestSync. It always returns
"error": {
"code": 404,
"message": "Requested entity was not found.",
"status": "NOT_FOUND"
}
I use Node.js/Express for the backend. The linking/unlinking with the Google Home app works, and my actions work as well. It's really the requestSync part that fails.
The closest ticket I've found, though not exactly the same, is this one.
Things I've tried
agentUserId is a string, but if I pass it a number it returns a 400 with the message "Invalid value at 'agent_user_id'".
tried sending agent_user_id instead of agentUserId; this returns the same 404 error as when I send agentUserId
tried removing the "async": true part of the body; did not notice a difference
during SYNC, hardcoded the value of agentUserId to eliminate the possibility that I'm sending the wrong one. I use that same agentUserId during requestSync and it still fails
tried linking/unlinking multiple times in the Google Home app
another interesting thing to note: when opening the "Test Suite" from the Actions on Google console, I put in that same agentUserId + service account key, and it registers well: I'm able to see my devices listed correctly, which leads me to believe that my agentUserId is correct (this may be a false assumption)
I'm 100% sure HomeGraph is enabled as I can see data on the charts in the "Overview" section of the HomeGraph API part of the console.
This is what the curl looks like (same as from the example)
curl -i -s -X POST -H "Content-Type: application/json" -d "{agent_user_id: \"1\"}" "https://homegraph.googleapis.com/v1/devices:requestSync?key=API_KEY"
(my agentUserId is 1 in this case)
And this is what it looks like in code :
const res = await fetch(
`https://homegraph.googleapis.com/v1/devices:requestSync?key=${config.googleApiKey}`,
{
method: 'POST',
body: JSON.stringify({
agentUserId: String(userId),
async: true,
}),
headers: { 'Content-Type': 'application/json' },
},
);
Regardless of what I do, the result is always:
"error": {
    "code": 404,
    "message": "Requested entity was not found.",
    "status": "NOT_FOUND"
}
I don't know where else to look to identify this problem. Any pointers would help. Thank you
Finally found the answer.
It wasn't too far from what I had posted above. The problem was that when I generated an API key, the Google Cloud console had opened the wrong project by default, so I had the wrong API key all along.
I'm testing out a few things in the OAuth 2.0 Playground and trying to get data in and out of Google Fit using their REST API.
I have done this previously with success; I just didn't write down what I did. Now I've come back to make it a proper thing and can't get it working again.
I have access to Google Fit datasources via the dashboard. I can get a list of the dataSources that exist from:
https://www.googleapis.com/fitness/v1/users/me/dataSources
And that is successful. I have also created my own stream, which has a single floating-point weight value on it, called
raw:com.google.weight:b6ac18c0:dten.sync
It already has data in it; I put it there the last time I used it. I can select all that data by making a GET request to the following:
https://www.googleapis.com/fitness/v1/users/me/dataSources/raw:com.google.weight:b6ac18c0:dten.sync/datasets/0-1432193482000000000
It returns all the data points I entered last time as JSON.
I then try to PATCH, adding my own data, to the following URL:
https://www.googleapis.com/fitness/v1/users/me/dataSources/raw:com.google.weight:b6ac18c0:dten.sync/datasets/1432193482000000000-1432193482000000000
With this as the request body:
{
    "minStartTimeNs": "1421912895000000000",
    "maxEndTimeNs": "1432193482000000000",
    "dataSourceId": "raw:com.google.weight:b6ac18c0:dten.sync",
    "point": [
        {
            "startTimeNanos": "1421912895000000000",
            "modifiedTimeMillis": "1421912895000",
            "endTimeNanos": "1421912895000000000",
            "value": [
                {
                    "fPVal": 89.1
                }
            ],
            "dataTypeName": "com.google.weight"
        }
    ]
}
But I get back
{
    "error": {
        "code": 400,
        "message": "Unable to fetch DataSource for Dataset: raw:com.google.weight:b6ac18c0:dten.sync",
        "errors": [
            {
                "domain": "global",
                "message": "Unable to fetch DataSource for Dataset: raw:com.google.weight:b6ac18c0:dten.sync",
                "reason": "invalidArgument"
            }
        ]
    }
}
I can't find anyone referencing a similar error anywhere, so I'm here.
Also note that if I misspell my source it tells me off because it doesn't match the URL, and if I include an empty list of data points I get the same error. I'm quite lost, so I'm throwing it out there to see if anyone knows what this error means.
Thanks in advance.
Edit: I tried changing the hex code to my project's integer code and got an error about an untrusted source, so I tried making a new test data source, which works as expected. Slightly annoyed, but I guess I'll just start over.
OK, I was being silly and hadn't set up my own credentials in the OAuth settings in the top right of the dashboard, as it says to here. I forgot that bit -_- Now I can access my own stream again, and it shows my integer project id in the stream id, not the hex one.
https://developers.google.com/fit/rest/v1/get-started
Now I get invalid argument, but.. whatever >_<
edit 2:
The invalid argument error was because I had fPVal instead of fpVal, and modifiedTimeMillis is not supposed to be submitted, obviously.
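For reference, a corrected version of the request body from the question with those two fixes applied (the dataSourceId is left as the original placeholder, though per the edit above it should carry the integer project id rather than the hex one):
{
    "minStartTimeNs": "1421912895000000000",
    "maxEndTimeNs": "1432193482000000000",
    "dataSourceId": "raw:com.google.weight:b6ac18c0:dten.sync",
    "point": [
        {
            "startTimeNanos": "1421912895000000000",
            "endTimeNanos": "1421912895000000000",
            "value": [
                {
                    "fpVal": 89.1
                }
            ],
            "dataTypeName": "com.google.weight"
        }
    ]
}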
For an error case when calling an HTTP REST service API, the response is as follows:
{
    "statusCode": "400",
    "error": "Bad Request",
    "message": "Can not construct instance of java.math.BigDecimal from String value 'a': not a valid representation\n at [Source: org.apache.cxf.transport.http.AbstractHTTPDestination$1@2f650e17; line: 1, column: 2] (through reference chain: com.foo.services.dto.request.ItemToUpdate[\"quantity\"])",
    "validation": {
        "source": "PAYLOAD",
        "keys": ["key"]
    },
    "errorIdentifiers": []
}
I am wondering whether the message field in the response is appropriate. It does reveal a certain level of implementation detail to the end user. Is this considered:
no particular problem at all,
just a cosmetic issue that won't cause a serious problem, merely not readable to the end user, or
a potential security risk that definitely needs to be fixed?
I think that you should only log the stack trace on the server side. IMO it's a technical hint (besides, the end user may not even be using Java to interact with your API), and the only thing that interests the end user of your API is that there is a validation error within the provided data.
Another remark is that you include the status code and status message within your response payload. I think you don't need to duplicate these, since they are already present in the response.
I would suggest an error message like that:
{
    "messages": {
        "quantity": "this must be a valid number"
    }
}
I use a JSON structure for the field messages since there could be several validation errors within the provided data. Note that it's only a suggestion and you could extend this to your exact needs.
Hope it helps.
Thierry
I'm trying to batch update a bunch of existing records through Marketo's REST API. According to the documentation, the Import Lead function seems to be ideal for this.
In short, I'm getting the error "610 Resource Not Found" upon using the curl sample from the documentation. Here are some steps I've taken.
Fetching the auth_token is not a problem:
$ curl "https://<identity_path>/identity/oauth/token?
grant_type=client_credentials&client_id=<my_client_id>
&client_secret=<my_client_secret>"
Proving the token is valid, fetching a single lead isn't a problem either:
# Fetch the record - outputs just fine
$ curl "https://<rest_path>/rest/v1/lead/1.json?access_token=<access_token>"
# output:
{
    "requestId": "ab9d#12345abc45",
    "result": [
        {
            "id": 1,
            "updatedAt": "2014-09-18T13:00:00+0000",
            "lastName": "Potter",
            "email": "harry@hogwartz.co.uk",
            "createdAt": "2014-09-18T12:00:00+0000",
            "firstName": "Harry"
        }
    ],
    "success": true
}
Now here's the pain: when I try to upload a CSV file using the Import Lead function, like so:
# "Import Lead" function
$ curl -i -F format=csv -F file=@test.csv -F access_token=<access_token> \
    "https://<rest_path>/rest/bulk/v1/leads.json"
# results in the following error
{
    "requestId": "f2b6#14888a7385a",
    "success": false,
    "errors": [
        {
            "code": "610",
            "message": "Requested resource not found"
        }
    ]
}
The error codes documentation only states Requested resource not found, nothing else. So my question is: what is causing the 610 error code - and how can I fix it?
Further steps I've tried, with no success:
Placing the access_token as a URL parameter (e.g. appending '?access_token=xxx' to the URL), with no effect.
Stripping down the CSV (yes, it's comma separated) to a bare minimum (e.g. only the fields 'id' and 'lastName')
Looked at the question Marketo API and Python, Post request failing
Verified that the CSV doesn't have some funky line endings
I have no idea if there are specific requirements for the CSV file, like column orders, though...
Any tips or suggestions?
Error code 610 can represent something akin to a '404' for URLs under the REST endpoint, i.e. your rest_path. I'm guessing this is why you are getting it: Marketo's docs show REST paths as starting with '/rest', yet the REST endpoint itself already ends with '/rest', so if you follow their directions literally you get a URL like xxxx.mktorest.com/rest/rest/v1/lead/..., i.e. with '/rest' twice. This is not correct; your URL must have only one '/rest'.
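For example (the host below is a placeholder in the style of the Marketo endpoints above):
incorrect, '/rest' appears twice if the docs are followed literally: https://xxxx.mktorest.com/rest/rest/v1/lead/1.json
correct, only one '/rest': https://xxxx.mktorest.com/rest/v1/lead/1.json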
I went through the same trouble; I just want to share some points that helped resolve my problem.
Bulk API endpoints are not prefixed with ‘/rest’ like other endpoints.
Bulk Import uses the same permissions model as the Marketo REST API and does not require any additional special permissions to use, though specific permissions are required for each set of endpoints.
As @Ethan Herdrick suggested, the endpoints in the documentation are sometimes prefixed with an extra /rest; make sure to remove that. A corrected Bulk Import call is sketched below.
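A sketch of what the corrected Bulk Import call from the question might look like, assuming the same host placeholder and test.csv file as in the question (note there is no '/rest' before '/bulk'):
curl -i -F format=csv -F file=@test.csv -F access_token=<access_token> \
    "https://<rest_path>/bulk/v1/leads.json"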
If you're a beginner and need step-by-step instructions to set up permissions for Marketo REST API: Quick Start Guide for Marketo REST API