Access Denied Error when trying to post messages to SQS - aws-api-gateway

I am trying to create an API that logs JSON request bodies in an SQS queue.
I have set up a basic queue in SQS in both FIFO and non-FIFO configurations, and I hit the same problem each time. My access policy for the SQS queue is as follows:
{
  "Version": "2012-10-17",
  "Id": "arn:aws:sqs:us-east-1:2222222222222:API-toSQS.fifo/SQSDefaultPolicy",
  "Statement": [
    {
      "Sid": "Sid22222222222",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "SQS:*",
      "Resource": "arn:aws:sqs:us-east-1:2222222222222:API-toSQS.fifo"
    }
  ]
}
I have also created an IAM policy that grants all SQS write abilities, and a role for API Gateway to which I have assigned that policy. Here is the policy assigned to the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "sqs:DeleteMessage",
        "sqs:ChangeMessageVisibility",
        "sqs:DeleteMessageBatch",
        "sqs:SendMessageBatch",
        "sqs:PurgeQueue",
        "sqs:DeleteQueue",
        "sqs:SendMessage",
        "sqs:CreateQueue",
        "sqs:ChangeMessageVisibilityBatch",
        "sqs:SetQueueAttributes"
      ],
      "Resource": "*"
    }
  ]
}
I have set up an API Gateway and created a POST method. I've tried enabling the CORS option (which creates an OPTIONS method) and I've also done it without CORS enabled. The ARN for my security policy is correct; I have triple-checked it. I opt for the path override and have the full HTTPS URL of my SQS queue there; I have triple-checked this as well. My endpoint is SQS, of course.
For the integration request I have an HTTP header for Content-Type with Mapped From set to 'application/x-www-form-urlencoded'.
In mapping templates I have passthrough set to Never, with a Content-Type of application/json, and I have included the template Action=SendMessage&MessageBody=$input.body to translate the body to URL-encoded form, as per a walkthrough I found.
I am getting the following error in the API Gateway test area:
<AccessDeniedException>
<Message>Unable to determine service/operation name to be authorized</Message>
</AccessDeniedException>
Is there an AWS guru out there who can steer me in the right direction?
To clarify, my issue is that it should be adding my test body
{"peanutbutter":"jelly"}
to the SQS queue, but no luck.
I can send URL-encoded messages to SQS all day from Postman, but I want my business partners to be able to send a clean JSON object via HTTP (Postman, Node, etc., whatever).
Thank you!

i opt for the override path and have the full https URL of my SQS queue there
In Path override, enter only the path part of the SQS queue URL: 2222222222222/API-toSQS.fifo
Also, MessageGroupId is required for FIFO queues, and if ContentBasedDeduplication is not enabled, MessageDeduplicationId is required too.
Example of mapping template:
Action=SendMessage&MessageGroupId=$input.params('MessageGroupId')&MessageDeduplicationId=$input.params('MessageDeduplicationId')&MessageBody=$input.body
In this case you need to define MessageGroupId and MessageDeduplicationId as required query string parameters in Method Request and, of course, pass them on requests to the API endpoint.
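For illustration, a call to the deployed endpoint might then look like this (a sketch in Python; the invoke URL, resource path, and parameter values are placeholders):

import requests

resp = requests.post(
    "https://abc123.execute-api.us-east-1.amazonaws.com/prod/enqueue",  # placeholder invoke URL
    params={
        "MessageGroupId": "orders",           # required for FIFO queues
        "MessageDeduplicationId": "order-1",  # required unless ContentBasedDeduplication is enabled
    },
    json={"peanutbutter": "jelly"},           # becomes MessageBody via the mapping template
)
print(resp.status_code, resp.text)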

For anyone having this same issue: removing all of the settings from the integration request in API Gateway and using Lambda as a "middleman" worked. Lambda is a great go-between for almost all of the AWS services. I would prefer an API Gateway -> SQS stack instead of API Gateway -> Lambda -> SQS, but for whatever reason letting the Lambda handle the HTTP request, rather than trying to configure API Gateway to do the same, works without issue.
You will not need any external resources in Lambda, so no importing zip files. Just import AWS and SQS, use the basic handler structure to accept the event, then take the body (JSON in my case) and sqs.sendMessage it to your queue.
Hope this helps anyone with the same issue.
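The answer above describes a Node.js handler; an equivalent sketch in Python with boto3 (assuming a Lambda proxy integration, with a placeholder queue URL) could look like this:

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/222222222222/API-toSQS"  # placeholder

def handler(event, context):
    # With a proxy integration, the raw request body arrives as a string.
    body = event.get("body") or "{}"
    json.loads(body)  # validate that the body is JSON before enqueueing
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
    return {"statusCode": 200, "body": json.dumps({"queued": True})}

For a FIFO queue, send_message would also need a MessageGroupId (and a MessageDeduplicationId unless content-based deduplication is enabled).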

Related

How to know the structure (body) of an Azure REST API POST request?

I am new to the Azure REST API and I don't know how to get the correct body template of a policy.
For example, I used:
GET https://dev.azure.com/organization/project/_apis/policy/types?api-version=7.0
and the response contains the types of policies I can use, but how do I know the structure of the request body? Like this one:
{
  "isEnabled": true,
  "isBlocking": false,
  "type": {
    "id": "fa4e907d-c16b-4a4c-9dfa-4906e5d171dd"
  },
  "settings": {
    "minimumApproverCount": 4,
    "creatorVoteCounts": false,
    "scope": [
      {
        "repositoryId": "a957e751-90e5-4857-949d-518cf5763394",
        "refName": "refs/heads/master",
        "matchKind": "exact"
      }
    ]
  }
}
Where should I find those request body templates? :(
Resources: https://learn.microsoft.com/en-us/rest/api/azure/devops/policy/configurations/create?view=azure-devops-rest-5.1&tabs=HTTP
Usually, when you can list or get the repo policy correctly, you can use the configuration part of the returned result as the request body for creating the policy with the POST method.
REST API to list the branch policies:
GET https://dev.azure.com/{organization}/{project}/_apis/policy/configurations?api-version=5.1
with optional parameters:
GET https://dev.azure.com/{organization}/{project}/_apis/policy/configurations?scope={scope}&policyType={policyType}&api-version=5.1
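As a sketch of that flow in Python (organization, project, and the personal access token are placeholders; the PAT is passed via basic auth):

import requests

org, project = "myorg", "myproject"  # placeholders
base = f"https://dev.azure.com/{org}/{project}/_apis/policy/configurations"
auth = ("", "<personal-access-token>")  # placeholder PAT

# List the existing branch policies and pick one as a body template.
configs = requests.get(base, params={"api-version": "5.1"}, auth=auth).json()
template = configs["value"][0]

# Keep only the fields the create call expects, then POST it back.
body = {k: template[k] for k in ("isEnabled", "isBlocking", "type", "settings")}
created = requests.post(base, params={"api-version": "5.1"}, auth=auth, json=body)
print(created.status_code, created.json())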
You could check the templates below for different configurations in Policy template examples.
Examples
Approval count policy
Build policy
Example policy
Git case enforcement policy
Git maximum blob size policy
Merge strategy policy
Work item policy
If you still don't know how to compose the request body, you could also share your scenario.
I finally made it. It was very hard, and I don't understand why Microsoft has such bad documentation... I had to do it by sending random requests and looking at the elements to learn what the names are. So bad, so much time spent...

How do I add an apitoken as part of a custom VSTS service endpoint datasource?

I am trying to implement a VSTS extension which adds a new service endpoint. Crucially, the authentication method for this service includes the API token as part of the query string.
I am using the "type": "ms.vss-endpoint.endpoint-auth-scheme-token" for AuthenticationScheme.
I've defined the dataSources like so:
"dataSources": [
{
"name": "TestConnection",
"endpointUrl": "{{endpoint.url}}projects?token={{endpoint.apitoken}}"
}
]
However, when performing a test to Verify Connection, I get:
Failed to query service endpoint api: https://myserver.com/projects?token=.
endpoint.apitoken is always blank.
Is there a placeholder/replacement value that can be used to get access to this value or another way of achieving the same end result?
I've tried using different authentication schemes (such as 'none') and included an inputDescriptor to capture my apitoken, but I get the same result. There doesn't seem to be a way to reference these values?
No, it is not supported. This article may benefit you: Service endpoint authentication schemes

Escaping AWS IAM Policy Variable for API Gateway Permissions

I currently have SAML integration setup and working as expected between my authentication provider (auth0) and AWS/AWS API Gateway.
The complications arise however when defining an AWS Policy with the ${saml:sub} variable.
Here's an example of my configuration:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "execute-api:*"
      ],
      "Resource": [
        "arn:aws:execute-api:us-west-2:[removed]/*/GET/customers/${saml:sub}"
      ]
    }
  ]
}
Basically I want to ensure that this endpoint is only accessible by the currently auth'd in user (based on their saml:sub). The currently auth'd user should not be able to access another customers record. Seems like this should be a potentially common use-case.
Auth0 automatically assigns saml:sub, and the format of the ID is something like this:
auth0|429
I'm assuming the issue lies with the pipe character being there and it being compared against an automatically escaped value when the request is made to the API Gateway URL via the browser. Because of this, I'm assuming access is denied to the resource because
auth0|429 != auth0%7C429.
Is there a way within an IAM policy to work around this?
Is there a potential workaround on the Auth0 side to assign a different value to ${saml:sub}?
Appreciate all the potential solutions above! Ultimately I ended up abandoning SAML integration between Auth0 and AWS and opting for a custom authorizer via a lambda function inside of API Gateway. This allowed for a little more flexible setup.
For anyone else facing a similar scenario, I came across this GitHub project that's been working great so far:
https://github.com/jghaines/lambda-auth0-authorizer
I modified the project for our own purposes a little bit, but essentially what we've done is mapped our internal user ID to the AWS principalId.
On the API Gateway side we've set up a /customers/me resource and then, on the integration request, modified the URL Path Parameters like so:
[Integration Request screenshot]
Our policy in our Lambda function is set up like so:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "324342",
      "Effect": "Allow",
      "Action": [
        "execute-api:Invoke"
      ],
      "Resource": [
        "arn:aws:execute-api:us-west-2:[removed]/*/GET/customers/me"
      ]
    }
  ]
}
This allows for dynamic access to the endpoint and only returns data specific to the logged in user.
In my opinion the issue you described should be solved/handled from within the AWS Policy configuration, but since I'm not knowledgeable on that, I'll offer you a workaround from the perspective of avoiding potentially troublesome characters.
You can configure and override the default SAML mappings that Auth0 uses to output user information and as such control the attributes used for each of the output claims and the SAML subject.
Check SAML attributes mapping for an overview on how to do this.
Additionally, check SAML configuration via rules for a detailed view of all the available options.
By default, Auth0 will check the following claims in order to decide the one to be used as the SAML subject:
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name
The IAM Policy won't be able to recognize ${saml:sub} in the actual resource ARN. Beyond that, API GW won't automatically understand a SAML assertion.
Are you using a custom authorizer Lambda function to parse the SAML assertion? If so, you would want to parse out the 'sub' field and insert it directly into the policy returned from the authorizer, like so:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "execute-api:*"
      ],
      "Resource": [
        "arn:aws:execute-api:us-west-2:[removed]/*/GET/customers/auth0|429"
      ]
    }
  ]
}
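A minimal sketch of such an authorizer in Python (validate_token is a hypothetical helper standing in for your SAML/JWT validation logic):

def validate_token(token):
    # Hypothetical stub: verify the incoming assertion/token here
    # and return its subject claim.
    return "auth0|429"

def handler(event, context):
    sub = validate_token(event["authorizationToken"])
    return {
        "principalId": sub,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": "execute-api:Invoke",
                # Scope the caller to their own customer record only.
                "Resource": "arn:aws:execute-api:us-west-2:[removed]/*/GET/customers/" + sub,
            }],
        },
    }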
If you're already that far and it's still not working as expected, then you're right, it may be that the URI is not being normalized depending on the client/browser encoding. I'd have to test that. But as long as your backend treats /customers/auth0|429 == /customers/auth0%7C429, you could safely build a policy that allows both unencoded and encoded versions of the resource:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "execute-api:*"
      ],
      "Resource": [
        "arn:aws:execute-api:us-west-2:[removed]/*/GET/customers/auth0|429",
        "arn:aws:execute-api:us-west-2:[removed]/*/GET/customers/auth0%7C429"
      ]
    }
  ]
}
If you're not using custom authorizers, please elaborate on what your setup looks like. But either way, unfortunately the IAM policy won't ever be able to evaluate the ${var} syntax in the resource block.

403 forbidden error on S3 REST API HEAD request

I'm trying to do a HEAD Object request to the S3 REST API, but I keep getting a 403 Forbidden error, even though I have the policy set up with the necessary permissions on S3. The response body is empty, so I don't think it's a signature problem. I've tried several changes to the policy; nothing seems to make it work. I'm able to PUT and DELETE objects normally; just HEAD doesn't work.
Here's my bucket policy:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::999999999999:user/User"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::999999999999:user/User"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
Any ideas?
Update:
As Michael pointed out, it seems to be a problem with my signature, though I'm failing to see what.
require 'openssl'
require 'base64'
require 'cgi'

def generate_url(options = {})
  options[:action] = options[:action].to_s.upcase
  options[:expires] ||= Time.now.to_i + 100
  file_path = "/" + @bucket_name + "/" + options[:file_name]

  # Signature V2 string to sign: VERB\nContent-MD5\nContent-Type\nExpires\nResource
  string_to_sign = ""
  string_to_sign += options[:action]
  string_to_sign += "\n\n#{options[:mime_type]}\n"
  string_to_sign += options[:expires].to_s
  string_to_sign += "\n"
  string_to_sign += file_path

  # HMAC-SHA1 over the string to sign, base64-encoded, then URL-escaped.
  signature = CGI.escape(
    Base64.strict_encode64(
      OpenSSL::HMAC.digest('sha1', SECRET_KEY, string_to_sign)
    )
  )

  url = "https://s3.amazonaws.com"
  url += file_path
  url += "?AWSAccessKeyId=#{ACCESS_KEY}"
  url += "&Expires=#{options[:expires]}"
  url += "&Signature=#{signature}"
  url
end
The generated string to sign looks like this:
HEAD\n\n\n1418590715\n/video-thumbnails/1234.jpg
Solution:
It seems that at some point while developing the file PUT part, I actually broke GET and HEAD. I was passing an empty string as the body of the request instead of passing nothing, which made the MIME type required in the signature and broke it, because I was providing no MIME type. I simply removed the empty request body and it worked perfectly. Thanks Michael for pointing me away from the wrong direction I was headed in (I wasted so much time changing the bucket policy).
It still could be your signature, and I suspect that it is, for the following reasons:
Your observation about the empty message body is a good observation; however, it doesn't mean what you have concluded it means.
The lack of a response body does not give you any information at all about the nature of the error, in this case, because a web server is not supposed to return a body along with a HEAD response, no matter what:
The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response
— http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html (RFC-2616)
Testing this on my side, I've confirmed that S3's response to an unsigned HEAD request and to an incorrectly-signed HEAD request is no different: it's always HTTP/1.1 403 Forbidden with no message body.
Note, also, that a signed URL for GET is not valid for HEAD, and vice versa.
In both S3 Signature Version 2 and S3 Signature Version 4, the "String to Sign" includes the "HTTP Verb," which would be GET or HEAD, meaning that a signature that's valid for GET would not be valid for HEAD, and vice versa... the request method must be known at the time of signing, because it's an element that's used in the signing process.
The s3:GetObject permission is the only documented permission required for using HEAD, which seems to eliminate permissions as the problem, if GET is working, which points back to the signature as the potential issue.
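A consequence of the verb being signed is that a URL must be presigned specifically for HEAD. A sketch with boto3 (bucket and key are placeholders); "head_object" signs for the HEAD verb, while "get_object" would sign for GET:

import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    ClientMethod="head_object",
    Params={"Bucket": "video-thumbnails", "Key": "1234.jpg"},
    ExpiresIn=100,
)
print(url)  # valid only for HEAD requests, not GET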
Confirmed that a HEAD request to a presigned URL (signed for GET) will get 403 Forbidden. If custom headers such as the object's Content-Type are set, the 403 response will not contain the custom headers, and the Content-Type will still be application/xml.
An additional comment on Michael-sqlbot's answer above...
I faced the identical symptoms but I had a different root cause.
If you are trying to HEAD a file which does not exist, this will also return a 403 Forbidden error UNLESS you have the s3:ListBucket permission.
In my case, I had the s3:GetObject, s3:PutObject, and s3:HeadBucket permissions, but it wasn't until I added s3:ListBucket that I got the correct 404 Not Found error.
This is also explained here: https://aws.amazon.com/premiumsupport/knowledge-center/s3-rest-api-cloudfront-error-403/
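For reference, a minimal sketch of the extra statement (note that s3:ListBucket applies to the bucket ARN itself, not to the objects under it; the bucket name is a placeholder):

{
  "Effect": "Allow",
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::my-bucket"
}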
I had the same issue but with a different root cause: I was trying to create a bucket, and instead of getting a 404 from the HEAD request, I got a 403. As S3 is globally namespaced, someone else had created the bucket, so while I had the correct permissions and setup for my account, I would still get 403 from a HEAD request. The solution was to check whether the bucket exists globally first, and if so, try a different bucket name.
I was also getting this error as a red herring, during pytest with freezegun. I had frozen time to a point in the past and was getting a 403 error, so clock skew can also cause this.
I found this by trying another API call, where I received:
E botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the ListObjects operation: The difference between the request time and the current time is too large.

What policy Action(s) on what Resource(s) are required to run New-EC2Tag from Powershell?

I'm attempting to run New-EC2Tag, getting the following error:
New-EC2Tag : You are not authorized to perform this operation.
The user policy is as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances", "ec2:CreateTags"],
      "Resource": "arn:aws:ec2:ap-southeast-2:<my_account_id>:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/OctopusTentacle": "yes"
        }
      }
    }
  ]
}
It works fine in the Policy Simulator as above.
If I remove the condition and set Resource to *, it works. Removing the condition alone, or setting Resource to * alone, does not work. I am running this as local Administrator on the instance.
What else is New-EC2Tag accessing/doing that I need to grant access to?
If New-EC2Tag works when clearing the Condition and wildcarding the Resource, then we should be inspecting both of those.
From some investigation, New-EC2Tag's related API action is CreateTags. According to Supported Resources and Conditions for Amazon EC2 API Actions, some API actions do not support ARNs. This seems to be the case with CreateTags, as it requests that you specify a resource ID instead. This is also corroborated by the "Supported Resources..." documentation linked above, which does not list CreateTags as supporting ARNs.
In this case, the documentation recommends that you set the policy as such:
If the API action does not support ARNs, use the * wildcard to specify
that all resources can be affected by the action.
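Following that recommendation, a sketch of the statement with the instance ARN replaced by the wildcard:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances", "ec2:CreateTags"],
      "Resource": "*"
    }
  ]
}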
So that leaves the condition... the tag. The tag that you are using as a condition needs to already exist on the instance for the policy to be applied as you expect. [Example from the policy simulator, where the tag already exists.]
Another consideration is that the action may likewise not support conditions, but I haven't found anything to back that up.