Lambda authorizer policy not restricting access to API Gateway proxy resource - aws-api-gateway

I have a Lambda authorizer (Python) that returns an IAM policy document similar to the following:
def lambda_handler(event, context):
    resource = "*"
    # _get_header_value is a helper defined elsewhere in this module
    headerValue = _get_header_value(event, 'my-header')
    if headerValue == 'a':
        resource = "arn:aws:execute-api:*:*:*/*/GET/a"
    return {
        "principalId": "somebody",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": "Allow",
                    "Resource": resource
                }
            ]
        }
    }
Basically, this authorizer returns an unrestricted API resource policy by default, using *. However, if a specific header value is passed, the policy restricts access to allow only GET /a.
On the API Gateway side of things, the only resource I have is ANY /{proxy+}, which proxies into a .NET Core WebApi using APIGatewayProxyFunction. Inside the APIGatewayProxyFunction/WebApi, I have a number of controllers and routes available, including GET /a. After all this is deployed into AWS, I can construct an HTTP request using my-header with value a. I'm expecting this request to only provide access to GET /a and to return a 403 in all other cases. Instead, it provides access to everything in the API, just like the star policy.
Is this the expected behavior when using a Lambda authorizer in front of a proxy resource? It seems to only really enforce Allow * or Deny *. Thank you.
Note - When using the same authorizer against an API Gateway where all the resources are defined inside it (instead of inside .NET controllers by proxy), the expected behavior does occur: the HTTP request with my-header set to 'a' will grant access to GET /a but return 403 otherwise.
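For reference, a common pattern (a minimal sketch of my own, not the code above) is to scope the returned policy to the concrete API and stage by deriving them from event['methodArn'], which API Gateway passes to the authorizer:

def lambda_handler(event, context):
    # methodArn has the shape:
    # arn:aws:execute-api:region:account-id:api-id/stage/VERB/resource-path
    arn_base, stage = event["methodArn"].split("/")[:2]
    # Allow only GET /a on this specific API and stage
    resource = f"{arn_base}/{stage}/GET/a"
    return {
        "principalId": "somebody",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": "Allow",
                    "Resource": resource
                }
            ]
        }
    }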

Related

CloudFront to API Gateway request returns 403: "The request signature we calculated does not match the signature you provided."

I have an API Gateway fronted by CloudFront. The API Gateway has a regional endpoint with API keys disabled. An Authorization header must be sent to the regional endpoint, or the endpoint returns "Missing Authentication Token" as expected.
Using the same request on the CloudFront endpoint returns the following 403 Forbidden error:
{
    "message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n\nThe Canonical String for this request should have been\n'POST
    // sensitive data here...
}
The Auth token is created from an AWS signature. The signature originates from an IAM role that allows invocation on the endpoint: "Action": "execute-api:Invoke"
Any ideas on why CloudFront isn't able to use these credentials to hit the API Gateway endpoint?
In summary,
"Postman w/ Authorization header -> API Gateway endpoint" works.
"Postman w/ Authorization header -> CloudFront -> API Gateway endpoint" returns the above 403.
UPDATE: Adding information on how I obtain the signature.
IAM role policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-west-2:{ACCOUNT}:{ENDPOINT}",
            "Effect": "Allow"
        }
    ]
}
AccessKey, SecretKey, Session Token are obtained in CloudShell:
$ aws sts assume-role --role-arn arn:aws:iam::{ACCOUNT}:{ROLE} --role-session-name {SESSION_NAME}
These three values are then used in Postman's Authorization tab: I select the "AWS Signature" type and provide the AccessKey, SecretKey, and SessionToken.
From here, I can hit the API Gateway endpoint and receive 200 response. With the same request and headers, hitting the CloudFront endpoint results in the 403.
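For reference, the Authorization header Postman computes can be reproduced with botocore (a minimal sketch; the keys and URL are placeholders). Note that SigV4 includes the Host header in the canonical request, so a signature computed for the execute-api hostname will not validate for a request that arrives via a different hostname:

from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.credentials import Credentials

# Placeholders -- in practice these come from the assume-role call above
creds = Credentials("ACCESS_KEY", "SECRET_KEY", "SESSION_TOKEN")
request = AWSRequest(
    method="POST",
    url="https://{api-id}.execute-api.us-west-2.amazonaws.com/{stage}/resource",
    data=b"{}",
)
# Sign for the execute-api service in us-west-2; the signature hashes the
# canonical request, which includes the Host header
SigV4Auth(creds, "execute-api", "us-west-2").add_auth(request)
print(request.headers["Authorization"])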
UPDATE #2: Adding information on CloudFront configuration.
The distribution behavior for the API GW origin is using the CachingOptimized policy. It's also allowing all HTTP methods.

How to implement cross-account RBAC using Cognito User groups and API Gateway?

I have 2 AWS accounts. The front end, along with Cognito, is hosted in Account 1, and the backend with the API Gateway is hosted in Account 2. I want to set up RBAC to prevent users in a Cognito group from calling 'DELETE' on the APIs. I have created a permission policy as below, attached it to a role, and then attached the role to the Cognito group. I have then created an authoriser for the API Gateway in Account 2 using the Cognito user pool available in Account 1 and attached the authoriser to the API's DELETE method request.
Deny Policy, where I have replaced the resource parameters with my account/API details:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "execute-api:Invoke"
            ],
            "Resource": [
                "arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path"
            ]
        }
    ]
}
But when I invoke the DELETE method on the API, I am still able to do so successfully, whereas I expect to get unauthorised as per the setup. I can see the Cognito user group details when I decode the token response, so my guess is the Cognito call is happening properly with API Gateway, but the role/Deny policy attached is not being enforced. Can someone please help me understand what I am doing wrong? Since this is cross-account, do I have to do something else with the IAM role I have attached to the Cognito group, or is there an issue with the policy I am using?
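One commonly suggested alternative (a sketch only, and a different technique than the group-role approach above) is a Lambda authorizer that inspects the cognito:groups claim itself and returns a Deny; the group name here is a placeholder:

import base64
import json

def lambda_handler(event, context):
    # NOTE: this decodes the JWT payload without verifying it; a real
    # authorizer must first validate the signature against the pool's JWKS.
    token = event["authorizationToken"].split(" ")[-1]
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    groups = payload.get("cognito:groups", [])

    # "restricted-group" is a placeholder for the Cognito group to block
    effect = "Deny" if "restricted-group" in groups else "Allow"
    return {
        "principalId": payload.get("sub", "unknown"),
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"]
                }
            ]
        }
    }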

AppSync request from API Gateway: Valid authorization header not provided

I have an AWS architecture like this:
An API Gateway with many endpoints. One of them is "/graphql"
The "/graphql" API Gateway endpoint points to a "/graphql" AppSync endpoint
My API Gateway uses a COGNITO_USER_POOLS authorizer to authenticate users. When a user makes a request to the "/graphql" endpoint of API Gateway, they must add the id_token to the "Authorization" header of the request. It works well.
My integration method on API Gateway gets the "Authorization" header and puts it on AppSync request using this HTTP Headers mapping:
Authorization = method.request.header.Authorization
It seems to work correctly as well. Nevertheless, I get this AppSync error when requesting the API Gateway endpoint:
{
    "errors": [
        {
            "errorType": "UnauthorizedException",
            "message": "Valid authorization header not provided."
        }
    ]
}
It doesn't seem to be a token problem, because it works correctly when I request the AppSync endpoint directly (with the same Authorization header).
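For reference, the direct AppSync call that works looks roughly like this (a sketch; the endpoint URL, token, and query are placeholders):

import requests

APPSYNC_URL = "https://example123.appsync-api.eu-west-1.amazonaws.com/graphql"
ID_TOKEN = "eyJ..."  # id_token obtained from the Cognito User Pool

# AppSync with Cognito User Pool auth expects the raw id_token,
# with no "Bearer " prefix and no SigV4 signature
response = requests.post(
    APPSYNC_URL,
    json={"query": "query { __typename }"},
    headers={"Authorization": ID_TOKEN},
)
print(response.json())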
I observed that API Gateway adds some headers to the AppSync request to generate a Signature. So my question is: is there any way to make a request to AppSync from API Gateway without passing the Signature, only the id_token that the user got from the Cognito User Pool? I'd like to ignore IAM and use only the token (as I do when the request is made directly to AppSync from Postman).
Many thanks!

PUT Object to AWS S3 via HTTP through VPC Endpoint with proper ACL?

I am using an HTTPS client to PUT an object to Amazon S3 from an EC2 instance within a VPC that has an S3 VPC Endpoint configured. The target Bucket has a Bucket Policy that only allows access from specific VPCs, so authentication via IAM is impossible; I have to use HTTPS GET and PUT to read and write Objects.
This works fine as described, but I'm having trouble with the ACL that gets applied to the Object when I PUT it to the Bucket. I've played with setting a Canned ACL using HTTP headers like the following, but neither results in the correct behavior:
x-amz-acl: private
If I set this header, the Object is private, but it can only be read by the root email account, so this is no good. Others need to be able to access this Object via HTTPS.
x-amz-acl: bucket-owner-full-control
I totally thought this canned ACL would do the trick; however, it resulted in unexpected behavior, namely that the Object became world-readable! I'm also not sure how the Owner of the Object was decided, since it was created via HTTPS; in the console the owner is listed as a seemingly random value. This is the documentation description:
Both the object owner and the bucket owner get FULL_CONTROL over the object. If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
This is totally baffling me because, according to the Bucket Policy, only network resources of approved VPCs should even be able to list the Object, let alone read it! Perhaps it has to do with the union of the ACL and the Bucket Policy and I'm just not seeing something.
Either way, maybe I'm going about this all wrong anyway. How can I PUT an object to S3 via HTTPS and set the permissions on that object to match the Bucket Policy, or otherwise make the Bucket Policy authoritative over the ACL?
Here is the Bucket Policy for good measure:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:GetObject",
                "s3:GetObjectTagging",
                "s3:GetObjectTorrent",
                "s3:GetObjectVersion",
                "s3:GetObjectVersionTagging",
                "s3:GetObjectVersionTorrent",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:ListBucketVersions",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:PutObjectTagging"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:SourceVpc": "vpc-12345678"
                }
            }
        }
    ]
}
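For context, the unsigned PUT described above amounts to something like this (a sketch; the bucket, key, and body are placeholders, and access comes from the bucket policy rather than credentials):

import requests

# No AWS credentials -- access is granted by the bucket policy's
# aws:SourceVpc condition, not by a signature
response = requests.put(
    "https://my-bucket.s3.amazonaws.com/my-object",
    data=b"example body",
    headers={"x-amz-acl": "bucket-owner-full-control"},
)
print(response.status_code)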
The way that S3 ACLs and Bucket Policies work together is based on the concept of "least privilege".
Your bucket policy only specifies Allow for the specified VPC. No one else is granted Allow access, but this is NOT the same as denying access.
This means that your bucket or object ACL is what is granting access.
In the S3 console, double-check who the file owner is after the PUT.
Double-check the ACL for the bucket. What rights have you granted at the bucket level?
Double-check the rights that you are using for the PUT operation. Unless you have granted public write access or the PUT is being allowed by the bucket policy, the PUT must be using a signature. This signature determines the permissions for the PUT operation and who owns the file afterward, and it is in turn determined by the access key used for the signature.
Your x-amz-acl should contain bucket-owner-full-control.
[EDIT after numerous comments below]
The problem that I see is that you are approaching security wrong in your example. I would not use the bucket policy. Instead, I would create an IAM role and assign that role to the EC2 instances that are writing to the bucket. This means that the PUTs are then signed with the IAM role's access keys, which preserves the ownership of the objects. You can then have the ACL be bucket-owner-full-control and public-read (or any supported ACL permissions that you want).
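A sketch of that suggestion with boto3 (the bucket and key names are placeholders; it assumes an instance profile role is attached to the EC2 instance):

import boto3

# Credentials are resolved automatically from the EC2 instance role,
# so the PUT is signed and the object is owned by the role's account
s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-bucket",
    Key="my-object",
    Body=b"example body",
    ACL="bucket-owner-full-control",
)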

User access using kubectl

I want to set up multiple accounts that only have access to their own namespace. We tried authorization mode ABAC, but when using kubectl we get "error: couldn't read version from server: the server does not allow access to the requested resource", and it seems to be a bug. Is there another way to do it?
Before attempting to access your resources, kubectl first makes requests to the server's /version and /api endpoints to confirm compatibility and negotiate the API version. In ABAC, the /version and /api endpoints are considered "nonResourcePaths", but those also require authorization. You can add a rule to your ABAC file allowing all users read-only access to nonResourcePaths as follows:
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"*", "nonResourcePath": "*", "readonly": true}}
From there, you can make it more restrictive if you need to.
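For example, a policy line scoping one user to a single namespace might look like this (the user and namespace names are placeholders):
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "alice-ns", "resource": "*", "apiGroup": "*"}}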