How would one configure a custom SSL policy for a v2 Application Load Balancer via CloudFormation?
This post describes a policy configured via the CLI: "AWS Cloudformation: Loadbalancer Custom SSL Negotiation Policy", but I'm wondering how to customize this on a listener.
SslPolicy in the docs is of type String. Is this a reference to a security policy resource object, or a string of ciphers to enable? I don't want to use a predefined policy.
Thanks.
The SslPolicy on an ALB is a string because only the predefined, named policies are supported.
Application Load Balancers do not support custom security policies.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html#describe-ssl-policies
The available predefined policies for ALBs are also documented at the link above.
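For example, a minimal CloudFormation sketch of an HTTPS listener naming one of the predefined policies (the !Ref targets are placeholders for your own resources):

```yaml
Listener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref MyLoadBalancer
    Port: 443
    Protocol: HTTPS
    # Must be one of the predefined policy names from the docs above;
    # an arbitrary cipher string is not accepted here.
    SslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01
    Certificates:
      - CertificateArn: !Ref MyCertificate
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref MyTargetGroup
```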
I have a list of APIs running in Kubernetes behind a service (under different paths). Azure is our identity provider, and our clients use the client-credentials OAuth2 flow to generate the OAuth token and send it to the API, where authorization checks take place. Each of our APIs needs a different SLA for each user. Hence I am looking to rate-limit the APIs per client-id encoded in the token (azp is the claim under which the client-id is present in Azure v2.0 tokens).
We are already using Envoy as the ingress gateway in our Kubernetes cluster, but it supports only global or per-IP rate limiting. We also looked at nginx, but did not find much difference. ChatGPT suggested other gateways like Tyk and Apigee Edge, but they don't seem to have this functionality. The closest suggestion was the Kong gateway, which rate-limits based on consumer groups (but I did not find anything in the Kong documentation about per-OAuth-client rate limiting, or how a consumer can map to a client-id).
Does any API gateway support such a rate-limiting feature?
You can extend nginx with Lua scripting. I've not used it for this specifically, but it occurs to me that you could run a Lua script to parse the JWT and then use the client-id as the zone key for nginx's normal rate-limiting feature, along the lines of the sketch below.
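A rough, untested sketch of that idea, assuming OpenResty (nginx with the Lua module) and its bundled lua-cjson library; the zone size, rate, paths, and backend are placeholders. It decodes the JWT payload without verifying the signature (the APIs behind the gateway still do the real authorization) purely to key the rate limit:

```nginx
http {
    # Rate-limit state keyed on the per-request $client_id variable.
    limit_req_zone $client_id zone=per_client:10m rate=10r/s;

    upstream backend {
        server 127.0.0.1:8080;  # placeholder for the real API service
    }

    server {
        listen 80;

        location /api/ {
            set $client_id "";

            # rewrite_by_lua runs before the preaccess phase, where limit_req
            # evaluates its key, so $client_id is populated in time.
            rewrite_by_lua_block {
                local cjson = require "cjson.safe"
                local auth = ngx.var.http_authorization or ""
                local token = auth:match("^[Bb]earer%s+(.+)$")
                if token then
                    -- The JWT payload is the second dot-separated segment,
                    -- base64url-encoded.
                    local payload = token:match("^[^.]+%.([^.]+)%.")
                    if payload then
                        payload = payload:gsub("-", "+"):gsub("_", "/")
                        payload = payload .. string.rep("=", (4 - #payload % 4) % 4)
                        local decoded = ngx.decode_base64(payload)
                        local claims = decoded and cjson.decode(decoded)
                        if claims and claims.azp then
                            -- azp: the client-id claim in Azure v2.0 tokens
                            ngx.var.client_id = claims.azp
                        end
                    end
                end
                -- Fall back to per-IP limiting when no usable token is present.
                if ngx.var.client_id == "" then
                    ngx.var.client_id = ngx.var.remote_addr
                end
            }

            limit_req zone=per_client burst=20 nodelay;
            proxy_pass http://backend;
        }
    }
}
```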
We have a custom domain feature, which allows our clients to use our services with their own DNS records.
For example, our client ACME has a CNAME to ssl.company.com, like so: login.acme.com -> ssl.company.com. Right now we are using a k8s cluster to serve such traffic. For each custom domain, we create an Ingress, an external Service, and a certificate using the Let's Encrypt cert-manager.
We started using the Cloudflare WAF, and they provide a CustomHostname feature which allows us to do the same as our custom-domain cluster but without changing the Host header. So for the example above we get:
host: login.acme.com -> login.acme.com
SNI: login.acme.com -> ssl.company.com
The issue is, of course, how to map a generic k8s Ingress to allow such traffic.
When we did the POC we used this method and it worked, but now it has stopped working. We have also tried a default backend and an unhosted ingress path (a catch-all rule with no host field, as in the sketch below).
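For reference, a minimal sketch of such a catch-all Ingress, assuming the nginx-ingress controller; the backend Service name and the TLS secret are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: custom-domains-catchall
spec:
  ingressClassName: nginx
  rules:
    # No "host" field: this rule matches any Host header,
    # e.g. login.acme.com arriving with SNI ssl.company.com.
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: custom-domain-backend   # placeholder Service
                port:
                  number: 443
  tls:
    - hosts:
        - ssl.company.com
      secretName: ssl-company-com-tls         # cert-manager certificate
```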
We are using the nginx-ingress controller but are migrating to another API gateway, like Kong.
Any help would be appreciated.
We have our frontend application deployed on CloudFront, and the backend APIs are hosted on Kubernetes (EKS).
We have use cases where we call the backend APIs from CloudFront (the frontend). Obviously, we don't want to expose the backend APIs publicly.
So now the question is: how should we implement the above use case? Can someone please help us?
Thanks in advance.
You have multiple options to follow; which one suits best depends on your setup.
Option 1:
Change the origin of the frontend: instead of S3, use EKS as the origin with CloudFront.
This might require extra things to set up and manage, so it's not a good idea.
Option 2:
Set up a WAF with the Nginx ingress controller, or on the Ingress running inside EKS.
With the WAF you can specify that requests should be accepted only from a specific domain (origin); a common variant of this check is sketched after the options below.
Example: https://medium.com/cloutive/exposing-applications-at-aws-eks-and-integrating-with-other-aws-services-c9eaff0a3c0c
Option 3:
You can keep your EKS cluster behind API Gateway and set up auth, like basic auth or an API key, and protect the APIs running in EKS that way.
https://waswani.medium.com/expose-services-in-eks-via-aws-api-gateway-8f249db372bd
https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/
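For Option 2, one common pattern (an assumption on my part, not taken from the links above) is to have CloudFront inject a secret custom header on origin requests and have the nginx ingress reject anything without it; the header name, value, and resource names here are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-api
  annotations:
    # Reject requests that do not carry the secret header CloudFront adds
    # as an "origin custom header" on requests to this origin.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($http_x_origin_verify != "change-me-to-a-long-random-value") {
        return 403;
      }
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com                   # placeholder backend domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-api             # placeholder Service
                port:
                  number: 80
```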
I have an AWS API Gateway that I created with Zappa and an ECR Docker image. I assigned the Lambda function to a VPC but can no longer access the API.
I created an internet gateway and have the route table routing 0.0.0.0/0 and ::/0 to it.
I have all traffic allowed on all ports in the security group as well.
However, whenever I try to access any endpoint I get a timeout error. If I take the Lambda function out of the VPC, I am able to access all the endpoints.
You cannot access API Gateway from a Lambda function directly if your Lambda is inside a VPC: a Lambda in a VPC has no path to the public internet, and an internet gateway alone does not help because the Lambda's ENIs have no public IPs. In this case you have to use a VPC endpoint.
You can use Lambda functions to proxy HTTP requests from API Gateway to an HTTP endpoint within a VPC without Internet access. This allows you to keep your EC2 instances and applications completely isolated from the internet while still exposing them via API Gateway. By using API Gateway to front your existing endpoints, you can configure authentication and authorization rules as well as throttling rules to limit the traffic that your backend receives.
Reference: https://aws.amazon.com/blogs/compute/using-api-gateway-with-vpc-endpoints-via-aws-lambda/
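A minimal CloudFormation sketch of such an execute-api interface endpoint (the VPC, subnet, and security group IDs are placeholders for your own resources):

```yaml
Resources:
  ExecuteApiEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcEndpointType: Interface
      ServiceName: !Sub com.amazonaws.${AWS::Region}.execute-api
      VpcId: vpc-0123456789abcdef0          # the VPC your Lambda runs in
      SubnetIds:
        - subnet-0123456789abcdef0          # subnets used by the Lambda ENIs
      SecurityGroupIds:
        - sg-0123456789abcdef0              # must allow HTTPS (443) from the Lambda
      PrivateDnsEnabled: true               # resolve the default execute-api DNS privately
```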
Kubernetes surfaces an API proxy, which allows querying the internal services via e.g. https://myhost.com/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/
This is all well and good. However, for security and compliance reasons, all of our services expose an HTTPS endpoint. Attempting to access them by going to https://myhost/api/v1/proxy/namespaces/default/services/myhttpsservice:3000/ results in
Error: 'read tcp 172.20.122.129:48830->100.96.29.113:3000: read: connection reset by peer'
Trying to reach: 'http://100.96.29.113:3000/'
Because the endpoint, 100.96.29.113:3000, is in fact HTTPS.
Is there any way to configure the proxy to apply SSL to specific service endpoints?
(Edit: if this is not currently possible, a link to a relevant GitHub issue for tracking the feature request is also an acceptable answer until it is.)
As documented at https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls (and as pointed out on Slack), you can access services behind HTTPS by prefixing the service name with "https:".
Using the example from above, the correct URL would be: https://myhost/api/v1/proxy/namespaces/default/services/https:myhttpsservice:3000/
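The same scheme also works through kubectl proxy, which handles authentication locally (a usage sketch reusing the example service above):

```
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/proxy/namespaces/default/services/https:myhttpsservice:3000/
```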