We have our frontend application deployed on CloudFront, and our backend APIs are hosted on Kubernetes (EKS).
We have use cases where the frontend (served from CloudFront) calls the backend APIs, and we obviously don't want to expose the backend APIs publicly.
So the question is: how should we implement the above use case? Can someone please help us?
Thanks in advance.
You have multiple options; which one to follow depends on your requirements.
Option 1
Change the origin of the frontend distribution: instead of S3, use EKS as the origin with CloudFront.
This might require extra setup and management, so it is not a great option.
Option 2
Set up WAF with the NGINX ingress controller (or on the ingress) running inside EKS.
With WAF you can specify the domain (origin), so that only requests from a specific domain are accepted.
Example: https://medium.com/cloutive/exposing-applications-at-aws-eks-and-integrating-with-other-aws-services-c9eaff0a3c0c
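If you handle the check at the ingress instead of (or in addition to) WAF, one rough sketch is an Origin-header check injected via an ingress-nginx snippet annotation. The CloudFront domain, hostname and service name below are placeholders, and this assumes snippet annotations are enabled on the controller; note that a non-browser client can forge the Origin header, so this is not strong protection on its own.

```yaml
# Hypothetical Ingress for the backend API that rejects requests whose Origin
# header is not the CloudFront domain. Domain, host and service are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-api
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($http_origin != "https://dxxxxxxxxxxxx.cloudfront.net") {
        return 403;
      }
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-api
                port:
                  number: 80
```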
Option 3
You can keep your EKS services behind API Gateway and set up auth (basic auth, an API key, etc.), protecting the APIs running in EKS that way.
https://waswani.medium.com/expose-services-in-eks-via-aws-api-gateway-8f249db372bd
https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/
I have an app deployed in a k8s cluster, and both the frontend and the backend of the app are exposed. Is there a way to not expose the backend? I thought about an API gateway; will it fulfill my requirements? If yes, how? If not, what are the alternatives?
Thank you in advance.
I tried the Kong gateway and it did not work out very well.
Your question requires more detail for me to be certain; however, I will take a stab at pointing you in the right direction.
Typically, most apps have a frontend which serves the HTML and any static assets such as images, CSS and JavaScript (for example a single-page app built with React).
If you have an SPA, then you will likely have a backend API written in something like Node / Python / PHP / Java that serves your frontend app with dynamic data.
If your frontend and backend are exposed to the internet, this is OK and expected.
If your backend was not exposed to the internet, then it would be impossible for your frontend to load dynamic data.
That said, you mentioned that you are using or at least tried to use an API Gateway. Typically, you would not expose your backend directly to the internet. Rather, you would expose your API Gateway to the internet, with the API Gateway acting as a reverse proxy to your backend.
To achieve this in Kubernetes, you would typically create a Service of type LoadBalancer for your API gateway and configure a Service of type ClusterIP for your backend.
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default that is used if you don't explicitly specify a type for a Service. You can expose the service to the public with an Ingress or the Gateway API.
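As a minimal sketch (the names, labels and ports below are placeholders, not from your setup), the two Services might look like this:

```yaml
# Hypothetical pair of Services: the gateway gets an external load balancer,
# the backend stays cluster-internal. Names, labels and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  type: LoadBalancer          # provisions an external load balancer
  selector:
    app: api-gateway
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP             # default; reachable only from inside the cluster
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 3000
```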
I have a list of APIs running in Kubernetes behind a service (under different paths). Azure is our identity provider, and our clients use the client-credentials OAuth2 flow to generate the OAuth token and send it to the API, where authorization checks take place. Each of our APIs needs a different SLA per user, hence I am looking to rate-limit the APIs per client-id encoded in the token (azp is the claim under which the client-id is present in Azure v2.0 tokens).
We are already using Envoy as the ingress gateway in our Kubernetes cluster, but it supports only global or per-IP rate limiting. We also looked at NGINX, but did not find much of a difference. ChatGPT suggested other gateways like Tyk and Apigee Edge, but they don't seem to have this functionality. The closest suggestion was to use the Kong gateway, which rate-limits based on consumer groups (but I did not find anything in the Kong documentation about per-OAuth-client rate limiting, or how a consumer maps to a client-id).
Does any API gateway support such a rate-limiting feature?
You can extend nginx with Lua scripting. I've not used it for this specifically, but it occurs to me that you can run a Lua script to parse the JWT and then use the client-id as the zone key for the normal nginx rate-limiting feature.
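A rough, untested sketch of that idea, packaged here as a ConfigMap for a self-managed NGINX/OpenResty gateway (the config file name, upstream, rate and the crude azp extraction are all assumptions; the JWT signature is not verified in this snippet):

```yaml
# Hypothetical ConfigMap holding an NGINX/OpenResty config fragment that is
# included from the http{} context. The Lua block decodes the JWT payload
# (signature is NOT verified here) and uses the azp claim as the key for the
# standard limit_req zone.
apiVersion: v1
kind: ConfigMap
metadata:
  name: per-client-rate-limit
data:
  rate-limit.conf: |
    # One shared-memory zone keyed by whatever ends up in $jwt_client_id.
    # Requests with an empty key are not counted, so normal auth must still
    # reject requests that carry no valid token.
    limit_req_zone $jwt_client_id zone=per_client:10m rate=10r/s;

    server {
      listen 8080;

      location / {
        set $jwt_client_id "";          # declare it so Lua can assign to it

        rewrite_by_lua_block {
          local cjson = require "cjson.safe"
          local auth = ngx.var.http_authorization
          local token = auth and auth:match("Bearer%s+(.+)")
          -- A JWT is header.payload.signature; grab the middle (payload) part.
          local payload = token and token:match("^[^.]+%.([^.]+)%.")
          if payload then
            -- base64url -> base64, pad, then decode
            payload = payload:gsub("-", "+"):gsub("_", "/")
            local pad = (4 - #payload % 4) % 4
            local decoded = ngx.decode_base64(payload .. string.rep("=", pad))
            local claims = decoded and cjson.decode(decoded)
            if claims and claims.azp then
              ngx.var.jwt_client_id = claims.azp
            end
          end
        }

        limit_req zone=per_client burst=20 nodelay;
        proxy_pass http://backend-service;   # placeholder upstream
      }
    }
```

This only keys the standard limiter on the azp claim; giving different clients different rates (different SLAs) would still need extra mapping on top of it.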
I am using Azure Kubernetes for the backend deployment. I have two URLs: one is the API URL (api.project.com) and the other is the BFF URL (bff.project.com).
From the web application, instead of calling the API URL (api.project.com) directly, we use the BFF URL (bff.project.com), which internally calls the API URL (api.project.com) and sends back the response.
I now want to restrict direct usage of the API URL (api.project.com), even from REST API clients (like Postman, Insomnia, ...); it should only work when called from the BFF (bff.project.com).
We have used nginx-ingress for subdomain creation, and both URLs (BFF and API) are in the same cluster.
Is there any firewall or built-in Azure service to resolve the above-mentioned problem?
Thanks in Advance :)
You want to keep your API private, only accessible from another K8s service, so don't expose it via your ingress controller; it then simply won't be accessible to any client outside the cluster.
This means that you lose the api.project.com address (although you can get that back if you really want to, it seems unnecessary). The BFF would then access the API via the URL http://<service-name>.<namespace>.svc.cluster.local:<service-port>, which in your case might be:
http://api.api-ns.svc.cluster.local
This assumes you haven't used TLS (http rather than https), the service is called api, it's running on port 80 (which it should be) and the namespace is called api-ns (note that Kubernetes namespace names cannot contain underscores, so api_ns would not be valid).
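As a sketch (the names and ports below are the same assumptions as above), the internal-only API Service would just be a plain ClusterIP Service that no Ingress references:

```yaml
# Hypothetical internal-only Service for the API. Because no Ingress (or
# LoadBalancer/NodePort) points at it, it is only reachable inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: api-ns
spec:
  type: ClusterIP        # cluster-internal only
  selector:
    app: api             # assumed pod label
  ports:
    - port: 80           # the port in http://api.api-ns.svc.cluster.local
      targetPort: 8080   # assumed container port
```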
Should you need to provide temporary access to the API for developers to use, say, Postman, they can use port-forwarding in a dev environment without allowing external access all the time.
However, this won't restrict access to the BFF alone; any service running in K8s could access the API. If you need or want to restrict things further, you have a lot of options.
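One common option is a NetworkPolicy, which only takes effect if the cluster's CNI enforces network policies (for AKS, that typically means creating the cluster with a network policy option enabled). A minimal sketch, assuming the pods are labelled app: api and app: bff and live in namespaces api-ns and bff-ns (all of these names and the port are assumptions):

```yaml
# Hypothetical NetworkPolicy: only pods labelled app=bff in the bff-ns
# namespace may reach the API pods on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-bff-only
  namespace: api-ns
spec:
  podSelector:
    matchLabels:
      app: api             # the pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: bff-ns
          podSelector:
            matchLabels:
              app: bff
      ports:
        - protocol: TCP
          port: 8080       # assumed container port
```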
I'm looking for a way to authenticate an Istio-enabled Kubernetes cluster with an external OAuth2 provider. The NGINX ingress controller has a way to do this when using vanilla Ingress resources.
https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/
However, I'm not sure how to do this with Istio Gateway and VirtualService objects. Basically, I need to be able to provide an auth-url and an auth-signin URL to Istio, so it will authenticate the same way that the OAuth-enabled NGINX ingress controller does. I've found a few EnvoyFilter examples that suggest ways to do this, but there isn't a lot of documentation on how to make them work.
Any advice on getting Istio to integrate with an external OAuth provider would be much appreciated.
OriginAuthenticationMethod is the authentication policy that you are looking for.
Refer: https://istio.io/docs/reference/config/security/istio.authentication.v1alpha1/#OriginAuthenticationMethod
Currently, only JWT is supported for origin authentication.
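For reference, a minimal sketch of such a policy using the v1alpha1 API linked above (the issuer, JWKS URI and target Service name are placeholders; newer Istio releases replace this API with RequestAuthentication):

```yaml
# Hypothetical origin-authentication Policy (authentication.istio.io/v1alpha1).
# Issuer, jwksUri and the target Service name are placeholders.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: jwt-origin-auth
  namespace: default
spec:
  targets:
    - name: my-service            # the Service this policy applies to
  origins:
    - jwt:
        issuer: "https://accounts.example.com"
        jwksUri: "https://accounts.example.com/.well-known/jwks.json"
  principalBinding: USE_ORIGIN    # bind the request principal to the JWT
```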
A workaround would be using another type of Ingress.
Using the built-in App Service Authentication / Authorization to populate the ClaimsPrincipal when hosting Functions in Azure works great and is pretty well documented.
However, trying to accomplish this with a containerized app in Kubernetes is a different story. I can't find any information on how to support authentication in a way that would mimic the behavior of hosting the functions in Azure. I hope this is possible because I would like to use the same functions both on-premises and in Azure.
Is there any information available on how this can be accomplished?
App Service Authentication / Authorization is a feature provided as part of the PaaS offering. The Azure Functions host, which is open source, inherits such features when running on Azure PaaS.
But when running on Kubernetes, the way Azure Functions works is different. For one, scaling is taken care of by Kubernetes (and Knative/Osiris/KEDA when set up). The same goes for any external authentication/authorization.
There are a couple of ways you could set this up:
If you are using an ingress controller like NGINX, you can pair it with oauth2_proxy for external OAuth authentication. Depending on the ingress controller you are using, it may have built-in support for authentication.
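With ingress-nginx, that pairing is usually done with the external-auth annotations. A minimal sketch, assuming an oauth2_proxy deployment is already exposed under /oauth2 on the same host (the hostname and service name are placeholders):

```yaml
# Hypothetical Ingress for the Functions app. Every request is first checked
# against oauth2_proxy; unauthenticated browsers are redirected to sign in.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: functions-app
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
    - host: functions.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: functions-app
                port:
                  number: 80
```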
If you are using a service mesh like Istio, you could make use of its end-user authentication policies. Note that this just checks whether there is a valid JWT and doesn't redirect users.
You would have to deploy an EnvoyFilter similar to this one. For an SSO scenario, you might need something like this.