Using HTTP API to call multiple services running on AWS ECS - aws-api-gateway

My goal here is to deploy two Spring Boot services using AWS ECS Fargate in a private subnet and access them via AWS API Gateway. Basically, I want to use a single HTTP API that calls the appropriate service based on the path. I am using a VPC Link to reach the services running in the private subnet and Cloud Map for service discovery. First of all: is this assumption even correct, i.e. can we use a single HTTP API to call two different services based on the path?
Some notes on how I created the ECS services:
ECS Service A is deployed in a private subnet, with no public IP enabled and service discovery enabled. While enabling service discovery I chose SRV as the DNS record type, giving a port number and a TTL of 60 seconds.
ECS Service B is also deployed similarly.
Both ECS Service A and B have a separate Service discovery endpoint.
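For reference, the Cloud Map side of this setup could be expressed roughly like this with the AWS CLI (the namespace ID and service name below are placeholders; I actually configured it through the ECS console):

```
aws servicediscovery create-service \
  --name serviceA \
  --namespace-id <cloud-map-namespace-id> \
  --dns-config 'DnsRecords=[{Type=SRV,TTL=60}]'
```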
Now in API Gateway, the steps I followed were:
1. Created a new HTTP API using the defaults, i.e. the default stage and no routes or integrations configured yet.
2. Created a VPC Link for the HTTP API by giving it a name (service-a-vpclink) and assigning a VPC, subnets and the appropriate security group (the one assigned to the ECS service for service A).
3. Created a route with method "ANY" and path "$default" and assigned an integration to it. With this in place I am able to reach all the endpoints of service A running in the private subnet. (So all good here, as this shows I can reach a service running in a private subnet through API Gateway.)
4. The integration mentioned in step 3 is of type "Private Resource", with the target service set to "Cloud Map", selecting the namespace and the appropriate service (serviceA) along with the VPC Link created in step 2. (A rough CLI equivalent of these steps follows below.)
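A rough AWS CLI equivalent of steps 2-4 (all IDs and ARNs below are placeholders for values from my account):

```
# Step 2: VPC link for the HTTP API
aws apigatewayv2 create-vpc-link \
  --name service-a-vpclink \
  --subnet-ids <subnet-id-1> <subnet-id-2> \
  --security-group-ids <service-a-sg-id>

# Step 4: private integration targeting the Cloud Map service for service A
aws apigatewayv2 create-integration \
  --api-id <api-id> \
  --integration-type HTTP_PROXY \
  --integration-method ANY \
  --connection-type VPC_LINK \
  --connection-id <vpc-link-id> \
  --integration-uri <cloud-map-serviceA-arn> \
  --payload-format-version 1.0

# Step 3: catch-all route pointing at that integration
aws apigatewayv2 create-route \
  --api-id <api-id> \
  --route-key '$default' \
  --target integrations/<integration-id>
```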
But this is not what I want. I want something like the below:
Hitting any endpoint like "https://uzhgtf6t8u.execute-api.eu-west-2.amazonaws.com/serviceA/any-serviceA-endpoints", where /serviceA is a path configured in API Gateway and any-serviceA-endpoints are the actual endpoints exposed by the running backend service, should route to the service A endpoints.
Hitting any endpoint like "https://uzhgtf6t8u.execute-api.eu-west-2.amazonaws.com/serviceB/any-serviceB-endpoints", where /serviceB is a path configured in API Gateway and any-serviceB-endpoints are the actual endpoints exposed by the running backend service, should route to the service B endpoints.
Here I attach separate integrations to the /serviceA path and to the /serviceB path, but this does not work; the response is 404 Not Found.
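For completeness, the path-based routing I am trying to achieve would, as far as I understand, look roughly like this in CLI terms (the {proxy+} greedy path variable is my assumption about how the console paths map to route keys; IDs are placeholders):

```
aws apigatewayv2 create-route \
  --api-id <api-id> \
  --route-key 'ANY /serviceA/{proxy+}' \
  --target integrations/<serviceA-integration-id>

aws apigatewayv2 create-route \
  --api-id <api-id> \
  --route-key 'ANY /serviceB/{proxy+}' \
  --target integrations/<serviceB-integration-id>

# Note: with an HTTP_PROXY integration the backend appears to receive the full
# request path (e.g. /serviceA/foo) unless the integration rewrites it, for
# example with a parameter mapping such as overwrite:path = $request.path.proxy.
```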
What exactly am I not following?
Many thanks.
Screenshot of route

Related

Use AKS services with Azure API Management

I have set up my application to be served by a Kubernetes NGINX ingress in AKS. Today, while experimenting with Azure API Management, I tried to set it up so that all the traffic to the ingress controller would go through API Management. I pointed its backend service to the current public address of the ingress controller, but I was wondering: once I make the ingress controller private or remove it altogether and rely on the Kubernetes services instead, how could API Management access it, and how would I define the backend service in API Management? By the way, while provisioning the API Management instance, I added a new subnet to the existing virtual network of the AKS instance, so they are in the same network.
There are two modes of deploying API Management into a VNet – External and Internal.
If API consumers do not reside in the cluster VNet, the External mode should be used. In this mode, the API Management gateway is injected into the cluster VNet but is accessible from the public internet via an external load balancer. It helps to hide the cluster completely while still allowing external clients to consume the microservices. Additionally, you can use Azure networking capabilities such as Network Security Groups (NSG) to restrict network traffic.
If all API consumers reside within the cluster VNet, then the Internal mode could be used. In this mode, the API Management gateway is injected into the cluster VNet and is accessible only from within this VNet via an internal load balancer. There is no way to reach the API Management gateway or the AKS cluster from the public internet.
In both cases, the AKS cluster is not publicly visible. The Ingress Controller may not be necessary. Depending on your scenario and configuration, authentication might still be required between API Management and your microservices. For instance, if a Service Mesh is adopted, it always requires mutual TLS authentication.
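As a minimal sketch, assuming the az CLI, the VNet mode can be selected at provisioning time like this (names and SKU are placeholders; associating the instance with a specific subnet is typically done in the portal or an ARM/Bicep template):

```
az apim create \
  --name my-apim \
  --resource-group my-rg \
  --publisher-email admin@example.com \
  --publisher-name "My Org" \
  --sku-name Developer \
  --virtual-network External
```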
Pros:
The most secure option because the AKS cluster has no public endpoint
Simplifies cluster configuration since it has no public endpoint
Ability to hide both API Management and AKS inside the VNet using the Internal mode
Ability to control network traffic using Azure networking capabilities such as Network Security Groups (NSG)
Cons:
Increases complexity of deploying and configuring API Management to work inside the VNet
Reference
To restrict access to your applications in Azure Kubernetes Service (AKS), you can create and use an internal load balancer. An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.
You can either expose the backends on the AKS cluster through an internal Ingress or simply use Services of type internal load balancer.
You can then point the API Gateway's backend to the internal Ingress' private IP address or to the internal load balancer Service's EXTERNAL-IP (which would also be a private IP address). These private IP addresses are accessible within the virtual network and any connected network (i.e. Azure virtual networks connected through peering or a VNet-to-VNet gateway, or on-premises networks connected to the AKS VNet). In your case, if the API Gateway is deployed in the same virtual network, it should be able to access these private IP addresses. If the API Gateway is deployed in a different virtual network, connect it to the AKS virtual network using VNet peering or a VNet-to-VNet gateway, depending on your use case.
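A minimal sketch of such an internal load balancer Service (the name, labels and ports are placeholders; the annotation is what makes AKS provision a private IP instead of a public one):

```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-backend-internal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-backend
  ports:
  - port: 80
    targetPort: 8080
EOF
```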
Is it working now? If not, try adding that VNet and subnet in APIM. Most likely it won't be required: since both are in the same VNet, we can access the service directly via its private IP. Please check that routing is properly configured in the ingress controller. Another option, just for testing, is to call the service directly from the API, bypassing the ingress controller, so that we can make sure no request is being blocked by an NSG or anything else.

Cannot create API Management VPC Link in AWS Console

I'm failing to add a VPC Link to my API Gateway that will link to my application load balancer. The symptom in the AWS Console is that the dropdown box for Target NLB is empty. If I attempt to force the issue via the AWS CLI, an entry is created, but the status says the NLB ARN is malformed.
I've verified the following:
My application load balancer is in the same account and region as my API Gateway.
My user account has admin privileges. I created and added the recommended policy just in case I was missing something.
The NLB ARN was copied directly from the application load balancer page for the AWS CLI creation scenario.
I can invoke my API directly on the ECS instance (it has a public IP for now).
I can invoke my API through the application load balancer public IP.
Possible quirks with my configuration:
My application load balancer has a security group which limits access to a narrow range of IPs. I didn't think this would matter, since VPC links are supposed to connect via the private DNS.
My ECS instance has private DNS enabled.
My ECS uses EC2 launch type, not Fargate.
Indeed, as suggested in a related post, my problem stems from initially creating an ALB (Application Load Balancer) rather than an NLB (Network Load Balancer). Once I had an NLB configured properly, I was able to configure the VPC Link as described in the AWS documentation.
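For anyone hitting the same wall, a rough CLI sketch of the working setup (the name and ARN below are placeholders); note that the target must be a Network Load Balancer ARN:

```
aws apigateway create-vpc-link \
  --name my-vpc-link \
  --target-arns arn:aws:elasticloadbalancing:<region>:<account-id>:loadbalancer/net/my-nlb/<id>
```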

Securing an exposed load balancer service in kubernetes

I have a workload deployed in kubernetes. I have exposed it using a load balancer service because I need an external IP to communicate with the workload.
The external IP is now publicly accessible. How do I secure it so that only I will be able to access it from an external application?
Kubernetes doesn't come with out-of-the-box authentication for external services. If you have more services and security is important to you, I would take a look at the Istio project. You can configure authentication for your services in a declarative way using an authentication policy:
https://istio.io/docs/tasks/security/authn-policy/#end-user-authentication
Using Istio you can secure not only incoming connections, but also outgoing and internal traffic.
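As a minimal sketch (assuming a recent Istio version; the workload label, issuer and jwksUri are placeholders), an end-user JWT policy can be declared like this:

```
cat <<'EOF' | kubectl apply -f -
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
spec:
  selector:
    matchLabels:
      app: my-workload
  jwtRules:
  - issuer: "https://issuer.example.com"
    jwksUri: "https://issuer.example.com/.well-known/jwks.json"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
spec:
  selector:
    matchLabels:
      app: my-workload
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]
EOF
```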
If you are new to the service mesh concept and you don't know how to start, you can check the kyma-project, where Istio is already configured and you can apply token validation with one click in the UI or a single kubectl command. Check the example:
https://github.com/kyma-project/examples/tree/master/gateway

How to connect different deployments in Kubernetes?

I have two back-end deployments, REST server and a database server, each running on some specific ports. The REST server internally calls a database server.
Now how do I refer my database server deployment in my REST server deployment so that they can communicate with each other?
First, define a Service for your DB server; that will create a sort of load balancer (internal kube integration, based on iptables in most cases). With that, you will be able to refer to it by service name or by an FQDN like mydbsvc.namespace.svc.cluster.local, which resolves to the "cluster IP" of that load balancer.
Then it's just a matter of regular app config to point it at your DB on mydbsvc, preferably by means of an env variable like DB_HOST=mydbsvc set in your REST API deployment manifest (pod template envs).
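A minimal sketch (names and ports are placeholders):

```
# Create the Service in front of the DB deployment
kubectl expose deployment mydb --name=mydbsvc --port=5432 --target-port=5432

# Then, in the REST server's deployment manifest (pod template), something like:
#   env:
#   - name: DB_HOST
#     value: mydbsvc        # or mydbsvc.<namespace>.svc.cluster.local
#   - name: DB_PORT
#     value: "5432"
```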
Expose your deployments as services. For example, kubectl expose ...
Connect/allow these to communicate by creating network policies (see the sketch after this answer).
The Service object (for the database) will give you a virtual (stable) IP. Depending on the type of service, your REST code can call the DB via its ClusterIP, ExternalName, ExternalIP or DNS name.
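The network policies mentioned above could look roughly like this (labels and port are placeholders; this only has an effect if your CNI enforces NetworkPolicy, e.g. Calico):

```
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rest-to-db
spec:
  podSelector:
    matchLabels:
      app: mydb
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: rest-server
    ports:
    - protocol: TCP
      port: 5432
EOF
```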

Frontend communication with API in Kubernetes cluster

Inside of a Kubernetes Cluster I am running 1 node with 2 deployments. React front-end and a .NET Core app. I also have a Load Balancer service for the front end app. (All working: I can port-forward to see the backend deployment working.)
Question: I'm trying to get the front end and API to communicate. I know I can do that with an external facing load balancer but is there a way to do that using the clusterIPs and not have an external IP for the back end?
The reason we are interested in this, it simply adds one more layer of security. Keeping the API to vnet only, we are removing one more entry point.
If it helps, we are deploying in Azure with AKS. I know they have some weird deployment things sometimes.
Pods running on the cluster can talk to each other using a ClusterIP service, which is the default service type. You don't need a LoadBalancer service to make two pods talk to each other. According to the docs on this topic
ClusterIP exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
As explained in the Discovery documentation, if both Pods (frontend and API) are running on the same namespace, the frontend just needs to send requests to the name of the backend service.
If they are running in different namespaces, the frontend needs to use a fully qualified domain name to be able to talk with the backend.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
You can find more info about how DNS works on kubernetes in the docs.
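A quick way to verify this from inside the cluster (the service and namespace names are placeholders):

```
kubectl run tmp-curl --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://my-service.my-ns.svc.cluster.local/
```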
The problem with this configuration is the idea that the frontend app will be reaching the API via the internal cluster network. It will not: my app, running in the client's browser, cannot reach services and pods in my cluster.
My cluster will need something like NGINX or another external load balancer to allow client-side API calls to reach my API.
You could alternatively use your front-end app as a proxy, but that is highly inadvisable!
I'm trying to get the front end and api to communicate
By api, if you mean the Kubernetes API server, first setup a service account and token for the front-end pod to communicate with the Kubernetes API server by following the steps here, here and here.
is there a way to do that using the clusterIPs and not have an external IP for the back end
Yes, this is possible and more secure if external access is not needed for the service. Service type ClusterIP will not have an ExternalIP and the pods can talk to each other using ClusterIP:Port within the cluster.
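A minimal sketch (names and ports are placeholders): expose the backend with a ClusterIP Service and have the frontend pods call it by name.

```
kubectl expose deployment backend-api --name=backend-api --port=80 --target-port=5000 --type=ClusterIP

# kubectl get svc backend-api   -> shows a CLUSTER-IP and no EXTERNAL-IP
# Frontend pods can then call http://backend-api.<namespace>/...
```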