Cannot create API Gateway VPC Link in AWS Console

I'm failing to add a VPC Link to my API Gateway that will link to my application load balancer. The symptom in the AWS Console is that the Target NLB dropdown is empty. If I try to force the issue via the AWS CLI, an entry is created, but its status says the NLB ARN is malformed.
I've verified the following:
My application load balancer is in the same account and region as my API Gateway.
My user account has admin privileges. I created and added the recommended policy just in case I was missing something.
The NLB ARN was copied directly from the application load balancer page for the AWS CLI creation scenario.
I can invoke my API directly on the ECS instance (it has a public IP for now).
I can invoke my API through the application load balancer public IP.
Possible quirks with my configuration:
My application load balancer has a security group that limits access to a narrow range of IPs. I didn't think this would matter, since VPC links are supposed to connect over private DNS.
My ECS instance has private DNS enabled.
My ECS uses EC2 launch type, not Fargate.

Indeed, as suggested in a related post, my problem stemmed from initially creating an ALB (Application Load Balancer) rather than an NLB (Network Load Balancer). Once I had an NLB configured properly, I was able to set up the VPC Link as described in the AWS documentation.
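For reference, a minimal AWS CLI sketch of the two calls involved (all names, subnets, and ARNs below are placeholders, not my exact commands):

    # Create an internal Network Load Balancer; API Gateway VPC Links
    # (REST API flavour) accept only NLB ARNs, not ALB ARNs.
    aws elbv2 create-load-balancer \
        --name my-internal-nlb \
        --type network \
        --scheme internal \
        --subnets subnet-0123456789abcdef0

    # Create the VPC Link against the NLB's ARN.
    aws apigateway create-vpc-link \
        --name my-vpc-link \
        --target-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-internal-nlb/0123456789abcdef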

Related

Using HTTP API to call multiple services running on AWS ECS

My goal here is to deploy two Spring Boot services with AWS ECS Fargate in a private subnet and access them via AWS API Gateway. Basically, I want to use a single HTTP API and have it call the appropriate service based on the path. I am using VPC Links to reach the services running in the private subnet, and Cloud Map for service discovery. First of all: is this assumption even correct, i.e. can a single HTTP API call two different services based on the path?
Some details on how I created the ECS services:
ECS Service A is deployed in a private subnet with no public IP, and service discovery is enabled. While enabling service discovery I chose the SRV DNS record type, giving a port number and a TTL of 60 seconds.
ECS Service B is also deployed similarly.
ECS Services A and B each have their own service discovery endpoint.
Now, in API Gateway, these are the steps I followed:
Created a new HTTP API using the defaults, this means the default stage and no routes and integrations configured yet.
Then I created a VPC Link for the HTTP API by giving it a name (service-a-vpclink) and assigning a VPC, a subnet, and the appropriate security group (the one that was assigned to the ECS service for service A).
Now I created a route with method "ANY" and path "$default" and assigned an integration to it; with this I am able to reach all the endpoints of service A running in the private subnet. (So all good here, as this shows that I can reach a service running in a private subnet through API Gateway.)
The integration mentioned in point 3 is of type "Private Resource" with target service "Cloud Map", selecting the namespace and the appropriate service (serviceA) along with the VPC link created in step 2.
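For concreteness, steps 2 to 4 might look roughly like this with the AWS CLI; the API ID is the one from the question, while the subnet, security group, link/integration IDs, and Cloud Map service ARN are placeholders:

    # Step 2: VPC link for the HTTP API.
    aws apigatewayv2 create-vpc-link \
        --name service-a-vpclink \
        --subnet-ids subnet-0abc123 \
        --security-group-ids sg-0abc123

    # Step 4: private integration through the VPC link, targeting the
    # Cloud Map service's ARN.
    aws apigatewayv2 create-integration \
        --api-id uzhgtf6t8u \
        --integration-type HTTP_PROXY \
        --integration-method ANY \
        --connection-type VPC_LINK \
        --connection-id vpclink123 \
        --integration-uri arn:aws:servicediscovery:eu-west-2:123456789012:service/srv-serviceA

    # Step 3: the catch-all route pointing at that integration.
    aws apigatewayv2 create-route \
        --api-id uzhgtf6t8u \
        --route-key '$default' \
        --target integrations/integ123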
But this is what I don't want to do. I want something like the below:
Hitting any endpoint like "https://uzhgtf6t8u.execute-api.eu-west-2.amazonaws.com/serviceA/any-serviceA-endpoints", where /serviceA is a path configured in API Gateway and any-serviceA-endpoints are the actual endpoints of the running backend service, should navigate to service A's endpoints.
Hitting any endpoint like "https://uzhgtf6t8u.execute-api.eu-west-2.amazonaws.com/serviceB/any-serviceB-endpoints", where /serviceB is a path configured in API Gateway and any-serviceB-endpoints are the actual endpoints of the running backend service, should navigate to service B's endpoints.
Here I attach separate integrations to the path /serviceA and to the path /serviceB, but this does not work; the response is 404 Not Found.
What exactly am I not following?
Many thanks.
[Screenshot of route]

AWS ECS Fargate Outbound Internet Connectivity

I have a small Fargate cluster with a service running, and I found that if I disable the public IP the container won't start, as it doesn't have a route to pull its image.
The ELB for ECS Fargate is part of a subnet which has:
internet gateway configured and attached
route table allowing unrestricted outgoing
security policy on the ECS service allows unrestricted outgoing
DNS enabled
My understanding is the internet gateway is a NAT, and the above should permit outgoing internet access; however, I can't make it work. What else is missing?
Just like all other resources in your AWS VPC, if you don't attach a public IP address, then it needs either to be placed in a subnet with a route to a NAT Gateway to access things outside the VPC, or it needs VPC endpoints to access those resources.
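As a sketch with hypothetical IDs throughout, the NAT Gateway route looks like this with the AWS CLI (the NAT Gateway itself must live in a public subnet, i.e. one whose route table sends 0.0.0.0/0 to the internet gateway):

    # Allocate an Elastic IP and create the NAT Gateway in a PUBLIC subnet.
    aws ec2 allocate-address --domain vpc
    aws ec2 create-nat-gateway \
        --subnet-id subnet-public123 \
        --allocation-id eipalloc-0abc123

    # Give the PRIVATE subnet (where the tasks run) a default route
    # through the NAT Gateway.
    aws ec2 create-route \
        --route-table-id rtb-private123 \
        --destination-cidr-block 0.0.0.0/0 \
        --nat-gateway-id nat-0abc123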
I have set up an ELB for a persistent public & subnet IP. As far as I can tell my subnet has unrestricted outgoing internet access (internet gateway attached and a route opening all outgoing traffic to 0.0.0.0/0). I'm unsure if the service setup will configure the EC2 to use this first and then attempt to set up the container. If not, then it probably doesn't apply.
An ELB is for inbound traffic only; it does not provide any sort of outbound networking functionality for your EC2 or Fargate instance. The ELB is not in any way involved when ECS tries to pull a container image.
Having a volatile public IP address is a bit annoying, as my understanding is the security policy will apply to both the ELB/Elastic provided IP and this one.
What "security policy" are you referring to? I'm not aware of security policies on AWS that are applied directly to IP addresses. Assuming you mean the Security Group when you say "security policy", your understanding is incorrect. Both the EC2 or Fargate instance and the ELB should have different security groups assigned to them. The ELB would have a security group allowing all inbound traffic, if you want it to be public on the Internet. The EC2 or Fargate instance should have a security group only allowing inbound traffic from the ELB (by specifying the ELB's security group ID in the inbound rule).
I want to point out you say "EC2" in your question and never mention Fargate, but you tagged your question with Fargate twice and didn't tag it with EC2. EC2 and Fargate are separate compute services on AWS. You would either be using one or the other. It doesn't really matter in this case given the issue you are encountering, but it helps to be clear in your questions.

Use AKS services with Azure API Management

I have set up my application to be served by a Kubernetes NGINX ingress in AKS. Today, while experimenting with Azure API Management, I tried to set it up so that all traffic to the ingress controller would go through API Management. I pointed its backend service to the current public address of the ingress controller, but I was wondering: when I make the ingress controller private, or remove it altogether and rely on the Kubernetes services instead, how could API Management access it, and how would I define the backend service in API Management? By the way, while provisioning the API Management instance, I added a new subnet to the existing virtual network of the AKS instance, so they are in the same network.
There are two modes of deploying API Management into a VNet – External and Internal.
If API consumers do not reside in the cluster VNet, the External mode should be used. In this mode, the API Management gateway is injected into the cluster VNet but is accessible from the public internet via an external load balancer. It helps to hide the cluster completely while still allowing external clients to consume the microservices. Additionally, you can use Azure networking capabilities such as Network Security Groups (NSGs) to restrict network traffic.
If all API consumers reside within the cluster VNet, the Internal mode can be used. In this mode, the API Management gateway is injected into the cluster VNet and is accessible only from within this VNet via an internal load balancer. There is no way to reach the API Management gateway or the AKS cluster from the public internet.
In both cases, the AKS cluster is not publicly visible. The Ingress Controller may not be necessary. Depending on your scenario and configuration, authentication might still be required between API Management and your microservices. For instance, if a Service Mesh is adopted, it always requires mutual TLS authentication.
Pros:
The most secure option because the AKS cluster has no public endpoint
Simplifies cluster configuration since it has no public endpoint
Ability to hide both API Management and AKS inside the VNet using the Internal mode
Ability to control network traffic using Azure networking capabilities such as Network Security Groups (NSG)
Cons:
Increases complexity of deploying and configuring API Management to work inside the VNet
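As a rough sketch, the mode is chosen when provisioning the instance; with the Azure CLI this is the --virtual-network flag (all names below are placeholders, and the VNet/subnet association itself is configured separately on the instance's network settings):

    # Provision API Management in Internal VNet mode (placeholders throughout).
    az apim create \
        --name my-apim \
        --resource-group my-rg \
        --publisher-name "My Org" \
        --publisher-email admin@example.com \
        --virtual-network Internal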
To restrict access to your applications in Azure Kubernetes Service (AKS), you can create and use an internal load balancer. An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.
You can either expose the backends on the AKS cluster through an internal Ingress or simply use Services of type internal load balancer.
You can then point the API gateway's backend to the internal Ingress' private IP address or to the internal load balancer Service's EXTERNAL-IP (which would also be a private IP address). These private IP addresses are accessible within the virtual network and any connected network (i.e. Azure virtual networks connected through peering or a VNet-to-VNet gateway, or on-premises networks connected to the AKS VNet). In your case, if the API gateway is deployed in the same virtual network, it should be able to access these private IP addresses. If the API gateway is deployed in a different virtual network, connect it to the AKS virtual network using VNet peering or a VNet-to-VNet gateway, depending on your use case.
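For example, a Service of type internal load balancer is just a LoadBalancer Service with the AKS-internal annotation; a minimal sketch (the name, ports, and selector are placeholders):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: internal-app
      annotations:
        # Ask Azure for a private, VNet-internal load balancer IP.
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
      ports:
      - port: 80
        targetPort: 8080
      selector:
        app: internal-app
    EOF

The EXTERNAL-IP that "kubectl get service internal-app" then reports is a private address from the cluster's VNet, which is what you would point the API Management backend at.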
Is it working now? If not, try adding that VNet and subnet in API Management. Usually this isn't required: since both are in the same VNet, you can access the service directly via its private IP. Also check that routing is properly configured in the ingress controller. Another option, just for testing, is to call the service directly from the API, bypassing the ingress controller, to make sure no request is being blocked by an NSG or anything else.

How to expose AWS Fargate ECS containers to the internet with Route53 DNS?

I have an ECS task running on Fargate and my service gets a public IP address automatically. How do I simply expose the Fargate task to the internet with Route53 DNS? I looked around the internet for a whole day and couldn't find an example of this simplest possible scenario, where I just want to expose a port from the task to the internet and map a Route53 record to point to its IP address.
If I understood correctly from the minimal information I found, I would have to create a VPC endpoint, but I couldn't find information about how to route the traffic to a task/container.
I am also having this problem. I was able to route traffic to a single-container service in public subnets via an application load balancer. However, now I cannot reproduce this. Did you try the ALB yet?
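For the ALB approach, the Route53 piece is an alias A record pointing at the load balancer. A hypothetical AWS CLI sketch (the zone ID, record name, and ALB details are placeholders; note that the AliasTarget's HostedZoneId is the load balancer's own regional hosted zone ID, not your zone's):

    aws route53 change-resource-record-sets \
        --hosted-zone-id Z111111QQQQQQQ \
        --change-batch '{
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "api.example.com",
              "Type": "A",
              "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",
                "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": false
              }
            }
          }]
        }'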

How to connect API Gateway to a private/internal Elastic Beanstalk?

I am trying to connect API Gateway so it can make requests to an internal Elastic Beanstalk app (on a custom VPC, with the load balancer facing internal private subnets and the instances in private subnets).
I managed to create the VPC and configure the Beanstalk app as internal (all is green). I read about the subject: you can connect API Gateway to a VPC using a VPC Link, and a VPC Link is tied to a Network Load Balancer. But this balancer sees only the Beanstalk EC2 instance, which is not OK.
It should target the Beanstalk load balancer, because Beanstalk has auto scaling (and can create multiple instances based on your configuration).
Is this possible, and how can it be done?
Thank you,
*Calling from a Lambda inside the VPC works OK, so one solution is API Gateway -> Lambda -> internal Beanstalk.
It is actually possible, by using the IPs of the Application Load Balancer (the Beanstalk app's ALB) as targets in the NLB's (Network Load Balancer's) target group. What is needed is a Lambda that updates the NLB targets, triggered by a scheduled CloudWatch event: the ALB's IPs can change, so the Lambda resolves the ALB's DNS name to pick up the new IPs (there are Python scripts for this on the internet; it is also easy to do with Node.js).
So in the end you can use a VPC Link. But I realise this was more of an exercise, and another approach would be better for this kind of application.
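A rough shell sketch of the refresh step described above, assuming the NLB's target group has target type "ip" (the ARN and DNS name are placeholders; a complete solution would also deregister IPs that disappear from DNS):

    ALB_DNS=internal-my-alb-1234567890.us-east-1.elb.amazonaws.com
    TG_ARN=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/alb-ips/0123456789abcdef

    # Resolve the ALB's current private IPs and register each one with the
    # NLB's IP-based target group.
    for ip in $(dig +short "$ALB_DNS"); do
        aws elbv2 register-targets \
            --target-group-arn "$TG_ARN" \
            --targets Id="$ip",Port=80
    done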