I have an EKS cluster running only on Fargate, with 3 microservices each exposed via its own NLB (provisioned by the AWS Load Balancer Controller) to API Gateway through VPC links for REST APIs. I was asked to maintain a single LB for the three services.
So I tried an ALB with Ingress, but the problem is that VPC links for REST APIs cannot be formed with ALBs.
I manually removed the target groups from the other load balancers and attached them to new listener ports on a single NLB, and it worked with API Gateway.
I am not sure if this is the correct way. Is there any other way to combine the NLBs in EKS?
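For reference, the AWS Load Balancer Controller's TargetGroupBinding CRD can formalize this manual approach: you create the single NLB and its target groups outside the cluster (e.g. via CloudFormation or Terraform) and bind each Service to its own target group. A minimal sketch, where the Service name and target group ARN are hypothetical:

```yaml
# Binds an existing Kubernetes Service to a target group on a
# manually managed NLB, so several Services can share one NLB.
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: orders-tgb
  namespace: default
spec:
  serviceRef:
    name: orders-svc   # hypothetical Service name
    port: 80
  # ARN of a target group you created on the shared NLB
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/orders/0123456789abcdef
  targetType: ip       # IP targets are required for Fargate pods
```

One TargetGroupBinding per service/listener port; the controller then keeps pod IPs registered in the target group as pods churn.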
In a microservices environment deployed to a Kubernetes cluster, why would we use an API gateway (for example Spring Cloud Gateway) if Kubernetes supplies the same service with Ingress?
An Ingress controller exposes one Kubernetes Service of type LoadBalancer. For a simple mental model, you can think of Ingress as an Nginx server that just forwards traffic to Services based on a rule set. Ingress doesn't have as much functionality as an API gateway: some Ingress controllers don't support authentication, rate limiting, application routing, security, merging responses and requests, or other add-on/plugin options.
An API gateway can also do simple routing, but it is mostly used when you need more flexibility, security, and configuration options. While multiple teams or projects can share a set of Ingress controllers, or Ingress controllers can be specialized on a per-environment basis, there are reasons you might choose to deploy a dedicated API gateway inside Kubernetes rather than leveraging the existing Ingress controller. Using both an Ingress controller and an API gateway inside Kubernetes can give organizations the flexibility to meet their business requirements.
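To make the "rule set" idea concrete, here is a minimal sketch of an Ingress that fans traffic out to two hypothetical Services by path, assuming an nginx Ingress controller is installed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx     # assumes the nginx Ingress controller
  rules:
  - host: example.com         # hypothetical host
    http:
      paths:
      - path: /orders         # forward /orders to the orders Service
        pathType: Prefix
        backend:
          service:
            name: orders-svc  # hypothetical Service
            port:
              number: 80
      - path: /users          # forward /users to the users Service
        pathType: Prefix
        backend:
          service:
            name: users-svc   # hypothetical Service
            port:
              number: 80
```

Anything beyond this kind of host/path forwarding (rate limiting, authentication, request transformation) is where an API gateway, or controller-specific annotations, come in.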
For accessing a database
If the database and the cluster are both somewhere in the cloud, you can use the database's internal IP. If not, you should provide the IP of the machine where the database is hosted.
You can also refer to this Kubernetes Access External Services article.
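One common pattern for this is a selector-less Service paired with a manual Endpoints object, so in-cluster apps reach the external database by a stable DNS name. A sketch, assuming a hypothetical database at 10.0.0.50:5432:

```yaml
# Selector-less Service: gives the external DB a stable in-cluster name
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - port: 5432
    targetPort: 5432
---
# Manual Endpoints object pointing at the external database host
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db    # must match the Service name
subsets:
- addresses:
  - ip: 10.0.0.50      # hypothetical database IP
  ports:
  - port: 5432
```

Pods can then connect to external-db:5432, and only the Endpoints object needs updating if the database IP changes.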
I have set up my application to be served by a Kubernetes NGINX ingress in AKS. Today, while experimenting with Azure API Management, I tried to set it up so that all traffic to the ingress controller would go through API Management. I pointed its backend service to the current public address of the ingress controller, but I was wondering: when I make the ingress controller private, or remove it altogether to rely on Kubernetes Services instead, how could API Management access it, and how would I define the backend service in API Management? By the way, while provisioning the API Management instance, I added a new subnet to the existing virtual network of the AKS instance, so they are in the same network.
There are two modes of deploying API Management into a VNet – External and Internal.
If API consumers do not reside in the cluster VNet, the External mode should be used. In this mode, the API Management gateway is injected into the cluster VNet but remains accessible from the public internet via an external load balancer. It helps hide the cluster completely while still allowing external clients to consume the microservices. Additionally, you can use Azure networking capabilities such as Network Security Groups (NSGs) to restrict network traffic.
If all API consumers reside within the cluster VNet, the Internal mode can be used. In this mode, the API Management gateway is injected into the cluster VNet and is accessible only from within this VNet via an internal load balancer. There is no way to reach the API Management gateway or the AKS cluster from the public internet.
In both cases, the AKS cluster is not publicly visible, and the Ingress controller may not be necessary. Depending on your scenario and configuration, authentication might still be required between API Management and your microservices; for instance, an adopted service mesh may require mutual TLS authentication.
Pros:
The most secure option because the AKS cluster has no public endpoint
Simplifies cluster configuration since it has no public endpoint
Ability to hide both API Management and AKS inside the VNet using the Internal mode
Ability to control network traffic using Azure networking capabilities such as Network Security Groups (NSG)
Cons:
Increases complexity of deploying and configuring API Management to work inside the VNet
To restrict access to your applications in Azure Kubernetes Service (AKS), you can create and use an internal load balancer. An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.
You can either expose the backends on the AKS cluster through an internal Ingress, or simply use Services of type LoadBalancer configured as internal load balancers.
You can then point the API gateway's backend to the internal Ingress's private IP address or the internal load balancer Service's EXTERNAL-IP (which will also be a private IP address). These private IP addresses are accessible within the virtual network and any connected network (i.e. Azure virtual networks connected through peering or a VNet-to-VNet gateway, or on-premises networks connected to the AKS VNet). In your case, if the API gateway is deployed in the same virtual network, it should be able to access these private IP addresses. If the API gateway is deployed in a different virtual network, connect it to the AKS virtual network using VNet peering or a VNet-to-VNet gateway, depending on your use case.
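For the internal load balancer option, a sketch of such a Service, assuming a hypothetical backend labeled app: web:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-internal
  annotations:
    # Tells AKS to provision an internal (private) Azure load balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: web           # hypothetical pod label
  ports:
  - port: 80
    targetPort: 8080
```

kubectl get service web-internal will then show a private EXTERNAL-IP, which is what you point the API Management backend at.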
Is it working now? If not, please try adding that VNet and subnet in APIM. Usually it won't be required: because both of them are in the same VNet, you can access it directly via the private IP. Please check that routing is properly configured in the ingress controller. Another option, just for testing, is to call the service directly from the API, bypassing the ingress controller, so you can make sure no request is being blocked by an NSG or anything else.
We have all our services running on Kubernetes. We want to know the best practice for deploying our own API gateway; we thought of 2 solutions:
Deploy API gateways outside the Kubernetes cluster(s), e.g. with Kong. This means the clusters' ingress will connect to the external gateways. The gateway runs on either VMs or physical machines, and you can scale by replicating many gateway instances.
Deploy the gateway from within Kubernetes (then maybe connect it to an external L4 load balancer), e.g. Ambassador. However, with this approach, each cluster can only have 1 gateway. The only way to achieve fault tolerance is to actually replicate the entire K8s cluster.
What is the typical setup, and which is better?
The typical setup for an API gateway in Kubernetes is either using a LoadBalancer Service, if the cloud provider that you are using supports dynamic provisioning of load balancers (all major cloud vendors like GCP, AWS, or Azure support it), or, even more commonly, using an ingress controller.
Both of these options can scale horizontally, so you have fault tolerance. In fact, there is already an ingress controller solution using Kong:
https://github.com/Kong/kubernetes-ingress-controller
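As a sketch of why Kong-as-ingress can double as an API gateway, its KongPlugin CRD layers gateway features (here, rate limiting) onto a plain Ingress. The Service and resource names below are hypothetical:

```yaml
# Declares a Kong rate-limiting plugin as a Kubernetes object
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5
plugin: rate-limiting
config:
  minute: 5        # allow 5 requests per minute per client
  policy: local
---
# Attach the plugin to an Ingress via annotation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-routes
  annotations:
    konghq.com/plugins: rate-limit-5
spec:
  ingressClassName: kong   # handled by the Kong ingress controller
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-svc  # hypothetical Service
            port:
              number: 80
```

Because the controller itself runs as an ordinary Deployment, you scale it with replicas rather than replicating the whole cluster.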
I am working on a complicated structure on AWS, which includes an API Gateway for users connecting to a website located inside a VPC. In this VPC, I have planned to use an ALB to load balance the traffic from outside across different ECS Fargate tasks.
For my own purposes, I have planned to use blue/green deployment in CodeDeploy for deploying the services located in ECS Fargate. From the AWS documentation, I understand the listener ports for the production and test environments can be set up on the load balancer.
I would like to know whether the listener port can be set up at API Gateway. As I hope to use CloudFormation for this approach, an answer related to it would be preferred. Thanks!
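For context, in CloudFormation those production and test listeners are AWS::ElasticLoadBalancingV2::Listener resources defined on the load balancer, not on API Gateway. A sketch, where the resource names and ports are hypothetical:

```yaml
# Two ALB listeners: CodeDeploy blue/green shifts traffic by
# swapping which target group sits behind each listener.
ProdListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref AppLoadBalancer      # hypothetical ALB resource
    Port: 80
    Protocol: HTTP
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref BlueTargetGroup   # hypothetical target group

TestListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref AppLoadBalancer
    Port: 8080                                 # test-traffic port
    Protocol: HTTP
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref GreenTargetGroup  # hypothetical target group
```

API Gateway keeps pointing at the production listener's port; during a deployment, CodeDeploy rewires the target groups behind the two listeners.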
I have two Kubernetes clusters with one application (two TCP load balancers) in different regions, "us-central1" and "us-west1". We would like to set up a load balancer (or traffic director) with one address (one IP, one domain) that receives a user's request and redirects it to the closest cluster. It should be an extensible solution that allows adding "europe" and "asia" regions next.
Is it possible to achieve that aim with Google Kubernetes Engine? Could you recommend any article or advice?
You could use GKE Ingress, which works at Layer 7.
Here are some docs about it: Exposing applications using Services, Setting up HTTP(S) Load Balancing with Ingress, and Configuring Ingress for external load balancing.
You can also check kubernetes/ingress-gce on GitHub.
If you could be more specific, I will provide more details.
Ingress for Anthos is a cloud-hosted multi-cluster Ingress controller for Anthos GKE clusters. It's a Google-hosted service that supports deploying shared load balancing resources across clusters and across regions.
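With Ingress for Anthos, the shared VIP is declared via MultiClusterIngress/MultiClusterService resources applied to a config cluster. A rough sketch, with hypothetical names and labels:

```yaml
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: app-ingress
  namespace: app
spec:
  template:
    spec:
      backend:
        serviceName: app-mcs   # refers to the MultiClusterService below
        servicePort: 80
---
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: app-mcs
  namespace: app
spec:
  template:
    spec:
      selector:
        app: myapp             # hypothetical pod label in each cluster
      ports:
      - name: http
        protocol: TCP
        port: 80
        targetPort: 8080
```

Google's global load balancer then serves a single anycast VIP and routes each client to the nearest healthy cluster, which covers the us-central1/us-west1 case and extends to europe/asia clusters.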
Also refer to the blog post by JetStack on various approaches to solving it.