Use AKS services with Azure API Management - kubernetes

I have set up my application to be served by a Kubernetes NGINX ingress in AKS. Today, while experimenting with Azure API Management, I tried to set it up so that all traffic to the ingress controller would go through API Management. I pointed its backend service to the current public address of the ingress controller, but I was wondering: when I make the ingress controller private, or remove it altogether and rely on the Kubernetes services instead, how could API Management access it, and how would I define the backend service in API Management? By the way, while provisioning the API Management instance, I added a new subnet to the existing virtual network of the AKS instance, so they are in the same network.

There are two modes of deploying API Management into a VNet – External and Internal.
If API consumers do not reside in the cluster VNet, the External mode should be used. In this mode, the API Management gateway is injected into the cluster VNet but remains accessible from the public internet via an external load balancer. It hides the cluster completely while still allowing external clients to consume the microservices. Additionally, you can use Azure networking capabilities such as Network Security Groups (NSGs) to restrict network traffic.
If all API consumers reside within the cluster VNet, the Internal mode can be used instead. In this mode, the API Management gateway is injected into the cluster VNet and is accessible only from within this VNet via an internal load balancer. There is no way to reach the API Management gateway or the AKS cluster from the public internet.
In both cases, the AKS cluster is not publicly visible, and the ingress controller may not be necessary. Depending on your scenario and configuration, authentication might still be required between API Management and your microservices; for instance, if a service mesh is adopted, it typically requires mutual TLS authentication.
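As a side note on the NSG point above: when API Management sits in a VNet subnet with an NSG attached, the service also needs a couple of well-known inbound rules to keep working. A rough sketch (the resource group and NSG names are placeholders, and the exact rule set depends on your API Management version and mode, so check the current VNet reference docs):

# Allow the Azure API Management control plane to reach the management endpoint
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name apim-subnet-nsg \
  --name AllowApimManagement \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes ApiManagement \
  --destination-address-prefixes VirtualNetwork \
  --destination-port-ranges 3443

# External mode only: allow client HTTPS traffic from the internet to the gateway
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name apim-subnet-nsg \
  --name AllowClientHttps \
  --priority 110 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-address-prefixes VirtualNetwork \
  --destination-port-ranges 443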
Pros:
The most secure option because the AKS cluster has no public endpoint
Simplifies cluster configuration since it has no public endpoint
Ability to hide both API Management and AKS inside the VNet using the Internal mode
Ability to control network traffic using Azure networking capabilities such as Network Security Groups (NSG)
Cons:
Increases complexity of deploying and configuring API Management to work inside the VNet
Reference
To restrict access to your applications in Azure Kubernetes Service (AKS), you can create and use an internal load balancer. An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.
You can either expose the backends on the AKS cluster through an internal Ingress, or simply use Services backed by an internal load balancer.
You can then point the API gateway's backend to the internal Ingress' private IP address or to the internal LoadBalancer Service's EXTERNAL-IP (which would also be a private IP address). These private IP addresses are accessible within the virtual network and from any connected network (e.g. Azure virtual networks connected through peering or a VNet-to-VNet gateway, or on-premises networks connected to the AKS VNet). In your case, since the API Management gateway is deployed in the same virtual network, it should be able to access these private IP addresses. If the API gateway is deployed in a different virtual network, connect it to the AKS virtual network using VNet peering or a VNet-to-VNet gateway, depending on your use case.
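As a concrete sketch (the service name, label, and ports are placeholders, not taken from your setup), an internal LoadBalancer Service on AKS looks roughly like this; its EXTERNAL-IP will be a private address from the cluster subnet, which you can then use as the backend URL in API Management:

apiVersion: v1
kind: Service
metadata:
  name: my-app                       # hypothetical backend service
  annotations:
    # Tells AKS to create an internal (private) Azure load balancer instead of a public one
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080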

Is it working now? If not, try adding that VNet and subnet in API Management. It mostly shouldn't be required: since both are in the same VNet, you can reach the ingress directly via its private IP. Also check that routing is properly configured in the ingress controller. Another option, just for testing, is to call the service directly from API Management, bypassing the ingress controller, so you can make sure no request is being blocked by an NSG or anything else.
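For a quick in-cluster check that the backend Service itself responds, before suspecting the ingress or an NSG, something like the following can help; the service name my-app and the namespace default are placeholders:

# Run a throwaway pod and call the Service directly, bypassing the ingress controller
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -v http://my-app.default.svc.cluster.local/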

Related

Can a private Kubernetes Cluster (on a VPC) expose services to the internet via load balancers and ingress?

This is going to be more of a conceptual question.
I'm fairly new to Kubernetes and VPCs, and I'm currently studying in order to take part in designing a Kubernetes Cluster on GCP (Google Cloud Platform), and my role in that would be to address our security concerns.
Recently, I've been introduced to the concept of a "Private Kubernetes Cluster", which runs on a VPC and only allows traffic from allowed agents and from inside the VPC, with the control plane being accessible via a bastion host, for instance.
The thing is, I'm not sure if doing this would mean completely air-gapping the Cluster, blocking any access from the internet outside of the VPC or if I'm still able to use this to serve public web services, such as websites and APIs, whilst using the VPC to secure the control plane.
Any insights on that? I would also appreciate some documentation and related articles.
I still haven't got to the implementation part, since I'm trying to make sure I know what I'm doing beforehand.
Edit: According to the documentation, I am able to expose some of my cluster's nodes by using Cloud NAT. But would this defeat the purpose of even having a private cluster?
The thing is, I'm not sure if doing this would mean completely air-gapping the Cluster, blocking any access from the internet outside of the VPC or if I'm still able to use this to serve public web services, such as websites and APIs, whilst using the VPC to secure the control plane.
Yes, you will be able to host your web applications, and you can expose them with a LoadBalancer even if your cluster is private.
With a public cluster, your worker nodes have external/public IPs, while in a private cluster the worker nodes have no public IPs.
You can create a Service of type LoadBalancer or use an Ingress to expose the application.
If your workloads need to reach public APIs, you can use a NAT gateway and configure your firewall rules to allow egress traffic only to the specific public API endpoints you want to access.
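To illustrate (the name and ports below are placeholders, not from your setup), a plain LoadBalancer Service in a private GKE cluster still gets a public IP from Google Cloud's load balancer, even though the nodes themselves have no external IPs:

apiVersion: v1
kind: Service
metadata:
  name: web-frontend                 # hypothetical public-facing app
spec:
  type: LoadBalancer                 # GKE provisions an external load balancer with a public IP
  selector:
    app: web-frontend
  ports:
  - port: 80
    targetPort: 8080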
Edit: According to the documentation, I am able to expose some of my cluster's nodes by using Cloud NAT. But would this defeat the purpose of even having a private cluster?
Right, the main advantage of a private GKE cluster, as I see it, is that it has no public/external IP addresses, so it can't be accessed from outside and is only reachable from within the VPC network. That helps protect the cluster from unauthorized access and also reduces the attack surface of your applications. Note that Cloud NAT only provides outbound (egress) connectivity, so it doesn't by itself expose the nodes to inbound traffic from the internet.
Refer to the GitHub repository for the Terraform configuration and other details.

Why do we need API gateway when using Kubernetes?

In a microservices environment deployed to a Kubernetes cluster, why would we use an API gateway (for example, Spring Cloud Gateway) if Kubernetes provides the same service with an Ingress?
An ingress controller is exposed as a single Kubernetes Service of type LoadBalancer. For a simple mental model, you can think of the ingress as an NGINX server that just forwards traffic to Services based on a rule set. An Ingress doesn't have much of the functionality of an API gateway: many ingress controllers don't support authentication, rate limiting, advanced application routing, security features, request/response transformation, and other add-ons/plugin options.
An API gateway can also handle simple routing, but it is mostly used when you need more flexibility, security, and configuration options. While multiple teams or projects can share a set of ingress controllers, or ingress controllers can be specialized on a per-environment basis, there are reasons you might choose to deploy a dedicated API gateway inside Kubernetes rather than leveraging the existing ingress controller. Using both an ingress controller and an API gateway inside Kubernetes gives organizations the flexibility to meet their business requirements.
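To make the comparison concrete, this is roughly all an Ingress expresses: a rule set mapping hosts and paths to Services (the host, paths, and service names below are made up). Anything beyond that, such as authentication or rate limiting, is where an API gateway or controller-specific extensions come in:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                 # hypothetical
spec:
  ingressClassName: nginx            # assumes an NGINX ingress controller is installed
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: orders             # hypothetical Service names
            port:
              number: 80
      - path: /users
        pathType: Prefix
        backend:
          service:
            name: users
            port:
              number: 80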
For accessing a database:
If the database and the cluster are both in the cloud, you can use the database's internal IP. If not, you should provide the IP of the machine where the database is hosted.
You can also refer to this Kubernetes Access External Services article.
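One common pattern for this (a sketch only; the name external-db and the address and port are placeholders) is a Service without a selector plus a manually maintained Endpoints object, so in-cluster applications can reach the database through a normal Service DNS name:

apiVersion: v1
kind: Service
metadata:
  name: external-db                  # pods connect to external-db:5432
spec:
  ports:
  - port: 5432                       # e.g. PostgreSQL
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db                  # must match the Service name
subsets:
- addresses:
  - ip: 10.0.5.12                    # placeholder: the database's internal or VM IP
  ports:
  - port: 5432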

How to deploy a Kubernetes service (type LoadBalancer) on on-prem VMs?

How do I deploy a Kubernetes service (type LoadBalancer) on on-prem VMs? When I use type=LoadBalancer, the external IP shows as "pending", but everything works fine with the same YAML when deployed on GKE. My questions are:
Do we need a load balancer if I use type=LoadBalancer on on-prem VMs?
Can I assign the LoadBalancer IP manually in the YAML?
You need to set up MetalLB.
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type LoadBalancer in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
To install run
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
For more details, see the MetalLB installation documentation.
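For the v0.9.x manifests above, MetalLB still needs an address pool before it will hand out IPs (newer releases use CRDs instead of a ConfigMap). On first install, the v0.9.3 manifests also expect a memberlist secret:

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

A minimal layer2 sketch with a placeholder address range, plus a Service pinning a specific IP via loadBalancerIP (which answers the second question):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250  # placeholder range on your on-prem network
---
apiVersion: v1
kind: Service
metadata:
  name: my-app                       # hypothetical
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.241      # optional: request a specific address from the pool
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080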
It might be helpful to check the Banzai Cloud Pipeline Kubernetes Engine (PKE) that is "a simple, secure and powerful CNCF-certified Kubernetes distribution" platform. It was designed to work on any cloud, VM or on bare metal nodes to provide a scalable and secure foundation for private clouds. PKE is cloud-aware and includes an ever-increasing number of cloud and platform integrations.
When I use type=LoadBalancer, the external IP shows as "pending", but everything works fine with the same YAML when deployed on GKE.
If you create a LoadBalancer service (for example, to expose your own TCP-based service, or to install an ingress controller), the cloud provider integration will take care of creating the needed cloud resources and writing back the endpoint where your service will be available. If you don't have a cloud provider integration or a controller for this purpose, your Service resource will remain in the Pending state.
In the case of Kubernetes, LoadBalancer services are the easiest and most common way to expose a service (redundant or not) to the world outside of the cluster or the mesh: to other services, to internal users, or to the internet.
Load balancing as a concept can happen on different levels of the OSI network model, mainly on L4 (transport layer, for example TCP) and L7 (application layer, for example HTTP). In Kubernetes, Services are an abstraction for L4, while Ingresses are a generic solution for L7 routing.
You need to set up MetalLB.
MetalLB is one of the most popular on-prem replacements for LoadBalancer cloud integrations. The whole solution runs inside the Kubernetes cluster.
The main component is an in-cluster Kubernetes controller which watches LB service resources, and based on the configuration supplied in a ConfigMap, allocates and writes back IP addresses from a dedicated pool for new services. It maintains a leader node for each service, and depending on the working mode, advertises it via BGP or ARP (sending out unsolicited ARP packets in case of failovers).
MetalLB can operate in two ways: either all requests are forwarded to pods on the leader node, or they are distributed to all nodes via kube-proxy.
Layer 7 (usually HTTP/HTTPS) load balancer appliances like F5 BIG-IP, or HAProxy- and NGINX-based solutions, may be integrated with an applicable ingress controller. If you have one of these, you won't need a LoadBalancer implementation in most cases.
Hope that sheds some light on a "LoadBalancer on bare metal hosts" question.

Securing an exposed load balancer service in kubernetes

I have a workload deployed in kubernetes. I have exposed it using a load balancer service because I need an external IP to communicate with the workload.
The external IP is now publicly accessible. How do I secure it so that only I will be able to access it from an external application?
Kubernetes doesn't come with out-of-the-box authentication for externally exposed services. If you have more services and security is important to you, I would take a look at the Istio project. You can configure authentication for your services in a declarative way using an authentication policy:
https://istio.io/docs/tasks/security/authn-policy/#end-user-authentication
Using Istio you can secure not only incoming connections, but also outgoing and internal traffic.
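On newer Istio releases, the equivalent declarative setup uses RequestAuthentication plus AuthorizationPolicy; the namespace, labels, and issuer below are placeholders, shown as a sketch rather than a drop-in config:

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: my-namespace            # hypothetical
spec:
  selector:
    matchLabels:
      app: my-app                    # hypothetical workload label
  jwtRules:
  - issuer: "https://issuer.example.com"                          # placeholder issuer
    jwksUri: "https://issuer.example.com/.well-known/jwks.json"   # placeholder JWKS endpoint
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]     # only requests carrying a valid JWT are allowed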
If you are new to the service mesh concept and don't know how to start, you can check out the Kyma project, where Istio is already configured and you can apply token validation with one click in the UI or a single kubectl command. Check the example:
https://github.com/kyma-project/examples/tree/master/gateway

Different Firewall Rules for Kubernetes Cluster

I am running some internal services and also some customer facing services in one K8s cluster. The internal ones should only be accessible from some specific ips and the customer facing services should be accessible worldwide.
So I created my Ingresses and an nginx Ingress Controller and some K8s LoadBalancer Services with the proper ip filters.
Now I see that firewall rules are created in GCP behind the scenes. But they conflict, and the "customer facing" firewall rules overrule the "internal" ones, so everything in my K8s cluster is visible worldwide.
The use case doesn't sound that exotic to me - do you have an idea how to get some parts of a K8s cluster protected by firewall rules and other parts accessible everywhere?
As surprising as it is, the L7 (http/https) load balancer in GCP created by a Kubernetes Ingress object has no IP whitelisting capabilities by default, so what you described is working as intended. You can filter on your end using the X-Forwarded-For header (see Target Proxies under Setting Up HTTP(S) Load Balancing).
Whitelisting will be available through Cloud Armor, which is in private beta at the moment.
To make this situation slightly more complicated: the L4 (TCP/SSL) load balancer in GCP created by a Kubernetes LoadBalancer object (so, not an Ingress) does have IP filtering capability. You simply set .spec.loadBalancerSourceRanges on the Service for that. Of course, a Service will not give you URL/host based routing, but you can achieve that by deploying an ingress controller like nginx-ingress. If you go this route, you can still create Ingresses for your internal services; you just need to annotate them so the new ingress controller picks them up. This is a fairly standard solution, and is actually cheaper than creating L7s for each of your internal services (you will only have to pay for one forwarding rule for all of your internal services).
(By "internal services" above I meant services you need to be able to access from outside of the itself cluster but only from specific IPs, say a VPN, office, etc. For services you only need to access from inside the cluster you should use type: ClusterIP)