Securing an exposed LoadBalancer service in Kubernetes

I have a workload deployed in Kubernetes, and I have exposed it using a LoadBalancer service because I need an external IP to communicate with it.
The external IP is now publicly accessible. How do I secure it so that only I can access it from an external application?

Kubernetes doesn't come with out-of-the-box authentication for externally exposed services. If you have more services and security is important to you, I would take a look at the Istio project. You can configure authentication for your services in a declarative way using an authentication policy:
https://istio.io/docs/tasks/security/authn-policy/#end-user-authentication
Using Istio you can secure not only incoming connections, but also outgoing and internal traffic.
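For illustration, here is a minimal sketch of such a declarative policy on a recent Istio release, where end-user authentication is expressed with RequestAuthentication and AuthorizationPolicy resources (the workload label, issuer, and JWKS URL below are placeholders, not values from this question):
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: require-jwt
spec:
  selector:
    matchLabels:
      app: my-workload                     # placeholder label of the exposed workload
  jwtRules:
  - issuer: "https://issuer.example.com"   # placeholder token issuer
    jwksUri: "https://issuer.example.com/.well-known/jwks.json"   # placeholder JWKS endpoint
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
spec:
  selector:
    matchLabels:
      app: my-workload                     # same placeholder label
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]           # only requests carrying a valid JWT are allowed
Requests without a valid token are then rejected at the Istio gateway or sidecar before they reach the workload.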
If you are new to the service mesh concept and don't know where to start, you can check the Kyma project, where Istio is already configured and you can apply token validation with one click in the UI or a single kubectl command. Check the example:
https://github.com/kyma-project/examples/tree/master/gateway

Related

Google Cloud Armor for a gRPC web service in GKE - is there a working config?

I have a gRPC-based web service that runs in Google Kubernetes Engine, and I have had no luck applying Cloud Armor to it.
Currently, this web service is exposed via a Kubernetes Service of type LoadBalancer, which is bound to an external TCP/UDP network load balancer in Google Cloud, and that all works fine.
The issue is that Cloud Armor cannot be applied to an external TCP/UDP network load balancer.
So I've tried exposing the web service via Kubernetes Services of type NodePort and ClusterIP to be able to bind to an Ingress that will use a load balancer that is supported by Cloud Armor (global external HTTP(S), global external HTTP(S) classic, external TCP proxy, or external SSL proxy).
But I can't seem to find a configuration that actually handles the gRPC traffic correctly and has a working health check.
Has anyone else been able to get a gRPC-based web service running in GKE protected with Cloud Armor?
More background:
The web service is Go-based and has two features to facilitate Kubernetes health checks. First, it supports the standard gRPC health protocol, and the container it is built into also includes the grpc-health-probe executable (this looks to be working correctly for the pod liveness/readiness checks). Second, it also serves an HTTP/1.1 200 OK on the '/' route on the same port on which it listens for the HTTP/2 gRPC traffic.
The web service runs with TLS using a CA-signed certificate and a 4096-bit key, and currently terminates the TLS client traffic itself. But I am open to having the TLS traffic terminated at the edge/load balancer, if it can be made to work for gRPC calls.
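For reference, the liveness/readiness setup described above would typically be wired into the container spec along these lines (a sketch only; the probe binary path, port, and flags are assumptions based on the description, the TLS flags being needed because the server terminates TLS itself):
livenessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:8443", "-tls", "-tls-no-verify"]   # placeholder path and port
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:8443", "-tls", "-tls-no-verify"]
  initialDelaySeconds: 5
  periodSeconds: 10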
Cloud Armor for TCP/SSL proxy load balancers is available, but there are some limitations:
Users can reuse existing Cloud Armor security policies (backend security policies) or create new ones.
Only security policies with the following rule properties are supported for TCP/SSL proxy backend services:
Match Conditions: IP, Geo, ASN
Action: Allow, deny, throttle, rate-based-ban
Availability and limitations:
Security policies can be created/configured in the Console or via API/CLI
New or existing security policies can be attached to backend services fronted by TCP/SSL proxies only via the API/CLI.
To enable Cloud Logging events, use the CLI/API to enable TCP/SSL proxy logging on the relevant backend service, as described in the load balancer documentation.
There is a Network Load Balancer option that will be coming to market and is currently pre-GA. It is expected to be generally available sometime in H1 2023.

Connecting to many Kubernetes services from a local machine

From my local machine I would like to be able to port forward to many services in a cluster.
For example, I have services named serviceA-type1, serviceA-type2, serviceA-type3, etc. None of these services are accessible externally, but they can be accessed using the kubectl port-forward command. However, there are so many services that port-forwarding to each one is infeasible.
Is it possible to create some kind of proxy service in Kubernetes that would allow me to connect to any of the serviceA-typeN services by specifying them in a URL? I would like to be able to port-forward to the proxy service from my local machine, and it would then forward the requests to the serviceA-typeN services.
So for example, if I have set up a port forward on 8080 to this proxy, then the URL to access the serviceA-type1 service might look like:
http://localhost:8080/serviceA-type1/path/to/endpoint?a=1
I could maybe create a small application that would do this, but does Kubernetes provide this functionality already?
The kubectl proxy command provides this functionality.
Read more here: https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
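For example (a sketch, assuming the services live in the default namespace and expose port 80; adjust both to your cluster), after running kubectl proxy --port=8080 the URL for serviceA-type1 would look like:
http://localhost:8080/api/v1/namespaces/default/services/serviceA-type1:80/proxy/path/to/endpoint?a=1
Note that this path goes through the Kubernetes API server, so it is intended for debugging and ad-hoc access rather than production traffic.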
Another good option is to use Ingress to achieve this.
Read more about what Ingress is.
The main concepts are:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In Kubernetes there are four types of Services, and the default type is ClusterIP, which means the Service is only reachable within the cluster. Ingress exposes your Services outside the cluster, so it acts as the entry point into your cluster.
If you plan to move to the cloud at some point, an Ingress-based setup will be compatible with managed cloud load balancers, which saves time and makes it easier to migrate from a local environment.
To start with Ingress you first need to install an Ingress controller.
There are different Ingress controllers you can use.
You can start with ingress-nginx, the most common one, which is maintained by the Kubernetes community.
If you're using minikube, it can be enabled as an addon - see here.
Once you have installed an Ingress controller in your cluster, you need to create Ingress rules for it to do anything. A simple fanout, with two services and path-based routing to them, is a good example and is sketched below.
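As a minimal sketch of a simple fanout Ingress for two of the services from the question (assuming an ingress-nginx controller, plain HTTP backends on port 80, and lowercase Service names, since real Service names must be lowercase DNS labels):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: servicea-fanout
spec:
  ingressClassName: nginx          # assumes the ingress-nginx controller is installed
  rules:
  - http:
      paths:
      - path: /servicea-type1      # requests under this prefix go to servicea-type1
        pathType: Prefix
        backend:
          service:
            name: servicea-type1
            port:
              number: 80
      - path: /servicea-type2
        pathType: Prefix
        backend:
          service:
            name: servicea-type2
            port:
              number: 80
Note that with plain Prefix routing the backend still sees the /servicea-type1 prefix; ingress-nginx has a rewrite-target annotation if the backends expect paths without it.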

Use AKS services with Azure API Management

I have set up my application to be served by a Kubernetes NGINX ingress in AKS. Today, while experimenting with Azure API Management, I tried to set it up so that all the traffic to the ingress controller would go through API Management. I pointed its backend service to the current public address of the ingress controller, but I was wondering: if I make the ingress controller private, or remove it altogether and rely on the Kubernetes Services instead, how could API Management access it, and how would I define the backend service in API Management? By the way, while provisioning the API Management instance, I added a new subnet to the existing virtual network of the AKS instance, so they are in the same network.
There are two modes of deploying API Management into a VNet – External and Internal.
If API consumers do not reside in the cluster VNet, the External mode should be used. In this mode, the API Management gateway is injected into the cluster VNet but is accessible from the public internet via an external load balancer. It helps to hide the cluster completely while still allowing external clients to consume the microservices. Additionally, you can use Azure networking capabilities such as Network Security Groups (NSG) to restrict network traffic.
If all API consumers reside within the cluster VNet, then the Internal mode could be used. In this mode, the API Management gateway is injected into the cluster VNet and is accessible only from within this VNet via an internal load balancer. There is no way to reach the API Management gateway or the AKS cluster from the public internet.
In both cases, the AKS cluster is not publicly visible and the ingress controller may not be necessary. Depending on your scenario and configuration, authentication might still be required between API Management and your microservices; for instance, if a service mesh is adopted, it may require mutual TLS authentication.
Pros:
The most secure option because the AKS cluster has no public endpoint
Simplifies cluster configuration since it has no public endpoint
Ability to hide both API Management and AKS inside the VNet using the Internal mode
Ability to control network traffic using Azure networking capabilities such as Network Security Groups (NSG)
Cons:
Increases complexity of deploying and configuring API Management to work inside the VNet
Reference
To restrict access to your applications in Azure Kubernetes Service (AKS), you can create and use an internal load balancer. An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.
You can either expose the backends on the AKS cluster through an internal Ingress, or simply use Services of type LoadBalancer backed by an internal load balancer.
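As a rough sketch, a Service exposed through an internal load balancer in AKS looks like this (the name, label, and ports are placeholders; the annotation is the documented way to request an internal load balancer):
apiVersion: v1
kind: Service
metadata:
  name: my-backend-internal                            # placeholder name
  annotations:
    # create the Azure load balancer inside the VNet instead of with a public IP
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-backend                                    # placeholder label of the backend pods
  ports:
  - port: 80
    targetPort: 8080                                   # placeholder container port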
You can then point the API gateway's backend to the internal Ingress' private IP address or to the internal load balancer Service's EXTERNAL-IP (which would also be a private IP address). These private IP addresses are accessible within the virtual network and any connected network (i.e. Azure virtual networks connected through peering or a VNet-to-VNet gateway, or on-premises networks connected to the AKS VNet). In your case, if the API gateway is deployed in the same virtual network, it should be able to access these private IP addresses. If the API gateway is deployed in a different virtual network, connect it to the AKS virtual network using VNet peering or a VNet-to-VNet gateway, depending on your use case.
Is it working now? If not, please try adding that VNet and subnet in APIM. Usually this isn't required, because both of them are in the same VNet and the private IP can be reached directly. Please check that the routing is properly configured in the ingress controller. Another option, just for testing, is to call the service directly from APIM, bypassing the ingress controller, so that we can make sure no request is being blocked by an NSG or anything else.

Use an existing microservice architecture with Kubernetes

I have an existing microservice architecture that uses Netflix Eureka and Zuul services.
I've deployed a pod that successfully registers on the discovery server, but when I hit the API it times out. My guess is that the container IP is what gets registered on the discovery server, which makes it unreachable.
Is there a way to either map the correct address or redirect the call to the proper URL? I'm looking for an easy way, as this needs to be done on multiple services.
I think you should rethink your design the Kubernetes way! Your Eureka (service discovery) and Zuul (API gateway / load balancer) servers are extra services that you don't really need on the Kubernetes platform.
For service discovery and load balancing, you can use Services in Kubernetes.
From Kubernetes documentation:
An abstract way to expose an application running on a set of Pods as a network service. With Kubernetes, you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
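As a minimal sketch (the name, label, and ports are placeholders), a Service gives the pods behind it a stable cluster DNS name that other workloads call instead of a Eureka-registered container IP:
apiVersion: v1
kind: Service
metadata:
  name: orders                 # placeholder; resolvable in-cluster as orders.<namespace>.svc.cluster.local
spec:
  selector:
    app: orders                # placeholder label on the backing pods
  ports:
  - port: 80                   # port other services call
    targetPort: 8080           # placeholder container port
Other pods can then simply call http://orders (or http://orders.<namespace>), and the Service load-balances across the matching pods.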
And for the API gateway, you can think about Ingress in Kubernetes.
There are different Ingress controller implementations for Kubernetes; I'm using the Ambassador API gateway implementation.
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/

How to access the Kubernetes external IP over HTTPS

I've deployed a Django app on Azure Kubernetes Service using a LoadBalancer service. So far, by accessing the external IP of the load balancer, I'm able to reach my application, but I need to expose the app over HTTPS.
I'm new to Kubernetes and unable to find any article which provides these steps. So please help me with the steps/action I need to perform to make this work.
You need to expose your application using an Ingress. Here is the doc on how to do it in Azure Kubernetes Service.
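As a minimal sketch of what that Ingress can look like (assuming an ingress controller such as ingress-nginx is installed in the cluster; the hostname, Secret name, and Service name/port are placeholders, and the TLS Secret would hold your certificate and key, e.g. created with kubectl create secret tls):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: django-app
  annotations:
    # ingress-nginx specific: redirect plain HTTP to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com           # placeholder hostname pointing at the ingress controller's IP
    secretName: app-tls         # placeholder Secret containing tls.crt and tls.key
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: django-app    # placeholder: the existing Django Service
            port:
              number: 80
Once traffic comes in through the Ingress over HTTPS, the Django Service itself no longer needs to be exposed directly as a LoadBalancer.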