Is it possible to 'hide' a Service Fabric application using internal load balancing? I couldn't find any guides on how to do that.
I want all public endpoints, including ports 19000 and 19080, to be accessible only through a VPN.
Related
I have a gRPC-based web service that runs in Google Kubernetes Engine, and I have had no luck applying Cloud Armor to it.
Currently, this web service is exposed via a Kubernetes Service of type LoadBalancer, which is bound to an external TCP/UDP Network Load Balancer in Google Cloud, and that all works fine.
The issue is that Cloud Armor cannot be applied to an external TCP/UDP Network Load Balancer.
So I've tried exposing the web service via Kubernetes Services of type NodePort and ClusterIP to be able to bind to an Ingress that uses a load balancer supported by Cloud Armor (Global External HTTP(S), Global External HTTP(S) (classic), External TCP Proxy, or External SSL Proxy).
But I can't seem to find a configuration that actually handles the gRPC traffic correctly and has a working health check.
Has anyone else been able to get a gRPC-based web service running in GKE and protected with Cloud Armor?
More background:
The web service is Go-based and has two features to facilitate Kubernetes health checks. First, it supports the standard gRPC health checking protocol, and the container it is built into also includes the grpc-health-probe executable (this looks to be working correctly for the pod liveness/readiness checks). Second, it serves an HTTP/1 200 OK on the '/' route on the same port on which it listens for the HTTP/2 gRPC traffic.
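For reference, the probe wiring in the Deployment spec looks roughly like this; the port number and TLS flags are illustrative, not the exact values from my manifest:

```yaml
# Illustrative probe wiring for the container (port and TLS flags are placeholders).
livenessProbe:
  exec:
    # grpc-health-probe ships inside the image; it speaks the standard
    # gRPC health checking protocol against the service's TLS port.
    command: ["/bin/grpc_health_probe", "-addr=:8443", "-tls", "-tls-no-verify"]
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  # The same port also answers a plain GET / with 200 OK, so an
  # HTTP(S)-style check works as well.
  httpGet:
    path: /
    port: 8443
    scheme: HTTPS
  initialDelaySeconds: 5
  periodSeconds: 10
```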
The web service runs with TLS using a CA-signed certificate and a 4096-bit key, and it currently terminates the TLS client traffic itself. But I am open to having TLS terminated at the edge/load balancer, if that can be made to work for gRPC calls.
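For reference, this is roughly the shape of the Service plus BackendConfig I have been experimenting with for the Ingress path. All names, ports, and the security policy name are placeholders, and the HTTP2 app-protocol annotation together with the health check is exactly the part I haven't been able to get working end to end:

```yaml
# Hypothetical names and ports; a sketch of attaching a Cloud Armor policy
# to the Ingress backend via a BackendConfig, with HTTP/2 to the pods.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: grpc-backendconfig
spec:
  securityPolicy:
    name: my-cloud-armor-policy      # existing backend security policy
  healthCheck:
    type: HTTPS                      # aimed at the HTTP/1 "/" handler on the TLS port
    requestPath: /
    port: 8443
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-service
  annotations:
    cloud.google.com/neg: '{"ingress": true}'                  # container-native LB
    cloud.google.com/backend-config: '{"default": "grpc-backendconfig"}'
    cloud.google.com/app-protocols: '{"grpc-port": "HTTP2"}'   # LB-to-pod protocol
spec:
  type: ClusterIP
  selector:
    app: grpc-server
  ports:
  - name: grpc-port
    port: 443
    targetPort: 8443
```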
Cloud Armor support for TCP/SSL proxy load balancers is available, but there are some limitations:
Users can reuse existing Cloud Armor security policies (Backend security policies) or create new ones.
Only security policies with the following rule properties are supported for TCP/SSL proxy backend services:
Match Conditions: IP, Geo, ASN
Action: Allow, deny, throttle, rate-based-ban
Availability and limitations:
Security policies can be created/configured in the Console or via API/CLI
New or existing security policies can be attached to backend services fronted by TCP/SSL Proxies only via API/CLI.
To enable Cloud Logging events, leverage CLI/API to enable TCP/SSL Proxy logging on the relevant backend service as described in the Load Balancer documentation
There is also a Network Load Balancer option that is currently pre-GA; it is expected to become generally available sometime in H1 2023.
I have set up my application to be served by a Kubernetes NGINX ingress controller in AKS. Today, while experimenting with Azure API Management, I tried to set it up so that all traffic to the ingress controller would go through API Management. I pointed its backend service to the current public address of the ingress controller, but I was wondering: once I make the ingress controller private, or remove it altogether and rely on Kubernetes Services instead, how could API Management access it, and how would I define the backend service in API Management? By the way, while provisioning the API Management instance, I added a new subnet to the existing virtual network of the AKS instance, so they are in the same network.
There are two modes of deploying API Management into a VNet – External and Internal.
If API consumers do not reside in the cluster VNet, the External mode should be used. In this mode, the API Management gateway is injected into the cluster VNet but is accessible from the public internet via an external load balancer. It helps to hide the cluster completely while still allowing external clients to consume the microservices. Additionally, you can use Azure networking capabilities such as Network Security Groups (NSGs) to restrict network traffic.
If all API consumers reside within the cluster VNet, then the Internal mode could be used. In this mode, the API Management gateway is injected into the cluster VNet and is accessible only from within this VNet via an internal load balancer. There is no way to reach the API Management gateway or the AKS cluster from the public internet.
In both cases, the AKS cluster is not publicly visible. The Ingress Controller may not be necessary. Depending on your scenario and configuration, authentication might still be required between API Management and your microservices. For instance, if a Service Mesh is adopted, it always requires mutual TLS authentication.
Pros:
The most secure option because the AKS cluster has no public endpoint
Simplifies cluster configuration since it has no public endpoint
Ability to hide both API Management and AKS inside the VNet using the Internal mode
Ability to control network traffic using Azure networking capabilities such as Network Security Groups (NSG)
Cons:
Increases complexity of deploying and configuring API Management to work inside the VNet
Reference
To restrict access to your applications in Azure Kubernetes Service (AKS), you can create and use an internal load balancer. An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.
You can either expose the backends on the AKS cluster through an internal Ingress, or simply use Services of type LoadBalancer annotated as internal.
You can then point the API gateway's backend to the internal Ingress' private IP address or to the internal load balancer Service's EXTERNAL-IP (which will also be a private IP address). These private IP addresses are accessible within the virtual network and any connected network (i.e. Azure virtual networks connected through peering or a VNet-to-VNet gateway, or on-premises networks connected to the AKS VNet). In your case, if the API gateway is deployed in the same virtual network, it should be able to reach these private IP addresses. If the API gateway is deployed in a different virtual network, connect it to the AKS virtual network using VNet peering or a VNet-to-VNet gateway, depending on your use case.
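For example, a Service of type LoadBalancer can be made internal with the Azure annotation below (the Service name, selector, and ports are placeholders); the EXTERNAL-IP it reports will be a private IP from the cluster's VNet:

```yaml
# Sketch of an internal load balancer Service in AKS (placeholder names and ports).
apiVersion: v1
kind: Service
metadata:
  name: my-backend-internal
  annotations:
    # Instructs AKS to provision the load balancer with a private frontend IP.
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-backend
  ports:
  - port: 80
    targetPort: 8080
```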
Is it working now? If not, try adding that VNet and subnet in APIM. Usually this shouldn't be required: since both are in the same VNet, APIM can reach the ingress directly via its private IP. Also check that routing is properly configured in the ingress controller. Another option, just for testing, is to call the service directly from APIM, bypassing the ingress controller, so you can confirm that no request is being blocked by an NSG or anything else.
I've deployed a Django app on Azure Kubernetes Service using a LoadBalancer Service. So far I'm able to access my application via the external IP of the load balancer, but I need to expose the app over HTTPS.
I'm new to Kubernetes and unable to find any article that provides these steps, so please help me with the steps/actions I need to perform to make this work.
You need to expose your application using an Ingress; see the AKS documentation on deploying an ingress controller with TLS for the details.
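A minimal sketch of an Ingress that terminates HTTPS in front of the Django Service, assuming the NGINX ingress controller is installed and a TLS certificate is stored in a Secret (the host name, Service name, and Secret name are placeholders):

```yaml
# Sketch: terminate HTTPS at an NGINX Ingress in front of the app
# (host, Service name, and Secret name are placeholders).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: django-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls            # kubernetes.io/tls Secret holding cert and key
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: django-service     # the app's Service, switched to ClusterIP
            port:
              number: 80
```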
I have a workload deployed in Kubernetes. I have exposed it using a LoadBalancer Service because I need an external IP to communicate with the workload.
The external IP is now publicly accessible. How do I secure it so that only I will be able to access it from an external application?
Kubernetes doesn't come with out-of-the-box authentication for external services. If you have more services and security is important to you, I would take a look at the Istio project. You can configure authentication for your services in a declarative way using an authentication policy:
https://istio.io/docs/tasks/security/authn-policy/#end-user-authentication
Using Istio, you can secure not only incoming connections but also outgoing and internal traffic.
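With current Istio versions this is done with RequestAuthentication plus AuthorizationPolicy rather than the older authentication Policy resource the linked page describes; a minimal sketch with placeholder issuer, JWKS URI, and labels might look like this:

```yaml
# Sketch: require a valid JWT for one workload (issuer, jwksUri, and labels are placeholders).
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-workload
  jwtRules:
  - issuer: "https://accounts.example.com"
    jwksUri: "https://accounts.example.com/.well-known/jwks.json"
---
# Allow only requests that carry a valid token for the same workload.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-workload
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]
```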
If you are new to the service mesh concept and don't know where to start, you can check the kyma-project, where Istio is already configured and you can apply token validation with one click in the UI or a single kubectl command. Check the example:
https://github.com/kyma-project/examples/tree/master/gateway
I'm evaluating Service Fabric and Docker Swarm for container orchestration, and I can see Service Fabric has an edge because of its reverse proxy implementation, which runs on all nodes in the cluster. The problem is that, based on the cluster manifest, only one port can be used as the reverse proxy port, so I'm not fully understanding how this can be utilized if you have multiple Windows containers running, each on its own port. I need to use port-to-port mapping only (with no HTTP rewrite), so ultimately I want a one-to-one reverse port mapping to each individual Windows container.
Is it possible to accomplish this using Service Fabric?
To be clear: I have www.app1.com and www.app2.com hosted in two different containers, and they don't need to talk to each other. If I deploy those to Service Fabric, how do I use the reverse proxy with a single published external port to reach those containers externally?
At this point in time (version 5.6 of Service Fabric), the reverse proxy does service resolution using the Service Fabric Naming Service and provides the URI to get to your service. The URL on which the reverse proxy exposes your service is specific to Service Fabric, e.g. http://clusterFQDN/appName/serviceName:port.
You can use the DNS Service to get a container IP (the IP of a host node in the cluster running your container). However, you can only find the port by doing a DNS SRV record lookup.
Current best options for exposing containers in a Service Fabric cluster are:
Azure Load Balancer: if you have a fixed host port for your container, the Azure Load Balancer will be able to monitor where the container lives and forward requests to only those nodes. You can add additional public IPs to your load balancer and use one per container. This cannot be used with dynamic host ports in the cluster.
Azure API Management can resolve Service Fabric services by integrating with the Service Fabric Naming Service.
Create your own HTTP Gateway as a Reliable Service: https://github.com/weidazhao/Hosting or https://github.com/c3-ls/ServiceFabric-Http
Running Nginx as a service in the cluster: Based on this prototype you can run and configure Nginx in Service Fabric: https://github.com/knom/ServiceFabric-Nginx
Yes, you can use the reverse proxy with multiple containers. The idea is simple:
Configure port-to-host mapping so your host knows which port your application is listening on.
Configure a container endpoint so your container registers an endpoint with Service Fabric. You can choose the port for this endpoint; it will be registered with the Naming Service and be available to the reverse proxy.
Communication between containers can then be done through the reverse proxy using the service name and the port you specified. If you didn't specify a port number, Service Fabric will assign one for you, and you can get it from an environment variable.
The Service Fabric team has excellent documentation about this here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-container-linux