mTLS between two Kubernetes clusters

I'm trying to get mTLS between two applications in two Kubernetes clusters without doing it the way Istio does (with its ingress gateway), and I was wondering if the following would work (for Istio, for Linkerd, for Consul...).
Let's say we have a k8s cluster A with an app A.A. and a cluster B with an app B.B. and I want them to communicate with mTLS.
Cluster A has a Let's Encrypt cert for its nginx ingress controller, and a mesh (whichever) for its application.
Cluster B has a self-signed cert from our root CA.
The service meshes of clusters A and B have different certificates, both signed by our root CA.
Traffic goes from the internet to Cluster A ingress controller (HTTPS), from there to app A.A.
After traffic gets to app A.A., this app wants to talk to app B.B.
Apps A.A. and B.B. have endpoints exposed via ingress (using their ingress controllers).
TLS is terminated at those endpoints, and the certificates are wildcards.
Do you think the mTLS will work in this situation?

Basically, this blog from Portshift answers your question.
The answer depends on how your clusters are built, because
Istio offers a few options to deploy a service mesh across multiple Kubernetes clusters; more about that here.
So, if you have Single Mesh deployment
You can deploy a single service mesh (control-plane) over a fully connected multi-cluster network, and all workloads can reach each other directly without an Istio gateway, regardless of the cluster on which they are running.
BUT
If you have Multi Mesh Deployment
With a multi-mesh deployment you have a greater degree of isolation and availability, but it increases the set-up complexity. Meshes that are otherwise independent are loosely coupled together using ServiceEntries and Ingress Gateways, and use a common root CA as a base for secure communication. From a networking standpoint, the only requirement is that the ingress gateways be reachable from one another. Every service in a given mesh that needs to be accessed by a service in a different mesh requires a ServiceEntry configuration in the remote mesh.
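To make that concrete, a minimal ServiceEntry sketch registering app B.B's externally exposed endpoint in cluster A's mesh could look roughly like this (hostnames and addresses are made up for illustration):

```yaml
# Hypothetical sketch: register B.B (exposed by cluster B's ingress) in cluster A's mesh
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: bb-in-cluster-b
spec:
  hosts:
  - b-b.cluster-b.example.com              # illustrative hostname for app B.B
  location: MESH_EXTERNAL                  # the workload lives outside cluster A's mesh
  resolution: DNS
  ports:
  - number: 443
    name: tls
    protocol: TLS
  endpoints:
  - address: ingress.cluster-b.example.com # cluster B's ingress/gateway address (assumption)
    ports:
      tls: 443
```

Sidecars in cluster A can then originate (m)TLS to that host; the trust still hinges on both sides chaining to the common root CA.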
In multi-mesh deployments security can become complicated as the environment grows and diversifies. There are security challenges in authenticating and authorizing services between the clusters. The local Mixer (service policies and telemetry) needs to be updated with the attributes of the services in the neighbouring clusters. Otherwise, it will not be able to authorize these services when they reach its cluster. To achieve this, each Mixer needs to be aware of the workload identities, and their attributes, in neighbouring clusters. Each Citadel needs to be updated with the certificates of neighbouring clusters, to allow mTLS connections between clusters.
Federation of granular workloads identities (mTLS certificates) and service attributes across multi-mesh control-planes can be done in the following ways:
Kubernetes Ingress: exposing HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress can terminate SSL / TLS, and offer name based virtual hosting. Yet, it requires an Ingress controller for fulfilling the Ingress rules
Service-mesh gateway: The Istio service mesh offers a different configuration model, Istio Gateway. A gateway allows Istio features such as monitoring and route rules to be applied to traffic entering the cluster. An ingress gateway describes a load balancer operating at the edge of the mesh that receives incoming HTTP/TCP connections. It configures exposed ports, protocols, etc. Traffic routing for ingress traffic is configured instead using Istio routing rules, exactly the same way as for internal service requests.
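For reference, a minimal Istio Gateway sketch terminating mutual TLS at the mesh edge might look like the following (host and secret names are illustrative, not from the question):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bb-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway              # Istio's default ingress gateway deployment
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: MUTUAL                     # require and verify client certificates (mTLS)
      credentialName: bb-gateway-cert  # secret with the server cert/key and the root CA cert
    hosts:
    - "b-b.example.com"                # illustrative hostname
```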
Do you think the mTLS will work in this situation?
Based on the above information:
If you have Single Mesh Deployment
It should be possible without any problems.
If you have Multi Mesh Deployment
It should work, but since you don't want to use the Istio gateway, the only option is a Kubernetes Ingress; a rough sketch of what that could look like follows.
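All names and hostnames below are assumptions, and the annotations are specific to ingress-nginx, but exposing B.B with TLS (and optionally client-certificate verification) through a plain Kubernetes Ingress could look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bb-ingress
  annotations:
    # ingress-nginx can ask clients for a certificate signed by your root CA:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/our-root-ca"  # secret holding ca.crt (assumption)
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - b-b.example.com                  # illustrative hostname
    secretName: bb-wildcard-tls        # the wildcard certificate mentioned in the question
  rules:
  - host: b-b.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: bb-svc               # hypothetical Service in front of app B.B
            port:
              number: 443
```

Note that the client (A.A or its sidecar) then has to present a certificate chained to the root CA when calling that endpoint, which is the part the mesh normally handles for you inside a single cluster.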
I hope this answers your question. Let me know if you have any more questions.

You want Consul's Mesh Gateways. They provide mTLS-secured service-to-service connectivity between federated Consul clusters deployed in different data centers, Kubernetes clusters, or runtime environments.

Related

Differences between Kubernetes Layer 7 and Layer 4 Cloud Load Balancers

My understanding is that if you deploy a Kubernetes service of type 'LoadBalancer' then the Kubernetes cloud controller will automatically provision a Layer 4 load balancer in the cloud you're using. So this would imply that any Kubernetes service of type 'LoadBalancer' always maps to a Layer 4 cloud load balancer, correct?
However, my understanding of the Kubernetes Ingress is that once you deploy your Ingress controllers you also need to provision a service of type 'LoadBalancer' to route traffic to the Ingress controller pods. But this time, since an Ingress is involved, the load balancer will be provisioned as a Layer 7 load balancer and that Layer 7 load balancer sits in front of your Kubernetes cluster and routes traffic to your Ingress controllers.
So it looks like the Kubernetes cloud controller determines whether to provision a Layer 7 or Layer 4 load balancer based on whether an Ingress is present or not. Is this correct?
Ingresses & Ingress-Controller
A Kubernetes service is by default exclusive to the cluster.
Only applications running on the cluster can access them because of this.
An ingress in Kubernetes allows us to direct traffic from outside the cluster to one or more services there.
For all incoming traffic, the ingress typically serves as a single point of entry.
An ingress is assigned a public IP address (provisioned by your cloud provider), making it reachable from outside the cluster.
It then directs all of its traffic to the proper service using a set of rules; however, most ingress controllers serve traffic directly to the pods rather than through the service (by constantly watching the Endpoints object).
When creating an ingress, there are a few things to consider.
They are primarily designed to manage web traffic (HTTP or HTTPS).
Although it is possible, using an ingress with other kinds of protocols usually requires additional configuration. Most importantly, the ingress object doesn't actually accomplish anything on its own. Therefore, we must have an ingress controller on hand for an ingress to actually function. Most cloud platforms provide their own ingress controllers, but there are also plenty of open-source options to choose from.
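As a minimal sketch (hostname and service name are placeholders), an Ingress that routes HTTP traffic for one host to a backend Service looks like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx            # only works if a matching ingress controller is installed
  rules:
  - host: app.example.com            # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc            # placeholder backend Service
            port:
              number: 80
```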
LoadBalancer
Ingresses and LoadBalancers in Kubernetes overlap quite a bit.
This is due to the fact that they are primarily employed to expose services to the internet.
LoadBalancers, however, differ from ingresses in several ways: a load balancer is merely an addition to a Service, not a separate object like an Ingress.
For this to work, the cluster must be running on a provider that supports external load balancers. All of the major cloud providers support external load balancers using their own resource types:
AWS uses a Network Load Balancer
GKE also uses a Network Load Balancer
Azure uses a Public Load Balancer
Load balancers can only route to a single service because they are defined per service. This differs from an ingress, which can route to numerous services within the cluster. As you've noted, a LoadBalancer operates at Layer 4 whereas most ingress controllers operate at Layer 7; however, the ingress controller is usually itself "exposed" by an external Layer 4 LB to make it reachable in the first place.
That being said, your cloud provider doesn't decide whether to create an Ingress or a LoadBalancer: it depends on which resource you create, and the provider's cloud controller integration (most cloud providers ship their own implementation) notifies the necessary service to create the resource you want. Also, keep in mind that regardless of the provider, using an external LoadBalancer will typically come with additional costs.
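For comparison, here is a minimal Service of type LoadBalancer sketch (labels and ports are placeholders); on a supported cloud provider this is what triggers the provisioning of an external Layer 4 load balancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web                         # placeholder pod label
  ports:
  - port: 80                         # port exposed by the cloud load balancer
    targetPort: 8080                 # port the pods actually listen on
```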
A Layer 4 load balancer (the external load balancer) forwards traffic to NodePorts. It allows you to forward both HTTP and plain TCP traffic. With Layer 4, services can be exposed through a single, globally managed config map.
Layer 7 load balancing makes smarter, more informed decisions based on the content of the data, whereas Layer 4 balances traffic using its built-in algorithm. Layer 7 is more CPU-intensive than packet-based Layer 4 load balancing, but rarely causes degraded performance on a modern server, and it lets the load balancer apply optimizations and changes to the content.
Some cloud-managed layer-7 load balancers (such as the ALB ingress controller on AWS) expose DNS addresses for ingress rules. You need to map (via CNAME) your domain name to the DNS address generated by the layer-7 load balancer. Google Load Balancer provides a single routable IP address. Nginx Ingress Controller exposes the external IP of all nodes that run the Nginx Ingress Controller. You can configure your own DNS to map (via A records) your domain name to the IP addresses exposed by the Layer-7 load balancer.
Kubernetes Ingress is a collection of routing rules that govern how external users access services running on the Kubernetes cluster. Ingress controller reads the ingress resource’s information and processes the data accordingly. So basically, ingress resources contain the rules to route the traffic and ingress controller routes the traffic.
Routing using ingress is not standardized i.e. different ingress controllers have different semantics (different ways of routing).
At the end of the day, you need to choose an ingress controller based on your requirements and implementation. Ingress is the most flexible and configurable routing feature available.

What is the difference between ingress and service mesh in kubernetes?

Can someone help me to understand if service mesh itself is a type of ingress or if there is any difference between service mesh and ingress?
An "Ingress" is responsible for Routing Traffic into your Cluster (from the Docs: An API object that manages external access to the services in a cluster, typically HTTP.)
On the other side, a service mesh is a tool that adds proxy containers as sidecars to your pods and routes traffic between your pods through those proxy containers.
Typical use cases for service meshes are, for example:
distributed tracing
secure (SSL/mTLS) connections between pods (see the sketch after this list)
resilience (service-mesh can reroute traffic from failed requests)
network-performance-monitoring
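As one concrete example of the "secure connections between pods" use case, here is a minimal sketch using Istio as the mesh (other meshes have equivalent settings; the namespace is a placeholder). It enforces mutual TLS for all workloads in a namespace:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace            # placeholder namespace
spec:
  mtls:
    mode: STRICT                     # sidecars only accept mTLS traffic from other mesh workloads
```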

ingress with DMZ on on-premise infrastructure

I have a question related to design and architecture rather than a specific issue. We have a Kubernetes cluster which handles our production workload, and we need to secure external traffic to this cluster, so we have designed this approach:
make a worker node with the ingress controller and without any other workload
place this worker node in a DMZ zone in order to handle external traffic to our applications' ClusterIP services.
Is that a good idea for securing our workloads?
If we place an HAProxy in a DMZ zone (as an L4 load balancer just to distribute traffic to the workers, to be handled by ingress-nginx for example), it won't give us another level of security (protocol break).
Note that we don't have a WAF.
Any ideas, please?
I agree with using two dedicated nodes, for high availability, as the external traffic entry point.
I would use the HAProxy ingress controller: Announcing HAProxy Kubernetes Ingress Controller 1.6 with Evolving Kubernetes networking with the Gateway API.
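Here is a sketch of the scheduling part only, assuming you label and taint the DMZ node(s); names are illustrative and this is a pod-template fragment, not a complete manifest:

```yaml
# Node preparation (illustrative names):
#   kubectl label node dmz-worker-1 role=ingress
#   kubectl taint node dmz-worker-1 dedicated=ingress:NoSchedule
# Fragment of the ingress controller Deployment/DaemonSet:
spec:
  template:
    spec:
      nodeSelector:
        role: ingress                # only schedule onto the DMZ node(s)
      tolerations:
      - key: dedicated               # tolerate the taint that keeps other workloads away
        operator: Equal
        value: ingress
        effect: NoSchedule
```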

In Istio, service to service communication, does Kubernetes service required to setup?

Hello I'm new to Istio and currently learning about Istio.
As per my understanding, the Envoy proxy will resolve the IP address of the destination instead of the Kube DNS server. Envoy will send traffic directly to a healthy pod based on information received from the control plane.
So... is a Kubernetes Service still required, if I'm using Istio?
Correct me if I'm wrong.
Thanks!
From the docs
In order to direct traffic within your mesh, Istio needs to know where all your endpoints are, and which services they belong to. To populate its own service registry, Istio connects to a service discovery system. For example, if you've installed Istio on a Kubernetes cluster, then Istio automatically detects the services and endpoints in that cluster.
So a Kubernetes Service is needed for Istio to achieve service discovery, i.e. to know the pod IPs. But the Kubernetes Service (L4) is not used for load balancing and routing traffic, because the L7 Envoy proxy does that in Istio.
From the docs.
A pod must belong to at least one Kubernetes service even if the pod does NOT expose any port. If a pod belongs to multiple Kubernetes services, the services cannot use the same port number for different protocols, for instance HTTP and TCP.
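So a minimal Service like the sketch below (names and ports are illustrative) still has to exist around your pods; Istio uses it to build its registry, while Envoy does the actual L7 load balancing:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: reviews                      # illustrative service name
spec:
  selector:
    app: reviews
  ports:
  - name: http                       # naming the port (or setting appProtocol) helps Istio detect the protocol
    port: 9080
    targetPort: 9080
```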

Routing traffic to kubernetes cluster

I have a question related to Kubernetes networking.
I have a microservice (say numcruncherpod) running in a pod which is serving requests via port 9000, and I have created a corresponding Service of type NodePort (numcrunchersvc) and node port which this service is exposed is 30900.
My cluster has 3 nodes with following IPs:
192.168.201.70,
192.168.201.71
192.168.201.72
I will be routing the traffic to my cluster via a reverse proxy (nginx). As I understand it, in nginx I need to specify the IPs of all these cluster nodes to route the traffic to the cluster; is my understanding correct?
My worry is that since nginx won't have knowledge of the cluster, it might not be a good judge of which cluster node the traffic should be sent to. So is there a better way to route traffic to my Kubernetes cluster?
PS: I am not running the cluster on any cloud platform.
This answer is a little late, and a little long, so I ask for forgiveness before I begin. :)
For people not running kubernetes clusters on Cloud Providers there are 4 distinct options for exposing services running inside the cluster to the world outside.
Service of type: NodePort. This is the simplest and default. Kubernetes assigns a random port to your service. Every node in the cluster listens for traffic to this particular port and then forwards that traffic to any one of the pods backing that service. This is usually handled by kube-proxy, which leverages iptables and load balances using a round-robin strategy. Typically since the UX for this setup is not pretty, people often add an external "proxy" server, such as HAProxy, Nginx or httpd to listen to traffic on a single IP and forward it to one of these backends. This is the setup you, OP, described.
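Expressed as a manifest, the setup you described would look roughly like this (the pod label is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: numcrunchersvc
spec:
  type: NodePort
  selector:
    app: numcruncherpod              # assumed pod label
  ports:
  - port: 9000                       # ClusterIP port
    targetPort: 9000                 # container port
    nodePort: 30900                  # the node port from your question
```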
A step up from this would be using a Service of type: ExternalIP. This is identical to the NodePort service, except it also gets Kubernetes to add an additional rule on all Kubernetes nodes that says "all traffic that arrives for destination IP == <the external IP> must also be forwarded to the pods". This basically allows you to specify any arbitrary IP as the "external IP" for the service. As long as traffic destined for that IP reaches one of the nodes in the cluster, it will be routed to the correct pod. Getting that traffic to any of the nodes, however, is your responsibility as the cluster administrator. The advantage here is that you no longer have to run an haproxy/nginx setup if you specify the IP of one of the physical interfaces of one of your nodes (for example one of your master nodes). Additionally, you cut down the number of hops by one.
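Sketched against your setup, the same Service with an explicit external IP (using one of your node IPs purely as an example) would look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: numcrunchersvc
spec:
  selector:
    app: numcruncherpod              # assumed pod label
  ports:
  - port: 9000
    targetPort: 9000
  externalIPs:
  - 192.168.201.70                   # e.g. one of the node IPs from your question
```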
Service of type: LoadBalancer. This service type brings baremetal clusters at parity with cloud providers. A fully functioning loadbalancer provider is able to select IP from a pre-defined pool, automatically assign it to your service and advertise it to the network, assuming it is configured correctly. This is the most "seamless" experience you'll have when it comes to kubernetes networking on baremetal. Most of LoadBalancer provider implementations use BGP to talk and advertise to an upstream L3 router. Metallb and kube-router are the two FOSS projects that fit this niche.
Kubernetes Ingress. If your requirement is limited to L7 applications, such as REST APIs, HTTP microservices etc., you can set up a single Ingress provider (nginx is one such provider) and then configure Ingress resources for all your microservices, instead of Service resources. You deploy your Ingress provider and make sure it has an externally available and routable IP (you can pin it to a master node, and use the physical interface IP for that node, for example). The advantage of using Ingress over Services is that Ingress objects understand HTTP microservices natively and you can do smarter health checking, routing and management.
Often people combine one of (1), (2), (3) with (4), since the first three are L4 (TCP/UDP) and (4) is L7. So things like URL path/domain-based routing, SSL termination etc. are handled by the ingress provider, and the IP lifecycle management and routing is taken care of by the service layer.
For your use case, the ideal setup would involve:
A deployment for your microservice, with health endpoints on your pod
An Ingress provider, so that you can tweak/customize your routing/load-balancing as well as use for SSL termination, domain matching etc.
(optional): Use a LoadBalancer provider to front your Ingress provider, so that you don't have to manually configure your Ingress's networking.
Correct. You can route traffic to any or all of the K8s minions (nodes). The K8s network layer will forward to the appropriate minion if necessary.
If you are running only a single pod for example, nginx will most likely round-robin the requests. When the requests hit a minion which does not have the pod running on it, the request will be forwarded to the minion that does have the pod running.
If you run 3 pods, one on each minion, the request will be handled by whatever minion gets the request from nginx.
If you run more than one pod on each minion, the requests will be round-robin to each minion, and then round-robin to each pod on that minion.