Tenant isolation with Kubernetes at the network level

We want to run a multi-tenant scenario that requires tenant separation at the network level.
The idea is that every tenant receives a dedicated node and a dedicated network that the tenant's other nodes can join. A tenant's nodes should be able to interact with each other within that network.
Networks should not be able to talk to each other (true network isolation).
Are there any architectural patterns to achieve this?
One Kubernetes cluster per tenant?
One Kubernetes cluster for all tenants, with one subnet per tenant?
One Kubernetes cluster across VPCs (speaking in AWS terms)?

The regular way to deal with multi-tenancy inside Kubernetes is to use namespaces. But this happens within one cluster, meaning all tenants still share the same underlying networking solution. That is actually fine, because you have Network Policies to restrict networking in the cluster.
You can obviously run autonomous clusters per tenant, yet that is not exactly multi-tenancy then, just multiple clusters. Networking can be configured at the node level to route as expected, but you would still be left with the issue of cross-cluster service discovery etc. Federation can help a bit with that, but I would still advise pursuing the Namespaces + Network Policies approach.
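As a minimal sketch of that approach, assuming one namespace per tenant (the namespace name tenant-a is made up for illustration) and a CNI plugin that actually enforces NetworkPolicy (e.g. Calico), a default policy like this restricts a tenant's pods to traffic from their own namespace:

```yaml
# Applies to every pod in the tenant-a namespace and only
# allows ingress from pods in that same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-a-isolation
  namespace: tenant-a        # hypothetical per-tenant namespace
spec:
  podSelector: {}            # select all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # an empty podSelector here matches only pods in the same namespace
```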

I see four ways to run multi-tenant k8s clusters at the network level:
Namespaces
Ingress rules
allow/deny and ingress/egress Network Policies
Network-aware Zones

Related

Restrict IP-range in GKE cluster when using VPN?

We're integrating with a new partner that requires us to use VPN when communicating with them (over HTTPS). We're running all of our services in a (non-private) Google Kubernetes Engine (GKE) cluster and it's only a single pod that needs to communicate with the partner's API.
The problem we face is that our partner's VPN provider won't allow us to use the private IP-range provided by GKE, 10.244.0.0/14, because the subnet is too large.
Preferably, we don't want to deploy something outside our GKE cluster, like a Compute Engine instance, that is somehow used to proxy our traffic (we will of course do it if this is the only/best way to proceed). We're hoping that, perhaps, it'll be possible to create a new node pool in the same cluster with a different (smaller) subnet, but so far we haven't found a way to do this. We've also looked briefly at CloudVPN, but if we understand it correctly, it only works with private GKE clusters.
Question:
What's the recommended way to obtain a smaller subnet/IP-range for a pod in an existing (public) GKE cluster to allow it to communicate with a third-party API over VPN?
The problem I see is that you would have to maintain the VPN connection from within your pod; that is possible, but it looks like an antipattern.
I would recommend using Cloud VPN in a separate GCP project (for cost separation and security) to establish the connection with a specific, limited VPC and then route that traffic to the pod, which can sit in a specific IP range as you mentioned.
Take a look at the docs on how to create the VPN: https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Redirect traffic between VPCs: https://cloud.google.com/vpc/docs/vpc-peering
Create the node pool with its own IP range: https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Assign your deployment to that node pool: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
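As a rough sketch of the last step (the node pool name vpn-pool, the app name, and the image are placeholders, not anything from the question), the deployment can be pinned to the new node pool via the node-pool label that GKE puts on its nodes:

```yaml
# Pin the single pod that needs the partner/VPN connectivity
# to the node pool that was created with the smaller IP range.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner-api-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: partner-api-client
  template:
    metadata:
      labels:
        app: partner-api-client
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: vpn-pool   # label GKE sets on every node of a pool
      containers:
        - name: client
          image: example.com/partner-api-client:latest   # placeholder image
```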

Is it possible to network two kubernetes clusters such that resources not publicly exposed in one can be accessed by the other?

I have deployed a rather large application and I have the need to segregate some of my deployments, which I normally access via cluster ip, into their own dedicated cluster. Once I have done this is there a way I can still allow deployments in cluster a to continue access deployments in cluster b, without exposing them to the internet? These are highly sensitive workloads and exposing them to the internet is not an option.
To reach resources deployed in a Kubernetes cluster from outside, you need to expose those resources; there is no other way.
Of course, if the Kubernetes clusters sit on your local network, it is not necessary to expose them to the Internet.
You should be able to configure a CNI such as Contiv or Calico in a way that pods in cluster 1 are technically able to talk to pods in cluster 2 without exposing services. Don't forget, though, that this is plain IP-based communication; services such as DNS won't be unified right away, so you can't simply connect by service or pod names.
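Assuming routable pod or node IPs between the clusters (which the CNI setup above would have to provide), one way to sketch cross-cluster access without public exposure is a selector-less Service in cluster A backed by manually maintained Endpoints pointing into cluster B; the name, IP, and port below are placeholders:

```yaml
# In cluster A: a Service with no pod selector...
apiVersion: v1
kind: Service
metadata:
  name: remote-backend
spec:
  ports:
    - port: 443            # port clients in cluster A use
---
# ...backed by Endpoints that point at routable addresses in cluster B.
apiVersion: v1
kind: Endpoints
metadata:
  name: remote-backend     # must match the Service name
subsets:
  - addresses:
      - ip: 10.8.1.15      # placeholder pod/node IP in cluster B
    ports:
      - port: 443          # port the workload in cluster B listens on
```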

How to create a GCP Kubernetes Engine cluster spanning two regions?

I wish to know how to create a GCP Kubernetes Engine cluster spanning two regions. For instance, a cluster with some instances in the "us-west1" region and others in the "us-central1" region.
My use case is to verify that the "failure-domain.beta.kubernetes.io/region" topology key works as expected. I am aware of:
1. cluster federation: not supported yet for Kubernetes Engine
2. multi-cluster ingress: in development, but may not be something I am looking for
3. regional cluster: not applicable as it focuses on replication in only one region
I am aware that my use case is atypical.
It is possible, but I cannot say it will be a stable and fully functional configuration.
There are no standard tools to do what you want, but you can manually connect external nodes from a different region to your cluster. It will not work with kubeadm, but if you set up the kubelet manually it will work, albeit with many limitations:
No auto-updates.
You should manage the connection between regions manually (you need a private network with direct routing between all your nodes).
You can have problems with logs, monitoring, load balancing, etc.
You will pay for traffic between internal and external nodes at external-traffic rates.
Finally, although it is possible, I cannot recommend using this setup. If you really want a multi-region cluster, set it up yourself with kubeadm and use kubefed to create a federation.
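For the original goal of exercising the failure-domain.beta.kubernetes.io/region topology key, here is a minimal sketch of a pod anti-affinity rule that could only place both replicas once nodes carry that label with two different region values (the app name and image are made up):

```yaml
# Require replicas of app=my-app to land in different regions:
# no two such pods may share the same value of the region label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: failure-domain.beta.kubernetes.io/region
              labelSelector:
                matchLabels:
                  app: my-app
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest   # placeholder image
```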

How to access services in a different Kubernetes cluster

For improved performance and availability we'd like to distribute certain services from our stack across different Kubernetes clusters in different parts of the world (GCP regions).
The majority of our stack will continue to run in one cluster / region but some user facing services will be deployed all over the world.
Some of these services need to access other services in our main cluster.
Q: How can we reliably access services in a different Kubernetes cluster?
Using internal load balancers seems to be out of the question as those are per region only.
We'd like to keep the communication between our services inside the private GCP network and avoid going over the public internet, so a public ingress also wouldn't work.
VPC networks are global resources, not restricted by regional boundaries, and so with the correct firewall rules set up, you should be able to access any internal resource from any other resource "right out of the box", assuming they are in the same VPC network and same project.
Take a look at VPC Network Peering: https://cloud.google.com/vpc/docs/vpc-peering
It allows you to connect two VPC networks so that they can communicate privately.
You may have to recreate or reconfigure your Kubernetes clusters in order to support this VPC architecture.

Kubernetes split-brain / HA across AZ

The Kubernetes HA documentation shows that you can ensure availability in the case of the failure of an apiserver by having multiple instances behind a load balancer.
However, it doesn't cover what happens if Kubernetes is deployed across multiple availability zones. There is some documentation here, but it doesn't really go into failure scenarios.
What is best practice here? Should you pin the api-servers to instances inside each AZ? What happens in the event of a split brain? If I have a pod running in one AZ and it becomes unavailable to the rest of the world, what happens to it?
I specifically want to know about a custom on-premise installation, not AWS or GCE.