VPN between two nodes of a cluster - kubernetes-helm

I have three nodes: a master, which is geographically located elsewhere, and two other nodes that are close to each other but not on the same network. I've created a cluster with those three, and now I want to set up a tunnel between the two (close) nodes to compare the benefit of communicating directly instead of going through the master and back.
I've searched a little and found this chart:
https://github.com/helm/charts/tree/master/stable/openvpn
Can I use it to create a VPN between the two worker nodes?
Thanks for the help

It is not a good idea to use a Helm chart for a VPN if you are trying to use it for Kubernetes-internal communication.
My advice is to configure the VPN on the nodes themselves, but that comes with its own automation and availability problems.
What is the main idea behind that setup? Can you use an external VPN service instead of installing one inside the cluster? Have you tried peering instead of a VPN?
Some cloud providers offer easy turnkey clusters; have you tried one of those?
UPDATE
As per the comments, two more solutions may be good ones, either on their own or in combination:
Istio https://istio.io/ (see the mTLS sketch below)
gRPC https://grpc.io/ in conjunction with mTLS
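Either way, the underlying mechanism is mTLS between the workloads. As a minimal sketch, assuming a recent Istio is already installed in the cluster, mesh-wide strict mTLS is a single PeerAuthentication resource (this is the stock Istio resource, nothing specific to your nodes):
    # peer-auth.yaml, applied with: kubectl apply -f peer-auth.yaml
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system   # placing this in the root namespace makes it mesh-wide
    spec:
      mtls:
        mode: STRICT            # reject plaintext traffic between sidecar-injected pods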

Related

UDP/TCP Broadcast in Managed Kubernetes Services (specifically AWS-EKS)

We have an app that uses UDP broadcast messages to form a "cluster" of all instances running in the same subnet.
We can successfully run this app in our (pretty standard) local K8s installation by using hostNetwork: true for the pods. This works because all K8s nodes are in the same subnet and broadcasting is possible. (A minor note: the K8s setup uses the flannel networking plugin.)
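For reference, the only K8s-specific part of our setup is the hostNetwork flag in the pod spec; a minimal sketch with placeholder names:
    # placeholder pod spec; only the hostNetwork flag matters here
    apiVersion: v1
    kind: Pod
    metadata:
      name: broadcast-app
    spec:
      hostNetwork: true        # the pod shares the node's network namespace, so UDP broadcast reaches the node's subnet
      containers:
      - name: app
        image: example/broadcast-app:latest   # placeholder image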
Now we want to move this app to the managed K8s service on AWS (EKS). But our initial attempts have failed: the 2 daemons running in 2 different pods didn't see each other. We thought that was most likely due to the auto-generated EC2 worker node instances for the AWS K8s service residing in different subnets. Then we created 2 completely new EC2 instances in the same subnet (and the same availability zone) and tried running the app directly on them (not as part of K8s), but that also failed. They could not communicate via broadcast messages even though the 2 EC2 instances were in the same subnet/availability zone.
Hence, the following questions:
Our preliminary search shows that AWS EC2 probably does not support broadcasting/multicasting, but we still wanted to ask whether there is a way to enable it (on AWS or another cloud provider)?
We had used hostNetwork: true because we thought it would be much harder, if not impossible, to get broadcasting working with K8s pod networking. But it seems some companies offer K8s network plugins that support this. Does anybody have experience with (or a recommendation for) any of them? Would they work on AWS, for example, considering that AWS doesn't support it at the EC2 level?
We would much appreciate any pointers as to how to approach this and whether we have any options at all.
Thanks
Conceptually, you need to create an overlay network on top of the native VPC network, like this. There's a CNI that supports multicast, and here's the AWS blog post about it.

Off-Loading of k8s deployments to different cluster in case of high loads

Since I am unable to find anything on Google or in the official docs, I have a question.
I have a local minikube cluster with a deployment, service and ingress, which is working fine. Now, when the load on my local cluster becomes too high, I want to automatically switch to a remote cluster.
Is this possible?
How would I achieve this?
Thank you in advance
EDIT:
A remote cluster in my case would be a Rancher Kubernetes cluster, but as long as the resources on my local one are sufficient I want to stay there.
So let's say my local cluster has enough resources to run two replicas of my application, but when a third one is needed to distribute the load, it should be deployed to the remote Rancher cluster. (I hope that is clearer now.)
I imagine it would be doable with kubefed (https://github.com/kubernetes-sigs/kubefed) using ReplicaSchedulingPreferences (https://github.com/kubernetes-sigs/kubefed/blob/master/docs/userguide.md#replicaschedulingpreference): weight the local cluster very high and the remote one very low, and set spec.rebalance to true so replicas get redistributed under high load, roughly as sketched below. But that approach seems a bit like a workaround.
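A minimal sketch of what I mean, assuming kubefed is installed and the Deployment is already federated (cluster names, namespace and replica counts are made up):
    # rsp.yaml, applied with: kubectl apply -f rsp.yaml
    apiVersion: scheduling.kubefed.io/v1alpha1
    kind: ReplicaSchedulingPreference
    metadata:
      name: my-app              # must match the FederatedDeployment name
      namespace: my-namespace
    spec:
      targetKind: FederatedDeployment
      totalReplicas: 3
      rebalance: true           # move replicas away from a cluster whose pods cannot be scheduled
      clusters:
        local-cluster:
          weight: 100           # strongly prefer the local cluster
        rancher-cluster:
          weight: 1             # spill over to the remote cluster only when needed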
Your idea of using Kubefed sounds good, but there is another option: Multicluster-Scheduler.
Multicluster-scheduler is a system of Kubernetes controllers that intelligently schedules workloads across clusters. It is simple to use and simple to integrate with other tools.
To be able to make a better choice for your use case you can read through the Comparison with Kubefed (Federation v2).
All the necessary info can be found in the provided GitHub thread.
Please let me know if that helped.

How can I achieve an active/passive setup across multiple kubernetes clusters?

We have 2 Kubernetes clusters hosted in different data centers and we're deploying our applications to both of these clusters. We have an external load balancer which is outside the clusters, but the load balancer only accepts static IPs. We don't have control over the clusters and we can't provision a static IP. How can we go about this?
We've also tried Kong as an API gateway. We were able to create an upstream with the load-balanced application endpoints as targets and give them different weights, but this doesn't give us active/passive or active/failover behavior. Is there a way we can configure a Kong/nginx upstream to achieve this?
Consider using HAProxy, where you can configure your passive cluster as a backup upstream; that gives you a working active/passive setup. As mentioned in this nice guide about HAProxy:
backup meaning it won’t participate in the load balance unless both the nodes above have failed their health check (more on that later). This configuration is referred to as active-passive since the backup node is just sitting there passively doing nothing. This enables you to economize by having the same backup system for different application servers.
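A minimal haproxy.cfg sketch of that idea (hostnames, ports and the health-check path are placeholders):
    # fragment of /etc/haproxy/haproxy.cfg
    frontend app_in
        bind *:80
        default_backend app_clusters
    backend app_clusters
        option httpchk GET /healthz
        server active  cluster-a.example.com:80 check
        server passive cluster-b.example.com:80 check backup   # only used when 'active' fails its health checks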
Hope it helps!

How are you connecting two Istio clusters?

The scenario:
I have two K8s clusters. One is on-prem, the other is hosted in AWS. I could use Istio to make communication painless and do things like balloon capacity in AWS, but I'm getting hung up on trying to connect them. Reading the documentation, it looks like I need a VPN deployed inside of K8s if I want to have encrypted tunnels so that each internal network can talk to the other side. They're both non-overlapping 10-dots so I have that part done.
Is that correct or am I missing something on how to connect the two K8s clusters?
Having Istio in your cluster is independent of setting up basic communication between your two clusters. There are a few options that I can think of here:
A VPN between some nodes in both clusters, like you mentioned.
BGP peering with Calico and your existing infrastructure (see the sketch after this list).
A router between your two clusters that understands the internal cluster IPs (this could be done with BGP or static routes).
Kubernetes Federation. V1 is in alpha and V2 is in the implementation phase as of this writing. Not prod ready yet IMO.
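For the Calico option, the peering itself is just a BGPPeer resource pointing at your existing router; a rough sketch (the peer IP and AS number are placeholders):
    # bgp-peer.yaml, applied with: calicoctl apply -f bgp-peer.yaml
    apiVersion: projectcalico.org/v3
    kind: BGPPeer
    metadata:
      name: infra-router
    spec:
      peerIP: 192.0.2.1        # placeholder: the router that links the two clusters
      asNumber: 64512          # placeholder: that router's AS number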
OK, I figured out I was basically doing it wrong. Since Istio uses TLS, I don't need the VPN for crypto, just for connectivity; a VPN would be overkill because it would be encrypting already-encrypted traffic. I just need some sort of connectivity between the clusters, which we can provide over the existing link, and I can use EIPs if I don't have that.

Joining an external Node to an existing Kubernetes Cluster

I have a custom Kubernetes Cluster (deployed using kubeadm) running on Virtual Machines from an IaaS provider. The Kubernetes Nodes have no Internet-facing IP addresses (except for the Master Node, which I also use for Ingress).
I'm now trying to join a Machine to this Cluster that is not hosted by my main IaaS provider. I want to do this because I need specialized computing resources for my application that are not offered by the IaaS.
What is the best way to do this?
Here's what I've tried already:
Run the Cluster on Internet-facing IP addresses
I have no trouble joining the Node when I tell kube-apiserver on the Master Node to listen on 0.0.0.0 and use public IP addresses for every Node. However, this approach is non-ideal from a security perspective and also leads to higher costs, because public IP addresses have to be leased for Nodes that normally don't need them.
Create a Tunnel to the Master Node using sshuttle
I've had moderate success by creating a tunnel from the external Machine to the Kubernetes Master Node using sshuttle, which is configured on my external Machine to route 10.0.0.0/8 through the tunnel. This works in principle, but it seems way too hacky and is also a bit unstable (sometimes the external machine can't get a route to the other nodes; I have yet to investigate this problem further).
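For reference, the sshuttle invocation on the external Machine is roughly this (user and hostname are placeholders):
    # forward the cluster's 10.0.0.0/8 range through an SSH tunnel to the master
    sshuttle -r user@master.example.com 10.0.0.0/8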
Here are some ideas that could work, but I haven't tried yet because I don't favor these approaches:
Use a proper VPN
I could try to use a proper VPN tunnel to connect the Machine. I don't favor this solution because it would add an (admittedly quite small) overhead to the Cluster.
Use a cluster federation
It looks like kubefed was made specifically for this purpose. However, I think this is overkill in my case: I'm only trying to join a single external Machine to the Cluster. Using Kubefed would add a ton of overhead (Federation Control Plane on my Main Cluster + Single Host Kubernetes Deployment on the external machine).
I couldn't think of any better solution than a VPN here. Especially since you have only one isolated node, it should be relatively easy to make the handshake happen between this node and your master.
Routing the traffic from "internal" nodes to this isolated node is also trivial. Because all nodes already use the master as their default gateway, modifying the route table on the master is enough to forward the traffic from internal nodes to the isolated node through the tunnel.
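As a rough sketch of what that looks like on the master, assuming a tunnel interface named tun0 and a made-up pod subnet for the isolated node:
    # on the master: enable forwarding and send the isolated node's pod subnet into the tunnel
    sysctl -w net.ipv4.ip_forward=1
    ip route add 10.244.9.0/24 dev tun0   # placeholder: the pod subnet assigned to the isolated node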
You have to be careful with the configuration of your container network though. Depending on the solution you use to deploy it, you may have to assign a different subnet to the Docker bridge on the other side of the VPN.
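For example, if the isolated node runs its containers on the default Docker bridge, one way to give it a non-conflicting subnet is the bip setting in Docker's daemon.json (the value below is a placeholder):
    # /etc/docker/daemon.json on the isolated node
    {
      "bip": "172.26.0.1/24"
    }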