How can I achieve an active/passive setup across multiple Kubernetes clusters?

We have two Kubernetes clusters hosted in different data centers, and we deploy our applications to both. We have an external load balancer outside the clusters, but it only accepts static IPs. We don't have control over the clusters and we can't provision a static IP. How can we go about this?
We've also tried Kong as an API gateway. We were able to create an upstream whose targets are the load-balanced application endpoints and give them different weights, but this doesn't give us active/passive or active/failover behavior. Is there a way we can configure a Kong/nginx upstream to achieve this?

Consider using HAProxy, where you can configure your passive cluster as a backup upstream; that gives you a working active/passive setup. As mentioned in this nice guide about HAProxy:
backup meaning it won’t participate in the load balance unless both
the nodes above have failed their health check (more on that later).
This configuration is referred to as active-passive since the backup
node is just sitting there passively doing nothing. This enables you
to economize by having the same backup system for different
application servers.
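A minimal haproxy.cfg sketch of that idea, with hypothetical cluster endpoints; the backup keyword on the second server is what gives you active/passive:

```
frontend apps_in
    bind *:443
    mode tcp
    default_backend k8s_clusters

backend k8s_clusters
    mode tcp
    option tcp-check                                    # simple TCP health check; adjust if you terminate TLS here
    server active-dc1  203.0.113.10:443 check           # active cluster's ingress/LB endpoint (assumed)
    server passive-dc2 198.51.100.10:443 check backup   # only receives traffic when the active server fails its check
```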
Hope it helps!

Related

Restrict IP-range in GKE cluster when using VPN?

We're integrating with a new partner that requires us to use VPN when communicating with them (over HTTPS). We're running all of our services in a (non-private) Google Kubernetes Engine (GKE) cluster and it's only a single pod that needs to communicate with the partner's API.
The problem we face is that our partner's VPN provider won't allow us to use the private IP-range provided by GKE, 10.244.0.0/14, because the subnet is too large.
Preferably, we don't want to deploy something outside our GKE cluster, like a Compute Engine instance, that is somehow used to proxy our traffic (we will of course do it if this is the only/best way to proceed). We're hoping that, perhaps, it'll be possible to create a new node pool in the same cluster with a different (smaller) subnet, but so far we haven't found a way to do this. We've also looked briefly at CloudVPN, but if we understand it correctly, it only works with private GKE clusters.
Question:
What's the recommended way to obtain a smaller subnet/IP-range for a pod in an existing (public) GKE cluster to allow it to communicate with a third-party API over VPN?
The problem I see is that you would have to maintain the VPN connection inside your pod; it is possible, but it looks like an antipattern.
I would recommend using Cloud VPN in a separate GCP project (for cost separation and security) to establish the connection with a specific, limited VPC, and then route that traffic to the pod, which can sit in a specific IP range as you mentioned.
Take a look at the docs on how to create the VPN:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Route traffic between the VPCs with VPC peering:
https://cloud.google.com/vpc/docs/vpc-peering
Create the node pool with a dedicated IP range:
https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Assign your deployment to that node pool with a nodeSelector (a sketch follows below):
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
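As a minimal sketch of that last step, assuming the new node pool is named vpn-pool (hypothetical, as are the app and image names), you can pin the deployment to it via the node pool label GKE puts on its nodes:

```yaml
# Hypothetical Deployment pinned to the node pool that carries the smaller pod IP range.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: partner-client
  template:
    metadata:
      labels:
        app: partner-client
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: vpn-pool        # assumed node pool name
      containers:
        - name: app
          image: gcr.io/my-project/partner-client:latest   # hypothetical image
```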

Is a load balancer unnecessary for the k3s embedded etcd HA solution?

I opened the same discussion in the k3s GitHub repository, but no one has replied. I hope someone can give an answer here.
There are articles about the embedded etcd HA solution of k3s, like this one. One of the key steps is adding a load balancer (an EIP, as in that article, or an LB from the cloud provider) between the agents and masters:
k3s agent --> load balancer --> master
The k3s architecture also shows that a Fixed Registration Address is necessary.
However, after some research I found that k3s (at least v1.21.5+k3s2) has an internal agent load balancer (configured at /var/lib/rancher/k3s/agent/etc/k3s-agent-load-balancer.yaml) which automatically updates its list of master API servers. So is the outside load balancer unnecessary?
I got a response from the k3s discussion:
https://github.com/k3s-io/k3s/discussions/4488#discussioncomment-1719009
Our documentation lists a requirement for a "fixed registration endpoint" so that nodes do not rely on a single server being online in order to join the cluster. This endpoint could be a load-balancer or a DNS alias, it's up to you. This is only needed when nodes are registering to the cluster; once they have successfully joined, they use the client load-balancer to communicate directly with the servers without going through the registration endpoint.
I think this is good enough to answer this question.
Yes, an external load balancer is still required to achieve a highly available setup with more than one master node.
Whenever you start a worker node or use the API, you should connect to the external load balancer to ensure you can connect to a running master node if one master is currently down.
The internal load balancer you mentioned only spreads an already-joined agent's traffic across the servers within your cluster; it doesn't help a new node find the cluster.
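As a minimal sketch (hypothetical DNS name, token elided), the fixed registration address is simply what a joining agent is pointed at, for example via the k3s config file:

```yaml
# /etc/rancher/k3s/config.yaml on a new agent node (hypothetical values).
# The registration endpoint (a DNS alias or external LB in front of the servers)
# is only needed while joining; afterwards the agent's client load balancer
# talks to the servers directly, as described in the quoted discussion.
server: https://k3s.example.com:6443   # fixed registration endpoint (assumed)
token: "<cluster-join-token>"          # from /var/lib/rancher/k3s/server/node-token
```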

How to set up manual failover (PROD to DR) in IKS?

We have two IBM Kubernetes clusters; whenever an issue happens in one cluster, we need to fail over to DR. Can anyone tell me how to do that automatically? The clusters are in two different zones, Montreal and Toronto. We also have IBM Cloud Internet Services.
You could use the CIS Global Load Balancer offering to set up a globally load-balanced, health-checked URL for your applications. You'd create a GLB for, say, app.mydomain.com/app_path and back it with the VIPs of your cluster ALBs in the same origin pool. Configure a health check on the GLB so traffic is only sent to endpoints that are currently healthy.
CIS GLB docs are covered at https://cloud.ibm.com/docs/cis?topic=cis-global-load-balancer-glb-concepts

How are you connecting two Istio clusters?

The scenario:
I have two K8s clusters. One is on-prem, the other is hosted in AWS. I could use Istio to make communication painless and do things like balloon capacity in AWS, but I'm getting hung up on trying to connect them. Reading the documentation, it looks like I need a VPN deployed inside of K8s if I want to have encrypted tunnels so that each internal network can talk to the other side. They're both non-overlapping 10-dots so I have that part done.
Is that correct or am I missing something on how to connect the two K8s clusters?
Having Istio in your cluster is independent of setting up basic communication in between your two clusters. There are a few options that I can think of here:
VPN between some nodes in both clusters like you mentioned.
BGP peering with Calico and your existing infrastructure (see the sketch after this list).
A router in between your two clusters that understands the internal cluster IPs (this could be done with BGP or static routes).
Kubernetes Federation. V1 is in alpha and V2 is in the implementation phase as of this writing. Not prod ready yet IMO.
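For the Calico option, a minimal sketch of a BGPPeer resource with a hypothetical router IP and AS number; mirrored in both clusters (and with the router carrying each side's pod CIDRs), pods can reach each other without exposing any service publicly:

```yaml
# Hypothetical peering with an existing border router that routes the pod CIDRs
# of both clusters.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: onprem-border-router   # hypothetical name
spec:
  peerIP: 10.10.0.1            # assumed router address reachable from the nodes
  asNumber: 64512              # assumed AS number of the peer router
```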
OK, I figured out I was basically doing it wrong. Since Istio uses TLS, I don't need the VPN for crypto, just for connectivity; a VPN would be overkill because it would be encrypting already-encrypted traffic. I just need some sort of connectivity between the clusters, which we can provide over the existing link, and I can use EIPs if I don't have that.

Is it possible to network two kubernetes clusters such that resources not publicly exposed in one can be accessed by the other?

I have deployed a rather large application and I need to segregate some of my deployments, which I normally access via cluster IP, into their own dedicated cluster. Once I have done this, is there a way I can still allow deployments in cluster A to continue accessing deployments in cluster B without exposing them to the internet? These are highly sensitive workloads, and exposing them to the internet is not an option.
To reach resources deployed in a Kubernetes cluster from outside, you need to expose those resources; there is no other way.
Of course, if the Kubernetes clusters are on your local network, it is not necessary to expose them to the Internet.
You should be able to configure Contiv or Calico so that pods in cluster 1 are technically able to talk to pods in cluster 2 without exposing services. Don't forget, though, that this is plain IP-based communication and things like DNS won't be unified right away, so you can't simply connect by service or pod names; a sketch of one workaround follows.
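As a minimal sketch of that caveat (hypothetical names and IPs), one common workaround is a selector-less Service plus a manually maintained Endpoints object in cluster A that points at routable addresses in cluster B, so cluster-A workloads still get a stable in-cluster DNS name:

```yaml
# Hypothetical: gives cluster-A pods a name "remote-backend" that resolves to
# addresses in cluster B (assumed routable via BGP/static routes as above).
apiVersion: v1
kind: Service
metadata:
  name: remote-backend
spec:
  ports:
    - port: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: remote-backend          # must match the Service name
subsets:
  - addresses:
      - ip: 10.245.1.23         # pod or internal VIP address in cluster B (assumed)
    ports:
      - port: 8080
```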