K3s highly available cluster on cloud - kubernetes

I am trying to create a k3s cluster with n nodes, where all of the nodes are servers and there are no dedicated agent/worker nodes. By default, a k3s server also runs as an agent. I want to add a load balancer that works for both the API server and the client side, meaning it should serve port 443 (webserver) as well. I need a hybrid solution for the load balancer because I want to support both on-prem and cloud deployments.
I was going through MetalLB, but it is mentioned that MetalLB is not compatible with cloud providers. I am a bit confused about where the load balancer should go.
Below I am adding an image of the architecture I want to build; I am not sure where to place the load balancer.
Please point me in the right direction so I can complete the flow.
Thanks,
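For what such a load balancer could look like, here is a minimal sketch of an external HAProxy in TCP mode fronting both the k3s API server (6443) and web traffic (443). The server IPs, hostnames, and backend names are placeholders I chose for illustration, not part of the original setup:

```
# /etc/haproxy/haproxy.cfg (sketch; addresses are examples)
frontend k3s_apiserver
    bind *:6443
    mode tcp
    default_backend k3s_servers_6443

backend k3s_servers_6443
    mode tcp
    option tcp-check
    balance roundrobin
    server server-1 10.0.0.11:6443 check
    server server-2 10.0.0.12:6443 check
    server server-3 10.0.0.13:6443 check

frontend https_in
    bind *:443
    mode tcp                  # TCP passthrough; TLS terminates inside the cluster
    default_backend ingress_443

backend ingress_443
    mode tcp
    server server-1 10.0.0.11:443 check
    server server-2 10.0.0.12:443 check
    server server-3 10.0.0.13:443 check
```

Because HAProxy here is just an L4 proxy on a VM you manage, the same config works on-prem and on any cloud, which addresses the hybrid requirement.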

Related

Is a load balancer unnecessary for the k3s embedded etcd HA solution?

I opened the same discussion in the k3s GitHub repository, but no one replied. I hope someone can give an answer here.
There are articles about the embedded etcd HA solution of k3s, like this one. One of the key behaviors is adding a load balancer (an EIP as in this article, or an LB from the cloud provider) between the agents and masters:
k3s agent --> load balancer --> master
The k3s architecture diagram also shows that a Fixed Registration Address is necessary.
However, after some research I found that k3s (at least v1.21.5+k3s2) has an internal agent load balancer (configured at /var/lib/rancher/k3s/agent/etc/k3s-agent-load-balancer.yaml) which automatically updates its list of master API servers. So is the outside load balancer unnecessary?
I got a response from the k3s discussion:
https://github.com/k3s-io/k3s/discussions/4488#discussioncomment-1719009
Our documentation lists a requirement for a "fixed registration endpoint" so that nodes do not rely on a single server being online in order to join the cluster. This endpoint could be a load-balancer or a DNS alias, it's up to you. This is only needed when nodes are registering to the cluster; once they have successfully joined, they use the client load-balancer to communicate directly with the servers without going through the registration endpoint.
I think this is good enough to answer the question.
Yes, an external load balancer is still required to achieve a highly available setup with more than one master node.
Whenever you start a worker node or use the API, you should connect through the external load balancer so that you still reach a running master node even if one master is currently down.
The internal load balancer mentioned above distributes load within the cluster.
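To make the registration flow concrete, here is a sketch of bringing up an HA embedded-etcd cluster behind a fixed registration endpoint, following the documented k3s HA install flow. The hostname lb.example.com and the token value are placeholders:

```shell
# First server: initialize the embedded etcd cluster.
# --tls-san adds the fixed registration endpoint to the API server certificate.
curl -sfL https://get.k3s.io | K3S_TOKEN=<secret> sh -s - server \
    --cluster-init \
    --tls-san lb.example.com

# Additional servers: join through the fixed registration endpoint,
# not through any single server's address.
curl -sfL https://get.k3s.io | K3S_TOKEN=<secret> sh -s - server \
    --server https://lb.example.com:6443
```

After joining, each node maintains its own client load balancer and talks to the servers directly, so the registration endpoint only needs to be reachable when nodes join.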

Where is a good place to learn more about a cloud-independent Kubernetes Istio ingress controller external load balancer setup?

I am currently running Kubespray-configured Kubernetes clusters into which I am trying to integrate Istio as an ingress controller, and I am trying to figure out not how to set up the internal workings of the Istio service (there are tons of tutorials for that), but how to connect the Istio ingress to cloud-agnostic load balancers that route traffic into the cluster.
I see a lot of tutorials that rely on cloud-specific mechanisms such as AWS or GCP load balancers from within Kubernetes (which are utterly useless to me), but I want a Kubernetes cluster that knows and cares nothing about the external cloud environment, which makes it easier to port or to build hybrid/multi-cloud environments. I am having trouble finding information on this kind of setup. Can anyone point me to information about manually configuring external load balancers to route external traffic into the cluster without relying on Kubernetes cloud extensions?
Thanks for any information you can provide or references you can point me to!
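One cloud-agnostic pattern (a sketch under assumptions, not the only approach) is to expose the istio-ingressgateway service as a NodePort and point any external L4 load balancer you manage, such as HAProxy on a VM, at the node IPs. The node addresses and the backend name below are examples I made up:

```shell
# Look up the NodePort that istio-ingressgateway exposes for port 80
# (assumes the service is already of type NodePort).
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'

# Then point an external HAProxy at that NodePort on each node, e.g.:
# backend istio_ingress
#     mode tcp
#     server node-1 10.0.0.21:30080 check   # 30080 = NodePort from above
#     server node-2 10.0.0.22:30080 check
```

Since NodePorts are plain Kubernetes with no cloud-provider integration, the cluster stays portable and only the external proxy is environment-specific.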

How can I achieve an active/passive setup across multiple kubernetes clusters?

We have two Kubernetes clusters hosted in different data centers and we deploy our applications to both of them. We have an external load balancer outside the clusters, but it only accepts static IPs. We don't have control over the clusters and we can't provision a static IP. How can we go about this?
We've also tried Kong as an API gateway. We were able to create an upstream whose targets are the load-balanced application endpoints, with different weights, but this doesn't give us active/passive or active/failover behavior. Is there a way to configure a Kong/nginx upstream to achieve this?
Consider using HAProxy, where you can configure your passive cluster as a backup upstream to get an active/passive setup, as described in this nice guide about HAProxy:
backup meaning it won't participate in the load balance unless both the nodes above have failed their health check (more on that later). This configuration is referred to as active-passive since the backup node is just sitting there passively doing nothing. This enables you to economize by having the same backup system for different application servers.
Hope it helps!
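A minimal sketch of that `backup` configuration, assuming the two clusters are reachable at the hypothetical hostnames below; traffic only goes to the second server when the first fails its health check:

```
# haproxy.cfg sketch: cluster2 receives traffic only if cluster1 is down
frontend apps_in
    bind *:443
    mode tcp
    default_backend clusters

backend clusters
    mode tcp
    option tcp-check
    server active  cluster1.example.com:443 check
    server passive cluster2.example.com:443 check backup
```

nginx supports the same idea with a `backup` flag on an `upstream` server entry, so either proxy can provide the active/passive behavior Kong's weighted targets could not.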

Solution for HA production infrastructure and server management

My company mostly specializes in web and mobile development. Some of our clients want their backend or web applications hosted and managed by us, so we have several apps and servers to manage. I'm looking for a solution to have all these servers under one panel and, most of all, to deploy all of these applications in high availability. Moreover, we have servers at many different cloud providers and it would be nice to be able to use them.
I've already found and tested a few solutions. Maybe someone has had the same problem and found a better one, or can advise which of these is best?
1. Rancher + DNS Round Robin
This would mean setting up Rancher in HA mode using Cattle or Kubernetes, then setting up a few hosts just as load balancers and achieving HA via DNS round robin: put the IPs of all the load balancers in the DNS records for every web application.
Pros:
Easy to setup
Multiple environments. One panel to administrate development, production infrastructure.
No single point of failure
Very cheap
Cons:
Leaves failover to the client-side application
Not reliable
When one node is down, some clients see high response times (they have to wait for the request to time out)
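For illustration, DNS round robin here just means publishing multiple A records for the same name, one per load-balancer host. A zone-file sketch with example name and addresses:

```
; app.example.com resolves to each LB host in turn
app.example.com.  300  IN  A  203.0.113.10
app.example.com.  300  IN  A  203.0.113.11
app.example.com.  300  IN  A  203.0.113.12
```

The low TTL (300 s) limits how long clients keep resolving to a dead host, but as noted in the cons, failover still depends on client behavior.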
2. Rancher + Cloudflare Load Balancer
As before, set up Rancher in HA mode using Cattle or Kubernetes. Then set up a few hosts just as load balancers and achieve HA with a Cloudflare load balancer pointing at the Rancher nodes used as load balancers.
Pros:
Easy to setup
Multiple environments. One panel to administrate development, production infrastructure.
Theoretically, Cloudflare LB has 100% SLA
Cons:
The biggest problem is that the Cloudflare LB uses DNS records for load balancing, so our clients would need to point their domains at our DNS servers on Cloudflare or add a CNAME record to our domain. Neither is an ideal solution :/ and I think a CNAME would be bad for SEO.
With many domains and many requests it can get expensive.
Note: I've tested this solution and it works quite well; after shutting down the node with the load balancer or the application, downtime was about 20-60 s, i.e. just the time needed to spin up a new container.
3. Rancher + Floating IP + keepalived
As before, set up Rancher in HA mode using Cattle or Kubernetes and set up a few hosts just as load balancers. Then configure keepalived and a (DigitalOcean) floating IP for the load-balancer nodes.
DigitalOcean floating IP diagram
Pros:
Easy to setup
Multiple environments. One panel to administrate development, production infrastructure.
No single point of failure
Cons:
The load-balancer nodes need to be on DigitalOcean
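A minimal keepalived sketch for the load-balancer pair; the interface name, priorities, password, and virtual IP are assumptions for illustration. On bare metal, VRRP moves the virtual IP directly; on DigitalOcean, the floating IP is reassigned through their API, typically triggered from a keepalived notify script rather than by VRRP itself:

```
# /etc/keepalived/keepalived.conf on the primary LB node
vrrp_instance VI_1 {
    state MASTER
    interface eth0            # assumed NIC name
    virtual_router_id 51
    priority 150              # use a lower value (e.g. 100) on the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme    # placeholder shared secret
    }
    virtual_ipaddress {
        203.0.113.10          # the floating/virtual IP (example address)
    }
    # notify_master /usr/local/bin/claim-floating-ip.sh   # hypothetical DO API hook
}
```

When the primary stops advertising, the backup promotes itself and takes over the virtual IP, which is what removes the single point of failure in this option.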
4. Kubernetes on Google Cloud Platform with Kubernetes Engine
Setting up Kubernetes in HA mode on GCP.
Pros:
Super easy to setup on GCP. Just one click
Cons:
I couldn't find the SLA of GCP load balancers, but it is probably a single point of failure and the SLA is not 100%
We would be tied to one cloud provider for this Kubernetes cluster
Having an LB for every application, even a small one, could get expensive
A worse web panel than Rancher's
5. Rancher 2.0: use any of the above depending on the environment
With Rancher 2.0 we could use all of the above solutions, since it allows adding existing Kubernetes clusters to Rancher, so it would also work with Kubernetes Engine on GCP. However, it is in alpha and doesn't support HA deployment yet.
Mostly I'm thinking about setting up option 3, then switching to Rancher 2.0 once it is released, and using GCP with Kubernetes Engine for larger applications. Does someone have a better solution, or should I use one of the other options listed?

How to access a Kubernetes service load balancer with Juju

Kubernetes automatically creates a load balancer for each service on GCE. How can I manage something similar on AWS with Juju?
Kubernetes services basically use kube-proxy to handle internal traffic, but the kube-proxy IP is not reachable from the external network.
Is there a way to accomplish this when deploying a Kubernetes cluster with Juju?
I can't speak to Juju specifically, but Kubernetes supports Amazon ELB, so turning up a load balancer should work.
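On AWS, a Service of type LoadBalancer provisions an ELB automatically, assuming the cluster runs with the AWS cloud provider integration enabled. A minimal sketch; the service name, selector, and ports are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-web            # hypothetical service name
spec:
  type: LoadBalancer      # provisions an ELB when the AWS cloud provider is configured
  selector:
    app: my-web           # pods this service routes to
  ports:
    - port: 80            # ELB listener port
      targetPort: 8080    # container port
```

Whether this works under Juju depends on the charms wiring up the cloud provider, which the next answer addresses.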
There is a way to accomplish this, but it depends on additional work landing in the Kubernetes charms from the ~kubernetes charmer team.
While you could reasonably hook it into an AWS ELB, Juju charms strive to be as DC-agnostic as possible so they are easily portable between data centers and clouds; a "one size fits most", if you will.
What I see being required is attaching the kube-proxy service to a load-balancer service (such as nginx) and using a template-generator service like confd or consul-template to register/render the reverse-proxy/load-balancer configs for the services.
At present the Kubernetes bundle only has an internally functioning network, and the networking model is undergoing some permutations. If you'd like to participate in this planning and dev cycle, the recommended place to participate is the juju mailing list: juju@lists.ubuntu.com