Off-loading of k8s deployments to a different cluster in case of high load

Since I am unable to find anything on Google or in the official docs, I have a question.
I have a local minikube cluster with a deployment, service and ingress, which is working fine. Now, when the load on my local cluster becomes too high, I want to automatically switch to a remote cluster.
Is this possible?
How would I achieve this?
Thank you in advance
EDIT:
A remote cluster in my case would be a Rancher Kubernetes cluster, but as long as the resources on my local one are sufficient, I want to stay there.
So let's say my local cluster has enough resources to run two replicas of my application, but when a third one is needed to distribute the load, it should be deployed to the remote Rancher cluster. (I hope that is clearer now)
I imagine it would be doable with kubefed (https://github.com/kubernetes-sigs/kubefed) using ReplicaSchedulingPreferences (https://github.com/kubernetes-sigs/kubefed/blob/master/docs/userguide.md#replicaschedulingpreference): weight the local cluster very high and the remote one very low, then set spec.rebalance to true to redistribute replicas in case of high load. That approach seems a bit like a workaround, though.
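Something like this is what I have in mind (cluster names, namespace and replica counts are made up by me):

    # Sketch of the ReplicaSchedulingPreference idea - names and counts
    # are placeholders, not a tested config.
    apiVersion: scheduling.kubefed.io/v1alpha1
    kind: ReplicaSchedulingPreference
    metadata:
      name: myapp                  # must match the FederatedDeployment name
      namespace: myapp-ns
    spec:
      targetKind: FederatedDeployment
      totalReplicas: 3
      rebalance: true              # move replicas when a cluster can't schedule them
      clusters:
        local-minikube:
          weight: 100              # strongly prefer the local cluster
        remote-rancher:
          weight: 1                # only spill over under pressure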

Your idea of using Kubefed sounds good, but there is another option: Multicluster-Scheduler.
Multicluster-scheduler is a system of Kubernetes controllers that intelligently schedules workloads across clusters. It is simple to use and simple to integrate with other tools.
To make a better choice for your use case, you can read through the Comparison with Kubefed (Federation v2). All the necessary info can be found in the project's GitHub repository.
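To give you a taste of how it works: at the time of writing, the project's docs have you mark workloads for cross-cluster scheduling with a pod annotation. A sketch, assuming the annotation form from the repo's user guide (check the current README, as this may have changed):

    # Sketch assuming the election annotation from the multicluster-scheduler
    # docs at the time of writing - verify against the current README.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
          annotations:
            multicluster.admiralty.io/elect: ""   # let the scheduler place these pods in any member cluster
        spec:
          containers:
            - name: myapp
              image: nginx   # placeholder image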
Please let me know if that helped.

Related

How to simulate node joins and failures with a local Kubernetes cluster?

I'm developing a Kubernetes scheduler and I want to test its performance when nodes join and leave a cluster, as well as how it handles node failures.
What is the best way to test this locally on Windows 10?
Thanks in advance!
Unfortunately, you can't add nodes to Docker Desktop with Kubernetes enabled. Docker Desktop is single-node only.
I can think of two possible solutions, off the top of my head:
You could use any of the cloud providers. The major ones (AWS, GCP, Azure) have some kind of free tier (capped at certain usage, or timed). Adding nodes in those environments is trivial.
Create a local VM for each node. This is a less-than-perfect solution - very resource intensive. To make adding nodes easier, you could use kubeadm to provision your cluster; see the sketch below.
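For the kubeadm route, a minimal cluster config sketch (the version and pod subnet are just examples; adjust them for your CNI):

    # Minimal kubeadm config sketch - kubernetesVersion and podSubnet are
    # examples. Run `kubeadm init --config=<this file>` on the master VM,
    # then `kubeadm join` on each node VM with the token init prints.
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.18.0
    networking:
      podSubnet: 10.244.0.0/16   # e.g. the default Flannel subnet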

When to use MiniKube and when to use Kubernetes?

I've found a partial answer here (Difference between Minikube, Kubernetes, Docker Compose, Docker Swarm, etc.), but I still do not completely get it:
In my understanding, Kubernetes is a container-orchestration system. However, Minikube looks very similar to me.
Can somebody explain me when you would use minikube versus when you would use minikube, and why?
I think your question should have been "Can somebody explain me when you would use minikube versus when you would use Kubernetes, and why?"
Minikube is a small and easy Kubernetes setup for your work PC. You can install and configure a Kubernetes cluster very easily with it. However, for a production environment it is not the best choice. Minikube normally starts a virtual machine on your PC, which affects the performance of your cluster, unlike full Kubernetes, which runs directly on your kernel if you use Linux. Furthermore, as Butuzov already answered, it is only one node, not a "real" cluster.
So you use Kubernetes if you are in a production environment where you need distributed systems and workloads, as well as redundancy and failure safety.
Hope that helps your understanding.
Edit: Use cases
Minikube:
Developers or DevOps engineers trying to run a complex distributed system locally for testing purposes, but with deployment via Helm.
Developers or DevOps engineers trying to create a deployment with Helm locally.
Kubernetes (standalone):
Running complex distributed systems on production systems.
Running heavy workloads (multiple products, distributed systems) in production.
minikube is a one-node cluster, with a master that can take workloads, and with a lot of issues already solved and automated. It is designed for testing and learning things from the kubernetes ecosystem.
kubernetes itself is an orchestrator that can come to you as a managed service with a lot of problems (PVs or load balancers) already solved, or like Lego that you tune here and there... in other words, the thing we call production ready.
minikube is OK for learning (not always, but in 90% of cases) or experimenting with tiny loads.

Clusters and nodes formation in Kubernetes

I am trying to deploy my Docker images using Kubernetes orchestration tools. When reading about Kubernetes, I see documentation and many YouTube video tutorials on working with Kubernetes, but in there I only found the creation of pods and services and of their .yml files. I have some doubts, which I am adding below:
When I am using Kubernetes, how can I create clusters and nodes?
Can I deploy my current docker-compose-built image directly using pods only? Why do I need to create a services .yml file?
I am new to the containerization, Docker and Kubernetes world.
My favorite way to create clusters is kubespray, because I find Ansible very easy to read and troubleshoot, unlike more monolithic "run this binary" mechanisms for creating clusters. The kubespray repo has a Vagrant configuration file, so you can even try out a full cluster on your local machine, to see what it will do "for real".
But with the popularity of kubernetes, I'd bet if you ask 5 people you'll get 10 answers to that question, so ultimately pick the one you find easiest to reason about, because almost without fail you will need to debug those mechanisms when something inevitably goes wrong.
The short version, as Hitesh said, is "yes," but the long version is that one will need to be careful because local docker containers and kubernetes clusters are trying to solve different problems, and (as a general rule) one could not easily swap one in place of the other.
As for the second part of your question, a Service in kubernetes is designed to decouple the current provider of some networked functionality from the long-lived "promise" that such functionality will exist and work. That's because in kubernetes, the Pods (and Nodes, for that matter) are disposable and subject to termination at almost any time. It would be severely problematic if the consumer of a networked service needed to constantly update its IP address/ports/etc. to account for the coming-and-going of Pods. This is actually the exact same problem that AWS's Elastic Load Balancers are trying to solve, and kubernetes will cheerfully provision an ELB to represent a Service if you indicate that is what you would like (with similar behavior for other cloud providers).
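To make that concrete, a minimal Service sketch (name, labels and ports are made up): the selector matches pod labels rather than pod IPs, so consumers keep one stable address while Pods come and go, and type: LoadBalancer is what asks for the ELB on AWS:

    # Minimal Service sketch - name, labels and ports are illustrative.
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: LoadBalancer      # on AWS, provisions an ELB fronting the service
      selector:
        app: my-app           # matches pod labels, not specific pod IPs
      ports:
        - port: 80            # port consumers connect to
          targetPort: 8080    # port the pods actually listen on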
If you are not yet comfortable with containers and docker as concepts, then I would strongly recommend starting with those topics, and moving on to understanding how kubernetes interacts with those two things after you have a solid foundation. Otherwise, a lot of the terminology -- and even the problems kubernetes is trying to solve -- may continue to seem opaque.

Best way to configure Kubernetes on local server

Through a stroke of luck I've been given an extremely powerful server in my office - I'd love to somehow set up a replica of our staging Kubernetes environment on it. Our staging Kube environment is 5 nodes running on AWS that each have different configurations. I can't find much in the way of best practice guides (probably because this is a very weird use case) for this configuration.
My gut feel is this:
Install some kind of bare metal OS on the machine
Set up multiple VMs on the machine each configured to mirror a node from staging
Install the Kube master on one of the VMs
Enrol each of the other VMs as a node under Kubernetes
Run my deployments
Is there any better way for me to configure this, or any potential issues/roadblocks I may hit if I follow this approach?
If you want to have everything in one machine, I would also go for the multi-VM option. With Vagrant you could make the process simpler. This could help you:
https://github.com/pires/kubernetes-vagrant-coreos-cluster
After setting up the cluster you could adapt it to mimic the state of your staging cluster.
The only issue that comes to mind is that of overlay networking and external access. If you configure NAT networking, you would have issues with external access and probably no issue with the network overlay. On the other hand, I am not 100% certain how the overlay network would work in a bridged setting.

CockroachDB Clustered Across Google Container Engine Clusters, Stateful Sets

CockroachDB has a relatively simple clustering mechanism: you initialize the DB with a command-line option pointing at the hostnames of the other cockroach machines (but this question is really relevant for any peer-to-peer clustered DB).
One of the benefits of Cockroach is that you can cluster across regions within a continent. Cockroach themselves published a good k8s config to stand up a cockroach cluster on stateful sets. See this config.
I'm trying to find a way to span the cockroach cluster across two GKE clusters in different regions. DNS and connectivity between the regions isn't really an issue, but I can't figure out how to address the stateful set instances. Internal to the cluster, they're cockroachdb-1.cockroach. Is there any way to allow these to be cross-cluster addressable? One option would be to expose them as a NodePort and point instances from the second cluster at machines and ports in the first cluster. That seems hacky, and if the machine goes down it represents a single point of failure. Any other ideas about how to do this? I also explored k8s federation, but I don't think it really addresses this issue either (though I could be wrong).
One final option would be exposing each instance through a load balancer...I don't really like that, but maybe it's the only way?
This is a good question that I've been meaning to play around with soon. You've been checking into a reasonable set of ideas. The core problem, as you allude to, is that every cockroach process needs to be able to individually address every other cockroach process.
I don't know how well cluster federation has developed over the last 12-18 months, but it seems like that's where this really should be solved.
Barring great developments in cluster federation, the "easiest" way that comes to mind is to use host networking for all the cockroachdb pods. You can specify a few known machine IPs as the join addresses for new pods to connect to, and then they'll all be able to talk to each other. I've made this work with StatefulSets before (by setting dnsPolicy: ClusterFirstWithHostNet along with hostNetwork: true), but I'm not sure it's a well-supported use case. You'd probably be better off using a DaemonSet (with a label selector to only run on certain nodes if you don't want it on them all). Something like this: https://gist.github.com/a-robinson/ec2b86783ccbf053c83ba83170673d63
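For illustration, the rough shape of that approach (not the gist verbatim; the image tag, node label, and join IPs are placeholders):

    # Rough shape of the host-networking DaemonSet approach - the image tag,
    # node label, and join IPs are placeholders.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: cockroachdb
    spec:
      selector:
        matchLabels:
          app: cockroachdb
      template:
        metadata:
          labels:
            app: cockroachdb
        spec:
          hostNetwork: true                    # pods bind directly to the node's IP
          dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS despite host networking
          nodeSelector:
            cockroachdb: "true"                # only run on nodes you label for it
          containers:
            - name: cockroachdb
              image: cockroachdb/cockroach:v2.0.0   # placeholder version
              command:
                - /cockroach/cockroach
                - start
                - --insecure                   # demo only; use certs in production
                - --join=10.0.0.1,10.0.0.2     # known machine IPs, can span both clusters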
If that doesn't tickle your fancy, then creating a service for each StatefulSet instance unfortunately is probably the next best bet. As of a fairly recent change in Kubernetes, a separate label will be created for each pod, which should make this easier than it used to be: https://github.com/kubernetes/kubernetes/pull/55329
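For that route, the new per-pod label makes the selector straightforward. A sketch (service name and type are up to you; 26257 is cockroach's inter-node/SQL port):

    # One Service per StatefulSet pod, selecting on the per-pod label added
    # in kubernetes/kubernetes#55329 - name and type are illustrative.
    apiVersion: v1
    kind: Service
    metadata:
      name: cockroachdb-0-external
    spec:
      type: LoadBalancer        # or NodePort, with the trade-offs discussed above
      selector:
        statefulset.kubernetes.io/pod-name: cockroachdb-0
      ports:
        - name: grpc
          port: 26257
          targetPort: 26257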
I'd love to see other suggestions for this, though, since it's all kind of manual or infrastructure-specific.