Through a stroke of luck I've been given an extremely powerful server in my office - I'd love to somehow set up a replica of our staging Kubernetes environment on it. Our staging Kube environment is 5 nodes running on AWS that each have different configurations. I can't find much in the way of best-practice guides for this setup (probably because it's a very weird use case).
My gut feel is this:
Install some kind of bare metal OS on the machine
Set up multiple VMs on the machine each configured to mirror a node from staging
Install the Kubernetes master on one of the VMs
Enrol each of the other VMs as a node in the cluster
Run my deployments
Is there any better way for me to configure this or any potential issues I may hit/roadblocks if I follow this approach?
If you want to have everything on one machine, I would also go for the multi-VM option. Vagrant could make the process simpler. This could help you:
https://github.com/pires/kubernetes-vagrant-coreos-cluster
After setting up the cluster you could adapt it to mimic the state of your staging cluster.
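For illustration, a minimal sketch of driving that repository (the NUM_INSTANCES variable name is taken from the repo's README as I remember it, so treat it as an assumption and double-check the README):

# Clone the linked repo and bring up a multi-node CoreOS cluster.
git clone https://github.com/pires/kubernetes-vagrant-coreos-cluster.git
cd kubernetes-vagrant-coreos-cluster
# 4 workers plus the master would mirror a 5-node staging layout.
NUM_INSTANCES=4 vagrant up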
The only issue that comes to mind is overlay networking and external access. If you configure NAT networking you would have issues with external access but probably none with the network overlay. On the other hand, I am not 100% certain how the overlay network would behave in a bridged setting.
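If you go the VirtualBox route, switching a VM between NAT and bridged networking is a one-liner (the VM and host interface names below are placeholders), so it is cheap to test the overlay under both modes:

# Attach the VM's first NIC to the host's physical interface (bridged mode).
VBoxManage modifyvm "staging-node-1" --nic1 bridged --bridgeadapter1 eth0
# Or put it back on NAT to compare behaviour:
VBoxManage modifyvm "staging-node-1" --nic1 nat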
I'm developing a Kubernetes scheduler and I want to test its performance when nodes join and leave a cluster, as well as how it handles node failures.
What is the best way to test this locally on Windows 10?
Thanks in advance!
Unfortunately, you can't add nodes to Docker Desktop with Kubernetes enabled. Docker Desktop is single-node only.
I can think of two possible solutions, off the top of my head:
You could use one of the cloud providers. The major ones (AWS, GCP, Azure) have some kind of free tier (capped by usage or time). Adding nodes in those environments is trivial.
Create a local VM for each node. This is a less-than-perfect solution - very resource intensive. To make adding nodes easier, you could use kubeadm to provision your cluster; a sketch of simulating joins and failures follows below.
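As a rough sketch of that second option, assuming a kubeadm-provisioned cluster with VMs named node1, node2, ... (names are placeholders), exercising your scheduler with nodes leaving, re-joining, and failing could look like this:

# Simulate a graceful node leaving: evict its pods, then remove it.
# (On kubectl older than 1.20, the flag is --delete-local-data.)
kubectl drain node2 --ignore-daemonsets --delete-emptydir-data
kubectl delete node node2

# Generate a fresh join command on the control plane node...
kubeadm token create --print-join-command
# ...and run the printed command inside node2's VM to re-join it.

# Simulate an abrupt node failure from the hypervisor, e.g. with VirtualBox:
VBoxManage controlvm node3 poweroff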
I'm looking for a way to create a live Kubernetes cluster without too much hassle.
I've got a nice HP server, which could run a few VMs with Kubernetes on top. The reason for VMs is to isolate this from the host machine. Ideally, the VMs should only run containerd and kubelet and be essentially disposable for node upgrades.
However, I get lost in what tooling would provide this. minikube? microk8s? k3s? rancher? charmed kubernetes? some existing qemu image? some existing vagrant config? The more managed it is, the better. So far I have liked minikube, but it doesn't have "start on reboot", for example, nor the flexibility for node upgrades.
I have tried a lot of tools while training for the CKAD certification. For my usage, the best option for a local cluster was k3s with multipass (for online clusters, I have used Civo). Both are very fast at their respective tasks, which allows me to create clusters at will and dispose of them so I can work in clean environments.
multipass to create VMs quickly
k3s, which is nothing other than a lightweight Kubernetes
You can easily find tutorials that automate cluster creation, for example (a condensed sketch follows these links):
https://betterprogramming.pub/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c
https://medium.com/@yankee.exe/setting-up-multi-node-kubernetes-cluster-with-k3s-and-multipass-d4efed47fed5
https://github.com/superseb/multipass-k3s
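Condensed from those tutorials, the whole flow is roughly this (VM names and sizes are arbitrary, and multipass flag spellings vary slightly between versions, so check multipass launch --help):

# Create two VMs: one k3s server, one agent.
multipass launch --name k3s-server --cpus 2 --mem 2G --disk 10G
multipass launch --name k3s-agent --cpus 1 --mem 1G --disk 5G

# Install k3s on the server VM.
multipass exec k3s-server -- bash -c "curl -sfL https://get.k3s.io | sh -"

# Grab the server's IP and the node-join token.
SERVER_IP=$(multipass info k3s-server | grep -i ipv4 | awk '{print $2}')
TOKEN=$(multipass exec k3s-server -- sudo cat /var/lib/rancher/k3s/server/node-token)

# Join the agent VM to the cluster.
multipass exec k3s-agent -- bash -c \
  "curl -sfL https://get.k3s.io | K3S_URL=https://$SERVER_IP:6443 K3S_TOKEN=$TOKEN sh -"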
Since I am unable to find anything on Google or in the official docs, I have a question.
I have a local minikube cluster with deployment, service and ingress, which is working fine. Now when the load on my local cluster becomes too high I want to automatically switch to a remote cluster.
Is this possible?
How would I achieve this?
Thank you in advance
EDIT:
A remote cluster in my case would be a Rancher Kubernetes cluster, but as long as the resources on my local one are sufficient I want to stay there.
So let's say my local cluster has enough resources to run two replicas of my application, but when a third one is needed to distribute the load, it should be deployed to the remote Rancher cluster. (I hope that is clearer now.)
I imagine it would be doable with kubefed (https://github.com/kubernetes-sigs/kubefed) when using the ReplicaSchedulingPreferences (https://github.com/kubernetes-sigs/kubefed/blob/master/docs/userguide.md#replicaschedulingpreference) and just weighting the local cluster very high and the remote one very low and then setting spec.rebalance to true to distribute it in case of high loads, but that approach seems a bit like a workaround.
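Concretely, something like this is what I had in mind (untested; it assumes kubefed is installed, both clusters are joined to the federation under the names local and remote, and my Deployment is already wrapped as a FederatedDeployment called my-app):

kubectl apply -f - <<EOF
apiVersion: scheduling.kubefed.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: my-app              # must match the FederatedDeployment name
  namespace: my-namespace
spec:
  targetKind: FederatedDeployment
  totalReplicas: 3
  rebalance: true           # move replicas back when capacity frees up
  clusters:
    local:
      weight: 100           # strongly prefer the local cluster...
      maxReplicas: 2        # ...but cap it at what it can actually run
    remote:
      weight: 1             # overflow replicas land here
EOF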
Your idea of using kubefed sounds good, but there is another option: Multicluster-Scheduler.
Multicluster-scheduler is a system of Kubernetes controllers that intelligently schedules workloads across clusters. It is simple to use and simple to integrate with other tools.
To be able to make a better choice for your use case you can read through the Comparison with Kubefed (Federation v2).
All the necessary info can be found in the provided GitHub thread.
Please let me know if that helped.
I've found a partial answer here (Difference between Minikube, Kubernetes, Docker Compose, Docker Swarm, etc.), but I still do not completely get it:
In my understanding, Kubernetes is a container-orchestration system. However, Minikube looks very similar to me.
Can somebody explain to me when you would use minikube versus when you would use minikube, and why?
I think your question should have been "Can somebody explain to me when you would use minikube versus when you would use Kubernetes, and why?"
Minikube is a small and easy Kubernetes setup for your work PC. You can install and configure a Kubernetes cluster very easily with it. However, for a production environment it is not the best choice. Minikube normally starts a virtual machine on your PC, which will affect the performance of your cluster, unlike Kubernetes proper, which runs directly on your kernel if you use Linux. Furthermore, as Butuzov already answered, it is only one node, not a "real" cluster.
So you use Kubernetes in a production environment where you need distributed systems and workloads as well as redundancy and fault tolerance.
Hope that helps for your understanding.
Edit: Use cases
Minikube:
A developer or DevOps engineer trying to run a complex distributed system locally for testing purposes, but deploying it via Helm (see the sketch after this list).
A developer or DevOps engineer creating a Helm deployment locally.
Kubernetes (standalone):
Running complex distributed systems in production.
Running heavy workloads (multiple products, distributed systems) in production.
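For the Minikube use cases, a typical local workflow might look like this (the release name, chart path, and values file are placeholders):

# Start a local single-node cluster.
minikube start
# Install the same Helm chart you would ship to production, with local overrides.
helm install my-release ./charts/my-app --values values-local.yaml
# Inspect the result exactly as you would on a real cluster.
kubectl get pods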
minikube is a one-node cluster with a master that can also take workloads, and with a lot of issues solved and automated for you. It is designed for testing and learning things from the Kubernetes ecosystem.
Kubernetes itself is an orchestrator that can come to you as a managed service with a lot of the problems (PVs, load balancers) already solved, or as a box of Lego bricks that you tune here and there... in other words, the thing we call production ready.
minikube is OK for learning (not always, but in 90% of cases) or for experimenting with tiny loads.
On NixOS it is easy to set up Kubernetes with a single line of config:
services.kubernetes.roles = ["master" "node"];
This installs both the master and node components on the local system and therefore creates a nice little working local kubernetes "cluster".
If I want to set up a "real" cluster I need to install it over multiple hosts, but I'm not sure about the intended way to connect them.
If I install only the master components on one host and only the node components on another host, how do I tell the node where to find its master?
There are quite a few configuration options, but I'm not sure how to use them correctly. Is anyone aware of some example setup?
Have a look at the latter part of Jaka Hudoklin/offlinehacker's NixCon '15 presentation about Kubernetes on NixOS at GateHub. It includes an example configuration that sets Docker up to use a bridge interface. You can then use Open vSwitch to link the networks together.
I'm currently working to automate Kubernetes deployment with NixOS/NixOps. It works quite well with multiple local VirtualBox nodes. Regarding AWS integration, I still have to fix a few things. Then I will try to integrate with other cloud providers.
You can have a look to this repository: NixOps Kubernetes. Do not hesitate to fork and help me improve it.
Have you checked the kubeadm tool? You can check it out at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
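With kubeadm the master/node wiring is explicit, which also answers the "how does the node find its master" question directly. A minimal sketch (the pod CIDR is just the common flannel default; the addresses and tokens are placeholders that kubeadm itself prints for you):

# On the master host: initialise the control plane.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Install a pod network add-on (e.g. flannel) before joining nodes.

# kubeadm init prints a join command containing the master's address and a
# bootstrap token; run that printed command on each node host, e.g.:
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>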