I have worked with minikube as a single-node Kubernetes cluster on Windows to learn it. Now I need to figure out whether it is possible to create a multi-node Kubernetes cluster on Windows using additional VMs.
I also need to figure out the best production scenario for Windows (to automate VM creation and configuration), as an alternative to the usual (Vagrant / Ansible) setup on Linux.
Yes, it's possible. Feel free to create several VMs manually in the tool of your choice and network them together.
Vagrant and Ansible (in pull mode) work fine on Windows as well, since Ansible then runs in the guest OS.
Or you can run Ansible and/or Docker from WSL.
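For example, here is a minimal sketch of driving such VMs with Ansible from WSL; the inventory file name, host names, IP addresses, and SSH user below are placeholders, not anything from this answer:

    # Assumes Ansible is installed in WSL and the VMs (created in Hyper-V,
    # VirtualBox, or another hypervisor) are reachable over SSH.
    cat > inventory.ini <<'EOF'
    [masters]
    k8s-master ansible_host=192.168.56.10

    [workers]
    k8s-node1 ansible_host=192.168.56.11
    k8s-node2 ansible_host=192.168.56.12
    EOF

    # Quick connectivity check against every host in the inventory
    ansible -i inventory.ini all -m ping -u ubuntu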
FWIW, other than for learning purposes, there is little benefit in simulating a Kubernetes/OpenShift (or similar) cluster on a single machine. The same resources should work whether you have one node or many.
I'm looking for a way to create a live Kubernetes cluster without too much hassle.
I've got a nice HP server, which could run a few VMs with Kubernetes on top. The reason for VMs is to isolate this from the host machine. Ideally, the VMs should only run containerd and the kubelet, and be essentially disposable for node upgrades.
However, I get lost in what tooling would provide this: minikube? microk8s? k3s? Rancher? Charmed Kubernetes? Some existing QEMU image? Some existing Vagrant config? The more managed it is, the better. So far I have liked minikube, but it doesn't have "start on reboot", for example, nor the flexibility for node upgrades.
I have tried a lot of tools while training for the CKAD certification. For my usage, the best option for a local cluster was k3s with multipass (for online clusters, I have used Civo). Both are very fast at their respective tasks, so they let me create clusters at will and dispose of them to work in clean environments.
multipass to create VMs quickly
k3s, which is nothing more than a lightweight Kubernetes
You can easily find tutorials that automate the creation of such clusters, for example (a minimal sketch of the flow follows the links below):
https://betterprogramming.pub/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c
https://medium.com/@yankee.exe/setting-up-multi-node-kubernetes-cluster-with-k3s-and-multipass-d4efed47fed5
https://github.com/superseb/multipass-k3s
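For reference, a minimal sketch of that flow; the VM names, sizes, and counts are arbitrary examples (not taken from the linked tutorials), and multipass flag names can vary slightly between versions:

    # 1. Create a server VM and two agent VMs with multipass
    multipass launch --name k3s-server --cpus 2 --memory 2G --disk 10G
    multipass launch --name k3s-agent1 --cpus 1 --memory 1G --disk 10G
    multipass launch --name k3s-agent2 --cpus 1 --memory 1G --disk 10G

    # 2. Install k3s on the server VM
    multipass exec k3s-server -- bash -c "curl -sfL https://get.k3s.io | sh -"

    # 3. Grab the server IP and node token, then join the agents
    SERVER_IP=$(multipass info k3s-server | awk '/IPv4/ {print $2}')
    TOKEN=$(multipass exec k3s-server -- sudo cat /var/lib/rancher/k3s/server/node-token)
    for node in k3s-agent1 k3s-agent2; do
      multipass exec "$node" -- bash -c \
        "curl -sfL https://get.k3s.io | K3S_URL=https://$SERVER_IP:6443 K3S_TOKEN=$TOKEN sh -"
    done

    # 4. Check the cluster from the server node
    multipass exec k3s-server -- sudo k3s kubectl get nodes

When you are done, deleting and purging the VMs with multipass gives you back a clean environment.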
I've found a partial answer here: Difference between Minikube, Kubernetes, Docker Compose, Docker Swarm, etc., but I still don't completely get it:
In my understanding, Kubernetes is a container-orchestration system. However, Minikube looks very similar to me.
Can somebody explain me when you would use minikube versus when you would use minikube, and why?
I think your question should have been "Can somebody explain me when you would use minikube versus when you would use Kubernetes, and why?"
Minikube is a small and easy Kubernetes setup for your work PC. You can install and configure a Kubernetes cluster very easily with it. However, for a production environment it is not the best choice. Minikube normally starts a virtual machine on your PC, which affects the performance of your cluster, unlike Kubernetes, which runs directly on your kernel if you use Linux. Furthermore, as Butuzov already answered, it is only one node, not a "real" cluster.
So you use Kubernetes if you are in a production environment where you need distributed systems and workloads, as well as redundancy and failure safety.
Hope that helps your understanding.
Edit: Use cases
Minikube:
A developer or DevOps engineer trying to run a complex distributed system locally for testing purposes, but deployed via Helm.
A developer or DevOps engineer trying to create a Helm deployment locally (a minimal sketch of this workflow follows the lists below).
Kubernetes (standalone):
Running complex distributed systems on production systems.
Running heavy workloads (multiple products, distributed systems) in production.
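As an illustration of the local Helm use case above, here is a minimal sketch; the chart and release names are just examples, and it assumes helm and kubectl are already installed:

    minikube start                       # local single-node cluster
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update
    helm install my-nginx bitnami/nginx  # deploy a chart into the minikube cluster
    kubectl get pods                     # verify the release's pods are running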
minikube is a one-node cluster, with a master that can also take workloads, and with a lot of issues already solved and automated. It is designed for testing and learning things from the Kubernetes ecosystem.
Kubernetes itself is an orchestrator that can come to you as a managed service with a lot of problems (persistent volumes, load balancers) already solved, or as a box of Lego bricks that you tune here and there until you reach the thing we call production ready.
minikube is OK for learning (not always, but in 90% of cases) or for experimenting with tiny loads.
I was looking into the different ways of installing Kubernetes at https://kubernetes.io/docs/setup/pick-right-solution/, but I'm still not sure which one is best for me.
I have access to a testbed that can provision CentOS 7.3 VMs through Vagrant. This testbed is basically a bare-metal environment in which the VMs are started up.
I can configure each host individually, so I suppose kubeadm (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) would be a good way to go?
Brandon,
While the Kubernetes community supports multiple cluster deployment solutions simultaneously (mainly because there is no single best solution that satisfies everyone's needs), kubeadm (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) is the solution we would suggest for you.
Kubeadm is a community-driven, cross-distribution cluster deployment and lifecycle management (LCM) tool that is widely recognized as the standard way to deploy Kubernetes clusters, with a wide variety of options.
Also, feel free to check the article (https://medium.com/@lizrice/kubernetes-in-vagrant-with-kubeadm-21979ded6c63) that describes how to deploy a Kubernetes cluster with kubeadm and Vagrant.
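To give an idea of what that looks like, here is a minimal sketch of bootstrapping with kubeadm; it assumes a container runtime plus the kubeadm/kubelet/kubectl packages are already installed on each VM, and the pod CIDR and CNI choice (Flannel) are just illustrative:

    # On the first VM (control plane):
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Configure kubectl for your user, as kubeadm's output instructs:
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Install a pod network add-on, e.g. Flannel (matches the CIDR above):
    kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

    # On each remaining VM, run the join command printed by `kubeadm init`:
    # sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    #     --discovery-token-ca-cert-hash sha256:<hash>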
I'm looking to run Kubernetes in production on a single machine - bare metal, no VM. But I can't seem to find a write-up for this scenario. The reason is basically that we have small on-premise installations, and we'd prefer to have everything based on Kubernetes rather than having two different environments - one cloud and one on-prem.
Update
With the official Ubuntu guide, I managed to get it up and running in conjunction with the following: http://www.dangtrinh.com/2017/09/how-to-deploy-openstack-in-single.html. The LXD version was wrong for what Conjure was expecting, and IPv6 needs to be turned off for LXD. Now I am the happy owner of an Intel NUC for testing, which acts as both a master and a node. Thanks for all the assistance in the comments!
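For anyone taking the kubeadm route instead of the conjure-up route described above, a common trick to let a single bare-metal machine act as both master and node is to remove the control-plane taint after kubeadm init; the exact taint key depends on your Kubernetes version:

    # Older versions (taint key "master"):
    kubectl taint nodes --all node-role.kubernetes.io/master-

    # Newer versions (taint key "control-plane"):
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-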
Does Kubernetes have the ability/need to hook into a cloud provider (AWS, Rackspace) to spin up new nodes? If so, how does it then provision the node - does it run Ansible etc? Or will Kubernetes need to have all the nodes available to it manually?
The short answer is no.
The longer answer is explained in the following blog posting that describes the new kubeadm command:
http://blog.kubernetes.io/2016/09/how-we-made-kubernetes-easy-to-install.html
There are three stages in setting up a Kubernetes cluster, and we decided to focus on the second two (to begin with):
Provisioning: getting some machines
Bootstrapping: installing Kubernetes on them and configuring certificates
Add-ons: installing necessary cluster add-ons like DNS and monitoring services, a pod network, etc.
We realized early on that there's enormous variety in the way that users want to provision their machines.
They use lots of different cloud providers, private clouds, bare metal, or even Raspberry Pi's, and almost always have their own preferred tools for automating provisioning machines: Terraform or CloudFormation, Chef, Puppet or Ansible, or even PXE booting bare metal. So we made an important decision: kubeadm would not provision machines. Instead, the only assumption it makes is that the user has some computers running Linux.
Update
http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html