When to use Minikube and when to use Kubernetes?

I've found a partial answer here: Difference between Minikube, Kubernetes, Docker Compose, Docker Swarm, etc., but I still don't completely get it:
In my understanding, Kubernetes is a container-orchestration system. However, Minikube looks very similar to me.
Can somebody explain to me when you would use minikube versus when you would use minikube, and why?

I think your question should have been "Can somebody explain to me when you would use minikube versus when you would use Kubernetes, and why?"
Minikube is a small and easy Kubernetes setup for your work PC. You can install and configure a Kubernetes cluster very easily with it. However, for a production environment it is not the best choice. Minikube normally starts a virtual machine on your PC, which affects the performance of your cluster, unlike Kubernetes, which runs directly on your kernel if you use Linux. Furthermore, as Butuzov already answered, it is only one node, not a "real" cluster.
So you use Kubernetes if you are in a production environment where you need distributed systems and workloads as well as redundancy and failure safety.
Hope that helps for your understanding.
Edit: Use cases
Minikube:
A developer or DevOps engineer trying to run a complex distributed system locally for testing purposes, with deployment via Helm (see the sketch after this list).
A developer or DevOps engineer creating a deployment with Helm locally.
Kubernetes (standalone):
Running complex distributed systems on production systems.
Running heavy workloads (multiple products, distributed systems) in production.
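A minimal sketch of that Minikube + Helm workflow; the release name, chart path, and resource sizes are placeholders I chose for illustration, not anything from the question:

```
# Start a local single-node cluster (driver and sizing are just example values)
minikube start --cpus=2 --memory=4096

# Point kubectl at the minikube cluster (minikube start usually does this for you)
kubectl config use-context minikube

# Install a chart locally; "my-app" and ./charts/my-app are placeholder names
helm install my-app ./charts/my-app

# Check that the release's pods come up
kubectl get pods
```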

minikube is a one-node cluster, with a master that can also take workloads, and with a lot of issues already solved and automated. It is designed for testing and learning things from the Kubernetes ecosystem.
Kubernetes itself is an orchestrator that can come to you as a managed service with a lot of problems (PVs or load balancers) already solved, or like a Lego set that you tune here and there... well, the thing we call production ready.
minikube is OK for learning (not always, but in 90% of cases) or experimenting with tiny loads.

Related

How can I easily create a kubernetes cluster on KVM?

I'm looking for a way to create a live Kubernetes cluster without too much hassle.
I've got a nice HP server which could run a few VMs with Kubernetes on top. The reason for VMs is to isolate this from the host machine. Ideally, the VMs should only run containerd and kubelet, and are essentially disposable for node upgrades.
However, I get lost in what tooling would provide this: minikube? microk8s? k3s? Rancher? Charmed Kubernetes? Some existing qemu image? Some existing Vagrant config? The more managed it is, the better. So far I liked minikube, but it doesn't have "start on reboot", for example, nor the flexibility for node upgrades.
I have tried a lot of tools to train for the CKAD certification. For my usage, the better option for a local cluster was k3s and multipass (for online clusters, I have used Civo). Both are very fast at their respective tasks, so I can create clusters at will and dispose of them to work in clean environments.
multipass to create VMs quickly
k3s, which is nothing more than a lightweight Kubernetes
You can easily find tutorials to automate the creation of clusters, for example:
https://betterprogramming.pub/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c
https://medium.com/@yankee.exe/setting-up-multi-node-kubernetes-cluster-with-k3s-and-multipass-d4efed47fed5
https://github.com/superseb/multipass-k3s
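A minimal sketch of the multipass + k3s approach those tutorials describe; VM names and sizes are placeholder choices, the --memory flag is spelled --mem on older multipass releases, so treat this as an outline rather than a supported script:

```
# Create a VM for the k3s server node
multipass launch --name k3s-server --cpus 2 --memory 2G --disk 10G

# Install k3s on the server VM using the official install script
multipass exec k3s-server -- bash -c "curl -sfL https://get.k3s.io | sh -"

# Grab the server's IP and the node join token
SERVER_IP=$(multipass info k3s-server | grep IPv4 | awk '{print $2}')
TOKEN=$(multipass exec k3s-server -- sudo cat /var/lib/rancher/k3s/server/node-token)

# Create a worker VM and join it to the cluster
multipass launch --name k3s-agent-1 --cpus 2 --memory 2G --disk 10G
multipass exec k3s-agent-1 -- bash -c \
  "curl -sfL https://get.k3s.io | K3S_URL=https://$SERVER_IP:6443 K3S_TOKEN=$TOKEN sh -"

# Verify from the server node
multipass exec k3s-server -- sudo k3s kubectl get nodes
```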

Off-Loading of k8s deployments to different cluster in case of high loads

Since I am unable to find anything on Google or in the official docs, I have a question.
I have a local minikube cluster with deployment, service and ingress, which is working fine. Now when the load on my local cluster becomes too high I want to automatically switch to a remote cluster.
Is this possible?
How would I achieve this?
Thank you in advance
EDIT:
A remote cluster in my case would be a Rancher Kubernetes cluster, but as long as the resources on my local one are sufficient I want to stay there.
So let's say my local cluster has enough resources to run two replicas of my application, but when a third one is needed to distribute the load, it should be deployed to the remote Rancher cluster. (I hope that is clearer now.)
I imagine it would be doable with kubefed (https://github.com/kubernetes-sigs/kubefed) using ReplicaSchedulingPreferences (https://github.com/kubernetes-sigs/kubefed/blob/master/docs/userguide.md#replicaschedulingpreference), weighting the local cluster very high and the remote one very low, and setting spec.rebalance to true to redistribute replicas in case of high load, but that approach seems a bit like a workaround. A sketch of what that could look like is below.
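For illustration, here is roughly what such a ReplicaSchedulingPreference could look like, assuming the clusters are registered with kubefed as "local" and "rancher-remote" and the workload is a FederatedDeployment named "my-app"; all the names and numbers are placeholders, and the exact fields should be checked against the kubefed user guide:

```
kubectl apply -f - <<EOF
apiVersion: scheduling.kubefed.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: my-app                 # must match the FederatedDeployment name
  namespace: my-namespace
spec:
  targetKind: FederatedDeployment
  totalReplicas: 3
  rebalance: true              # redistribute replicas when conditions change
  clusters:
    local:                     # kubefed name of the local minikube cluster
      weight: 100
      maxReplicas: 2           # cap the local cluster at two replicas
    rancher-remote:            # kubefed name of the remote Rancher cluster
      weight: 1
EOF
```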
Your idea of using Kubefed sounds good but there is an another option: Multicluster-Scheduler.
Multicluster-scheduler is a system of Kubernetes controllers that intelligently schedules workloads across clusters. It is simple to use and simple to integrate with other tools.
To be able to make a better choice for your use case you can read through the Comparison with Kubefed (Federation v2).
All the necessary info can be found in the provided GitHub thread.
Please let me know if that helped.

Is there any way to deploy multi-container application in K8S single node for production?

What I want to do is deploy a multi-container application...
In RHEL OS
As a Red Hat supportable product (if possible)
In a single-node K8S cluster (bare-metal machine)
So I found several ways, but I am concerned about...
minikube, minishift, OKD, CodeReady Containers
First, they run in a VM, but what I want is to run on the host.
Second, their docs say they are not for production environments.
So, Is there any PaaS for single-node cluster as production environment?
Docker, Docker-compose
The deployment target OS would probably be RHEL 8. I guess it is not a good idea to use Docker, because Red Hat products are moving away from Docker. Even in the RHEL 8 repositories, there is no docker RPM for el8 yet.
My question is:
Is there any PaaS for a single-node cluster as a production environment?
If not, is docker-compose the best option?
As was already mentioned, you should not use a single-node setup in a production environment.
You should not do that because if your server drops, your service is offline. There is nothing to switch to, nothing that might continue the process that was being worked on.
If you still want to set up a single-node Kubernetes cluster, you can do that using kubeadm. I think this is as close to production grade as you can get (see the sketch below).
Other than that, as an alternative you can play with Installing Kubernetes with Minikube or Install a local Kubernetes with MicroK8s.
It's up to you which one you choose, but you need to remember this should not be running as production; this should be a lab or a test environment which, if it works as expected, will be migrated into a multi-node, production-grade cluster.
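A rough sketch of the kubeadm single-node route mentioned above; the pod CIDR, CNI choice, and taint key vary by Kubernetes version, so verify against the kubeadm docs for your release:

```
# Initialize the control plane (the pod CIDR shown matches the Flannel default)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl usable for your user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a CNI plugin, e.g. Flannel (any supported CNI works)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Allow workloads on the control-plane node, since this is the only node
# (on older releases the taint key is node-role.kubernetes.io/master)
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```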
As for PaaS as a single node there is Dokku.
Docker powered mini-Heroku. The smallest PaaS implementation you've ever seen.
And if you would consider using a cloud for PaaS, you can choose from AWS Cloud9, Azure App Service or Google App Engine.
A single-node cluster is not recommended for production applications. You need scalability, high availability, and fault tolerance for production apps. You must have more than one node to get these features.

Deploy Kubernetes on Self-host Production environment

I am trying to install Kubernetes on a self-hosted production environment running Ubuntu 16.04. I am not able to find any helpful guide to set up a production-grade Kubernetes master and connect worker nodes to it.
Any help is much appreciated.
You can use Kubespray to self-host a production environment; a rough outline of the workflow is below.
https://github.com/kubernetes-incubator/kubespray
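A hedged outline of the usual Kubespray workflow; the inventory name and node IPs are placeholders, and older releases use hosts.ini instead of hosts.yaml, so check the repository's README for your version:

```
git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray

# Install Ansible and the other Python requirements
pip install -r requirements.txt

# Copy the sample inventory and declare your nodes
cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.0.0.1 10.0.0.2 10.0.0.3)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Run the playbook against your hosts
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```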
It depends on what you mean by "self-host". Most people think it's about deploying Kubernetes in their own environment.
If you want to compare different approaches to deploy k8s in a custom environment, refer to this article which covers a bunch of options suitable for that.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
However, in Kubernetes there is a different definition of "self-hosted": it means running Kubernetes itself as a workload on Kubernetes. If you are interested in a real self-hosted approach (on a custom environment), refer to this article.
Hope this helps
You can use Typhoon to provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon are that you can choose which cloud provider provisions your infrastructure (this is done using Terraform), and the fact that it gives you upstream k8s is a big plus too.
Internally, it uses bootkube to bring up a temporary control plane, which consists of:
api-server
controller-manager
scheduler
and then, once the temporary control plane is up, we inject the objects into the API server to get our k8s cluster.
Have a look at this KubeCon talk given by CoreOS, which explains how this works.

how to install kubernetes manually?

While getting familiar with Kubernetes I see tons of tools that are supposed to help me install Kubernetes anywhere, but I don't understand exactly what they do inside, and as a result I don't understand how to troubleshoot issues.
Can someone provide a link to a tutorial on how to install Kubernetes without any tools?
There are two good guides on setting up Kubernetes manually:
Kelsey Hightower's Kubernetes the hard way
Kubernetes guide on getting started from scratch
Kelsey's guide assumes you are using GCP or AWS as the infrastructure, while the Kubernetes guide is a bit more agnostic.
I wouldn't recommend running either of these in production unless you really know what you're doing. However, they are great for learning what is going on under the hood. Even if you just read the guides and don't use them to set up any infrastructure, you should gain a better understanding of the pieces that make up a Kubernetes cluster. You can then use one of the helpful setup tools to create your cluster, but now you will understand what it is actually doing and can debug when things go wrong.
For simplicity, you can view k8s as three components
etcd
k8s master, which includes kube-apiserver, controller, scheduler
node, which contains kubelet
You can install etcd and k8s master together in one machine. The procedures are
Install etcd. Download the etcd package and run it, which is quite simple. Remember the port of the etcd service, e.g. 2379 or 4001, or any you set.
Git clone the Kubernetes project from GitHub. Find the executable binaries; e.g. for k8s version 1.3, you can find kube-apiserver, kube-controller-manager and kube-scheduler in the src/k8s.io/kubernetes/_output/local/bin/linux/amd64 folder.
Then run kube-apiserver, specifying the etcd IP and port (e.g. --etcd_servers=http://127.0.0.1:4001).
Run the scheduler and controller, specifying the apiserver IP and port (e.g. --master=127.0.0.1:8080). There is no order requirement between the scheduler and the controller.
The master is now running. Make sure these processes run without errors: if etcd exits, the apiserver will exit; if the apiserver exits, the scheduler and controller will exit.
On another machine (virtual preferred, network connected), run the kubelet. The kubelet can also be found in the previous folder (src/k8s.io/kubernetes/_output/local/bin/linux/amd64); specify the apiserver IP and port (e.g. --api-servers=http://10.10.10.19:8080). You may install Docker or something else on the node, to prove that you can create a container.
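Condensing the steps above into a command sketch; the IPs, ports, and flag spellings are the old example values from this answer (around k8s 1.3) and differ in current releases, so take this as an outline only:

```
# 1. Start etcd, listening on the port you will point the apiserver at
./etcd --listen-client-urls http://127.0.0.1:4001 --advertise-client-urls http://127.0.0.1:4001 &

# 2. Start the API server, pointing it at etcd
./kube-apiserver --etcd_servers=http://127.0.0.1:4001 --insecure-bind-address=0.0.0.0 --insecure-port=8080 &

# 3. Start the controller-manager and scheduler, pointing them at the apiserver
./kube-controller-manager --master=127.0.0.1:8080 &
./kube-scheduler --master=127.0.0.1:8080 &

# 4. On the node machine, start the kubelet against the apiserver
./kubelet --api-servers=http://10.10.10.19:8080 &
```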