I'd like to run two instances of k3s on the same machine.
Let's say I already have two different master servers.
What I'd like to do is to install:
A K3S instance to deal with Master #1.
A second instance to deal with Master #2.
The first instance will be managed by me, while the second will be managed by my customer.
Is there a way to set up two instances of K3S on the same machine?
Running multiple instances of k3s is not supported out of the box.
You can, however, use tools like k3d (which runs k3s inside Docker containers) or multipass-k3s (which runs it inside a VM).
A chroot jail is not really an option: rootless K3s is still in an experimental stage [reference], and applications running as root can easily break out of the jail.
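For example, taking the VM route, you could run one k3s agent per VM and point each one at a different master; a rough sketch (VM names, master addresses, and tokens below are placeholders for your own setup):

# Create two lightweight VMs on the same machine.
multipass launch --name agent-mine
multipass launch --name agent-customer

# Inside each VM, install k3s in agent mode and point it at a different master.
# The K3S_URL / K3S_TOKEN values are placeholders for your two servers.
multipass exec agent-mine -- bash -c \
  'curl -sfL https://get.k3s.io | K3S_URL=https://master1.example.com:6443 K3S_TOKEN=<token1> sh -'
multipass exec agent-customer -- bash -c \
  'curl -sfL https://get.k3s.io | K3S_URL=https://master2.example.com:6443 K3S_TOKEN=<token2> sh -'

This keeps the two instances fully isolated from each other, so your customer's master only ever sees its own agent VM.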
Our company uses Kubernetes in all our environments, as well as on our local MacBooks using minikube.
We have many microservices, and most of them run on the JVM, which requires a large amount of memory. We have started facing an issue where we cannot run our stack on minikube because the local machine runs out of memory.
We thought about multiple solutions:
The first was to create a k8s cloud development environment: when a developer is working on a single microservice on his local MacBook, he would redirect the outbound traffic into the cloud instead of the local minikube. But this solution creates new problems:
How would a pod inside the cloud dev environment send data back to the local developer machine? It's not just a single request/response scenario.
We have many developers, and they can overlap each other with the different versions of each service they need to deploy in the cloud. (We could give each developer a separate namespace, but we would need a huge cluster to support that.)
The second solution was to use a tool like skaffold or draft to deploy our current code into the cloud development environment. That would solve issue #1, but again we see problems:
Slow development cycle: building a Java image, pushing it to the remote cloud, and waiting for it to initialize takes too much time for a developer to work efficiently.
And we are still facing issue #2.
Another thought was: Kubernetes supports multiple nodes, so why not just add another node, a remote node that sits in the cloud, to our local minikube? The main issue is that minikube is a single-node solution. Also, we didn't find any resources for this on the web.
The last thought was to connect the minikube Docker daemon to a remote machine, so we would use minikube on the local machine while Docker runs the containers on a remote cloud server. No luck so far: minikube crashes when we attempt this manipulation, and we didn't find any resources for it on the web either.
Any thoughts on how to solve our issue? Thank you!
What are the differences in running Minikube with a VM hypervisor (VirtualBox/HVM) vs none?
I am not asking whether or not Minikube can run without a hypervisor. I know that running with '--vm-driver=none' is possible, that it runs on the local machine, and that it requires Docker to be installed.
I am asking what the performance differences are. There is not a lot of documentation on how '--vm-driver=none' works. I am wondering whether running without the VM would affect the functionality of Minikube.
This is how I explain it to myself:
driver!=none mode
In this case minikube provisions a new docker-machine (Docker daemon/Docker host) using any of the supported providers. For instance:
a) local provider = your Windows/Mac local host: it frequently uses VirtualBox as a hypervisor and creates inside it a VM based on the boot2docker image (configurable). In this case the k8s bootstrapper (kubeadm) creates all Kubernetes components inside this isolated VM. In this setup you usually have two Docker daemons: your local one for development (if you installed it beforehand), and one running inside the minikube VM.
b) cloud hosts - not supported by minikube
driver=none mode
In this mode, your local docker host is re-used.
In the first case there will be a performance penalty, because each VM generates some overhead by running several system processes required by the VM itself, in addition to those required by the k8s components running inside the VM. I think driver=none is similar to the "kind" flavor of k8s bootstrapping, meant for doing CI/integration tests.
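For comparison, a minimal sketch of starting minikube in each mode, using the flag name from the question (newer minikube releases call the same option --driver):

# VM mode: provisions a VirtualBox VM with its own Docker daemon and runs k8s inside it.
minikube start --vm-driver=virtualbox

# none mode: reuses the local Docker daemon and runs the k8s components directly
# on the host (Linux only, typically requires root).
sudo minikube start --vm-driver=none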
I have two Linux machines on which I am learning Kubernetes. Since resources are limited, I want to configure the same node as master and slave, so the configuration looks like:
192.168.48.48 (master and slave)
191.168.48.49 (slave)
How do I perform this setup? Any help will be appreciated.
Yes. You can use Minikube (the Minikube install) for a single-node cluster, or use kubeadm to install Kubernetes with one node as master and another as a worker node. Here is the doc, but make sure you satisfy the prerequisites for the nodes and do the small amount of housekeeping shown in the official document. Then you can create a two-machine cluster for testing purposes, since, as you showed with the two different IPs, you have two Linux machines.
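A rough sketch with kubeadm, using the IPs from your question (the exact taint key depends on the Kubernetes version, and the token/hash come from the kubeadm init output):

# On 192.168.48.48: initialize the control plane (master).
sudo kubeadm init --apiserver-advertise-address=192.168.48.48

# Set up kubectl access as instructed by the kubeadm init output, then allow normal
# workloads on the master so this node acts as both master and worker.
# (On newer Kubernetes versions the taint key is node-role.kubernetes.io/control-plane.)
kubectl taint nodes --all node-role.kubernetes.io/master-

# On the second machine: join the cluster with the command printed by kubeadm init
# (token and hash below are placeholders).
sudo kubeadm join 192.168.48.48:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>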
Hope this helps.
I am trying to create a Kubernetes cluster with multiple nodes using the same machine. I want to create separate VMs and create a node in each of those VMs. I am currently exploring kubeadm and minikube for this task.
While exploring, I ran into the following points of confusion:
I need to create 4 nodes, each in a different VM. Can I use kubeadm for this requirement?
I also found that Minikube is used for creating a single-node setup and can also take care of creating the VM. What is the difference between kubeadm and minikube?
If I want to create nodes in different VMs, which tool should I use, and how do I install the Kubernetes cluster master?
If I am using VMs, can I directly install VMware Workstation / VirtualBox on my Ubuntu 16.04?
On AWS EC2, Ubuntu is already provided as a virtual machine. So is it possible to install VMware Workstation on that Ubuntu, given that it would be VMs inside another VM?
Kubeadm should be a good choice for you. It is quite easy to use by just following the documentation. Minikube would give you only a single-node Kubernetes cluster. (As of minikube 1.10.1, it is also possible to run multi-node clusters.)
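For example, a minimal sketch with a newer minikube (the profile name is just an example):

# Start a cluster with one control-plane node plus three additional nodes
# (requires minikube >= 1.10.1).
minikube start --nodes 4 -p multinode
kubectl get nodes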
Kubeadm is a tool to get Kubernetes up and running on already existing machines. It basically configures and starts all required Kubernetes components (for a minimum viable cluster). Kubeadm is the right tool to bootstrap the Kubernetes cluster on your virtual machines, but you need to prepare the machines yourself (install the OS plus required software, set up networking, ...); kubeadm will not do that for you.
Minikube is a tool which lets you start a single-node Kubernetes cluster locally. This is usually done in a VM; minikube supports VirtualBox, KVM and others. It will start the virtual machine for you and take care of everything, but it will not build a 4-node cluster for you.
Kubeadm takes care of both the master and the worker nodes: you first set up the master and then use kubeadm on the worker nodes to join them to the master.
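For your four VMs, the outline would roughly be (the master address, token, and hash are placeholders taken from the kubeadm init output):

# On the VM chosen as master: bootstrap the control plane.
sudo kubeadm init

# On the master, you can print a fresh join command at any time,
# which is handy if the original kubeadm init output was lost.
kubeadm token create --print-join-command

# On each of the three worker VMs: run the printed join command, e.g.
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>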
When you use kubeadm, it doesn't really care what you use for virtualization. You can choose whatever you want.
Why do you want to run virtual machines on top of your EC2 machine? Why not just create more (perhaps smaller) EC2 machines for the cluster? You can use this as inspiration: https://github.com/scholzj/terraform-aws-kubernetes. There are also more advanced tools for setting up the whole cluster, such as Kops.
I want to use the following deployment architecture:
One machine running my web server (nginx)
Two or more machines running uwsgi
PostgreSQL as my DB on another server.
All three are different host machines on AWS. During development I used Docker and was able to run all three on my local machine, but I am clueless now that I want to split them onto three separate hosts and run them there. Any guidance, clues, or references will be greatly appreciated. I would prefer to do this using Docker.
If you're really adamant about keeping the services separate on individual hosts, then there's nothing stopping you from still using your containers on Docker-installed EC2 hosts for nginx/uwsgi. You could even use a CoreOS AMI, which comes with a nice secure Docker instance pre-loaded (https://coreos.com/os/docs/latest/booting-on-ec2.html).
For the database, use PostgreSQL on AWS RDS.
If you're running containers, you can also look at AWS ECS, Amazon's container service, which would be my initial recommendation; but I saw that you wanted all these services to be on individual hosts.
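A minimal sketch of what that could look like on the application hosts; image names, ports, and the RDS endpoint below are placeholders for your own setup:

# On the nginx EC2 host: run the web server container.
docker run -d --name nginx -p 80:80 nginx

# On each uwsgi EC2 host: run your application container and point it at the
# RDS PostgreSQL endpoint through an environment variable.
docker run -d --name app \
  -e DATABASE_URL="postgres://user:password@<your-rds-endpoint>:5432/mydb" \
  -p 8000:8000 your-uwsgi-image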
You can use docker stack to deploy the application in a swarm.
Join the other two hosts as workers and use the placement option below:
https://docs.docker.com/compose/compose-file/#placement
deploy:
  placement:
    constraints:
      - node.role == manager
Change the constraint per service to target the manager node or a specific worker (for example node.role == manager, or node.hostname == worker1); this restricts each service to run on an individual host.
You can also make this more secure by using a VPN if you wish.
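Roughly, the workflow looks like this (node addresses, the token, and the stack file name are placeholders):

# On the host chosen as manager: initialize the swarm.
docker swarm init --advertise-addr <manager-ip>

# On each of the other two hosts: join as a worker with the token printed by the init command.
docker swarm join --token <worker-token> <manager-ip>:2377

# Back on the manager: deploy the stack; the placement constraints in the compose file
# decide which node each service runs on.
docker stack deploy -c docker-compose.yml myapp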