Hi, I am setting up Kubernetes on top of Mesos by following http://kubernetes.io/v1.1/docs/getting-started-guides/mesos.html, and this is what my current test lab looks like:
2 Mesos masters with ZooKeeper
2 Mesos slaves with Docker and flannel installed
An additional Mesos slave running Kubernetes-Mesos and the Kubernetes services
A server running the etcd service, which backs both flannel and Kubernetes
Can you please let me know if this is enough?
Below are the two questions I have:
Do we really need the Kubernetes master server here to be configured as a Mesos slave?
Do we need to install the Kubernetes package on the Mesos slaves as well? The URL talks about package installation and configuration only on the Kubernetes master. Without Kubernetes running on the slaves, can the master create pods/services etc. on the slaves through the Mesos scheduler?
Regarding the Mesos masters and ZooKeeper instances, having an even number of nodes is not a good idea because of the quorum mechanisms involved. My suggestion would be to run three nodes of each service.
I assume you want to run this locally? If so, it would make sense to use a preconfigured Vagrant project such as https://github.com/tobilg/coreos-mesos-cluster. This launches a three-node CoreOS cluster with all the Mesos/ZooKeeper services already installed, and etcd and flanneld already come with CoreOS itself.
This means you would only have to do the following steps once the cluster is launched (a launch sketch follows the links):
http://kubernetes.io/v1.1/docs/getting-started-guides/mesos.html#deploy-kubernetes-mesos (or, for CoreOS, https://coreos.com/kubernetes/docs/latest/getting-started.html)
http://kubernetes.io/v1.1/docs/getting-started-guides/mesos.html#start-kubernetes-mesos-services
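Launching that Vagrant project is just the usual clone-and-up workflow. The steps below are an assumption based on standard Vagrant usage rather than the project's README, so check the repository for any extra prerequisites:
# Clone the preconfigured CoreOS/Mesos cluster project and bring up the three VMs
git clone https://github.com/tobilg/coreos-mesos-cluster.git
cd coreos-mesos-cluster
vagrant up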
1) The Kubernetes master doesn't need to be a Mesos slave.
2) You don't need Kubernetes to be installed on the minions (Mesos slaves).
All you need is the following:
1) A Mesos setup (Mesos masters and slaves along with ZooKeeper; Docker running on all Mesos slaves)
2) An etcd cluster, which backs the overlay network (flannel) and also handles service discovery for the Kubernetes setup (see the sketch after this list)
3) The Kubernetes master
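The only flannel-specific piece stored in etcd is the network configuration key. A minimal sketch, assuming flannel's default key path and an arbitrary 10.1.0.0/16 overlay range (adjust both to your environment):
# Write the overlay network range that flanneld on each slave reads from etcd (etcd v2 API)
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'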
The blogs below helped a lot in setting it up:
http://manfrix.blogspot.in/2015/11/mesoskubernetes-how-to-install-and-run.html
https://github.com/ruo91/docker-kubernetes-mesos
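For reference, both blogs and the official guide boil down to running the Kubernetes-Mesos services only on the master. The sketch below is abridged and illustrative rather than a verbatim copy of the guide; take the IPs, ports, and the exact flag set from the linked documentation:
# On the Kubernetes master only: API server backed by etcd, plus the scheduler that talks to Mesos
km apiserver \
  --address=${KUBERNETES_MASTER_IP} \
  --etcd-servers=http://${ETCD_SERVER_IP}:4001 \
  --service-cluster-ip-range=10.10.10.0/24 \
  --port=8888 \
  --cloud-provider=mesos \
  --cloud-config=mesos-cloud.conf

km scheduler \
  --mesos-master=${MESOS_MASTER} \
  --etcd-servers=http://${ETCD_SERVER_IP}:4001 \
  --api-servers=${KUBERNETES_MASTER_IP}:8888 \
  --mesos-user=root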
I am trying to install a Kubernetes cluster with one master node and two worker nodes.
I acquired 3 VMs for this purpose running Ubuntu 21.10. On the master node, I installed kubeadm:1.21.4, kubectl:1.21.4, kubelet:1.21.4 and docker-ce:20.4.
I followed this guide to install the cluster. The only difference was in my init command, where I did not specify --control-plane-endpoint. I used Calico CNI v3.19.1 and Docker as the CRI runtime.
After I installed the cluster, I deployed a MinIO pod and exposed it as a NodePort service.
The pod got deployed on the worker node (10.72.12.52), and my master node IP is 10.72.12.51.
For the first two hours, I was able to access the login page via all three IPs (10.72.12.51:30981, 10.72.12.52:30981, 10.72.13.53:30981). However, after two hours, I lost access to the service via 10.72.12.51:30981 and 10.72.13.53:30981. Now I am only able to access the service from the node on which it is running (10.72.12.52).
I have disabled the firewall and added a calico.conf file inside /etc/NetworkManager/conf.d with the following content:
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico
What am I missing in the setup that might cause this issue?
This is a community wiki answer posted for better visibility. Feel free to expand it.
As mentioned by @AbhinavSharma, the problem was solved by switching from the Calico CNI to Flannel.
More information regarding Flannel itself can be found here.
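As a rough sketch of what that switch involves on a kubeadm cluster: the pod CIDR below is Flannel's default, and the manifest URL is an assumption based on the upstream Flannel project, so take the exact URL for your version from the Flannel README rather than from here.
# Initialize (or re-initialize) the cluster with the pod CIDR Flannel expects by default
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Apply the Flannel CNI manifest (check the Flannel README for the URL matching your version)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml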
I have an application running in an Azure AKS Kubernetes cluster, which is made up of a website running in one deployment, background worker processes running as scheduled tasks in Kubernetes, RabbitMQ running as another deployment, and a SQL Azure DB which is not part of the Kubernetes cluster.
I would like to achieve load balancing and failover by deploying another Kubernetes cluster in another region and placing a Traffic Manager DNS load balancer in front of the website.
The problem I see is that if the two RabbitMQ instances are in separate Kubernetes clusters, then items queued in one will not be available in the other.
Is there a way to cluster the RabbitMQ instances running in each Kubernetes cluster, or some alternative to clustering?
Or is there a common design pattern that might avoid problems from having separate queues?
I should also note that currently there is only one node running RabbitMQ in the current Kubernetes cluster, but as part of this upgrade it seems like a good idea to run multiple nodes in each cluster, which I think the current Helm charts support.
You shouldn't cluster RabbitMQ nodes across regions. Your cluster will get split-brain because of network delays. To synchronise RabbitMQ queues and exchanges between clusters, you can use the federation or shovel plugin, depending on your use case.
The federation plugin can be enabled on a cluster by running these commands:
rabbitmqctl stop_app
rabbitmq-plugins enable rabbitmq_federation
rabbitmq-plugins enable rabbitmq_federation_management
rabbitmqctl start_app
More details on Federation.
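Once the plugin is enabled, you still need to define an upstream and a policy. A minimal sketch, where the upstream URI and the exchange name pattern are placeholders rather than part of the original answer:
# Define an upstream pointing at the broker in the other region's cluster
rabbitmqctl set_parameter federation-upstream other-region '{"uri":"amqp://user:password@rabbitmq.other-cluster.example.com"}'

# Federate all exchanges whose names start with "app." from every configured upstream
rabbitmqctl set_policy --apply-to exchanges federate-app "^app\." '{"federation-upstream-set":"all"}'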
For shovel:
rabbitmq-plugins enable rabbitmq_shovel
rabbitmq-plugins enable rabbitmq_shovel_management
rabbitmqctl stop_app
rabbitmqctl start_app
More details on Shovel.
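Similarly, a dynamic shovel is configured with a runtime parameter. A hedged example; the queue names and the destination URI below are illustrative only:
# Continuously move messages from a local queue to the same-named queue in the other cluster
rabbitmqctl set_parameter shovel my-shovel '{"src-protocol": "amqp091", "src-uri": "amqp://", "src-queue": "orders", "dest-protocol": "amqp091", "dest-uri": "amqp://rabbitmq.other-cluster.example.com", "dest-queue": "orders"}'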
A full example of how to set up federation on a RabbitMQ cluster can be found here.
I have two Linux machines on which I am learning Kubernetes. Since resources are limited, I want to configure the same node as both master and slave, so the configuration looks like:
192.168.48.48 (master and slave)
191.168.48.49 (slave)
How do I perform this setup? Any help will be appreciated.
Yes. For a single-node cluster you can use Minikube (see the Minikube install guide). To have one node act as master and another as a worker, use kubeadm to install Kubernetes. Here is the doc, but make sure you satisfy the prerequisites for the nodes and do the small amount of housekeeping shown in the official document. Then you can create a two-machine cluster for testing purposes, since you have two Linux machines with two different IPs.
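To make the master node also run regular workloads (i.e. act as master and worker at the same time), the usual kubeadm step is to remove the control-plane taint from it. A minimal sketch; depending on your Kubernetes version the taint may be named node-role.kubernetes.io/control-plane instead of ...master:
# Allow pods to be scheduled on the master node as well
kubectl taint nodes --all node-role.kubernetes.io/master-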
Hope this helps.
I'm trying to use Kubernetes to deploy Docker containers and I found this tutorial.
So according to this tutorial, what are the prerequisites?
They said that "services that are typically on a separate Kubernetes master system and two or more Kubernetes node systems are all running on a single system."
But I don't understand how we run both the master and the nodes on a single system (for example, I have one EC2 instance with IP address 52.192.x.x).
That is a guide about running Kubernetes specifically on Red Hat Atomic nodes. There are lots of guides about running Kubernetes on other types of nodes; see the Creating a Kubernetes Cluster page on docs.k8s.io.
One of the guides on the Kubernetes site shows how to run a local Docker-based cluster, which should also work for you on a single node in the cloud.
I'm reading the Mesos Architecture docs which, ironically, don't actually specify which components are supposed to run on which VMs/physicals.
It looks like, to run Mesos in HA, you need several categories of components:
Mesos Masters
ZooKeeper instances (quorum)
Hadoop clusters (job nodes? name nodes?)
But there's never any mention of how many you need of each type.
So I ask: How many VMs/physicals do you need to run Mesos with HA, and what components should be deployed to each?
Did you have a look at the HA docs? To run Mesos in HA, you'll need the Mesos masters and ZooKeeper. Any Hadoop-related configuration is out of scope for Mesos HA itself.
To have an HA setup, you'll need an odd number of nodes for the masters and ZooKeeper (because of the quorum mechanism). In our case, we're running 3 master and 3 ZooKeeper nodes on 3 machines (one master and one ZooKeeper instance per machine), and a number of Mesos slaves/agents on different machines.
Theoretically, the slaves/agents can run on the same machines as the masters/ZooKeepers as well. I guess this is a matter of preference, availability of machines, and your SLA needs.
If you want to run a large-scale production setup, it will probably make a lot of sense to separate even the master and ZooKeeper instances.
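As a minimal sketch of such a 3-master layout (the hostnames, quorum value, and work directory are assumptions to adjust for your environment; older Mesos releases ship the agent binary as mesos-slave):
# On each of the 3 master machines: register with the ZooKeeper ensemble; quorum=2 for 3 masters
mesos-master --zk=zk://zk1:2181,zk2:2181,zk3:2181/mesos --quorum=2 --work_dir=/var/lib/mesos

# On each worker machine: discover the leading master via ZooKeeper
mesos-agent --master=zk://zk1:2181,zk2:2181,zk3:2181/mesos --work_dir=/var/lib/mesos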
Further references:
http://mesos.apache.org/documentation/latest/operational-guide/
http://mesos.apache.org/documentation/latest/configuration/ (see "Master Options")
Can Mesos 'master' and 'slave' nodes be deployed on the same machines?