How to scale RabbitMQ across multiple Kubernetes clusters

I have an application running in an Azure AKS Kubernetes cluster. It is made up of a website running in one deployment, background worker processes running as scheduled tasks in Kubernetes, RabbitMQ running as another deployment, and an Azure SQL database which is not part of the Kubernetes cluster.
I would like to achieve load balancing and failover by deploying a second Kubernetes cluster in another region and placing a Traffic Manager DNS load balancer in front of the website.
The problem I see is that if the two RabbitMQ instances are in separate Kubernetes clusters, then messages queued in one will not be available in the other.
Is there a way to cluster the RabbitMQ instances running in each Kubernetes cluster, or some alternative to clustering?
Or is there a common design pattern that might avoid the problems of having separate queues?
I should also note that currently only one node runs RabbitMQ in the existing Kubernetes cluster, but as part of this upgrade it seems like a good idea to run multiple RabbitMQ nodes in each cluster, which I think the current Helm charts support.

You shouldn't cluster RabbitMQ nodes across regions: the cluster is likely to split-brain because of network delays and partitions. To synchronise RabbitMQ queues and exchanges between clusters you can use the federation or shovel plugin, depending on your use case.
The federation plugin can be enabled on a cluster by running these commands:
rabbitmqctl stop_app
rabbitmq-plugins enable rabbitmq_federation
rabbitmq-plugins enable rabbitmq_federation_management
rabbitmqctl start_app
More details on Federation.
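Once the plugin is enabled, you still need to point each cluster at the other and decide what to federate. A minimal sketch, assuming a placeholder URI for the broker in the other region and that you want to federate exchanges whose names start with federated.:
rabbitmqctl set_parameter federation-upstream other-region '{"uri":"amqp://user:password@rabbitmq.other-region.example.com"}'
rabbitmqctl set_policy --apply-to exchanges federate-exchanges "^federated\." '{"federation-upstream-set":"all"}'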
For shovel:
rabbitmqctl stop_app
rabbitmq-plugins enable rabbitmq_shovel
rabbitmq-plugins enable rabbitmq_shovel_management
rabbitmqctl start_app
More details on Shovel.
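A dynamic shovel can then be declared to move messages from a queue in one cluster to the same queue in the other. A rough sketch, with placeholder URIs and queue names:
rabbitmqctl set_parameter shovel orders-shovel '{"src-protocol":"amqp091","src-uri":"amqp://localhost","src-queue":"orders","dest-protocol":"amqp091","dest-uri":"amqp://user:password@rabbitmq.other-region.example.com","dest-queue":"orders"}'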
A full example of how to set up federation on a RabbitMQ cluster can be found here.

Related

Is it possible to deploy Kubernetes on a single web server?

Good afternoon!
In my study of Kubernetes, I have reached the point of practising deployment on a server. There are different deployment scenarios, and I chose kubespray. Can you tell me whether it is possible to deploy Kubernetes on a single host? Or is it necessary to create virtual machines, set up a network between them, and only then deploy the cluster?
Node: A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods.
You can deploy a single-node Kubernetes cluster.
For local (development, test etc) purposes:
minikube
kind
...
For production:
k3s
k0s
...
And, of course, you can create separate virtual machines on one physical machine and use them as worker nodes, but the solutions above are simpler.
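For illustration, here is roughly how a single-node cluster is brought up with each of these tools (the cluster name is a placeholder):
minikube start                       # local single-node cluster in a VM or container
kind create cluster --name dev       # single-node cluster running inside a Docker container
curl -sfL https://get.k3s.io | sh -  # installs a lightweight single-node k3s server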

Load balancer for kubeapi server while creating the Kubernetes cluster using kubeadm

I am trying to create a Kubernetes cluster with 1 master and 2 worker nodes using the kubeadm tool on my on-premises machines. I am following the official Kubernetes documentation for forming the cluster, at the following URL:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
After installing the runtime and completing the "Before you begin" prerequisite steps, I found that the first step of forming the cluster in the document is "Create load balancer for kube-apiserver".
My Doubt
When I created a single-master, 3-worker-node cluster using the kubespray tool, I did not create any separate load balancer. So now that I am following the kubeadm tool, do I actually need to create the load balancer in order to form the cluster?
Why do the two tools show different approaches, since I did not create a load balancer with kubespray but now appear to need one with kubeadm?
Whether to create a load balancer when deploying Kubernetes with kubeadm depends on your setup. It is not mandatory; your cluster will still work, but without load balancing it is hard to qualify the cluster as HA.
In a single-master setup, as in your case, the master node runs the etcd database, API server, controller manager and scheduler, alongside the worker nodes. However, if that single master node fails, the control plane becomes unavailable and the cluster can no longer be managed.
Learn more here: kubernetes-ha-kubeadm.
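If you do put a load balancer (or even just a stable DNS name you control) in front of the API server, kubeadm is told about it at init time. A minimal sketch, with a placeholder DNS name:
kubeadm init --control-plane-endpoint "k8s-api.example.com:6443" --upload-certs
Using a stable endpoint rather than a single node's IP is what later allows additional control-plane nodes to join behind the same address.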
Kubeadm covers the needs of a life-cycle management for Kubernetes clusters, including self-hosted layouts, dynamic discovery services, etc. Kubespray is more about generic configuration, initial clustering, and bootstrapping.
Kubespray is a good choice when you either are familiar with Ansible or seek a possibility to switch between multiple platforms. If your priority is tight integration with unique features offered by the supported clouds, and you plan to stick with your provider, kops may be a better option.
Deploying a load balancer is up to the user and is not covered by the Ansible roles in Kubespray. By default, it only configures a non-HA endpoint, which points to the access_ip or IP address of the first server node in the kube-master group. It can also configure clients to use endpoints for a given load balancer type. You can find more information here: kubespray-lb.
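As a rough sketch of the kubespray side (the variable names are taken from kubespray's HA documentation; the domain name, inventory path and address are placeholders), an external load balancer is declared in the inventory group vars:
cat >> inventory/mycluster/group_vars/all/all.yml <<'EOF'
apiserver_loadbalancer_domain_name: "k8s-api.example.com"
loadbalancer_apiserver:
  address: 10.0.0.10
  port: 6443
EOF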
Here you have a comparison of Kubernetes deployment tools: Kubernetes Deployment Tools.

Kubernetes Helm chart initiation with Kubernetes cluster

I am implementing continuous integration and continuous deployment using Ansible, Docker, Jenkins and Kubernetes. I have already created a Kubernetes cluster with 1 master and 2 worker nodes using Ansible and kubespray. And I have 30-40 microservice applications, so I need to create that many Services and Deployments.
My Confusion
When I am using the Kubernetes package manager Helm, do I need to initialise my chart on the master node, or on the base machine from which I deployed my Kubernetes cluster?
If I am initialising it on the master, can I use kubectl over ssh to deploy to the remote worker nodes?
If I am initialising it outside the Kubernetes cluster nodes, can I use the kubectl command to deploy into the Kubernetes cluster?
Your confusion seems to lie in the configuration and interactions of the Helm components. This explanation provides a good graphic representing the relationships.
If you are using the traditional Helm/Tiller configuration, Helm will be installed locally on your machine and, assuming you have the correct kubectl configuration, you can "initialize" your cluster by running helm init to install Tiller into your cluster. Tiller will run as a deployment in kube-system, and has the RBAC privileges to create/modify/delete/view the chart resources. Helm will automatically manage all the API objects for you, and the kube-scheduler will schedule the pods to all your nodes accordingly. You should not be directly interacting with your master and nodes via your console.
In either configuration, you would always be making the Helm deployment from your local machine, with kubectl access to your cluster.
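For illustration, assuming Helm v2 (the Helm/Tiller model described above) and a working kubeconfig on your workstation, the flow looks roughly like this; the release name and chart path are placeholders:
helm init                                             # installs Tiller into the kube-system namespace
kubectl get pods -n kube-system -l name=tiller        # confirm Tiller is running
helm install --name my-service ./charts/my-service    # deploy one of your microservice charts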
Hope this helps!
If you are looking for a way to run the Helm client inside your Kubernetes cluster, check out the concept of the Helm Operator.
I would also recommend looking into the term "GitOps": a set of practices that combines Git with Kubernetes and makes Git the source of truth for your declarative infrastructure and applications.
There are two great OSS projects out there that implement GitOps best practices:
flux (uses Helm-Operator)
Jenkins-x (uses helm as a part of release pipeline, check out this session on YT to see it in action)

Should we run a Consul container in every Pod?

We run our stack on the Google Cloud Platform (hosted Kubernetes, GKE) and have a Consul cluster running outside of K8s (regular GCE instances).
Several services running in K8s use Consul, mostly for its CP K/V store and advanced locking, and not so much for service discovery so far.
We recently ran into some issues with using the Consul service discovery from within K8s. Right now our apps talk directly to the Consul Servers to register and unregister services they provide.
This is not recommended best practice; usually Consul clients (i.e. apps using Consul) should talk to a local Consul agent. In our setup there are no local Consul agents.
My question: should we run local Consul agents as sidecar containers in each pod?
IMHO this would be a huge waste of resources, but it would better match the Consul best practices.
I tried searching on Google, but all posts about Consul and Kubernetes talk about running Consul in K8s, which is not what I want to do.
As the official Consul Helm chart and the documentation suggest, the standard approach is to run a DaemonSet of Consul clients and then use the Connect sidecar injector to inject sidecars into your pods simply by adding an annotation to the pod spec. This handles all of the boilerplate and is in line with best practices.
Consul: Connect Sidecar; https://www.consul.io/docs/platform/k8s/connect.html
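As a rough sketch, assuming the official Consul Helm chart and that your existing GCE Consul servers are reachable from the GKE nodes (the server address and release name are placeholders, and the value names should be checked against the chart version you use):
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul \
  --set global.enabled=false \
  --set client.enabled=true \
  --set connectInject.enabled=true \
  --set externalServers.enabled=true \
  --set 'externalServers.hosts[0]=10.0.0.5'
Pods then opt in to sidecar injection with the consul.hashicorp.com/connect-inject: "true" annotation, so agents run once per node (DaemonSet) rather than once per pod.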

Orchestrating containers

I'm trying to use Kubernetes to deploy Docker containers and I found this tutorial.
So, according to this tutorial, what are the prerequisites?
They said that "services that are typically on a separate Kubernetes master system and two or more Kubernetes node systems are all running on a single system."
But I don't understand how we run both the master and the nodes on a single system (for example, I have one EC2 instance with IP address 52.192.x.x).
That is a guide about running Kubernetes specifically on RedHat Atomic nodes. There are lots of guides about running Kubernetes on other types of nodes; see the Creating a Kubernetes Cluster page on docs.k8s.io.
One of the guides on the Kubernetes site shows how to run a local docker-based cluster, which should also work for you on a single node in the cloud.
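Another way to see how the master and node roles can share one machine is a single-node kubeadm install, then removing the taint that normally keeps workloads off the control-plane node. A rough sketch (the taint name varies between Kubernetes versions; on newer releases it is node-role.kubernetes.io/control-plane):
kubeadm init
kubectl taint nodes --all node-role.kubernetes.io/master-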