Fabric ensemble with active/passive containers - JBoss Fuse

I have the following problem and hope someone may help me out.
I had to migrate from Red Hat JBoss SOA-P to Red Hat JBoss Fuse.
I was able to create a fabric ensemble with three nodes and also use the gateway profile for load-balancing of REST services.
Now I have to deploy a Java application that must run only on the master node of the ensemble. If the master goes down, the application must automatically start on one of the other nodes (the node that becomes the new master).
How can I achieve this?
Thank you
Barbara

If you use Camel, there is a Fuse master component [1] whose routes only run on the container that currently holds the master lock, with automatic failover to another node if the master dies or is stopped. A minimal route sketch follows the link.
[1] - https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.2/html/Apache_Camel_Component_Reference/Master.html
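
A minimal sketch (assuming the application is deployed as a fabric profile on the ensemble containers and the camel master component is available; "my-app-cluster" is an arbitrary lock/group name, not something from your setup):

    import org.apache.camel.builder.RouteBuilder;

    // The master: endpoint only starts its consumer on the container that currently
    // holds the "my-app-cluster" lock in the fabric registry. If that container dies
    // or is stopped, another container in the group acquires the lock and the same
    // route starts there automatically.
    public class MasterOnlyRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            from("master:my-app-cluster:timer:heartbeat?period=5000")
                .log("Running on the current master container");
        }
    }

Deploy the same profile to all candidate containers; only the current lock holder actually runs the route.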

Related

Is it possible to deploy Kubernetes on a single web server?

Good afternoon!
While studying Kubernetes, I have reached the point of actually deploying it on a server. There are different deployment scenarios; I chose kubespray. Can you tell me whether it is possible to deploy Kubernetes on a single host, or is it necessary to create virtual machines, set up a network between them, and only then deploy the cluster?
Node: A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods.
You can deploy single-node Kubernetes:
For local (development, test etc) purposes:
minikube
kind
...
For production:
k3s
k0s
...
And, of course, you can create separate virtual machines on one physical host and use them as worker nodes, but the solutions above are simpler. A quick sketch of the single-node options follows.
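
A rough illustration of the two flavours (assumes Docker is available for kind and a supported Linux host for k3s; flags and versions may differ in your environment):

    # Local development: single-node cluster running inside a Docker container
    kind create cluster --name dev

    # Lightweight single-node "production-ish" setup: k3s installs server and agent on one host
    curl -sfL https://get.k3s.io | sh -
    sudo k3s kubectl get nodes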

Should the server node be on a different server than the agent nodes, and how to achieve that?

I need advice on k3s architecture. I would like to create a small cluster with one master and 3 agent nodes, but in my opinion the master node should be on a separate server so that it has resources only for itself. However, I can no longer see --disable-agent in the k3s documentation, and I read that it was buggy so they removed it. So I am wondering how I can have a server-only setup on one node, and whether it is good practice at all.
Having the master node separated is a typical Kubernetes architecture, since the master runs all the vital components (API server, controller manager, etcd and scheduler) necessary to manage your cluster. So it is a good idea to have it running on another node (in K8s the master is kept apart by default, although it is possible to schedule pods on the master node if you untaint it).
Here's a good article about running a multi-node k3s cluster that relates to your desired state.
An alternative would be the solution suggested in the GitHub issue related to --disable-agent: taint the master with a NoExecute taint, for example as sketched below.
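
A hedged example of that approach (the node name is a placeholder; k3s also accepts the taint at install time via --node-taint, which is what the project suggests in place of --disable-agent):

    # Taint an existing k3s server node so regular workloads are not scheduled there
    kubectl taint nodes my-k3s-server CriticalAddonsOnly=true:NoExecute

    # Or apply the taint when installing the server in the first place
    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --node-taint CriticalAddonsOnly=true:NoExecute" sh -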

How to add remote vm instance as worker node in kubernetes cluster

I'm new to Kubernetes and trying to explore it, so my question is:
Suppose I have an existing Kubernetes cluster with 1 master node and 1 worker node, and this setup is on AWS. Now I have 1 more VM instance available on Oracle Cloud Platform and I want to configure that VM as a worker node and attach it to the existing cluster.
So, is it possible to do this? Does anybody have any suggestions regarding it?
I would instead divide your clusters up based on region (unless you have a good VPN between your Oracle and AWS infrastructure).
You can then run applications across clusters. If you absolutely must have one cluster that is geographically separated, I would create a master (etcd host) in each region that has a worker node.
Communication between worker nodes and master nodes is critical for a Kubernetes cluster. Adding nodes from on-prem to a cloud provider, or from a different cloud provider, will cause lots of issues from a network perspective:
A VPN connection between AWS and Oracle Cloud is needed, and every time the worker node talks to the master node the traffic (probably) has to cross an ocean.
EDIT: From the Kubernetes docs, clusters cannot span clouds or regions (this functionality would require full federation support).
https://kubernetes.io/docs/setup/best-practices/multiple-zones/
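
If you decide to try it anyway (assuming a kubeadm-based cluster and that the Oracle VM can reach the AWS API server, e.g. over a VPN), joining the extra worker is roughly:

    # On the existing master (AWS): generate a fresh join command
    kubeadm token create --print-join-command

    # On the new VM (Oracle Cloud): run the printed command, which looks like
    sudo kubeadm join <master-reachable-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

The placeholders are whatever the first command prints; the networking caveats above still apply.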

Can we configure the same node as master and slave in Kubernetes?

I have two Linux machines on which I am learning Kubernetes. Since resources are limited, I want to configure the same node as both master and slave (worker), so the configuration looks like:
192.168.48.48 (master and slave)
191.168.48.49 (slave)
How do I perform this setup? Any help will be appreciated.
Yes. For a single-node cluster you can use Minikube (see the Minikube install guide). To have one node act as master and the other as a worker, use kubeadm; here is the doc, but make sure you satisfy the prerequisites for the nodes and do the small housekeeping described in the official document. Since you have two Linux machines with two different IPs, you can then install and create a two-machine cluster for testing purposes; a rough sketch follows.
Hope this helps.
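
A hedged sketch of that layout with kubeadm (the IPs come from the question; the pod-network CIDR is an assumption and depends on the CNI plugin you pick):

    # On 192.168.48.48: initialise the control plane
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Let the master also schedule regular pods, so it acts as master and worker
    # (on newer Kubernetes versions the taint key is node-role.kubernetes.io/control-plane)
    kubectl taint nodes --all node-role.kubernetes.io/master-

    # On the second machine: join as a worker using the command printed by kubeadm init
    sudo kubeadm join 192.168.48.48:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>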

Forward Traffic to POD in Kubernetes Cluster

I installed and configured a 3-node K8s cluster. The worker nodes are Windows nodes. We have one .NET application that we want to containerize. This application internally uses Apache Ignite for its distributed cache.
We built a Docker image for this application, wrote a deployment file and deployed it to the K8s cluster. The deployment also creates a service of type "LoadBalancer", and through this service we connect to the application from the outside world. All is good so far.
Coming to the issue: since we are using Apache Ignite for the distributed cache, one of the pods will be the master. We want to always forward traffic to the pod that is acting as the master node of the Apache Ignite cluster, and the identification of the Ignite master must be dynamic.
I have gone through the link below, but there the pod configuration is static. We want to dynamically identify the master pod and forward traffic to it. What do we have to do on the service side?
https://appscode.com/products/voyager/7.4.0/guides/ingress/http/statefulset-pod/
Any help on how to forward the traffic to that pod is greatly appreciated.
Given that you have a leader/follower topology, the ask to direct traffic to a specific node (the master) is flawed for a couple of reasons:
What happens when the current leader fails and a new election selects a new leader?
Pods are ephemeral and should not be given fixed roles in production; instead, work with deployments and their replicas. What you are trying to achieve is an anti-pattern.
In any case, if this is what you want, you may want to read about gateways in Istio, which can be found here.
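
If you still want to try it without Istio, one hedged sketch is a plain Service that selects a "leader" label which your application (or a small sidecar/controller) must put on the current Ignite leader pod after every election; all names below are assumptions, not part of your setup:

    # Service that only matches the pod currently labelled as the Ignite leader
    apiVersion: v1
    kind: Service
    metadata:
      name: ignite-leader            # hypothetical service name
    spec:
      type: LoadBalancer
      selector:
        app: my-dotnet-app           # hypothetical app label on your pods
        role: ignite-leader          # label your leader-election hook must maintain
      ports:
        - port: 80
          targetPort: 8080           # assumed container port

On each election the new leader would have to label itself (for example with kubectl label pod <pod-name> role=ignite-leader --overwrite) and the old label would have to be removed, which is exactly the kind of moving part the answer above warns about.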