How to configure JBoss AS4 and Wildfly 10 to communicate with each other via JMX? (JGroups)

I have 2 clusters of JBoss servers - one cluster of JBoss 4.2.1 and one cluster of Wildfly 10.
I'd like to configure the two clusters to be able to exchange JMX notifications between each other.
While Wildfly 10 uses JGroups with "RELAY2" support, JBoss 4.2.1 uses JGroups 2.4.1, which does not support "RELAY2".
Is there any way to configure the two different clusters to be able to exchange JMX notifications with each other? How can I configure my Wildfly 10 nodes to join my JB4 cluster?

As per Red Hat support, there is unfortunately no way to have the two clusters talk to each other, as they use different clustering techniques even though both rely on JGroups as the communication mechanism.

Related

Is there any Java agent to monitor the metrics of all JVM processes running in my pod?

I am looking for a Java agent to monitor the metrics of all JVM processes running in my pod. I want to collect metrics for each process, such as CPU and memory, and forward them to stdout or Splunk. How can I achieve that?
There are a bunch of ways to do this, but I'll highlight 3 of them.
Jolokia - Jolokia is remote JMX with JSON over HTTP. You essentially enable it through your Java arguments and it spins up a small server that lets you request the metrics as JSON (a minimal polling sketch follows this list). You can also install hawtio online for a GUI in Kubernetes. The Red Hat JBoss images ship with Jolokia installed by default; that's how they do their health checks.
Prometheus - The Prometheus JMX exporter agent can also be installed; it serves metrics on an HTTP/HTTPS port, just like Jolokia.
OpenTelemetry - OpenTelemetry is newer and I honestly haven't played around with it yet.
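To illustrate the Jolokia option, here is a minimal sketch of a poller that reads heap usage over Jolokia's JSON/HTTP API and writes it to stdout. It assumes the Jolokia agent is listening on its default port 8778, and it uses the standard java.net.http client (Java 11+); the URL, MBean, and class name are just illustrative.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JolokiaPoller {
    public static void main(String[] args) throws Exception {
        // Assumes the Jolokia agent is listening on its default port 8778 inside the pod.
        String url = "http://localhost:8778/jolokia/read/java.lang:type=Memory/HeapMemoryUsage";

        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON payload goes to stdout, where Splunk (or any log forwarder) can pick it up.
        System.out.println(response.body());
    }
}

Running this on a schedule (or extending it to loop over the MBeans you care about) gives you a simple stdout-based metrics feed per JVM.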
If you run the Red Hat JBoss images, both the Prometheus and Jolokia extensions are added by default. Hope this helps, Jay

Distributed event bus with Vert.x

I want to know if there is a standard distributed event bus in Vert.x that I can use for communication between a sender in one VM and a consumer in another VM, without using a cluster manager.
Thanks
"without using a cluster manager."
No. The cluster manager is the standard way to distribute event bus calls efficiently between verticles of any kind, and it also enables conveniences like vertx.sharedData().
Not without a cluster manager, but it's pretty easy to set up with one. Just go to the starter app generator and select the "Clustering" dependency in the sidebar. You can generate working code with any of the 4 cluster managers (a minimal Hazelcast-based sketch follows this list):
Hazelcast Cluster Manager
Infinispan Cluster Manager
Apache Ignite Cluster Manager
Apache ZooKeeper Cluster Manager
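As a rough illustration of what the clustered setup looks like in code, here is a minimal sketch of a sender, assuming Vert.x 4.x with the vertx-hazelcast dependency on the classpath; the event bus address "news.feed" and the class name are purely illustrative.

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class ClusteredSender {
    public static void main(String[] args) {
        // Requires a cluster manager jar (e.g. vertx-hazelcast) on the classpath;
        // Vert.x discovers it automatically when clustering is requested.
        Vertx.clusteredVertx(new VertxOptions())
             .onSuccess(vertx -> {
                 // A consumer in the other VM starts a clustered Vert.x the same way
                 // and registers with vertx.eventBus().consumer("news.feed", msg -> ...)
                 vertx.eventBus().publish("news.feed", "hello from VM 1");
             })
             .onFailure(Throwable::printStackTrace);
    }
}

With both JVMs started this way on the same network, the cluster manager handles discovery and the event bus delivers the message across VM boundaries.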

How is group membership determined in a Wildfly cluster of standalone servers?

I have seen that a cluster can be formed/started very easily with Wildfly. Is it possible, using the "standalone" configuration, to create multiple clusters? That is, some servers should only be part of a cluster named "cluster1" while other servers form a different cluster named "cluster2". Can a group name or something similar be provided or configured? (I am not looking for a managed domain setup.)
By specifying different multicast addresses, i.e. giving each cluster its own multicast address. Port offsets don't affect cluster membership, but they may be necessary to avoid port conflicts when starting multiple Wildfly instances on a single machine.
-Djboss.default.multicast.address=<address>   // controls the multicast address; give each cluster its own

What is the difference between a fabric container and a standalone container?

While going through the Red Hat Fuse ESB documentation, I found mention of fabric containers as something different from standalone containers. Are fabric containers virtual/logical containers?
Link: https://access.redhat.com/documentation/en-US/Fuse_ESB_Enterprise/7.1/html/Deploying_into_the_Container/files/FESBLocateFabric.html
Fabric containers are not 'virtual' or logical containers; they are real JVM processes that are started and controlled by Fabric servers.
Standalone containers are single JVMs that, by default, watch their "deploy" folder for artifacts to deploy. You can start a standalone Fuse server by simply running bin/fuse; such a server will not contact any other Fuse servers.
A Fabric is a clustered group of Fuse instances. Because the cluster needs to distribute its artifacts according to some configuration, it no longer looks at the deploy folder (its contents are ignored) and instead uses "profiles", which are stored on the Fabric servers.
If you were to create a cluster of 3 hardware servers, you would run 3 Fabric servers on them.
On the first server, you start Fuse by running bin/start.
Then run bin/client -r 10 to connect to the server.
At this point you still have a standalone instance. To turn it into a Fabric server, run fabric:create --clean --wait-for-provisioning
On the other two servers, you start Fuse the same way, but instead of running fabric:create you run fabric:join with the relevant arguments to have them connect to the first server.
You'll notice that when you look at the administration console of the first server you'll see the other 2 servers as well, and you will be able to start fabric containers on any one of those 3 servers. You can also attach profiles to those containers.

ZooKeeper initial discovery

I want to use Apache ZooKeeper (or Curator) as a replicated naming service. Let's say I run 3 ZooKeeper servers and have a dozen computers with different applications that can connect to these servers.
How should I communicate the ZooKeeper IP addresses to the clients? Via a configuration file that has to be distributed manually to each machine?
The CORBA Naming Service had an option for UDP broadcast discovery, in which case no configuration file is needed. Is there a similar possibility in ZooKeeper?
It depends where/how you are deploying. If this is at AWS, you can use Route 53 or Elastic IPs. In general, the solution is some kind of DNS, i.e. a well-known hostname for each of the ZK instances.
If you use something like Exhibitor (disclaimer: I wrote it), it's easier, in that Exhibitor can work with Apache Curator to provide up-to-date cluster information.
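To make the DNS approach concrete, here is a minimal sketch of a client connecting through Curator using well-known hostnames; the zk1/zk2/zk3 names, the /services path, and the class name are hypothetical and only for illustration.

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class NamingClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical well-known DNS names for the three ZooKeeper nodes; clients only
        // need these stable hostnames, not a manually distributed configuration file.
        String connectString = "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181";

        CuratorFramework client = CuratorFrameworkFactory.newClient(
                connectString, new ExponentialBackoffRetry(1000, 3));
        client.start();
        client.blockUntilConnected();

        // Example use as a naming service: register this application instance under a path.
        client.create().creatingParentsIfNeeded()
              .forPath("/services/my-app/instance-1", "host:port".getBytes());

        client.close();
    }
}

As long as the hostnames stay stable, you can re-point them (or the Elastic IPs behind them) without touching any client configuration.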