I'm trying to configure two separate WildFly clusters with Infinispan as the cache server on the same network. I need the two clusters (say cluster A and cluster B) not to communicate with each other: cluster A is the preproduction cluster of our application, and cluster B is our development cluster. Both cluster A and B have their own Infinispan server, and the two clusters use different ports and IP addresses.
I couldn't find in the manual or on the net a way to disable/avoid communication between two clusters on the same network. I made many attempts with only one success: Infinispan now seems to be isolated within its own cluster, but HornetQ still shares its data between cluster A and B.
Does anyone know how to isolate cluster A from cluster B?
Thank you very much.
I would advise you to use separate multicast groups for the different clusters. You can do this by changing the multicast-address attribute of the socket-binding elements named jgroups-mping, jgroups-udp, messaging-group and modcluster in the standalone.xml/domain.xml file.
<socket-binding-group name="full-ha-sockets" default-interface="public">
...
<socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/> <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
<socket-binding name="messaging-group" port="0" multicast-address="${jboss.messaging.group.address:231.7.7.7}" multicast-port="${jboss.messaging.group.port:9876}"/>
<socket-binding name="modcluster" port="0" multicast-address=”224.0.1.105” multicast-port="23364"/>
...
</socket-binding-group>
The first two, jgroups-mping and jgroups-udp, are used by the Infinispan subsystem; you can change them by passing the jboss.default.multicast.address system property:
-Djboss.default.multicast.address=different_group
or in the server group definition, if you use domain mode:
<property name="jboss.default.multicast.address" value="different_group"/>
The messaging-group binding is used by the messaging subsystem (HornetQ) and can be configured with:
-Djboss.messaging.group.address=different_group
You should also change the group for modcluster. You can do that either by editing the XML file directly or by introducing a new property and passing it analogously to the previous examples, as in the sketch below.
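For example, a minimal sketch of parameterizing the modcluster binding (the property name jboss.modcluster.multicast.address is just an example I made up, not a predefined one):
<socket-binding name="modcluster" port="0" multicast-address="${jboss.modcluster.multicast.address:224.0.1.105}" multicast-port="23364"/>
Then start cluster B with -Djboss.modcluster.multicast.address=different_group.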
You can find more about socket binding groups here (it's for JBoss EAP 6.3, but it should be the same or similar): https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.3/html/Administration_and_Configuration_Guide/sect-Socket_Binding_Groups.html
Here you can find more information about how to choose the right multicast group:
http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
I would recommend using a range from 239.0.0.0-239.255.255.255 (Organization-Local Scope). For example, the second-to-last octet could represent the environment (1 for preproduction, 2 for development) and the last octet could represent a particular multicast group.
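As an illustration of that scheme (the concrete addresses below are just examples, pick whatever fits your network plan):
Cluster A (preproduction): -Djboss.default.multicast.address=239.255.1.1 -Djboss.messaging.group.address=239.255.1.2
Cluster B (development): -Djboss.default.multicast.address=239.255.2.1 -Djboss.messaging.group.address=239.255.2.2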
I have a Linux machine with 3 network interfaces,
let's say the IPs are 192.168.1.101, 192.168.1.102 and 192.168.1.103.
I want to use all 3 IPs of this single node to create a Kafka cluster with other nodes. Should all 3 IPs have their own separate brokers?
Also, using NIC bonding is not recommended; all IPs need to be utilized.
Overall, I'm not sure why you'd want to do this... If you are using separate volumes (log.dirs) for each address, then maybe you'd want separate Java processes, sure, but you'd still be sharing the same memory and having that machine be a single point of failure.
In any case, you can set one process to have advertised.listeners list each of those addresses for clients to communicate with; however, you'd still have to deal with port allocations in the OS, so you might need to set listeners like so:
listeners=PLAINTEXT_1://0.0.0.0:9092,PLAINTEXT_2://0.0.0.0:9093,PLAINTEXT_3://0.0.0.0:9094
And make sure you have listener.security.protocol.map set up as well using those names.
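A minimal sketch of what that could look like in server.properties, assuming the three NICs from the question and plain (unauthenticated) listeners; the names and ports are examples:
listeners=PLAINTEXT_1://0.0.0.0:9092,PLAINTEXT_2://0.0.0.0:9093,PLAINTEXT_3://0.0.0.0:9094
advertised.listeners=PLAINTEXT_1://192.168.1.101:9092,PLAINTEXT_2://192.168.1.102:9093,PLAINTEXT_3://192.168.1.103:9094
listener.security.protocol.map=PLAINTEXT_1:PLAINTEXT,PLAINTEXT_2:PLAINTEXT,PLAINTEXT_3:PLAINTEXT
inter.broker.listener.name=PLAINTEXT_1
Since the custom map no longer contains the default PLAINTEXT listener name, inter.broker.listener.name must be set to one of the new names; that is also the knob for dedicating one NIC to replication traffic.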
Note that clients will only communicate with the leader of a topic-partition at any time, so if you have one broker JVM process and 3 addresses for it, then really only one address is going to be utilized. One optimization could be to have your intra-cluster replication use a separate NIC.
Based on the documentation:
Domain Clustered Mode:
Domain mode is a way to centrally manage and publish the configuration for your servers.
Running a cluster in standard mode can quickly become aggravating as the cluster grows in size. Every time you need to make a configuration change, you have to perform it on each node in the cluster. Domain mode solves this problem by providing a central place to store and publish configurations. It can be quite complex to set up, but it is worth it in the end. This capability is built into the WildFly Application Server which Keycloak derives from.
I tried the example setup from the user manual and it really simplifies the maintenance of multiple configurations.
However, as far as High Availability is concerned, this does not seem quite resilient: when the master node goes down, the Auth Server will stop functioning since all the slave nodes listen to the domain controller.
Is my understanding correct here? Or am I missing something?
If this is the case, then to ensure High Availability, Standalone-HA is the way to go, right?
WildFly node management and clustering are orthogonal features.
Clustering in Keycloak is in fact just cache replication (all kinds of sessions, login failures, etc.). So if you want fault tolerance for your sessions, you just have to properly configure cache replication (and usually node discovery); to do that you can simply make the owners parameter greater than 1:
<distributed-cache name="sessions" owners="2"/>
<distributed-cache name="authenticationSessions" owners="2"/>
<distributed-cache name="offlineSessions" owners="2"/>
<distributed-cache name="clientSessions" owners="2"/>
<distributed-cache name="offlineClientSessions" owners="2"/>
<distributed-cache name="loginFailures" owners="1"/>
<distributed-cache name="actionTokens" owners="2">
Now all new sessions initiated on the first node will be replicated to another node, so if the first node goes down the end user can be served by another node. For example, you can have 3 nodes in total and require at least 2 replicas of each session distributed among those 3 nodes.
Now if we look at domain vs HA mode, it really comes down to how those JBoss/WildFly server configs are delivered to the target node. In HA mode all configs are supplied with the server runtime; in domain mode the configs are fetched from the domain controller.
I suggest you achieve replication with HA mode first, and then move to domain mode if required. Also, taking into account the modern approach of containerizing everything, HA mode is more appropriate for containerization: parametrized clustering settings can be injected during the container build, with the ability to alter them at runtime via the environment (e.g. the owners parameter could be driven by a container environment variable).
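As a minimal sketch of that idea in standalone-ha.xml, assuming an environment variable named CACHE_OWNERS (my own name; it falls back to 2 when unset):
<distributed-cache name="sessions" owners="${env.CACHE_OWNERS:2}"/>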
There are some articles on the Keycloak blog about clustering, like:
this
I also suggest checking out the Keycloak Docker container image repository:
here
The case: separate client communication from broker replication communication and introduce security.
The question is: is it possible to separate the communication with a procedure like a rolling restart, without downtime for the whole cluster?
Configuration as-is (simple, with one port for everything and without security):
listeners=PLAINTEXT://server1:9092
Wanted configuration (different ports, some with security, replication on port 9094):
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SASLPLAIN:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT
listeners=PLAINTEXT://server1:9092,SASLPLAIN://server1:9093,REPLICATION://server1:9094
inter.broker.listener.name=REPLICATION
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
Progress:
The configuration below works well. But the only way I currently know of, without putting the cluster into an inconsistent state, is to stop the cluster, introduce the new configuration shown above, and start the cluster again. That's obviously not what the customer wants.
Grateful for any thoughts on how to proceed without stopping and starting the whole cluster.
I managed to get from the original single-listener configuration to the desired one with the steps below.
If someone has an idea to ease up the process, please add it.
Original config:
listeners=PLAINTEXT://server1:9092
1. Change server.properties and do a rolling restart:
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SASLPLAIN:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT
listeners=PLAINTEXT://SERVER1:9092,SASLPLAIN://SERVER1:9093,REPLICATION://SERVER1:9094
sasl.enabled.mechanisms=PLAIN
Also include the JAAS config as a JVM parameter:
-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf
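For reference, a minimal kafka_server_jaas.conf for the PLAIN mechanism could look like this (the admin username and password are placeholders):
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};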
2. Modify server.properties and do a second rolling restart:
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SASLPLAIN:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT
listeners=PLAINTEXT://SERVER1:9092,SASLPLAIN://SERVER1:9093,REPLICATION://SERVER1:9094
inter.broker.listener.name=REPLICATION
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
3. Modify server.properties one last time and do a third rolling restart:
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SASLPLAIN:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT
listeners=PLAINTEXT://SERVER1:9092,SASLPLAIN://SERVER1:9093,REPLICATION://SERVER1:9094
inter.broker.listener.name=REPLICATION
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
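Once the authorizer is in place you can start adding ACLs and eventually drop allow.everyone.if.no.acl.found. A sketch with a placeholder principal and topic name:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:admin --operation All --topic my-topic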
I have a number of demo environments that I would like to set up for different groups of customers. These would contain the same deployed apps (WARs) but require different configurations. Currently I'm using:
3 datasources (accessed by JNDI) per application (so each environment would need different databases)
some simple Naming/JNDI bindings, which would need to be different per environment
one ActiveMQ queue per environment, also identified via JNDI.
Would it be possible, on WildFly 11, to configure the Naming, Datasources and ActiveMQ subsystems in a non-global manner? Maybe by configuring the subsystems at the server, host or deployment level? I don't mind having multiple server or host definitions with different network ports (8080, 8081, etc.).
I know that I can set up multiple standalone instances running on the same machine, each with a different configuration file, but I would really like to use the same WildFly instance to manage this scenario. Is this at all possible?
Thank you,
You should be using domain mode, where you can manage several servers and assign different configuration profiles to them; see the sketch below and https://docs.jboss.org/author/display/WFLY/Domain+Setup
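A rough sketch of how that could look (the group, profile and server names are my own examples; full-env-1 and full-env-2 would be copies of the full profile carrying each environment's datasources, JNDI bindings and ActiveMQ queue):
<!-- domain.xml: one server group per demo environment, each with its own profile -->
<server-groups>
    <server-group name="demo-env-1" profile="full-env-1">
        <socket-binding-group ref="full-sockets"/>
    </server-group>
    <server-group name="demo-env-2" profile="full-env-2">
        <socket-binding-group ref="full-sockets"/>
    </server-group>
</server-groups>
<!-- host.xml: one server per group, second one offset so the ports don't clash -->
<servers>
    <server name="server-env-1" group="demo-env-1" auto-start="true"/>
    <server name="server-env-2" group="demo-env-2" auto-start="true">
        <socket-bindings port-offset="100"/>
    </server>
</servers>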
We have a JBoss 5 AS cluster consisting of 2 nodes using multicast. Everything works fine and the servers are able to discover each other and form a cluster,
but the problem is that these servers generate heavy multicast traffic which affects the network performance of other servers sharing the same network.
I am new to JBoss clustering. Is there any way to use unicast (point-to-point) instead of multicast? Or to configure the multicast so that it is not a problem for the rest of the network? Can you refer me to some documentation, blog post or similar that can help me get rid of this problem?
I didn't get any answers here, but this might be of help to someone in the future. We managed to resolve it as follows:
Set the following TTL property for JBoss in the startup script:
-Djgroups.udp.ip_ttl=1
This will restrict the number of hops to 1 for the multicast messages. It will not reduce the amount of network traffic between the clustered JBoss nodes, but it will prevent that traffic from spreading outside.
If you have other servers in the same subnet that are affected by the flooding problem, then you might have to switch to the TCP stack and use unicast instead of multicast:
-Djboss.default.jgroups.stack=tcp
There are also more configuration files for clustering in the JBoss deploy directory that you should look at:
server/production/deploy/cluster/jboss-cache-manager.sar/META-INF/jboss-cache-manager-jboss-beans.xml
and other conf files in the JGroups config.
If multicast is not an option, or for some reason it doesn't work due to network topology, you can use unicast.
To use unicast clustering instead of UDP multicast, open up your profile, look into the file jgroups-channelfactory-stacks.xml and locate the stack named "tcp". That stack still uses UDP, but only for multicast discovery. If that low volume of UDP traffic is alright, you don't need to change it. If it isn't, or multicast doesn't work at all, you will need to configure the TCPPING protocol and set initial_hosts to tell it where to look for cluster members; see the sketch below.
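A minimal sketch of a TCPPING configuration inside the tcp stack (the system property name and the node1/node2 hostnames and ports are placeholders for your own cluster members):
<TCPPING timeout="3000"
         initial_hosts="${jgroups.tcpping.initial_hosts:node1[7600],node2[7600]}"
         port_range="1"
         num_initial_members="2"/>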
Afterwards, you will need to tell JBoss Cache to use this stack: open up jboss-cache-manager-jboss-beans.xml, where each cache has a stack defined. You can either change it there from udp to tcp or simply use a property when starting the AS; just add:
-Djboss.default.jgroups.stack=tcp