Per-host or per-deployment subsystem configuration in WildFly

I have a number of demo environments that I would like to set up for different groups of customers. These would contain the same deployed apps (WARs) but require different configurations. Currently I'm using:
3 datasources (accessed by JNDI) per application (so each environment would need different databases)
some simple Naming/JNDI bindings, which would need to differ by environment.
one ActiveMQ queue per environment, also identified via JNDI.
Would it be possible, on WildFly 11, to configure the Naming, Datasources and ActiveMQ subsystems in a non-global manner? Maybe by configuring the subsystems at the server, host or deployment level? I don't mind having multiple server or host definitions with different network ports (8080, 8081, etc.).
I know that I can set up multiple standalone instances running on the same machine, each with a different configuration file, but I would really like to use the same WildFly instance to manage this scenario. Is this at all possible?
Thank you,

You should use domain mode, where you can manage several servers and assign different configuration profiles to them: https://docs.jboss.org/author/display/WFLY/Domain+Setup
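As a rough illustration of how that could look in jboss-cli (the profile, group and server names here are made-up examples, and the exact commands depend on your setup), each customer environment gets its own profile and server group, with a port offset to avoid clashes:

/profile=full:clone(to-profile=demo-profile-a)
/server-group=demo-group-a:add(profile=demo-profile-a, socket-binding-group=full-sockets)
/host=master/server-config=demo-server-a:add(group=demo-group-a, socket-binding-port-offset=100)

Datasources, JNDI bindings and messaging destinations are then configured per profile, so each environment's servers see their own resources under the same JNDI names.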

Related

Programmatically create Artemis cluster on remote server

Is it possible to programmatically create/update a cluster on a remote Artemis server?
I will have lots of Docker instances and would rather configure them on the fly than have to set things in XML files, if possible.
Ideally, on app launch, I'd like to check whether a cluster has been set up and, if not, create one.
This would probably involve getting the current server configuration and updating it with the cluster details.
I see it's possible to create a Configuration.
However, I'm not sure how to get the remote server configuration, if it's at all possible.
import org.apache.activemq.artemis.core.config.ClusterConnectionConfiguration;
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;

// building a cluster connection locally is straightforward...
Configuration config = new ConfigurationImpl();
ClusterConnectionConfiguration ccc = new ClusterConnectionConfiguration();
ccc.setAddress("231.7.7.7");
config.addClusterConfiguration(ccc);
// ...but I need a way to get and update the current *remote* server configuration
ActiveMQServer.getConfiguration();
Any advice would be appreciated.
If it is possible, is this a good approach to take to configure on the fly?
Thanks
The org.apache.activemq.artemis.core.config.impl.ConfigurationImpl object can be used to programmatically configure the broker. The broker test-suite uses this object to configure broker instances. However, this object is not available in any remote sense.
Once the broker is started there is a rich management API you can use to add things like security settings, address settings, diverts, bridges, addresses, queues, etc. However, the changes made by most (although not all) of these operations are volatile, which means many of them would need to be performed every time the broker starts. Furthermore, there are no management methods to add cluster connections.
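For example, here is a minimal sketch of reaching that management API over JMX from Java (the JMX URL and the idea of just listing addresses are illustrative assumptions; check your broker's JMX settings):

import javax.management.MBeanServerInvocationHandler;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;
import org.apache.activemq.artemis.api.core.management.ObjectNameBuilder;

public class ArtemisManagementSketch {
    public static void main(String[] args) throws Exception {
        // hypothetical JMX endpoint for the remote broker
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker-host:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            // proxy for the broker-wide management MBean
            ActiveMQServerControl control = MBeanServerInvocationHandler.newProxyInstance(
                    connector.getMBeanServerConnection(),
                    ObjectNameBuilder.DEFAULT.getActiveMQServerObjectName(),
                    ActiveMQServerControl.class,
                    false);
            // e.g. inspect the broker before re-applying any volatile settings
            for (String address : control.getAddressNames()) {
                System.out.println(address);
            }
        }
    }
}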
You might consider using a tool like Ansible to manage the configuration or even roll your own solution with a templating engine like FreeMarker to customize the XML and then distribute it to your Docker instances using some other technology.
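If you go the templating route, a minimal FreeMarker sketch could look like this (broker.xml.ftl and its ${clusterAddress} placeholder are hypothetical names for this example):

import java.io.File;
import java.io.FileWriter;
import java.io.Writer;
import java.util.Map;
import freemarker.template.Configuration;
import freemarker.template.Template;

public class BrokerXmlGenerator {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        cfg.setDirectoryForTemplateLoading(new File("templates"));
        // broker.xml.ftl would contain placeholders like ${clusterAddress}
        Template template = cfg.getTemplate("broker.xml.ftl");
        // values that differ per Docker instance
        Map<String, Object> model = Map.of("clusterAddress", "231.7.7.7");
        try (Writer out = new FileWriter("broker.xml")) {
            template.process(model, out);
        }
    }
}

The generated broker.xml can then be baked into the image or mounted into each container.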

JBoss multiple instances in standalone mode on the same PC

We are using JBoss 7 App Server and we are trying to run multiple server nodes on a single box, and also on another box (basically 2 boxes, each running 2 JBoss server nodes).
My question is about running multiple JBoss server nodes on a single box in standalone mode. Do I have to copy the server folder twice and use port offsets?
Or is it OK to start the servers just via port offsets, without copying the server folder?
What is the best practice to have multiple server nodes running on the same box? Any advice would be greatly appreciated.
Thank you.
Just create multiple copies of the standalone directory (for example standalone_PROD and standalone_SIT) so that each instance has its own log files and deployment directories. Then use the options below when starting each server instance:
-Djboss.server.base.dir=/path/to/standalone_SIT   # location of the standalone dir
-Djboss.socket.binding.port-offset=10             # port offset to avoid port conflicts
For example: ./standalone.sh -Djboss.server.base.dir=/path/to/standalone_SIT -Djboss.socket.binding.port-offset=10
We have had two instances of JBoss on the same computer for several years. Both instances were in the same domain. Each instance had its own ports and, of course, resided in its own path. Our experience was good.
You can have as many standalone instances as you want on a machine, depending upon the resources available.
All you need to do is copy the same folder twice and change all the ports used in standalone mode. Also, if you are setting any parameters, make sure they fit the memory available on the machine.

AppFabric setup in a domain

So I am a little confused after reading the documents.
I want to set up AppFabric caching and hosting.
Can I do the following?
DC
SQL Server
AppFabric1
AppFabric2
All these computers are joined to the DC.
I want AppFabric1 to be the main host but also part of the cache cluster.
What about AppFabric2? Or AppFabricX? How can I make them part of the cache cluster?
Do I have to configure AppFabric1 and AppFabric2 in Windows as part of a cluster (i.e., set up the entire environment as a cluster)?
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so - how?
I see documentation about setting it up in a web farm and also in a workgroup... and that's it. Nothing about computers joined to a domain.
I want to set up AppFabric caching and hosting.
Caching and Hosting are two totally different things and generally don't share the same use cases.
AppFabric Caching provides an in-memory, distributed cache platform for Windows Server, previously named Velocity. The cache cluster is a collection of one or more instances of the Caching Service working together. You can easily add a new cache host without restarting the cluster; the cluster configuration is kept in a "storage location" (an XML file or SQL Server).
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so - how?
Don't worry... this can be done easily during installation. In addition, there are powerful PowerShell modules to do the same thing; a sketch follows below.
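For the caching side, a rough PowerShell sketch (the file-share path is a made-up example, and this assumes the cluster configuration is stored on a share):

Import-Module DistributedCacheAdministration
Use-CacheCluster -Provider XML -ConnectionString \\fileserver\AppFabricConfig
Get-CacheHost      # verify AppFabric1 and AppFabric2 both appear in the cluster
Start-CacheCluster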
AppFabric Hosting enhances the hosting of WCF and Workflow Foundation services in WAS (autostart, monitoring of hosted services, workflow persistence, ...). There is no cluster here; basically you just have to configure the monitoring/persistence databases for each server.
Just try it!
When you are adding the second node to the AppFabric cluster, make sure to choose the Join Cluster option (instead of New Cluster) and point to the path of the share where you stored the configuration (assuming that you used a file share to store the cluster configuration). The share that you used should be accessible from AppFabric2.

JBoss AS 7 Load Balancing with Server Failover

I have 2 instances of JBoss servers running on, e.g., 127.0.0.1 and 127.0.0.2.
I have implemented JBoss load balancing, but am not sure how to achieve server failover. I do not have a web server to monitor the heartbeat, and hence using mod_cluster is out of the question. Is there any way I can achieve failover using only the two available servers?
Any help would be appreciated. Thanks.
JBoss clustering automatically provides JNDI and EJB failover and also HTTP session replication.
If your JBoss AS nodes are in a cluster then the failover should just work.
The documentation refers to an older version of JBoss (5.1), but it has clear descriptions of how JBoss clustering works.
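For the EJB side, for instance, a remote client can be told about both nodes so it can fail over between them. A sketch of jboss-ejb-client.properties, assuming the default AS 7 remoting port (4447) and your two addresses (the connection names node1/node2 are arbitrary):

remote.connections=node1,node2
remote.connection.node1.host=127.0.0.1
remote.connection.node1.port=4447
remote.connection.node2.host=127.0.0.2
remote.connection.node2.port=4447

With both connections listed, the client library can route invocations to the surviving node if one goes down.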
You could spin up another instance to serve as your domain controller, and the two instances you already have will be your hosts. Then you could go through the domain controller, and it will do the work for you. However, I haven't seen instances go down too often; it is usually the machines that do, and it looks like you are using just one machine (I might be wrong) for both instances, so I would consider splitting them up.

JBoss multiple instances of a server, multiple ports in production environment not recommended?

The following document says:
This is easier to do and does not require a sysadmin. However, it is not the preferred approach for production systems for the reasons listed above. This approach is usually used in development to try out clustering behavior.
What are the risks of this approach in a production environment? In WebLogic it is pretty common, and I have seen a few production environments running with multiple ports (managed servers).
https://community.jboss.org/wiki/ConfiguringMultipleJBossInstancesOnOnemachine
The wiki clearly answers that question. Here is the text from the wiki for your reference:
Where possible, it is advised to use a different IP address for each instance of JBoss, rather than changing the ports or using the Service Binding Manager, for the following reasons:
When you have a port conflict, it makes it very difficult to troubleshoot, given a large number of ports and app servers.
Too many ports make firewall rules too difficult to maintain.
Isolating the IP addresses gives you a guarantee that no other app server will be using the ports.
Each upgrade requires that you go in and re-set the binding manager again. Most upgrades will update the conf/jboss-service.xml file, which has the Service Binding Manager configuration in it.
The configuration is much simpler. When defining new ports (either through the Service Binding Manager or by going in and changing all the ports in the configuration), it's always a headache trying to figure out which ports aren't taken already. If you use a NIC per JBoss instance, all you have to change is the IP address binding argument when executing run.sh or run.bat (-b).
Once you get 3 or 4 applications using different ports, the chances really increase that you will step on another application's ports. It just gets more difficult to keep ports from conflicting.
JGroups will pick random ports within a cluster to communicate. Sometimes when clustering, if you are using the same IP address, two random ports picked in two different app servers (using the binding manager) may conflict. You can configure around this, but it's better not to run into this situation at all.
On the whole, having an individual IP address for each instance of an app server causes fewer problems (some of those problems are mentioned here, some aren't).