WildFly 14 + pooled-connection-factory min-pool-size

In the WildFly messaging-activemq subsystem I have set min-pool-size to 40 on a pooled-connection-factory. Artemis is deployed externally, and its IP and port are defined in WildFly's standalone-full-ha.xml. After starting the Artemis and WildFly servers, how can I verify that WildFly has created the connections (based on the min-pool-size property) once the server has started?

I believe the pool in WildFly is filled lazily, so just starting WildFly won't necessarily cause any connections to be created. Something actually needs to use the pool; confirming that the application which uses the pool behaves as expected is the best way to know the pool itself is working.
Also, I don't think the pooled-connection-factory exposes any metrics via the WildFly management interfaces, so that's probably why you're not seeing anything for it.
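If you want to force the pool to do something right after startup, a small startup bean that borrows a connection from it is usually enough. A minimal sketch, assuming the factory is reachable under WildFly's default java:/JmsXA entry (substitute whatever entry your pooled-connection-factory actually defines):

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

// Exercises the pooled connection factory at deployment time so that the
// resource-adapter pool actually opens connections to the external broker.
@Singleton
@Startup
public class PoolWarmup {

    @Resource(lookup = "java:/JmsXA")   // assumption: WildFly's default pooled CF entry
    private ConnectionFactory pooledFactory;

    @PostConstruct
    public void warmPool() {
        try {
            // Borrows a connection from the JCA pool; close() returns it to the pool.
            Connection connection = pooledFactory.createConnection();
            connection.close();
        } catch (JMSException e) {
            throw new IllegalStateException("Could not reach the external Artemis broker", e);
        }
    }
}

Once something has used the pool, you can check the broker side (for example the connection count in the Artemis web console, or netstat on the Artemis host) to confirm that WildFly has actually opened connections.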

Related

Setup EclipseLink cache coordination in Payara

I'm using Payara 5.201. I have two instances running in Docker on the same network. Payara uses EclipseLink 2.7.4.
I used the settings as described here.
I enabled and started the Hazelcast grid in both Payara instances.
I created a REST resource which can get and update data in an entity.
When I set some value on instance one, I expect the get on instance two to show the same information, but it doesn't.
What am I doing wrong?
See here for a test project that you can run/debug.
The problem is fixed. The issue was that the Hazelcast discovery mode is 'domain' by default in Payara Server (it is multicast in Payara Micro), which makes sense because usually a Payara cluster is used.
But in my case there is no cluster, just two Payara (DAS) instances. So setting the Hazelcast cluster mode to multicast solved the problem.
This works here because both Docker containers are started using docker-compose and share the same network, so multicast works.
I updated the GitHub repo. Check it out.
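For reference, the kind of REST resource described above is enough to drive the test. A minimal sketch, where the Item entity, its field and the paths are hypothetical and not taken from the linked project:

import javax.ejb.Stateless;
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Hypothetical entity; EclipseLink's shared (second-level) cache is what the
// coordination channel keeps in sync between the two instances.
@Entity
@Cacheable
class Item {
    @Id Long id;
    String name;
}

@Stateless
@Path("items")
public class ItemResource {

    @PersistenceContext
    private EntityManager em;

    // Read on instance two: with working coordination this should not return
    // a stale value from the local shared cache.
    @GET
    @Path("{id}")
    public String get(@PathParam("id") long id) {
        return em.find(Item.class, id).name;
    }

    // Update on instance one: the change is flushed at transaction commit and
    // should then be propagated/invalidated on the other instance.
    @PUT
    @Path("{id}/{name}")
    public void update(@PathParam("id") long id, @PathParam("name") String name) {
        em.find(Item.class, id).name = name;
    }
}

With multicast enabled in both containers, updating via instance one and then reading via instance two should return the new value without restarting anything.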

How to configure JDK8 JMX port to allow VisualVM to monitor all threads

I need to monitor a JDK 8 JVM on a production Red Hat 6 Linux server to which I have no direct access. I'm able to connect to the JMX port using VisualVM.
I can see the Monitor tab: CPU usage; heap and metaspace; loaded and unloaded classes; total number of threads.
However, I can't see dynamic data on the Threads tab. It shows the total number of threads and daemon threads, but no real-time status information by thread name.
I don't know how to advise the owners of the server how to configure the JDK so I can see dynamic data on the Threads tab.
It works fine on my local machine. I can see the status of every thread by name, with color coded state information, if I point VisualVM to a Java process running locally.
What JVM setting makes dynamic thread data available on the JMX port?
Update:
I should point out that I'm using JBoss 6.x.
If I look at my local JBoss 6.x standalone.xml configuration, I see the following subsystem entry for JMX:
<subsystem xmlns="urn:jboss:domain:jmx:1.3">
    <expose-resolved-model/>
    <expose-expression-model/>
    <remoting-connector/>
</subsystem>
I can see all dynamic thread information when running on my local machine.
I'm asking the owners of the production instance if the standalone.xml includes this subsystem. I'm hoping they will say that theirs is different. If it is, perhaps modifying the XML will make the data I need available.
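For what it's worth, the per-thread data VisualVM shows on the Threads tab comes from the JVM's ThreadMXBean, so a quick way to check whether the remote endpoint exposes it is to query that MBean directly. A minimal sketch, assuming a plain JMX-over-RMI connector (host, port and the lack of authentication are assumptions; a JBoss remoting-connector would need a different service URL plus the JBoss client libraries):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteThreadCheck {
    public static void main(String[] args) throws Exception {
        // Assumed standard com.sun.management.jmxremote endpoint; adjust host/port.
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://prod-host:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                    mbsc, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);
            // If this prints per-thread names and states, the data VisualVM
            // needs for its Threads tab is available over this connection.
            for (ThreadInfo info : threads.getThreadInfo(threads.getAllThreadIds())) {
                if (info != null) {
                    System.out.println(info.getThreadName() + " -> " + info.getThreadState());
                }
            }
        }
    }
}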

Prevent deployment to entry node, only deploy to other nodes

I have a free OpenShift account with the default 3 gears. On this I have installed the WildFly 8.1 image using the OpenShift web console. I set the minimum and maximum scaling to 3.
What happens now is that OpenShift will create 3 JBoss WildFly instances:
One on the entry node (which is also running HAProxy)
One on an auxiliary node
One on another auxiliary node
The weird thing is that the JBoss WildFly instance on the entry node is by default disabled in the load balancer config (haproxy.conf). But OpenShift still deploys the war archive to it whenever I commit to the associated git repo.
What's extra problematic here is that, because of the incredibly low limit on max user processes (250 via ulimit -u), the JBoss WildFly instance on the entry node cannot even start up. During startup JBoss WildFly throws random 'java.lang.OutOfMemoryError: unable to create new native thread' errors (and no, memory is fine; it's the OS process limit).
As a result, the deployment process hangs.
So to summarize:
A JBoss WildFly instance is created on the entry node, but disabled in the load balancer
JBoss WildFly in its default configuration cannot start up on the entry node, not even with a trivial war.
The deployer process attempts to deploy to JBoss WildFly on the entry node, despite it being disabled in the load balancer
Now my question:
How can I modify the deployer process (including the gear start command) to not attempt to deploy to the JBoss WildFly instance on the entry node?
When an app scales from 2 gears to 3, HAProxy stops routing traffic to your application on the head gear and routes it to the two other gears. This ensures HAProxy gets as much CPU as possible, since the application on your head gear (where HAProxy is running) no longer serves requests.
The out-of-memory message you're seeing might not be an actual memory issue but a bug related to ulimit: https://bugzilla.redhat.com/show_bug.cgi?id=1090092.
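To convince yourself that 'unable to create new native thread' is about the OS thread/process limit rather than the heap, a small standalone check like the one below reproduces it quickly when run under a low ulimit -u with an ordinary heap size (this class is just an illustration, not part of the deployment):

// Spawns parked threads until the OS refuses to create more. Each thread uses
// a native thread slot but almost no heap, so the error appears while plenty
// of heap is still free.
public class NativeThreadLimit {
    public static void main(String[] args) {
        int count = 0;
        try {
            while (true) {
                Thread t = new Thread(() -> {
                    try {
                        Thread.sleep(Long.MAX_VALUE);
                    } catch (InterruptedException ignored) {
                    }
                });
                t.setDaemon(true);
                t.start();
                count++;
            }
        } catch (OutOfMemoryError e) {
            System.out.println("Failed after " + count + " threads: " + e.getMessage());
            System.out.println("Free heap: " + Runtime.getRuntime().freeMemory() + " bytes");
        }
    }
}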

JBoss AS 7 Load Balancing with Server Failover

I have 2 instances of JBoss servers running on, e.g., 127.0.0.1 and 127.0.0.2.
I have implemented JBoss load balancing, but am not sure how to achieve server failover. I do not have a web server to monitor the heartbeat, and hence using mod_cluster is out of the question. Is there any way I can achieve failover using only the two available servers?
Any help would be appreciated. Thanks.
JBoss clustering automatically provides JNDI and EJB failover and also HTTP session replication.
If your JBoss AS nodes are in a cluster then the failover should just work.
The documentation refers to an older version of JBoss (5.1), but it has clear descriptions of how JBoss clustering works.
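For a remote caller, the failover side of this is handled by the AS 7 EJB client library once both nodes are listed in its configuration. A rough sketch; the app, module and bean names are made up, and the jboss-ejb-client.properties keys below follow the AS 7 EJB client documentation but should be double-checked against your version:

// Rough sketch of a remote EJB lookup against a two-node AS 7 cluster.
// Assumes a jboss-ejb-client.properties on the client classpath roughly like:
//   remote.connections=node1,node2
//   remote.connection.node1.host=127.0.0.1
//   remote.connection.node1.port=4447
//   remote.connection.node2.host=127.0.0.2
//   remote.connection.node2.port=4447
// With both nodes clustered, the EJB client library fails over between them.
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class FailoverClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Enables the "ejb:" JNDI namespace handled by the JBoss EJB client library.
        props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
        Context ctx = new InitialContext(props);

        // Pattern: ejb:<app>/<module>/<distinct-name>/<bean>!<remote-interface>
        Object greeter = ctx.lookup(
                "ejb:myapp/myejbs//GreeterBean!com.example.GreeterRemote");
        // In a real client you would cast this proxy to the shared remote
        // interface (com.example.GreeterRemote) and call business methods on it.
        System.out.println("Got proxy: " + greeter);
    }
}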
You could spin up another instance to serve as your domain controller, and the two instances you already have would be your hosts. Then you could go through the domain controller, and it will do the work for you. However, I haven't seen instances going down too often; it's usually servers that do, and it looks like you are using just one server (I might be wrong) for both instances, so I would consider splitting them up.

JNDI - how it works

If I understand correctly, the default JNDI service runs on my local AS, right? So if I create an EJB and name it "sth" in jboss.xml (running JBoss), then it is registered in my AS. Correct?
In big projects EJBs might be distributed across many servers - on one server EJBs doing one thing and on another something else. When calling JNDI lookup() I search only one server, right? So that means I need to know where the EJB is registered... Is that true?
When you cluster your app you will usually configure the cluster so that you have one shared JNDI namespace. In JBoss you do this using HA-JNDI (High Availability JNDI) or an equivalent. This is a centralized service with failover. In principle you could imagine having a replicated service for better throughput, but to my knowledge that is not available in JBoss.
In short, you will have only one namespace, so you don't need to know where it is registered.
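As an illustration of that single namespace, a client in the JBoss 5.x-era HA-JNDI setup just lists the cluster nodes and lets the naming client pick a live one. A minimal sketch; the host names and the binding name are examples, and 1100 is the default HA-JNDI port in that generation:

// Lookup against HA-JNDI: the provider URL lists several nodes, so the client
// does not need to know which node the EJB was actually registered on.
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class HaJndiLookup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        props.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
        // Comma-separated list of HA-JNDI endpoints; they are tried until one responds.
        props.put(Context.PROVIDER_URL, "node1:1100,node2:1100");

        Context ctx = new InitialContext(props);
        Object sth = ctx.lookup("sth");   // whatever name the EJB was bound under
        System.out.println("Looked up: " + sth);
    }
}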