HornetQ local connection never timing out

My application, running in a standalone JBoss environment, relies on HornetQ (v2.2.5.Final) middleware to exchange messages between parts of the application locally, not over the network.
The default TTL (time-to-live) value for the connection is 60000 ms. I am considering changing that to -1 because, operationally, I expect to keep sending messages over this connection at unpredictable intervals. That would also prevent issues like JMS queue connection failures.
The question is: what are the drawbacks of a connection that never times out on the server side in this context? Is it a good choice? If not, is there a strategy better suited to this situation?

The latest versions of HornetQ automatically disable connection checking for in-vm connections, so there shouldn't be any issue with configuring this manually.
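For illustration, here is how such a TTL could be set programmatically on the client side; a minimal sketch against the HornetQ 2.2.x core client API (method names vary slightly between versions), using the stock in-vm connector. The same effect is also available server-wide via the connection-ttl-override parameter in hornetq-configuration.xml.

    import org.hornetq.api.core.TransportConfiguration;
    import org.hornetq.api.core.client.ClientSessionFactory;
    import org.hornetq.api.core.client.HornetQClient;
    import org.hornetq.api.core.client.ServerLocator;

    public class InVmNoTtlExample {
        public static void main(String[] args) throws Exception {
            // Locator for the in-vm transport (no network hop involved)
            ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                    new TransportConfiguration(
                            "org.hornetq.core.remoting.impl.invm.InVMConnectorFactory"));

            // -1 tells the server never to time this connection out
            locator.setConnectionTTL(-1);

            ClientSessionFactory factory = locator.createSessionFactory();
            // ... create sessions, producers and consumers as usual ...
            factory.close();
            locator.close();
        }
    }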

Related

socket connections closing when manually deploying

We made a chat module in our project using socket.io. When the app is load-balanced and we deploy manually, socket connections that get switched to different servers are disconnected, and some messaging events are not processed. I solved the load-balancing problem with the socket.io-redis library: it acts as a gateway and solves this problem thanks to Redis.
Another problem is that when I deploy manually, the PIDs of the server processes change and the socket.io connections are instantly disconnected on the client; afterwards the client reports being connected even though it is not.
Do you think using tools such as Travis CI would solve the problems of the manual deploy process?
Another question: if a system that scales out to 3 servers under load balancing then goes back to 2 servers, the socket connections will be closed again; what method is required to solve this? I thought of separating the socket.io service from the monolithic structure, keeping it on a single server, and scaling that server vertically when the load increases.
We are using AWS Elastic Beanstalk, which performs load balancing automatically.

ADO.NET background pool validation

In Java, application servers like JBoss EAP have the option to periodically validate the connections in a database pool (https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/6.4/html/administration_and_configuration_guide/sect-database_connection_validation). This has been very useful for removing stale connections.
I'm now looking at an ADO.NET application, and I was wondering whether there is any similar functionality that can be used with Microsoft SQL Server.
I ended up finding this post by Redgate that describes some of the validation that goes on when connections are taken from the pool:
If the connection has died because a router has decided that it no longer wants to forward your packets and no other routers like you either, then there is no way to know this unless you try to send some data and don't get a response.
If you create a connection and a connection pool is created and connections are put into the pool and not used, the longer they are in there, the bigger the chance of something bad happening to them.
When you go to use a connection there is nothing to warn you that a router has stopped forwarding your packets until you go to use it; so until you use it, you do not know that there is a problem.
This was an issue with connection pooling that was fixed in the first .NET 4 reliability update (see issue 14, which vaguely describes this) with a feature called "Connection Pool Resiliency". The update meant that when a connection is about to be taken from the pool, it is checked for TCP validity and only returned if it is in a good state.
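For comparison, the validate-on-borrow idea that the JBoss option implements (and that Connection Pool Resiliency approximates at the TCP level) looks roughly like the sketch below. This is a hypothetical Java/JDBC illustration of the pattern, not the ADO.NET mechanism itself; ValidatingPool and its factory callback are invented names.

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.concurrent.Callable;

    // Hypothetical pool that validates a connection before handing it out
    public class ValidatingPool {
        private final Deque<Connection> idle = new ArrayDeque<>();
        private final Callable<Connection> factory;

        public ValidatingPool(Callable<Connection> factory) {
            this.factory = factory;
        }

        public synchronized Connection borrow() throws Exception {
            while (!idle.isEmpty()) {
                Connection c = idle.pop();
                // isValid() (JDBC 4) pings the server; a connection that a
                // router or firewall silently dropped fails this check
                if (c.isValid(5)) {
                    return c;
                }
                try { c.close(); } catch (SQLException ignored) { }
            }
            return factory.call(); // nothing idle and valid: open a fresh one
        }

        public synchronized void release(Connection c) {
            idle.push(c);
        }
    }

A real pool (or the ADO.NET resiliency feature, which checks the TCP state rather than doing a full server round trip) does this more cheaply, but the contract is the same: a connection is only returned from the pool if it is still in a good state.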

hornetq fails when we change system time

I have an issue and I hope you can help me a bit.
I have to implement fast-forwarding time because I need to test something. I've written a Python script which increments the system time by 5 seconds for every real second (5 times faster).
Then my JBoss fails with some HornetQ timeouts.
Do you have any ideas how I can fix this?
03/09/18 09:18:00,107 WARN [org.hornetq.core.protocol.core.impl.RemotingConnectionImpl] (hornetq-failure-check-thread) Connection failure has been detected: Did not receive data from invm:0. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=3]
The underlying issue is that changing the time breaks the connection-failure-detection algorithm used by the broker. The broker thinks it isn't receiving "ping" packets from clients at the proper time because you're forcing time to pass at 5x the normal rate. There is no way to fix this for remote clients aside from disabling or extending the connection TTL. However, for in-vm connections you could apply the fix from https://issues.jboss.org/browse/HORNETQ-1314 (which is not resolved in the version of HornetQ you are using) to the branch of HornetQ you're currently using and rebuild. If you don't want to rebuild you could upgrade to a version of JBoss AS (or Wildfly) which contains this fix.
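If extending or disabling the TTL is an acceptable workaround for remote clients, it can be configured on the client's connection factory. A minimal sketch, assuming the HornetQ 2.2.x JMS client API (later versions renamed createConnectionFactory to createConnectionFactoryWithoutHA):

    import org.hornetq.api.core.TransportConfiguration;
    import org.hornetq.api.jms.HornetQJMSClient;
    import org.hornetq.jms.client.HornetQConnectionFactory;

    public class LenientTtlFactory {
        public static HornetQConnectionFactory create() {
            // Remote (netty) connector; in-vm clients need the HORNETQ-1314 fix instead
            HornetQConnectionFactory cf = HornetQJMSClient.createConnectionFactory(
                    new TransportConfiguration(
                            "org.hornetq.core.remoting.impl.netty.NettyConnectorFactory"));

            cf.setConnectionTTL(-1);            // server never times the connection out
            cf.setClientFailureCheckPeriod(-1); // client stops checking for a dead server
            return cf;
        }
    }

Note that this trades away dead-connection detection entirely, so it is only sensible in a test environment like the one described.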

MQ error code 2058 when connecting to queue manager JMS

I am trying to connect to a queue manager using the MQ base Java API, and I am able to connect:

    MQQueueManager queueManager = new MQQueueManager(qmgrName);
    queueManager.accessQueue(qName, MQOO_OUTPUT);

But when I try to connect to the same queue manager using JMS, it fails with code 2058. Not sure if I am missing something with JMS:

    MQQueueConnectionFactory qcf = new MQQueueConnectionFactory();
    qcf.setQueueManager(qmgrName);
    qcf.setPort(1414);
    qcf.setHostname("localhost");
    qcf.createQueueConnection();
You have two or more queue managers on the local host. In your first example you connect in bindings mode, so the queue manager is selected by name and you get the right one. In the second example the connection is made over a client connection, so it is received by whichever QMgr is listening on 1414, which is not the one you intend, and the connection is rejected.
Please note that if both QMgrs have a listener on 1414, the connection will succeed or fail depending on which QMgr was started first. Only one can bind to that port, so the first one started gets to use it. This can lead to what appears to be inconsistent behavior.
Please see Connection modes for IBM MQ classes for JMS, which advises "To change the connection options used by the IBM MQ classes for JMS, modify the Connection Factory property CONNOPT." The acceptable values are provided on the page, but you almost always want it set to Standard Bindings (MQCNO_STANDARD_BINDING).
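For illustration, forcing bindings mode explicitly on the JMS factory (rather than relying on the default) might look like the sketch below. It assumes the newer IBM MQ classes for JMS, where WMQConstants.WMQ_CM_BINDINGS selects a bindings connection; older client versions use the JMSC.MQJMS_TP_BINDINGS_MQ constant for the same purpose.

    import javax.jms.QueueConnection;
    import com.ibm.mq.jms.MQQueueConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class BindingsConnect {
        public static void main(String[] args) throws Exception {
            MQQueueConnectionFactory qcf = new MQQueueConnectionFactory();
            qcf.setQueueManager("QMGR1"); // hypothetical local queue manager name

            // Explicit bindings mode: connect to the local QMgr by name.
            // No host or port is needed (and none should be set).
            qcf.setTransportType(WMQConstants.WMQ_CM_BINDINGS);

            QueueConnection conn = qcf.createQueueConnection();
            conn.close();
        }
    }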
As documented here, MQRC 2058 means the queue manager name is invalid or unknown. But since, as you mention, the bindings-mode connection using MQ base Java succeeds, the queue manager name appears to be valid.
Update:
Sorry, I was misled by your code and thought you were trying to make a client-mode connection using JMS. You don't need to set host and port for a bindings-mode connection.
Since the transport type is not set, the default, WMQ_CM_BINDINGS, is used. I suggest you verify the queue manager name.
To connect with "BINDINGS", the queue manager needs to be local. Are you trying to connect to a remote queue manager? If so you would need to connect as "CLIENT". Also, check to be sure the qmgr is listening on the port you specified.

Is there a way to wait for a listening socket on win32?

I have a server and client program on the same machine. The server is part of an application; it can start and stop arbitrarily. When the server is up, I want the client to connect to the server's listening socket. There are Win32 functions to wait on file system changes (ReadDirectoryChangesW) and registry changes (RegNotifyChangeKeyValue); is there anything similar for network changes? I'd rather not have the client constantly polling.
There is no such Win32 API; however, this can easily be accomplished using an event. The client would wait on that event to be signaled, and the server would signal the event when it starts up.
The related APIs you will need are CreateEvent, OpenEvent, SetEvent, ResetEvent and WaitForSingleObject.
If your server will run as a service, then for Vista and up it will run in session 0 isolation. That means you will need to use an event whose name is prefixed with "Global\".
You probably do have a good reason for needing this, but before you implement it, please consider:
Is there some reason you need to connect right away? I see this as a non-issue because when you perform an action in the client, you can make a new server connection at that point.
Is the server starting and stopping more frequently than the client? You could switch the roles of who listens and who connects.
Consider using some form of Windows synchronization, such as a semaphore. The client can wait on the synchronization primitive and the server can signal it when it starts up.
Personally I'd use a UDP broadcast from the server and have the "client" listening for it. The server could broadcast a UDP packet every X period whilst running and when the client gets one, if it's not already connected, it could connect.
This has the advantage that you can move the client onto a different machine without any issues (and since the main connection from client to server is sockets already it would be a pity to tie the client and server to the same machine simply because you selected a local IPC method for the initial bootstrap).
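As a sketch of that discovery pattern (written in Java here purely for illustration; the principle is identical with Win32 sockets), the server broadcasts a small heartbeat while it is running and the client blocks until one arrives. The port number and payload are hypothetical:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class UdpDiscovery {
        static final int DISCOVERY_PORT = 45000; // hypothetical port

        // Server side: broadcast a heartbeat every few seconds while up
        static void announce() throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setBroadcast(true);
                byte[] payload = "SERVER_UP".getBytes("US-ASCII");
                DatagramPacket packet = new DatagramPacket(payload, payload.length,
                        InetAddress.getByName("255.255.255.255"), DISCOVERY_PORT);
                while (true) {
                    socket.send(packet);
                    Thread.sleep(3000); // the "X period" from the answer above
                }
            }
        }

        // Client side: block until a heartbeat arrives, then connect normally
        static void waitForServer() throws Exception {
            try (DatagramSocket socket = new DatagramSocket(DISCOVERY_PORT)) {
                byte[] buf = new byte[64];
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet); // blocks; no polling loop needed
                // the server is up; open the real connection here
            }
        }
    }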