JMS Client Problem - JBoss

When my JBoss server is down, my JMS client application repeatedly tries to reconnect to the JMS server. However, after a couple of failed attempts, it connects to any other available server on the LAN. How can I avoid this?

The client is finding the other servers through JNP automatic discovery. It can be resolved by setting the following property when creating the initial context:
props.setProperty("jnp.disableDiscovery", "true");

Related

HornetQ local connection never timing out

My application, running in a JBoss standalone environment, relies on HornetQ (v2.2.5.Final) middleware to exchange messages between parts of the application locally - not over the network.
The default TTL (time-to-live) value for the connection is 60000 ms. I am thinking of changing that to -1 because, operationally, I expect to keep sending messages through the connection from time to time (at intervals not known in advance). That would also prevent issues like JMS queue connection failures.
The question is: what are the issues with never timing out a connection on the server side in such a context? Is that a good choice? If not, is there a strategy suited to this situation?
The latest versions of HornetQ automatically disable connection checking for in-VM connections, so there shouldn't be any issues if you configure this manually.
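If you do want to set it explicitly, the TTL can be configured on the core client API when creating the in-VM connection. A minimal sketch against the HornetQ 2.2.x client API:

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.HornetQClient;
import org.hornetq.api.core.client.ServerLocator;
import org.hornetq.core.remoting.impl.invm.InVMConnectorFactory;

ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
        new TransportConfiguration(InVMConnectorFactory.class.getName()));
locator.setConnectionTTL(-1);            // -1 = the server never times the connection out
locator.setClientFailureCheckPeriod(-1); // disable the client-side ping as well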

How to implement bidirectional channel using camel netty4

Here is my use case:
I have two endpoints: one with MQ and the second with TCP/IP.
I have to replace a legacy server which accepts queries from remote TCP/IP clients. Once the socket is open with a client, data is exchanged in both directions: the server asynchronously sends MQ data over TCP/IP and also receives data from clients asynchronously. Each data message sent has to be acknowledged. The constraint here is that I have to use the same socket.
I created two routes
from("netty4:tcp://ipAddress:port?sync=true").to("wmq:queue:toQueue")
from("wmq:queue:fromQueue").to("netty4:tcp://ipAddress:port?sync=true")
I start the first route to receive the session-open request from clients, and then I start the second route to begin sending data, but I cannot use the same channel.
I tried to get the remote port from the first route and use it in the second route, but I get a ConnectException because netty4 tries to open a new socket where one is already open.
I found that netty4 can be used asynchronously through the AsyncProcessor, but I didn't find any example dealing with my use case.
The only idea I found is to create a standalone server which opens the sockets with the clients and have it communicate with the two endpoints.
Is there any way to implement this using Camel only?
Any help on this subject is really appreciated.
Your code won't work as it is for your use case. I also suspect you are trying to use Camel as an IP server framework rather than as an integration framework in this case.
Let's review Apache Camel's concepts of producers and consumers. In the integration world we talk about clients and servers as consumers and producers. This might seem like a mere difference in terminology until you realise that a consumer (typically a client) can also be a producer (a server).
Some helpful definitions:
1. Producer: A producer is an entity capable of creating and sending a message to an endpoint. A typical example would be code like .to("file:data/outbox"), as this produces a file.
2. Consumer: A consumer is an entity that receives messages produced by a producer; it wraps these messages in an exchange and sends them on to be processed. A typical example would be code like from("jms:topic:xmlOrders").
A rule of thumb is that typically consumers are the source of the messages being routed.
BIG NOTE:
These two definitions are not set in stone: a producer can also be the endpoint in a from clause, and a consumer can be the endpoint in a to clause.
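To make the distinction concrete, here is a minimal route (the endpoint URIs are purely illustrative):

from("jms:topic:xmlOrders")   // consumer: the source of the messages being routed
    .to("file:data/outbox");  // producer: creates and writes a file per message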
So in your case let's break up the route:
from("netty4:tcp://ipAddress:port?sync=true").to("wmq:queue:toQueue")
In this route you are creating a Netty server that sends messages to a queue. Here your netty endpoint acts as a consumer (yes, it is in the from clause), but this creates a Netty4 server at the IP address and port you specified. It then sends each message to another consumer, the MQ client, which acts as a consumer again. So two consumers? Where is the producer? The client connecting to the Netty server acts as the producer.
Let's look at the second piece of the route:
from("wmq:queue:fromQueue").to("netty4:tcp://ipAddress:port?sync=true")
Here you are creating a client/consumer for the MQ service and then a client/producer to the Netty server. Essentially you are creating a NEW client here that connects to the SERVER you created in the first route.
So in short, your route creates a Netty server that sends messages to MQ, then creates an MQ client that sends messages to a Netty client, which connects to the server you created. It won't work like this.
Go read about message exchange patterns for further background, but I would suggest that if you are just using Netty and MQ, Camel may be overkill: it is an integration platform, not an IP server platform.

MQ error code 2058 when connecting to queue manager JMS

I am trying to connect to a queue manager using the MQ base API, and I am able to connect:
MQQueueManager queueManager=new MQQueueManager(qmgrName);
queueManager.accessQueue(qName,MQOO_OUTPUT);
But when I try to connect to the same queue manager using JMS, it fails with a 2058 reason code. Not sure if I am missing something with JMS:
MQQueueConnectionFactory qcf=new MQQueueConnectionFactory();
qcf.setQueueManager(qmgrName);
qcf.setPort(1414);
qcf.setHostname("localhost");
qcf.createQueueConnection();
You have two or more queue managers on the local host. In your first example you connect in bindings mode, so the queue manager is selected by name and you get the right one. In the second example the connection is made over a client connection, and so it is received by the QMgr listening on port 1414, which is not the one that you intend, so the connection is rejected.
Please note that if both QMgrs have a listener on 1414, the connection will succeed or fail depending on which QMgr was started first. Only one can bind to that port, so the first one started gets to use it. This can lead to what appears to be inconsistent behavior.
Please see Connection modes for IBM MQ classes for JMS, which advises "To change the connection options used by the IBM MQ classes for JMS, modify the Connection Factory property CONNOPT." The acceptable values are provided on the page, but you almost always want to set it to Standard Bindings (MQCNO_STANDARD_BINDING).
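For example, to pin the factory to bindings mode explicitly (a sketch using the newer WMQConstants class; older releases expose the same setting as JMSC.MQJMS_TP_BINDINGS_MQ):

import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

MQQueueConnectionFactory qcf = new MQQueueConnectionFactory();
qcf.setQueueManager(qmgrName);
// Bindings mode selects the local QMgr by name; no host or port is involved.
qcf.setTransportType(WMQConstants.WMQ_CM_BINDINGS);
qcf.createQueueConnection();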
As documented here, MQRC 2058 means an invalid queue manager name or the queue manager name is unknown. But as you mention, bindings mode connection using MQ Base Java is successful, the queue manager name appears valid.
Update:
Sorry, I was misled by your code and thought you were trying to make a client-mode connection using JMS. You don't need to set the host and port for a bindings-mode connection.
Since the transport type is not set, the default, WMQ_CM_BINDINGS, is used. I suggest you verify the queue manager name.
To connect with "BINDINGS", the queue manager needs to be local. Are you trying to connect to a remote queue manager? If so, you would need to connect as "CLIENT". Also, check that the queue manager is listening on the port you specified.

How to deploy a WebSocket server?

When deploying a web application running on a traditional web server, you usually restart the web server after updating the code. Due to the nature of HTTP, this is not a problem for users: on their next request they will get the latest updates.
But what about a WebSocket server? If I restart or kill the old process, all connected users will be disconnected. So my question is: what strategy have you used to deploy a WebSocket server smoothly?
You're right, every connected user will be disconnected if the server restarts.
I think the least bad solution is to tell the client to reconnect in its onClose method.
WebSockets is just a transport mechanism. Libraries like socket.io exist to build on that transport and provide heartbeats, browser fallbacks, graceful reconnects, and handling for other edge cases found in real-time applications.
In our WebSocket-enabled application, socket.io is central to ensuring our continuous deployment setup doesn't break users' active socket connections.
If clients are connected directly to the server that does all the socket networking and application logic, then yes, they will be disconnected, because the TCP layer holding the connection goes down with the process.
If instead you have a gateway that clients connect to, running on a separate server that forwards messages to the logical server and relays its responses back, then you can have the gateway queue up packets until it re-establishes the connection to the logical server. The logical server might notify the gateway before restarting. That way the client keeps its connection; it just won't receive any responses for a while.
Or you can implement reconnection on the client side.
With HTTP, each time you load a page the browser creates a socket connection to the server, transfers all the data, and closes the connection (in most cases). All of the site's data is then local until you navigate away.
With WebSockets the connection is continuous, and there is no per-request reconnection. That's why you have to implement a simple mechanism on the client side: when the WebSocket gets a close event, periodically try to reconnect.
Which approach to use depends on your specific needs.
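A minimal sketch of the client-side reconnect approach, here using the standard JSR-356 javax.websocket API (the endpoint URI and retry delay are assumptions):

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnClose;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class ReconnectingClient {

    // Hypothetical endpoint; replace with your server's URI.
    private static final URI SERVER = URI.create("ws://localhost:8080/ws");

    public void connect() {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        while (true) {
            try {
                container.connectToServer(this, SERVER);
                return; // connected
            } catch (Exception e) {
                sleep(5000); // server still restarting: back off and retry
            }
        }
    }

    @OnClose
    public void onClose(Session session) {
        connect(); // server closed the connection (e.g. during a redeploy): reconnect
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); }
    }
}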

How do I connect a JBoss4 Message Driven Bean to a topic on a remote server?

I have a JBoss server (Server A) that is publishing messages on a topic.
I have a message driven bean on another server (Server B) that needs to retrieve the messages from Server A.
How do I go about it? I can easily get everything working if the publisher and subscriber are on the same server, but I can't find any information about how to do it with a remote server.
Thanks.
This link describes how to connect an MDB to a remote queue.
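The usual JBoss 4 pattern is to register the remote server as a JMS provider (via a JMSProviderLoader mbean) and point the MDB at it. A rough sketch, assuming a provider has been registered under java:/RemoteJMSProvider; the providerAdapterJNDI activation-config property is JBoss-specific and may vary between versions:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "topic/myTopic"),                  // assumption: the topic's JNDI name on Server A
    @ActivationConfigProperty(propertyName = "providerAdapterJNDI", propertyValue = "java:/RemoteJMSProvider") // assumption: registered via JMSProviderLoader
})
public class RemoteTopicMDB implements MessageListener {
    public void onMessage(Message message) {
        // handle messages published on Server A's topic
    }
}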