JMX RMI agent fault tolerance mechanisms

I am using the JMX-RMI agent for message passing. I have a Java program which sends a message with a name/id to a set of listeners. Based on the message received, the client-side programs behave accordingly. This piece works fine, but I would like to know what kind of fault tolerance is built into the JMX RMI agent.
If a listener stops accidentally, does JMX restart it or log the error somewhere? What happens if the message queue on either side is full? Any documentation explaining the underlying architecture of JMX RMI or its built-in fault tolerance mechanisms would be appreciated. If it doesn't have any fault tolerance mechanisms, what would be a good way of adding them?
Thanks Much

I am assuming your client-side listeners are using the standard javax.management.remote connectors. Without much customization, you can implement some straightforward fault detection. For fault tolerance, you're probably looking at some sort of clustering solution.
There are two layers of connectivity you need to be concerned about:
The MBeanServerConnection itself. In other words, if the whole server-side JVM terminates, your client-side processes need to know.
While the server JVM and the MBeanServerConnection it hosts may continue to be available, the hosted listener/forwarder service itself may stop, fail, or stall.
For #1, the client processes can register a NotificationListener with the JMXConnector using the addConnectionNotificationListener method. Your local connection will then emit JMXConnectionNotifications on all the following events:
A new client connection has been opened.
A client connection has been closed.
A client connection has failed unexpectedly.
A client connection has potentially lost notifications. This notification only appears on the client side.
This way, your clients will know when a connection to the server has been established and lost.
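For illustration, a minimal client-side sketch of #1 (the service URL is an assumption; point it at your own RMI agent):

    import javax.management.Notification;
    import javax.management.NotificationListener;
    import javax.management.remote.JMXConnectionNotification;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Watches the client's JMX connection for open/close/fail events.
    public class ConnectionWatcher {
        public static void main(String[] args) throws Exception {
            // The URL is an assumption; substitute your agent's host and port.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            connector.addConnectionNotificationListener(new NotificationListener() {
                public void handleNotification(Notification n, Object handback) {
                    if (n instanceof JMXConnectionNotification) {
                        String type = n.getType(); // opened, closed, failed, notifs lost
                        System.out.println("Connection event: " + type);
                        if (JMXConnectionNotification.FAILED.equals(type)) {
                            // Trigger your reconnect/failover logic here.
                        }
                    }
                }
            }, null, null);
        }
    }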
For #2, it's a bit more specific to your application, but perhaps you can adapt a simple pattern like this:
When your listener/forwarder service starts, emit a start notification. When it stops, emit a stop notification. The two categories of listeners that would register for these notifications are:
The clients, so they know the service has started/stopped.
A server side "watcher" that can listen for a "stop" and restart the service.
Is that more or less what you were thinking of?
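For what it's worth, a rough sketch of the emitting side of that pattern, assuming the forwarder is exposed as an MBean (all names and notification type strings are made up):

    import javax.management.Notification;
    import javax.management.NotificationBroadcasterSupport;

    // The MBean interface the forwarder would expose (name is made up).
    interface ForwarderServiceMBean {
        void start();
        void stop();
    }

    // Emits start/stop notifications that both the clients and a server-side
    // "watcher" can register for.
    public class ForwarderService extends NotificationBroadcasterSupport
            implements ForwarderServiceMBean {
        private long sequence = 0;

        public void start() {
            // ... start the listener/forwarder here ...
            sendNotification(new Notification(
                    "example.forwarder.started", this, ++sequence, "Forwarder started"));
        }

        public void stop() {
            // ... stop the listener/forwarder here ...
            sendNotification(new Notification(
                    "example.forwarder.stopped", this, ++sequence, "Forwarder stopped"));
        }
    }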

Related

ZeroMQ mixed PUB/SUB DEALER/ROUTER pattern

I need to do the following:
multiple clients connecting to the SAME remote port
each of the clients opens 2 different sockets: one is PUB/SUB, the other is ROUTER/DEALER (the server can occasionally send the client heartbeats and other server-related information).
I am completely lost as to whether this can be done in ZeroMQ or not.
Obviously, if I can use 2 remote ports that is not an issue, but I fail to understand whether my setup can be achieved with some kind of envelope usage in ZeroMQ.
Can it be done?
Thanks,
Update:
To clarify what I wish to achieve:
Multiple clients can communicate with the server.
Clients operate mostly on a request-response basis (on one socket).
Clients create a session socket; whenever this type of socket is created, a separate worker thread needs to be created, and from then on the client communicates with that worker thread for request processing, i.e. the server thread must not block the connections of other clients while dealing with the request of one client.
However, clients can receive occasional messages from the worker thread regarding the worker's heartbeats.
Update2:
Actually, I managed to sort it out. What I did:
identify clients, so ROUTER/DEALER is used, i.e. clients are indeed dealers, hence async processing is provided
clients send messages to the one and only local port, where the router sits
the router peeks into messages (similar to the Lazy Pirate example) and checks whether a new client has come in; if so, it offloads the client to a separate thread and connects that thread via an internal "inproc:" socket
the router polls the frontend and all connected clients' backends and shuttles messages back and forth
What bugs me is that this seems like overkill compared with a "regular" socket solution, where I could have connected the client with the worker thread DIRECTLY (i.e. the worker thread could recv from the socket opened by the client), sparing the routing completely.
What am I missing?
There was a discussion on the ZeroMQ mailing list recently about multiplexing multiple services on one TCP socket. The proposed solution is essentially what you implemented.
The discussion also mentions Malamute and its brokers, a framework built on ZeroMQ that also provides the functionality you need. I haven't had the time to look into it myself, but it looks promising.
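For reference, the shape described in Update2 is close to the Asynchronous Client/Server pattern in the guide. A rough JeroMQ sketch, where the port, inproc name, and worker count are arbitrary:

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    // ROUTER frontend facing clients, DEALER backend feeding inproc workers.
    public class RouterBroker {
        public static void main(String[] args) {
            try (ZContext ctx = new ZContext()) {
                ZMQ.Socket frontend = ctx.createSocket(SocketType.ROUTER);
                frontend.bind("tcp://*:5570");
                ZMQ.Socket backend = ctx.createSocket(SocketType.DEALER);
                backend.bind("inproc://workers");

                for (int i = 0; i < 3; i++) {
                    new Thread(() -> {
                        ZMQ.Socket worker = ctx.createSocket(SocketType.DEALER);
                        worker.connect("inproc://workers");
                        while (!Thread.currentThread().isInterrupted()) {
                            byte[] identity = worker.recv(0); // routing envelope
                            byte[] request = worker.recv(0);  // payload
                            worker.send(identity, ZMQ.SNDMORE);
                            worker.send(request, 0);          // echo as a placeholder
                        }
                    }).start();
                }

                // Shuttle envelopes between clients and workers.
                ZMQ.proxy(frontend, backend, null);
            }
        }
    }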

ZMQ Pattern Dealer/Router HeartBeating

I have a Dealer socket on the client side, connected to a Router socket on the server side.
I often see a heartbeating mechanism: the server regularly sends messages to the client so that the client knows whether it is still correctly connected to the server, and can reconnect if it hasn't received a message for some time.
For example the Paranoid Pirate pattern here : http://zguide.zeromq.org/page:chapter4
But after some tests: if the client loses the connection to the server for a moment and then finds it again, the client is automatically reconnected to the server socket (it receives the sent messages...).
So I wonder: in which cases is heartbeating actually necessary?
Heartbeating isn't necessary to keep the connection alive (there is a ZMQ_TCP_KEEPALIVE socket option for TCP sockets). Instead, heartbeating is required for both sides to know that the other side is still active. If either side detects that the other is inactive, it can take alternative action.
Inactivity might be because a process has died, it is deadlocked, it is doing too much work between network activity, or there has been a network failure, etc. From the other side's point of view, all these scenarios are indistinguishable without more information.
In networking, making a design work is the easy part. The overwhelmingly hard part is dealing with failure. You have to consider as many failure modes as possible and deal with them in the design of your protocols. Heartbeating is often a helpful part of those protocols; it is far more useful than trying to work out whether a socket is still up by using monitor events, say.
Having said that, if your application doesn't need any particular level of reliability, perhaps you can just power-cycle equipment when a failure happens. Then you probably don't need to worry about heartbeating. After all, there are plenty of patterns in the guide that don't use it. It's horses for courses.
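For illustration, here is a rough JeroMQ sketch of the client side of a heartbeat protocol, loosely modelled on the Paranoid Pirate pattern (the interval, liveness count, and endpoint are arbitrary):

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    // DEALER client that treats any inbound traffic as proof of life and
    // gives up after LIVENESS silent intervals.
    public class HeartbeatClient {
        static final int HEARTBEAT_MS = 1000;
        static final int LIVENESS = 3;

        public static void main(String[] args) {
            try (ZContext ctx = new ZContext()) {
                ZMQ.Socket dealer = ctx.createSocket(SocketType.DEALER);
                dealer.connect("tcp://localhost:5570"); // endpoint is an assumption
                ZMQ.Poller poller = ctx.createPoller(1);
                poller.register(dealer, ZMQ.Poller.POLLIN);
                int liveness = LIVENESS;
                while (!Thread.currentThread().isInterrupted()) {
                    if (poller.poll(HEARTBEAT_MS) > 0) {
                        dealer.recv(0);       // heartbeat or real reply
                        liveness = LIVENESS;  // any traffic resets the counter
                    } else if (--liveness == 0) {
                        System.out.println("Server silent, taking alternative action");
                        break; // e.g. reconnect with a fresh socket
                    }
                }
            }
        }
    }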

Delphi Indy TCP Client/Server communication best approach

I have a client and a server application that are communicating just fine; there is a TIdCmdTCPServer in the server and a TIdTCPClient in the client.
The client has to authenticate with the server; the client asks the server for the newest version information and downloads any updates, among other communications. All of this is done with TIdTCPClient.SendCmd() and TIdTCPClient.LastCmdResult.Text.Text.
The way it is, the server receives commands and replies, and the clients only receive replies, never commands. I would like to implement a way for the client to receive commands too. But as I heard, if the client uses SendCmd() it should never be listening for data with ReadLn(), as that would interfere with the reply expected by SendCmd().
I thought of making a command to check for commands: the client would send something like "IsThereCommandForMe", and the server would keep a pool of commands for each client and send them in the reply when the client asks. But I don't think that would be a good approach, as there would be a big delay between a command becoming available and the client asking for it. I also thought of making a new connection with new components, for example a TIdCmdTCPClient, but then there would be 2 connections for each client; I don't like that idea, as I think it could easily cause problems in the communication.
The reason I want this is that I want to implement chat functionality in the client, and it should receive messages from the server without asking for them all the time; imagine all clients continually asking the server whether there are messages for them. I would also like to be able to inform the client when an update is available, instead of the client repeatedly asking whether there is one. And with this I could send more commands to the client too.
What are your thoughts about this? How can I make the server receive commands from the clients, but also send them?
TCP sockets are bidirectional by design. Once the connection between 'client' and 'server' has been established, they are symmetric and data can be sent at any time from any side over the same socket.
It only depends on the protocol (which is just a written 'contract' for the communication) which communication model is used. HTTP, for example, uses a request/reply model. With Telnet, for example, both sides can initiate data transmissions. (If you take a look at the Indy implementation of Telnet, you will see that it uses a background thread to listen for server data, while the same socket connection is used in the main thread to send data from client to server.)
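To make the background-thread idea concrete, here is a language-agnostic sketch in Java rather than Delphi; the host, port, and protocol line are made up, and a real protocol would need framing to distinguish replies from pushed commands:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // One socket, two directions: a background thread blocks on reads while
    // the main thread sends requests over the same connection.
    public class PushAwareClient {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket("localhost", 6000); // host/port made up
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);

            Thread reader = new Thread(() -> {
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println("server: " + line); // reply or push
                    }
                } catch (IOException ignored) {
                }
            });
            reader.setDaemon(true);
            reader.start();

            out.println("HELLO"); // the main thread can send at any time
        }
    }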
A "full duplex" protocol which supports both request/response and server push, and also is firewall-friendly, is WebSockets. With WebSockets (a HTTP upgrade), the server can send data to the connected client(s) any time. This would meet your 'chat' requirement.
If you use TIdTCPClient / TIdCmdTCPServer, corporate firewalls might block the communication.

Memcached Client-Server communication

I've been researching memcached, and I'm planning on using it with spymemcached on the client. I'm just curious how client/server communication works between the two. When creating a memcached client object, you can pass in a list of servers, but after the client is created, is there any communication between the servers and the client indicating that they are still alive, so that the client keeps sending that particular server information? I've looked through the memcached and spymemcached documentation sites, but haven't found anything yet.
Spymemcached does not send any special messages to make sure the connection is still alive, but you can do this in your application code if necessary by sending no-op messages to each server. You should also note that the TCP layer employs mechanisms such as keep-alive and timeouts to try to detect dead connections. These parameters, however, may differ depending on the operating system you are using.
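If you want such a check, one cheap application-level probe with spymemcached is to ask every server for its version; a sketch, with host names and ports as assumptions:

    import java.net.SocketAddress;
    import java.util.Map;
    import net.spy.memcached.AddrUtil;
    import net.spy.memcached.MemcachedClient;

    // Asking every server for its version doubles as a liveness probe:
    // only servers that respond appear in the result map.
    public class MemcachedPing {
        public static void main(String[] args) throws Exception {
            MemcachedClient client = new MemcachedClient(
                    AddrUtil.getAddresses("host1:11211 host2:11211"));
            Map<SocketAddress, String> versions = client.getVersions();
            System.out.println("Responding servers: " + versions.keySet());
            client.shutdown();
        }
    }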

Why is a FIX connection dropping heartbeats when both endpoints are on the same server?

We have two applications using the QuickFIX engine, both running on the same machine.
Sometimes we see that the session ends due to lack of heartbeats.
How can that be, since both are running on the same machine?
The FIX heartbeat mechanism has nothing to do with where the applications communicating via the FIX protocol run. If you see the session being dropped due to a lack of heartbeats, then you have to determine which session did not send a heartbeat (it will also fail to respond to a «Test Request» message, if any) and why that happened. Possible reasons are:
Server and client have different heartbeat interval settings, the server does not honor the client's heartbeat interval (field #108 in the «Logon» message), and the test request/response logic is broken or turned off (a diagnostic sketch for this case follows the list).
Underlying transport errors (i.e. TCP/IP errors or UDP packet drops).
Other software/hardware bugs.
Something else.
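For the first case, a quick diagnostic is to log what each counterparty advertises at logon. A sketch assuming QuickFIX/J and its ApplicationAdapter convenience class (the class name is made up):

    import quickfix.ApplicationAdapter;
    import quickfix.FieldNotFound;
    import quickfix.IncorrectDataFormat;
    import quickfix.IncorrectTagValue;
    import quickfix.Message;
    import quickfix.RejectLogon;
    import quickfix.SessionID;
    import quickfix.field.HeartBtInt;
    import quickfix.field.MsgType;

    // Logs the counterparty's advertised HeartBtInt (tag 108) at logon so
    // interval mismatches show up in the logs.
    public class HeartbeatAuditor extends ApplicationAdapter {
        @Override
        public void fromAdmin(Message message, SessionID sessionId)
                throws FieldNotFound, IncorrectDataFormat, IncorrectTagValue, RejectLogon {
            if (MsgType.LOGON.equals(message.getHeader().getString(MsgType.FIELD))
                    && message.isSetField(HeartBtInt.FIELD)) {
                System.out.println(sessionId + " logon with HeartBtInt="
                        + message.getInt(HeartBtInt.FIELD));
            }
        }
    }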
Hope it helps. Good Luck!