Delay in getting messages from Microsoft Message Queue (MSMQ)

There is a delay of 8 to 9 minutes in receiving messages on the user's machine from a server's MSMQ. There are no blocks in network connectivity. How can I find the root cause of this issue? Can anyone help?
It is happening on many machines. Initially there was no delay in receiving messages from MSMQ.
Updates:
There are two servers, server 1 and server 2. Messages sent from server 1 are delayed; there is no delay if the message is sent from server 2. What do we need to check on the server end? Can anyone help?
Thanks in advance.

The delay in receiving messages from the server on user machines via MSMQ is resolved.
We added the registry keys DeferFirstConnectAttempt and WaitTime; these two values resolved the latency issue.
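For reference, a sketch of the change as a .reg file (the data values shown are illustrative assumptions, not tuned recommendations; both values live under the MSMQ Parameters key):

Windows Registry Editor Version 5.00

; Sketch only: defer the first connect attempt and shorten the wait;
; tune WaitTime to your environment.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSMQ\Parameters]
"DeferFirstConnectAttempt"=dword:00000001
"WaitTime"=dword:00000002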

Related

WildFly redelivers JMS messages

I'm using JMS in JBoss WildFly 8 for messaging. A message is delivered successfully and the receiver goes on with its processing, which takes about 15-20 minutes to finish. But the server redelivers the same message after about 10 minutes. My question is how and where I can configure WildFly to wait, with for example a 20-minute time limit. I found some helpful explanations in the WildFly documentation, but I'm not sure whether this is the right way to do it.
JBoss documentation: Messaging Configuration
Should I just add
<redelivery-delay>1200000</redelivery-delay>
<max-delivery-attempts>2</max-delivery-attempts>
in <address-setting> in standalone-full.xml?
The setting you made is correct. It makes the server attempt delivery another 2 times if the first delivery is unsuccessful, waiting 20 minutes (1200000 ms) before each redelivery.
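In context (a sketch; the match pattern here is an assumption, so use whatever address pattern your queue needs), the entry in standalone-full.xml would look like:

<address-setting match="#">
    <redelivery-delay>1200000</redelivery-delay>
    <max-delivery-attempts>2</max-delivery-attempts>
</address-setting>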
The fact that you are getting the same message several times is probably related to the way you're telling the server that the message was processed.
Look at the link below and check that the acknowledge mode matches the operating mode of the class that receives the JMS messages.
JMS Message Delivery Reliability and Acknowledgement Patterns
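For example, with CLIENT_ACKNOWLEDGE the message is only settled once your code explicitly says so, after the long-running work has finished. A minimal sketch (connection and queue are assumed to come from JNDI; doLongRunningWork stands in for the 15-20 minute processing):

import javax.jms.*;

static void receiveOne(Connection connection, Queue queue) throws JMSException {
    // CLIENT_ACKNOWLEDGE: the message does not count as processed until
    // acknowledge() is called below.
    Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
    MessageConsumer consumer = session.createConsumer(queue);
    connection.start();

    Message msg = consumer.receive();
    doLongRunningWork(msg);   // the 15-20 minutes of processing
    msg.acknowledge();        // only now is redelivery off the table
}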

TCP keepalive not working

The situation:
Postgres 9.1 on Debian Server
Scala (Java) application using the LISTEN/NOTIFY mechanism to get notified through JDBC
As there can be very long pauses (multiple days) between notifications, I ran into the problem that the underlying TCP connection silently got terminated after some time and my application stopped receiving the notifications.
When googling for a solution I found that there is a parameter tcpKeepAlive that you can set on the connection. So I set it to true and was happy, until the next day, when I saw that my connection was dead again.
As I had been suspicious, I had a Wireshark capture running in parallel, which now turns out to be very useful. Almost exactly two hours after the last successful communication on the connection of interest, my application sends a keepalive packet to the database server. However, the server responds with RST; it seems it has already closed the connection.
The net.ipv4.tcp_keepalive_time on the server is set to 7200, which is 2 hours.
Do I need to somehow enable keepalive on the server, or increase the keepalive_time?
Is this the right way to go about keeping my application connected?
TL;DR: My database connection gets terminated after long inactivity. Setting tcpKeepAlive didn't fix it, as the server responds with RST. What should I do?
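For reference, this is how the tcpKeepAlive parameter is passed to the PostgreSQL JDBC driver (the URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

Properties props = new Properties();
props.setProperty("user", "app");           // placeholder
props.setProperty("password", "secret");    // placeholder
props.setProperty("tcpKeepAlive", "true");  // enable SO_KEEPALIVE on the socket
Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://dbhost:5432/mydb", props);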
As Craig suggested in the comments, the problem was very likely related to some piece of network hardware between the server and the application. The fix was to increase the frequency of the keepalive messages.
In my case the OS was Windows, where you have to create a registry value holding the idle time in milliseconds after which the keepalive should be sent. Info on that here.
I set it to 15 minutes, which seems to have solved the issue.
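A sketch of that registry change (KeepAliveTime is the standard TCP keepalive idle-time value on Windows; 900000 ms is the 15 minutes mentioned above):

Windows Registry Editor Version 5.00

; Send the first keepalive probe after 15 minutes (900000 ms = 0xdbba0) of idle time.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"KeepAliveTime"=dword:000dbba0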
UPDATE:
It only seemed like it solved the issue. After about two days of program run time my connection was gone again. I switched to checking the validity of my connection every time I use it. This does not seem like the solution, but it is a solution nonetheless.
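That per-use check is simple with plain JDBC (reconnect() is a hypothetical helper that reopens the connection and re-issues LISTEN):

// Validate before each use: isValid() pings the server (5-second timeout
// here) instead of trusting a possibly dead socket.
if (conn == null || !conn.isValid(5)) {
    conn = reconnect();
}
// ... conn is safe to use from here ...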

"Remote computer is not available." is the error thrown when reading from remote public windows server 2003 queue

I keep getting the error "Remote computer is not available." when reading from a remote public Windows Server 2003 queue. The queue is on server B; my application is on server A.
Amazingly, server A can drop a message on any queue on server B; I just can't read a message off B.
The two servers A and B are on the same domain.
All other servers can read and write on B's queues.
It happened after I restarted server A.
I have restarted A again, in vain.
MSMQ is running on A and B.
None of the online suggestions are working.
It doesn't look like a trust issue between servers A and B. Please help.
I got the solution; the approach in my comment above worked.
The thing is, port 135 was blocked by our IT networks guys. Remote reads (unlike sends) go through RPC, and the block doesn't bite immediately; it was only after restarting server A that B rejected its requests due to the failed RPC calls.
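A quick way to confirm that kind of block from server A (the hostname is a placeholder; port 135 is the RPC endpoint mapper that remote MSMQ reads depend on):

import java.net.InetSocketAddress;
import java.net.Socket;

// Remote MSMQ reads need RPC; sends do not, which is why writing still worked.
try (Socket s = new Socket()) {
    s.connect(new InetSocketAddress("serverB", 135), 3000); // 3-second timeout
    System.out.println("port 135 reachable");
} catch (Exception e) {
    System.out.println("port 135 blocked: " + e);
}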
Again, thanks to MSMQ from the plumber's mate.

Sessions getting disconnected in the middle of work

Sessions are getting disconnected automatically, in the middle of work.
The disconnections happen while users are working over a telnet connection to a Linux server via the PuTTY telnet application.
During the disconnections, network bandwidth utilization is high, and there is no limit on the total number of users on the network.
Error: "Hangup signal received (562)"
Any idea about this?
The network connection was interrupted or a hangup signal was sent via "kill".
You mention network utilization being "high" when disconnects happen. How do you know that? What measurement are you looking at that tells you it is "high"? That might be a symptom of a networking issue that is at the root of the problem.
There are a few directions:
OpenEdge has published this article with links to implementing keep-alive packets:
https://knowledgebase.progress.com/articles/Article/Telnet-connection-times-out-after-15-minutes
Increase the number of "instances" in xinetd.conf and then restart the service; see the sketch after this list.
Make sure that the database watchdog is up and running: https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/dmadm/prowdog-command.html
Check the database log file to find out what happened just before the hangup (https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/gsins/openedge-database-log-file.html)
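A sketch of the xinetd change from the second point (the limit of 200 is an arbitrary example; the attribute can also be set per service):

# /etc/xinetd.conf -- raise the cap on simultaneous instances of a service
defaults
{
        instances = 200
}
# then restart the service: service xinetd restart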

Socket error 10054

I have a client/server program. The client uses a socket to send a file to the server; after sending roughly 700k of data, the client (on Win7) receives a socket 10054 error, which means "connection reset by peer".
The server runs on CentOS 5.4; the client is a Windows 7 virtual machine running in VirtualBox. Client and server communicate via a virtual network interface.
The command port (which sends logs) is fine, but the data port (which sends the file) has the problem.
Could it be caused by a wrong socket buffer size configuration, or by something else?
Can anyone help me check the problem? Thanks.
Every time I call send, I pass a 4096-byte buffer:
send(socket, buffer, 4096, 0);
CentOS socket config.
#sysctl -a
...
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_mem = 196608 262144 393216
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_fack = 1
I don't quite understand what the socket buffer configuration means. Could it cause the incomplete-receive problem?
It's almost definitely a bug in your code. Most likely, one side thinks the other side has timed out and so closes the connection abnormally. The most common way this happens is that you call a receive function to get data, but you actually already got that data and just didn't realize it. So you're waiting for data that you have already received, and thus time out.
For example:
1) Client sends a message.
2) Client sends another message.
3) Server reads both messages but thinks it only got one, sends an acknowledge.
4) Client receives acknowledge, waits for second acknowledge which server will never send.
5) Server waits for second message which it actually already received.
Now the server is waiting for the client and the client is waiting for the server. The server was coded incorrectly and didn't realize that it actually got two messages in one go. TCP does not preserve message boundaries.
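The usual fix is to impose message boundaries yourself, for example with a length prefix. A sketch (in Java for brevity; the same idea applies to your C send/recv calls):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sender: prefix each message with its length.
static void sendMessage(DataOutputStream out, byte[] msg) throws IOException {
    out.writeInt(msg.length);   // 4-byte length prefix
    out.write(msg);
    out.flush();
}

// Receiver: read back exactly one message, however TCP chunked it on the wire.
static byte[] readMessage(DataInputStream in) throws IOException {
    int len = in.readInt();     // blocks until the 4-byte prefix arrives
    byte[] msg = new byte[len];
    in.readFully(msg);          // loops internally over partial reads
    return msg;
}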
If you tell me more about your protocol, I can probably tell you in more detail what went wrong. What constitutes a message? Which side sends when? Are there any acknowledgements? And so on.
But the short version is that each side is probably waiting for the other.
Most likely, the connection reset by peer is a symptom: your problem occurs, one side times out and aborts the connection, and that abort is what the other side sees as a connection reset.