xmpp ejabberd: too many unacked stanzas

I'm running an ejabberd server. For some weeks it worked really well, but now I always get the error: Stream closed by us: Too many unacked stanzas (policy-violation).
I don't know why I'm getting this error, because I just connected to the server from a mobile client as usual. How can I clear the unacked stanzas so that I can use the ejabberd server as before?
Thanks in advance
yajo10

Solved on GitHub:
The parameter max_ack_queue was set to 70, while the default value is 5000.
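For reference, in recent ejabberd releases this option belongs to the stream management module in ejabberd.yml (older releases configured it on the c2s listener instead). A minimal sketch, assuming the current mod_stream_mgmt layout described in the ejabberd documentation:

    modules:
      mod_stream_mgmt:
        # Maximum number of unacknowledged stanzas queued per session.
        # The documented default is 5000; in the post it had been lowered to 70.
        max_ack_queue: 5000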

Related

Delay in getting message from Microsoft Message Queue

There is a delay of 8 to 9 minutes before a message sent from a server's MSMQ arrives on the user's machine. There are no blocks in network connectivity. How can I find the root cause of this issue? Can anyone help?
It is happening on many machines. Initially there was no delay in receiving messages from MSMQ.
Updates:
There are two servers, server 1 and server 2. Messages sent from server 1 are delayed; there is no delay if the message is sent from server 2. What do we need to check on the server side? Can anyone help?
Thanks in advance.
The delay in receiving messages from the server on the user's machine via MSMQ is resolved.
Adding the registry keys DeferFirstConnectAttempt and WaitTime resolved the latency issue.
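For reference, MSMQ reads these settings from its Parameters registry key, and the service has to be restarted afterwards. A rough sketch of adding them from an elevated command prompt; the key path shown is the usual MSMQ location, but the data values below are placeholders, since the answer does not say which values were used:

    reg add "HKLM\SOFTWARE\Microsoft\MSMQ\Parameters" /v DeferFirstConnectAttempt /t REG_DWORD /d 1
    reg add "HKLM\SOFTWARE\Microsoft\MSMQ\Parameters" /v WaitTime /t REG_DWORD /d 10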

Mobicents presence server: how to register a softphone?

I have installed the Mobicents Presence server following the guide.
The server is installed, but now I am not able to proceed further: how do I test presence or register devices with XDM, PS and RLS, and how do I find which ports the services are running on?
I can see REGISTER messages arriving at the server, but the requests time out on the softphones.
Is there any documentation that I am missing?
Please help.
The Mobicents SIP Presence Server is no longer supported.
There is a GitHub issue about presence that is currently in development. Take a look at: https://github.com/Mobicents/RestComm/issues/380

TCP keepalive not working

The situation:
Postgres 9.1 on Debian Server
Scala(Java) application using the LISTEN/NOTIFY mechanism to get notified through JDBC
As there can be very long pauses (multiple days) between notifications, I ran into the problem that the underlying TCP connection was silently terminated after some time and my application stopped receiving notifications.
When googling for a solution I found that there is a parameter, tcpKeepAlive, that you can set on the connection. So I set it to true and was happy. Until the next day, when I saw that my connection was dead again.
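For reference, a minimal sketch of how the tcpKeepAlive property is passed to the PostgreSQL JDBC driver; the URL, credentials and channel name below are placeholders, not taken from the question:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.Properties;

    public class ListenExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "myuser");        // placeholder credentials
            props.setProperty("password", "secret");
            props.setProperty("tcpKeepAlive", "true");  // ask the driver to enable SO_KEEPALIVE on the socket

            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://dbhost:5432/mydb", props);

            // Subscribe to the notification channel (channel name is a placeholder).
            try (Statement st = conn.createStatement()) {
                st.execute("LISTEN my_channel");
            }
        }
    }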
As I had been suspicious, a Wireshark capture was running in parallel, which now turns out to be very useful. Just about exactly two hours after the last successful communication on the connection of interest, my application sends a keepalive packet to the database server. However, the server responds with RST, as it seems it has already closed the connection.
The net.ipv4.tcp_keepalive_time on the server is set to 7200 which is 2 hours.
Do I need to somehow enable keepalive on the server or increase the keepalive_time?
Is this the way to go about keeping my application connected?
TL;DR: My database connection gets terminated after long inactivity. Setting tcpKeepAlive didn't fix it, as the server responds with RST. What should I do?
As Craig suggested in the comments, the problem was very likely related to some piece of network hardware between the server and the application. The fix was to increase the frequency of the keepalive messages.
In my case the OS was Windows, where you have to create a registry key with the idle time in milliseconds after which the keepalive should be sent. Info on that here.
I have set it to 15 minutes which seems to have solved the issue.
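The answer does not spell out the exact registry value, but the system-wide idle time before Windows sends keepalive probes is normally KeepAliveTime under the Tcpip parameters key, in milliseconds. A sketch for 15 minutes (900000 ms), assuming that is the value the answer refers to; a reboot is usually needed for it to take effect:

    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v KeepAliveTime /t REG_DWORD /d 900000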
UPDATE:
It only seemed like it solved the issue. After about two days of program run time my connection was gone again. I switched to checking the validity of my connection every time I use it. This does not seem like it should be the solution, but it is a solution nonetheless.
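A minimal sketch of that per-use validity check with the standard JDBC Connection.isValid() method; the helper class, timeout and connection details are assumptions for illustration, not the poster's actual code:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class ConnectionGuard {
        // Returns a usable connection, reconnecting if the old one has silently died.
        public static Connection ensureValid(Connection conn) throws SQLException {
            // isValid() pings the server; the argument is a timeout in seconds.
            if (conn == null || !conn.isValid(5)) {
                if (conn != null) {
                    try { conn.close(); } catch (SQLException ignored) { }
                }
                // Placeholder URL and credentials.
                conn = DriverManager.getConnection(
                        "jdbc:postgresql://dbhost:5432/mydb", "myuser", "secret");
            }
            return conn;
        }
    }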

"Remote computer is not available." is the error thrown when reading from remote public windows server 2003 queue

I keep getting the error “Remote computer is not available.” when reading from a remote public Windows Server 2003 queue. The queue is on server B; my application is on server A.
Amazingly, server A can drop a message on any queue on server B; I just can't read a message off B.
The two servers A and B are on the same domain.
All other servers can read and write on B's queues.
It happened after I restarted server A.
I have restarted A again, in vain.
MSMQ is running on both A and B.
None of the suggestions I found online have worked.
It doesn't look like a trust issue between servers A and B. Please help.
I got the solution: the approach in my comment above worked.
The thing is, port 135 was blocked by our IT networks guys. This doesn't immediately affect pulling messages; it was only after restarting server A that B started rejecting its requests, because the RPC calls failed.
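Remote reads in MSMQ go over RPC, which first has to reach the endpoint mapper on TCP port 135. A quick way to confirm from server A that the port is reachable again is a plain TCP connect test, for example (the server name is a placeholder, and the telnet client has to be installed):

    telnet ServerB 135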
Again, thanks to MSMQ from the plumber's mate

PeopleSoft Webserver crashing, losing connection to AppServer

On our Webserver, we're seeing a ton of these errors:
Application Server last connected //psoftapp.company.net_8850
bea.jolt.ServiceException: bea.jolt.JoltRemoteService(GetCertificate)call(): Timeout\nbea.jolt.SessionException: Connection recv error\nbea.jolt.JoltException: [3] NwHdlr.recv(): Timeout Error
and on our Appserver:
PSPUBDSP_dflt.27505 (0) 07/20/11 08:13:33 (JNIUTIL): Java exception thrown: java.net.SocketException: Connection reset
I'm reading some tuning documents from PeopleSoft and I found a suggestion that I've seen in a couple of places: reducing the tcp_wait_time_interval to 60 seconds. I think I sort of understand what this is doing: it seems that network (or socket?) connections that are no longer being used are "recycled" or made available again? Can someone confirm this? Also, why are these connections unused/stale? Is it caused by people not properly logging out of the app (and just closing the browser)?
Thanks!
PSPUBDSP is part of the Integration Broker application messaging framework. You could look at the Tuxedo logs or the Integration Broker Monitor to see what is going on. You may be running a high number of messages and overloading the server, or possibly you have a message with errors that is somehow causing the crashes.