While resuming a stream-managed session on ejabberd, I get the logs below:
2016-04-11 08:53:07.430 [info] <0.5432.11>@ejabberd_c2s:terminate:1752 Closing former stream of resumed session for 54ff31587261691606060000@108.59.83.204/sender
2016-04-11 08:53:07.430 [info] <0.7868.11>@ejabberd_c2s:handle_unacked_stanzas:2814 1 stanzas were not acknowledged by 54ff31587261691606060000@108.59.83.204/sender
2016-04-11 08:53:07.430 [info] <0.7868.11>@ejabberd_c2s:handle_resume:2731 Resumed session for 54ff31587261691606060000@108.59.83.204/sender
2016-04-11 08:53:08.602 [info] <0.8227.11>@ejabberd_c2s:handle_enable:2644 Stream management with resumption enabled for 5695b87d7261697179130000@108.59.83.204/sender
2016-04-11 08:53:09.516 [info] <0.8227.11>@ejabberd_c2s:terminate:1779 ({socket_state,gen_tcp,#Port<0.138899>,<0.8244.11>}) Close session for 5695b87d7261697179130000@108.59.83.204/sender
2016-04-11 08:53:09.517 [info] <0.8227.11>@ejabberd_c2s:handle_unacked_stanzas:2814 1 stanzas were not acknowledged by 5695b87d7261697179130000@108.59.83.204/sender
2016-04-11 08:53:09.987 [info] <0.458.0>@ejabberd_listener:accept:333 (#Port<0.138210>) Accepted connection 106.196.172.221:58035 -> 10.240.0.3:5222
2016-04-11 08:53:11.157 [info] <0.8254.11>@ejabberd_c2s:wait_for_sasl_response:919 ({socket_state,gen_tcp,#Port<0.138210>,<0.8185.11>}) Accepted authentication for 5695b87d7261697179130000 by undefined from 106.196.172.221
Here 1 stanza was not acknowledged. What does this mean, and how do I correct it?
Sometimes a few delivery acknowledgements are not received by the receiving client; is this why those stanzas are getting lost?
UPDATE:
Stanzas that were not acknowledged during a session are retransmitted when the connection resumes. But in some cases stanzas are still getting lost, and as a result delivery acknowledgements are not reflected on the sender's side.
Parameters set for stream management:
Resume on timeout: 120 seconds
Resend on timeout: true
Is there any configuration I may be missing that causes some stanzas to be lost?
It means that you are using stream management and that your client did not confirm it received some stanzas. If these were messages, they will be stored for offline delivery or resent on another connection, so you should not lose any messages.
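If you want to double-check the stream management knobs, in ejabberd releases of that era they were plain options on the c2s listener in ejabberd.yml. A minimal sketch, assuming the option names from that era's documentation (verify against your version):

listen:
  -
    port: 5222
    module: ejabberd_c2s
    ## keep the detached session for 120 s waiting for a resume
    resume_timeout: 120
    ## when the timeout expires, re-route unacknowledged stanzas
    ## (messages then go to offline storage)
    resend_on_timeout: true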
Related
I use ActiveMQ Artemis 2.10.1 and am hitting a message-listener thread hang.
The thread goes into TIMED_WAITING and recovers only after a client JVM restart. The issue is intermittent and not easy to reproduce. The client library version is 2.16.0.
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.waitCompletion(LargeMessageControllerImpl.java:301)
- locked <0x000000050cd4e4f0> (a org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl)
at org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.saveBuffer(LargeMessageControllerImpl.java:275)
- locked <0x000000050cd4e4f0> (a org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl)
at org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.checkBuffer(ClientLargeMessageImpl.java:159)
at org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.getBodyBuffer(ClientLargeMessageImpl.java:91)
at org.apache.activemq.artemis.jms.client.ActiveMQBytesMessage.readBytes(ActiveMQBytesMessage.java:220)
at com.eu.jms.JMSEventBus.onMessage(JMSEventBus.java:385)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:746)
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:684)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:651)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:317)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:255)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1166)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1158)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1055)
at java.lang.Thread.run(Thread.java:748)
The client is waiting in LargeMessageControllerImpl.waitCompletion. This wait will not block forever. The code waits in a loop for the packets of a large message. As long as packets of the large message are still arriving, the client will continue to wait until all the packets have arrived; if a packet doesn't arrive within the given timeout, it will throw an error. The timeout is based on the callTimeout which is configured on the client's URL. The default callTimeout is 30000 (i.e. 30 seconds).
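For example, with the core JMS client you can raise callTimeout directly on the connection URL. A minimal sketch with a placeholder broker address:

import javax.jms.Connection;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class CallTimeoutExample {
    public static void main(String[] args) throws Exception {
        // callTimeout is in milliseconds; 60000 doubles the 30 s default.
        // "broker.example.com" is a placeholder for your broker's host.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://broker.example.com:61616?callTimeout=60000");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            // ... create a session and consume messages as usual ...
        }
    }
}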
My guess is that your client is receiving a very large message, or the network has slowed down, or perhaps a combination of the two. You can turn on TRACE logging for org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl to see the individual large-message packets arriving at the client if you want more insight into what's happening.
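How you enable that TRACE logger depends on whichever logging backend your client application uses; if, for example, it routes through log4j 1.x properties, a single line along these lines should do it (an assumption to adapt to your setup):

log4j.logger.org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl=TRACE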
To be clear, it's not surprising that thread dumps show your client waiting here as this is the most likely place for the code to be waiting while it receives a large message. It doesn't mean the client is stuck.
Keep in mind that if there is an actual network error or loss of connection the client will throw an error. Also, the client maintains an independent thread which sends & receives "ping" packets to & from the broker respectively. If the client doesn't get the expected ping response then it will throw an error as well. The fact that none of this happened with your client indicates the connection is valid.
I would recommend checking the size of the message at the head of the queue. The broker supports arbitrarily large messages, so it could potentially be many gigabytes, which the client will happily sit and receive as long as the connection is valid.
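If you want to check that from code rather than the management console, a JMS QueueBrowser can peek at the head of the queue. A minimal sketch with placeholder broker URL and queue name; note that browsing a very large message may itself pull the body over the network, so the web console's message view may be a cheaper option:

import java.util.Enumeration;
import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class PeekHeadMessage {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://broker.example.com:61616"); // placeholder address
        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("myQueue"); // placeholder queue name
            try (QueueBrowser browser = session.createBrowser(queue)) {
                Enumeration<?> messages = browser.getEnumeration();
                if (messages.hasMoreElements()) {
                    Message head = (Message) messages.nextElement();
                    if (head instanceof BytesMessage) {
                        // getBodyLength() reports the body size in bytes
                        System.out.println("Head-of-queue body size: "
                                + ((BytesMessage) head).getBodyLength() + " bytes");
                    }
                }
            }
        }
    }
}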
I use FusionPBX on core FreeSWITCH to build my PBX server.
My version:
FreeSWITCH version: 1.10.2-release-14-f7bdd3845a~64bit (-release-14-f7bdd3845a 64bit)
It was working fine until last month, but problems started once user registrations reached 1000.
I have checked the FreeSWITCH log (debug level); FreeSWITCH is still working.
I have checked the PostgreSQL log; it is still working too.
But clients get disconnected (WebRTC from the web using SIP.js, and Zoiper using TCP) and cannot reconnect to FreeSWITCH to register, so they cannot make any calls at this point.
At this point the log shows "Maximum Calls In Progress".
I have tried increasing max sessions to 5000 and sessions per second to 1000, flushing the cache, and restarting FreeSWITCH, but it is still not working (see the sketch of these two settings below).
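For reference, the two limits I raised live in switch.conf.xml and are normally set like this (illustrative values, not my exact file):

<configuration name="switch.conf" description="Core Configuration">
  <settings>
    <!-- session cap; when reached, new calls are rejected with
         "Maximum Calls In Progress" -->
    <param name="max-sessions" value="5000"/>
    <!-- throttle on how many new sessions may start per second -->
    <param name="sessions-per-second" value="1000"/>
  </settings>
</configuration>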
Here is my switch.conf.xml:
Here is my postgresql.conf:
Here is my log from when the server went down: fs_log
You can see me restart FreeSWITCH in this log:
2020-07-29 14:39:08.291394 [INFO] switch_core.c:2879 Shutting down ca289c03-0617-46bf-a7af-eda4a4fe2fbb
2020-07-29 14:39:08.291394 [NOTICE] switch_core_session.c:407 Hangup sofia/internal/1100365@125.212.xxx.xxx [CS_NEW] [SYSTEM_SHUTDOWN]
Please take a look and help me solve this.
I have enabled message delivery logs on our Artemis instances using broker plugins, according to this page. To draw some analytics by mapping end-to-end message delivery and receipt timings between publisher -> Artemis server -> subscriber, I'm trying to see whether the message contents being logged to the Artemis log file (specifically the message ID) can be accessed by our publishing and subscribing .NET applications. Below are entries from artemis.log for a message with one message ID, covering the various events.
20:50:24,552 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010: routed message with ID: 2231685496, result: OK
20:50:24,552 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841009: sent message with ID: 2231685496, session name: 9d9c035b-176e-11ea-ab75-020ff9805db8, session connectionID: 68a7ec34, result: OK
20:50:24,553 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841012: delivered message with message ID: 2231685496, to consumer on address: News.Source.T, queue: f0586137-5ad3-4c77-b2c7-5b68daad672c, consumer sessionID: fcbcd194-3295-11ea-a2c0-0a89c5c4c02a, consumerID: 0
20:50:24,554 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841014: acknowledged message ID: 2231685496, messageRef sessionID: fcbcd194-3295-11ea-a2c0-0a89c5c4c02a, with messageRef consumerID: 0, messageRef QueueName: f0586137-5ad3-4c77-b2c7-5b68daad672c, with ackReason: NORMAL
We are using AMQPNetLite for this and haven't found anything that would let us tie the messages we send and receive to the entries written to artemis.log. I've been trying to work out whether there is a way to get hold of the message ID from these logs in the publisher application. Any pointers on this topic are much appreciated.
Messaging clients can't get data from the broker's log files since that data is just in a text-based log and not actually in the message broker itself. However, you could use something like the NotificationActiveMQServerPlugin which, instead of logging this information, will actually send messages with this information to the management notification address. Clients can create subscriptions on the management notification address and receive the messages and then take action based on that information. The notification messages may not contain all the information you need, but you can easily extend this class to create your own plugin which includes all the information you need.
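For reference, registering that plugin looks roughly like this in broker.xml (the class name is from the Artemis codebase; treat the property key as an assumption to verify against your version's broker-plugins documentation):

<broker-plugins>
   <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.NotificationActiveMQServerPlugin">
      <!-- emit notifications for deliveries instead of only logging them -->
      <property key="SEND_DELIVERED_NOTIFICATIONS" value="true"/>
   </broker-plugin>
</broker-plugins>

Your AMQPNetLite clients could then attach a receiver to the management notification address (activemq.notifications by default) and correlate events using the message ID carried in the notification's properties.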
Can anyone tell me what these log messages mean? Has a session been terminated, and why?
2016-01-20 15:48:24.651 [info] <0.477.0>@ejabberd_listener:accept:333 (#Port<0.16235>) Accepted connection 192.16.35.6:1432 -> 28.4.5.2
2016-01-20 15:48:27.497 [info] <0.1411.0>@ejabberd_c2s:wait_for_feature_request:740 ({socket_state,p1_tls,{tlssock,#Port<0.16235>,#Port<0.16236>},<0.1410.0>}) Accepted authentication for 14512843168518 by ejabberd_auth_odbc from 103.233.119.62
2016-01-20 15:48:27.903 [info] <0.1411.0>@ejabberd_c2s:wait_for_session:1106 ({socket_state,p1_tls,{tlssock,#Port<0.16235>,#Port<0.16236>},<0.1410.0>}) Opened session for 14512843168518@cndivneofveofv/androidjc1PGFLG
2016-01-20 15:48:27.906 [info] <0.1355.0>@ejabberd_c2s:terminate:1768 ({socket_state,p1_tls,{tlssock,#Port<0.16227>,#Port<0.16228>},<0.1354.0>}) Replaced session for 14512843168518@cedefjwojffj/androidjc1PGFLG
Yes, the user 14512843168518@devchat.drooly.co had an open session with resource androidjc1PGFLG, and this session was terminated and replaced by a new session with the same resource.
This is a feature of XMPP: if a user's client has lost its connection to the XMPP server, but the server hasn't detected it yet, the client can force the server to terminate the previous connection by connecting again and specifying the same resource.
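For illustration, with a Java client such as Smack 3.x the replacement is triggered simply by logging in again with the same resource string; a sketch with a placeholder host and password:

import org.jivesoftware.smack.XMPPConnection;

// A second client logging in with the SAME resource string forces the
// server to terminate and replace the first session, producing the
// "Replaced session" log line above.
XMPPConnection connection = new XMPPConnection("example.com"); // placeholder host
connection.connect();
connection.login("14512843168518", "password", "androidjc1PGFLG");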
I've implemented a chat client that stays online indefinitely and sends messages to different XMPP users. The client is connected to an ejabberd server and is implemented in Java using Smack.
After exactly 1 minute, the client goes offline and then comes back online after about 15 seconds. The log that appears on the client's console follows:
java.io.EOFException: no more data available - expected end tag </stream:stream> to close start tag <stream:stream> from line 1, parser stopped on END_TAG seen...erd.cloudservicesplatform.biz/cspb2\' id=\'t5f8L-12\' type=\'result\'/>... #1:1264
at org.xmlpull.mxp1.MXParser.fillBuf(MXParser.java:3035)
at org.xmlpull.mxp1.MXParser.more(MXParser.java:3046)
at org.xmlpull.mxp1.MXParser.nextImpl(MXParser.java:1144)
at org.xmlpull.mxp1.MXParser.next(MXParser.java:1093)
at org.jivesoftware.smack.PacketReader.parsePackets(PacketReader.java:279)
at org.jivesoftware.smack.PacketReader.access$000(PacketReader.java:44)
at org.jivesoftware.smack.PacketReader$1.run(PacketReader.java:70)
Failed to parse extension packet in Presence packet.
java.io.EOFException: no more data available - expected end tag </stream:stream> to close start tag <stream:stream> from line 1, parser stopped on END_TAG seen
...<query xmlns=\'jabber:iq:version\'/>\n</iq>... #6:6
at org.xmlpull.mxp1.MXParser.fillBuf(MXParser.java:3035)
at org.xmlpull.mxp1.MXParser.more(MXParser.java:3046)
at org.xmlpull.mxp1.MXParser.nextImpl(MXParser.java:1144)
at org.xmlpull.mxp1.MXParser.next(MXParser.java:1093)
at org.jivesoftware.smack.PacketReader.parsePackets(PacketReader.java:279)
at org.jivesoftware.smack.PacketReader.access$000(PacketReader.java:44)
at org.jivesoftware.smack.PacketReader$1.run(PacketReader.java:70)
Failed to parse extension packet in Presence packet.
java.io.EOFException: no more data available - expected end tag </stream:stream> to close start tag <stream:stream> from line 1, parser stopped on END_TAG seen
...=\'cspbox107#dev-ejabberd.cloudservicesplatform.biz\'/></query></iq>... #6:292
at org.xmlpull.mxp1.MXParser.fillBuf(MXParser.java:3035)
at org.xmlpull.mxp1.MXParser.more(MXParser.java:3046)
at org.xmlpull.mxp1.MXParser.nextImpl(MXParser.java:1144)
at org.xmlpull.mxp1.MXParser.next(MXParser.java:1093)
at org.jivesoftware.smack.PacketReader.parsePackets(PacketReader.java:279)
at org.jivesoftware.smack.PacketReader.access$000(PacketReader.java:44)
at org.jivesoftware.smack.PacketReader$1.run(PacketReader.java:70)
Failed to parse extension packet in Presence packet.
Smack up to and including 3.3.0 threw an exception when a malformed presence stanza was received. This behavior was changed with SMACK-390.
You should:
update to the latest Smack version (i.e. 3.4.1 at the time of writing)
inform the developer of the library/program where the malformed stanza originates about the bug
Edit: After taking a closer look at the logs, and since you didn't mention which Smack version you use, it also seems likely that you simply received a malformed stanza that caused the disconnect (this would be a non-recoverable error, even on Smack 3.4.1).
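Either way, your client can at least detect and survive these drops instead of staying offline. A minimal Smack 3.x sketch (placeholder host and credentials) that enables automatic reconnection and logs the reason for a disconnect:

import org.jivesoftware.smack.ConnectionConfiguration;
import org.jivesoftware.smack.ConnectionListener;
import org.jivesoftware.smack.XMPPConnection;

ConnectionConfiguration config = new ConnectionConfiguration("example.com", 5222); // placeholder host
config.setReconnectionAllowed(true); // let Smack reconnect automatically after an error

XMPPConnection connection = new XMPPConnection(config);
connection.connect();
connection.login("user", "password");

connection.addConnectionListener(new ConnectionListener() {
    public void connectionClosed() { }
    public void connectionClosedOnError(Exception e) {
        // fires on errors like the EOFException above
        System.err.println("Connection dropped: " + e);
    }
    public void reconnectingIn(int seconds) { }
    public void reconnectionSuccessful() { }
    public void reconnectionFailed(Exception e) { }
});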