Messages lost while the receiver's presence is not updated in the Openfire server - XMPP

I have browsed this forum searching for a solution to this problem but couldn't find one. My issue is the same as these:
https://vanity-igniterealtime.jiveon.com/message/225504
https://igniterealtime.org/issues/si/jira.issueviews:issue-html/OF-161/OF-161.html
I have configured server-side ping requests at a 30-second interval, but 30 seconds is still a long time; a lot of messages get lost during that window.
XEP-0184 is more of a client-side delivery receipt mechanism. Is it possible to get the acknowledgement on the server as well?
Is it possible to store every message in Openfire until we receive the delivery receipt from the receiver, and delete it from Openfire once the receipt arrives?
Please suggest how to prevent this message loss.

Right now there is no built-in solution in Openfire 3.9.3.
What I have done is create a custom plugin:
* It intercepts each message packet and adds it to a custom table until an ack packet is received from the receiver (see the sketch below).
This way we avoid the message loss.
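A minimal sketch of what such an interceptor could look like (Openfire plugin code; the class name and the in-memory map that stands in for the custom table are illustrative only):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    import org.jivesoftware.openfire.interceptor.InterceptorManager;
    import org.jivesoftware.openfire.interceptor.PacketInterceptor;
    import org.jivesoftware.openfire.interceptor.PacketRejectedException;
    import org.jivesoftware.openfire.session.Session;
    import org.xmpp.packet.Message;
    import org.xmpp.packet.Packet;

    public class PendingMessageInterceptor implements PacketInterceptor {

        // In the real plugin this is the custom database table; a map stands in here.
        private final ConcurrentMap<String, String> pending = new ConcurrentHashMap<>();

        public void register() {
            InterceptorManager.getInstance().addInterceptor(this);
        }

        @Override
        public void interceptPacket(Packet packet, Session session,
                                    boolean incoming, boolean processed)
                throws PacketRejectedException {
            // Only look at chat messages that Openfire has finished routing to a client.
            if (processed && !incoming && packet instanceof Message) {
                Message message = (Message) packet;
                if (message.getType() == Message.Type.chat
                        && message.getID() != null && message.getBody() != null) {
                    // Keep a copy until the receiver's ack/receipt is seen; a separate
                    // handler (not shown) removes the entry when the ack packet arrives.
                    pending.put(message.getID(), message.toXML());
                }
            }
        }
    }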

Related

Trigger something only after a Resend Request (if any) is satisfied with QuickFIX/J

Our system generates some messages (an unsolicited cancel, for example) that it needs to send to the other party after a disconnect/lost connection, as soon as the connection recovers.
The problem is that we trigger sending those in onLogon(), but if there is a Resend Request pending, that is too early, and we have had problems (maybe just because of how the other end is implemented) when we had too many messages to send (hundreds).
I'm aware that a ResendRequest may never come and that it is impossible to know this without simply waiting, but what would be the best approach in QuickFIX/J to send our messages as soon as possible, yet only after sequence numbers are synchronized?
EDIT: I'm trying to solve this using FIX 4.2. FIX 4.4 actually introduced http://www.onixs.biz/fix-dictionary/4.4/tagNum_789.html (NextExpectedMsgSeqNum), which would solve my problem (as long as the other party sends this optional tag too).
Thanks
My 10 cents: it sounds like you're trying to treat two scenarios in one go, and that's difficult. Do one thing at a time. For example, if it's your network that causes the disconnect, then before your side even knows it has disconnected, your clients will send resend requests, right? Meanwhile, if a client disconnects but you don't, then when they reconnect you gap-fill. You've got to look carefully at the scenarios. Yes, a resend request may not come at all; it all depends on how the client configures things on their side. Maybe, as in this question, you want to send sequence resets because the messages you're trying to send are actually quotes, right? I mean, what kind of messages are you trying to resend after a disconnect?
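For what it's worth, one way to approximate "send only after sequence numbers are synchronized" on FIX 4.2 is to hold the pending sends back for a short window after logon and watch the admin channel for a ResendRequest. This is only a sketch of that idea, not a built-in QuickFIX/J feature; the delay values and the commented-out sendPendingMessages() hook are assumptions:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    import quickfix.Application;
    import quickfix.DoNotSend;
    import quickfix.FieldNotFound;
    import quickfix.IncorrectDataFormat;
    import quickfix.IncorrectTagValue;
    import quickfix.Message;
    import quickfix.RejectLogon;
    import quickfix.SessionID;
    import quickfix.UnsupportedMessageType;
    import quickfix.field.MsgType;

    public class DeferredSendApplication implements Application {

        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private volatile boolean resendInProgress = false;

        @Override
        public void onLogon(SessionID sessionId) {
            // Don't send immediately; give the counterparty a moment to issue a ResendRequest.
            scheduler.schedule(() -> trySendPending(sessionId), 2, TimeUnit.SECONDS);
        }

        @Override
        public void fromAdmin(Message message, SessionID sessionId)
                throws FieldNotFound, IncorrectDataFormat, IncorrectTagValue, RejectLogon {
            // MsgType "2" is a ResendRequest: hold our own sends back until the gap is filled.
            if ("2".equals(message.getHeader().getString(MsgType.FIELD))) {
                resendInProgress = true;
            }
        }

        private void trySendPending(SessionID sessionId) {
            if (resendInProgress) {
                // Clearing the flag once the gap is filled is left out of this sketch;
                // simply re-check a little later.
                scheduler.schedule(() -> trySendPending(sessionId), 2, TimeUnit.SECONDS);
                return;
            }
            // sendPendingMessages(sessionId);  // application-specific, assumed to exist
        }

        @Override public void onCreate(SessionID sessionId) { }
        @Override public void onLogout(SessionID sessionId) { }
        @Override public void toAdmin(Message message, SessionID sessionId) { }
        @Override public void toApp(Message message, SessionID sessionId) throws DoNotSend { }
        @Override public void fromApp(Message message, SessionID sessionId)
                throws FieldNotFound, IncorrectDataFormat, IncorrectTagValue, UnsupportedMessageType { }
    }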

How to prevent sending the same data to different clients in a REST API GET?

I have 15 worker clients and one master connected over the internet. Jobs and data are passed through a REST API in JSON format.
Jobs are not restricted to any particular client. Any worker can query for available jobs at a regular interval (say 30 seconds), process them and update their status.
In this scenario, how can I prevent the same records from being sent to different clients on a GET request?
The following are my approaches to overcoming this issue:
1. Take the top 5 unprocessed records from the database, mark them as SENT and expose them via REST GET.
But the problem is that this creates inconsistency. Sometimes the client doesn't get the data due to a network connectivity issue, yet on the server the records are already marked as SENT, so no other client can get them. They remain SENT forever.
2. Get the list from the server, then reply back to the server with the list of job IDs that were received. But within that time gap, other clients may also get the same set of jobs.
You've stumbled upon a fundamental problem in distributed systems: there is no way to know for sure whether the other side received your message. You can certainly improve the situation with TCP and ack messages. But if you never get the ACK, did the message never arrive, did it arrive but the recipient die before processing it, or did the recipient send the ACK and the ACK get dropped?
That means you need to design your system to handle receiving the same data more than once.
You offer two partial solutions; if you combine them, your solution starts to look like how SQS works. Mark the item as pending_ack with a timestamp. After the client replies, mark it as sent. Any pending_ack records past a certain time period become eligible to be resent (see the sketch below).
Pick your time period to allow for slow networks and slow clients, and it boils down to only sending duplicates when you really don't know whether the client died.
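Roughly, the combined approach could look like the sketch below. It is kept in-memory for brevity (a real system would back this with the database), and the Job/Status names and the 60-second timeout are just placeholders:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Collection;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    class JobDispatcher {

        enum Status { UNPROCESSED, PENDING_ACK, SENT }

        static class Job {
            final String id;
            Status status = Status.UNPROCESSED;
            Instant dispatchedAt;
            Job(String id) { this.id = id; }
        }

        // How long a job may sit in PENDING_ACK before it becomes eligible again.
        private static final Duration ACK_TIMEOUT = Duration.ofSeconds(60);

        private final Map<String, Job> jobs = new LinkedHashMap<>();

        // Called by the GET endpoint: hand out up to 'limit' jobs and start the clock.
        synchronized List<Job> fetchJobs(int limit) {
            Instant now = Instant.now();
            List<Job> eligible = jobs.values().stream()
                    .filter(j -> j.status == Status.UNPROCESSED
                            || (j.status == Status.PENDING_ACK
                                && j.dispatchedAt.plus(ACK_TIMEOUT).isBefore(now)))
                    .limit(limit)
                    .collect(Collectors.toList());
            for (Job job : eligible) {
                job.status = Status.PENDING_ACK;   // not SENT yet; the client must confirm
                job.dispatchedAt = now;
            }
            return eligible;
        }

        // Called when a client replies with the job IDs it actually received.
        synchronized void acknowledge(Collection<String> jobIds) {
            for (String id : jobIds) {
                Job job = jobs.get(id);
                if (job != null) {
                    job.status = Status.SENT;
                }
            }
        }
    }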
Maybe you should reconsider your approach to blocking resources. The REST architecture, by definition, is not obliged to save information about the client. Instead, you may want to consider optimistic concurrency control (http://en.wikipedia.org/wiki/Optimistic_concurrency_control); a small sketch follows.
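As an illustration of that idea: claim a job with a conditional UPDATE, so only one worker can win the race for a record. Table and column names here are assumptions:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    class JobClaimDao {

        // Returns true only if this worker actually claimed the row.
        boolean tryClaim(Connection conn, long jobId, String workerId) throws SQLException {
            String sql = "UPDATE jobs SET status = 'PENDING_ACK', worker_id = ?, claimed_at = NOW() "
                       + "WHERE id = ? AND status = 'UNPROCESSED'";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, workerId);
                ps.setLong(2, jobId);
                return ps.executeUpdate() == 1;   // 0 rows means another worker won the race
            }
        }
    }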

Where can I find an NServiceBus 4.1 message during an SLR retry?

We are currently implementing a new system. It now happens that the content of my message is wrong and gets rejected by the connecting system (we transfer data over a REST service). I can edit my message once it is in the error queue and re-queue it. But while NServiceBus is trying to re-send it (which will of course fail every time), I can't seem to find the message to correct it for the next time around. Any idea where the message is "parked" during SLR?
The message gets moved to our timeout storage, which is by default RavenDB.

XMPP Framework maximum messages received

I'm making an XMPP client and I would like to know whether there is some timer or memory cache for received messages, because I send 1,000 messages to my client, the server sends all 1,000 fine, but my client only receives 300.
Possible Solution:
...Overcoming those limits
Every time, HTTP has a solution for "fixing" XMPP.
The first two limits can be fixed by running a WebDAV server. Upload to the WebDAV server, share the link. That’s a solution everyone can do without XMPP client support. Of course, having a way to do that transparently with client and server support, with signed URLs (à la S3) would greatly improve the process.
For the connected socket problem, there’s BOSH. That’s basically running XMPP over HTTP. With the added bonus of having the server retaining the “connection” for a couple of minutes – that fixes my iPhone problem. Once I relaunch the client in the two minutes window, all the pending messages are delivered.
If your receiver is getting only 300 messages, they might be offline messages. If that is the case, you need to increase the per-user offline message storage limit in your admin panel.
I would suggest going for message archiving and retrieval instead of depending on offline messages (see the sketch below).
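If you go the archiving route (XEP-0313, Message Archive Management), the client can pull missed messages from the server archive after reconnecting instead of relying on offline storage. The sketch below uses Smack's MamManager purely to show the idea; the original question uses XMPPFramework on iOS, and the exact API depends on the Smack version:

    import org.jivesoftware.smack.XMPPConnection;
    import org.jivesoftware.smackx.mam.MamManager;
    import org.jxmpp.jid.EntityBareJid;
    import org.jxmpp.jid.impl.JidCreate;

    public class ArchiveFetcher {

        public void fetchRecent(XMPPConnection connection) throws Exception {
            MamManager mam = MamManager.getInstanceFor(connection);
            if (!mam.isSupported()) {
                return; // server does not advertise XEP-0313 support
            }
            // Placeholder JID; query the last page of the archive for this contact.
            EntityBareJid peer = JidCreate.entityBareFrom("friend@example.com");
            MamManager.MamQuery query = mam.queryMostRecentPage(peer, 50);
            query.getMessages().forEach(m -> System.out.println(m.getBody()));
        }
    }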
Hope this helps you :)

How can I get QuickFix to process messages that come in from a resend request?

I am writing an acceptor application and using a persistent FIX session. I am trying to write a recovery mode, so that if I go offline or my program restarts, then when I reconnect I can reprocess all the messages sent to me during the day and get back to the current state.
To do this, when I start up I send a resend request for all messages to the server. They fire back all the relevant messages, marked with PossDupFlag=Y and PossResend=Y. Before each message, they send a sequence reset for the repeated message they are about to send.
The problem, though, is that these messages do not seem to be processed by my message cracker. Neither fromAdmin nor fromApp gets these messages. I assume they are being ignored because of the dup and/or resend flags. So is there a way to tell QuickFIX that I want to see these messages?
On that note, if anyone has any recommendations on better recovery processes, I would be open to them.
Thanks.
There are at least a couple of potential problems with this recovery strategy. The first is that it's not very friendly to your trading counterparty. If you only receive a small number of messages during your session then it may not be an issue, but if you receive hundreds of thousands of messages then your counterparty might complain about the massive resends.
The other issue is that message resend is intended for error recovery and is managed by the session protocol layer. In QuickFIX/J (and other FIX engines) the session maintains recovery state in addition to sending the ResendRequest automatically when it detects a sequence number gap. Your approach might work if you reset the next expected incoming sequence number to 1. When the session receives the next message with a higher sequence number, it will detect the gap and request the missing messages. If the messages are validated, they will be forwarded to the application layer with the PossDup flag set. If you send the ResendRequest message yourself, the behavior is undefined because the session state will not have been set up properly.
I recommend using a message log implementation to store your incoming messages in a form you can use for recovery when your application starts; a rough sketch follows. You can look at the implementation of the existing message logs (FileLog, JdbcLog) to get some ideas.
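The idea is to plug in your own quickfix.Log so every incoming raw FIX string is captured somewhere you can replay from at startup. The in-memory list below is a stand-in for real persistence, and depending on your QuickFIX/J version the LogFactory interface may also declare a deprecated no-arg create():

    import java.util.ArrayList;
    import java.util.List;

    import quickfix.Log;
    import quickfix.LogFactory;
    import quickfix.SessionID;

    public class RecoveryLogFactory implements LogFactory {

        public Log create(SessionID sessionID) {
            return new RecoveryLog(sessionID);
        }

        static class RecoveryLog implements Log {

            private final SessionID sessionID;
            // A real implementation would persist these (database, append-only file, ...).
            private final List<String> incoming = new ArrayList<>();

            RecoveryLog(SessionID sessionID) {
                this.sessionID = sessionID;
            }

            public void onIncoming(String message) {
                incoming.add(message);           // keep the raw FIX for replay on restart
            }

            public void onOutgoing(String message) { /* not needed for recovery */ }
            public void onEvent(String text) { }
            public void onErrorEvent(String text) { }
            public void clear() { incoming.clear(); }
        }
    }

You would then hand this factory to your acceptor or initiator, either in place of or composed with the standard FileLogFactory/JdbcLogFactory (for example via a composite log factory), so logging and recovery capture happen side by side.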
The behaviour occurs because the engine's persistence system tells it that the received messages are resent messages, and so (per the FIX protocol specification) they are discarded. Here we save FIXML strings into our database to provide a similar recovery ability to the one you describe (they are also written to XML files on disk for other reasons). I don't believe there is any way to tell QuickFIX that you want to see duplicate messages, but it is probably better to use a different form of persistence anyway to save on connection overhead. QuickFIX does provide a way of writing messages to file as they come in, if that helps.
I too had the same issue, and what Frank says is absolutely correct.
Just use the method below to set the expected target sequence number to the BeginSeqNo of the desired resend request:
getSession()->setNextTargetMsgSeqNum(atoi(seq.c_str()));
The engine then notices that the counterparty's sequence numbers are far ahead of the expected target number, automatically sends the resend request, and all of the replayed messages are delivered through the onMessage callback as usual.
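For completeness, the QuickFIX/J equivalent of that C++ call would look roughly like this (sessionID and beginSeqNo being whatever your application has at hand):

    import java.io.IOException;

    import quickfix.Session;
    import quickfix.SessionID;

    public class SequenceRewind {

        public static void rewindTo(SessionID sessionID, int beginSeqNo) throws IOException {
            Session session = Session.lookupSession(sessionID);
            // Lower the expected incoming sequence number; the engine then detects the
            // gap on the next incoming message and issues the ResendRequest itself.
            session.setNextTargetMsgSeqNum(beginSeqNo);
        }
    }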