If the destination mail server is temporarily down, how long will my own mail server keep retrying to deliver the mail before giving up? I would love to know the default give-up time, if it is mentioned in an RFC (I could not figure that out myself).
The Sendmail documentation, in its "Message timeouts" section, references RFC 1123:
RFC1123
Requirements for Internet Hosts -- Application and Support
...
5.3.1 SMTP Queueing Strategies
...
5.3.1.1 Sending Strategy
... Retries continue until the message is transmitted or the sender gives up; the give-up time generally needs to be at least 4-5 days. ...
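In sendmail itself, the give-up time is controlled by the Timeout.queuereturn option, which typically defaults to 5 days, in line with the RFC's recommendation. A minimal sketch of how it is commonly set in the m4 configuration; the 4h/5d values below are the usual defaults, not requirements:

    dnl In sendmail.mc; adjust the values to taste
    define(`confTO_QUEUEWARN', `4h')dnl
    define(`confTO_QUEUERETURN', `5d')dnl

In the generated sendmail.cf these become Timeout.queuewarn=4h and Timeout.queuereturn=5d, so checking that file tells you what your installation actually uses.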
If I set the ejabberd IQ timeout threshold to 1 second, what would the downside be?
The default is 60 seconds: every 60 seconds the ejabberd XMPP server pings the device to get a response, and if there is no response, the server kills the socket.
I want to set the ping interval to 1 second. Would there be a downside to that?
The reason I want such a short interval is that if a device suddenly loses its internet connection, the socket is still listed as connected. So I want a fast response time to see whether a user is actually connected or not.
The downside is that your server and client will consume a huge amount of resources (to parse all the packets) and bandwidth.
It is a very bad idea to set the XMPP ping interval to 1 second.
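For reference, this is roughly how the ping behaviour is configured in recent ejabberd versions via mod_ping (YAML config); the 60-second value mirrors the default described above, and the exact option names depend on your ejabberd version:

    modules:
      mod_ping:
        send_pings: true
        ping_interval: 60       # seconds between server-side pings
        ping_ack_timeout: 32    # how long to wait for the pong
        timeout_action: kill    # drop the socket when no pong arrives

Lowering ping_interval to 1 would multiply that traffic by 60 for every connected client, which is exactly the resource and bandwidth problem described above.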
I have browsed this forum searching for a solution to this problem but couldn't find one. My issue is the same as these:
https://vanity-igniterealtime.jiveon.com/message/225504
https://igniterealtime.org/issues/si/jira.issueviews:issue-html/OF-161/OF-161.html
I have configured the server-side ping request for 30 seconds, but even 30 seconds is a long time: during that window lots of messages get lost.
XEP-0184 is more of a client-side delivery receipt mechanism. Is it possible to get the acknowledgement on the server as well?
Is it possible to store all the messages in Openfire until we receive the delivery receipt from the receiver, and delete each message from Openfire once we get its delivery receipt?
Please suggest how to prevent this message loss.
Right now there is no built-in solution in Openfire 3.9.3.
What I have done is create a custom plugin:
* It intercepts each message packet and adds it to a custom table until an ack packet is received from the receiver.
This way we avoid the message loss.
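A minimal sketch of such an interceptor, assuming the Openfire 3.9.x plugin API and XEP-0184 receipts; the class name and the in-memory map (standing in for the custom table) are my own, not part of Openfire:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import org.dom4j.Element;
    import org.jivesoftware.openfire.interceptor.InterceptorManager;
    import org.jivesoftware.openfire.interceptor.PacketInterceptor;
    import org.jivesoftware.openfire.interceptor.PacketRejectedException;
    import org.jivesoftware.openfire.session.Session;
    import org.xmpp.packet.Message;
    import org.xmpp.packet.Packet;

    // Keeps a copy of every chat message until a XEP-0184 <received/> receipt
    // comes back from the receiver, then drops the stored copy. The map stands
    // in for the custom database table mentioned above.
    public class PendingMessageInterceptor implements PacketInterceptor {

        private final Map<String, String> pending = new ConcurrentHashMap<String, String>();

        public void register() {
            InterceptorManager.getInstance().addInterceptor(this);
        }

        @Override
        public void interceptPacket(Packet packet, Session session,
                                    boolean incoming, boolean processed)
                throws PacketRejectedException {
            if (processed || !(packet instanceof Message)) {
                return;
            }
            Message message = (Message) packet;
            Element receipt = message.getChildElement("received", "urn:xmpp:receipts");

            if (receipt != null) {
                // Receipt arrived: the original message was delivered, forget it.
                pending.remove(receipt.attributeValue("id"));
            } else if (message.getBody() != null && message.getID() != null) {
                // Ordinary chat message: keep it until its receipt arrives,
                // so it can be resent if the receiver silently dropped off.
                pending.put(message.getID(), message.toXML());
            }
        }
    }

A real plugin would persist the map to the custom table and resend the stored stanzas when the receiver reconnects or after a timeout.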
In an online farm-like game I need to validate on the server the client's long-running processes, such as building a house. Say the house needs 10 minutes to be built. The client sends a "Start build" message asynchronously over a TCP connection when it starts building the house and a "Finish build" message when it thinks the house is built. On the server I need to validate that the house took at least 10 minutes to build.

The issue is that the server doesn't know when the client sent the "start build" and "finish build" messages. It knows when each message was received, but there is network lag, there are possible network failures, and messages can be long enough to span a few TCP segments. As I understand it, the time the client takes to send a message can be up to a few minutes and depends on the client's TCP configuration.

The question is: is there a way to know when a message was issued on the client side? If not, how can I bound the period in which the message was sent, possibly with some server TCP configuration? A timeout within which the server either receives the message or fails would be fine. Any other solutions to the main task that I may not have thought of are also welcome.
Thanks in advance!
If I understand you correctly, your main issue is not related to TCP itself (the described scenario could also happen with UDP) but to the chronology of your messages and to ensuring that the timeline has not been faked.
So the only case you want to avoid is the following:
STARTED sent at 09:00:00 and received at 09:00:30 (higher latency)
FINISHED sent at 09:10:00 and received at 09:10:01 (lower latency)
To the server it looks as if only about 9.5 minutes were spent constructing the virtual building, yet the client didn't cheat; it was only that the first message had a higher latency than the second.
The other way around would be no problem:
STARTED sent at 09:00:00 and received at 09:00:01 (lower latency)
FINISHED sent at 09:10:00 and received at 09:10:30 (higher latency)
or
STARTED sent at 09:00:00 and received at 09:00:10 (equal latency)
FINISHED sent at 09:10:00 and received at 09:10:10 (equal latency)
In both cases at least 10 minutes elapsed between the receipt of the two messages.
Unfortunately there is no way to prevent the client from cheating by using timestamps or the like. It does not matter whether your client writes the timestamps into the messages or the protocol does it for you. There are two reasons for that:
* Even if your client does not cheat, the system clocks of client and server might not be in sync.
* All data written into a network packet are just bytes and can be manipulated. Someone could use a raw socket and fake the entire TCP layer.
So the only thing that is certain is the time at which the messages were received by the server. A simple solution: if the server thinks not enough time has elapsed when it receives the FINISHED message, it replies with some sort of RETRY message containing the remaining time. The client can then adjust the construction animation and send the FINISHED message again once that time is up.
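A minimal server-side sketch of that idea in Java; the message names (STARTED/FINISHED/RETRY) and the ten-minute build time are assumptions taken from the discussion above, and only server receive times are trusted:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Validates build durations using server receive times only; client
    // timestamps are never consulted.
    public class BuildValidator {

        private static final Duration BUILD_TIME = Duration.ofMinutes(10);

        private final Map<String, Instant> started = new ConcurrentHashMap<>();

        // Called when the STARTED message for a building is received.
        public void onStarted(String buildingId) {
            started.put(buildingId, Instant.now());
        }

        // Called when the FINISHED message is received. Returns Duration.ZERO
        // if the build is accepted, otherwise the time left to put into a
        // RETRY reply so the client can resend FINISHED later.
        public Duration onFinished(String buildingId) {
            Instant begin = started.get(buildingId);
            if (begin == null) {
                // No STARTED was seen (lost message or cheating client):
                // restart the clock from now.
                started.put(buildingId, Instant.now());
                return BUILD_TIME;
            }
            Duration elapsed = Duration.between(begin, Instant.now());
            if (elapsed.compareTo(BUILD_TIME) < 0) {
                return BUILD_TIME.minus(elapsed);
            }
            started.remove(buildingId);
            return Duration.ZERO;
        }
    }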
We have two applications using the QuickFIX engine, both running on the same machine.
Sometimes we see that the session ends due to lack of heartbeats.
How can that be, since both are running on the same machine?
The FIX heartbeat mechanism has nothing to do with where the applications communicating over the FIX protocol run. If you see the session being dropped due to lack of heartbeats, you have to determine which side did not send the heartbeat (it will also fail to respond to a «Test Request» message, if any) and why that happened. Possible reasons are:
Server and client have different heartbeat interval settings, the server does not honor the client's heartbeat interval (field #108, HeartBtInt, in the «Logon» message), and the test request/response logic is broken or turned off (see the settings sketch after this list).
Underlying transport errors (e.g. TCP/IP errors or UDP packet drops).
Other software/hardware bugs.
Something else.
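To rule out the first reason, compare the HeartBtInt setting on both sides. A sketch of the initiator's QuickFIX settings file; the CompIDs, host, and port are illustrative:

    # Initiator (client) side settings
    [DEFAULT]
    ConnectionType=initiator
    StartTime=00:00:00
    EndTime=00:00:00
    # Sent to the acceptor as tag 108 (HeartBtInt) in the Logon message.
    HeartBtInt=30
    ReconnectInterval=5

    [SESSION]
    BeginString=FIX.4.4
    SenderCompID=CLIENT
    TargetCompID=SERVER
    SocketConnectHost=127.0.0.1
    SocketConnectPort=9876

The acceptor normally adopts the interval it receives in tag 108 of the Logon, so if the session still drops with matching values, check the message log for unanswered Test Requests and look at the transport layer.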
Hope it helps. Good Luck!
I'm making an XMPP client and I would like to know whether there is some timer or memory cache for received messages, because I send 1000 messages to my client; the server sends all 1000 OK, but my client only receives 300.
Possible Solution:
...Overcoming those limits
Every time HTTP has a solution for “fixing” XMPP.
The first two limits can be fixed by running a WebDAV server. Upload to the WebDAV server, share the link. That’s a solution everyone can do without XMPP client support. Of course, having a way to do that transparently with client and server support, with signed URLs (à la S3) would greatly improve the process.
For the connected socket problem, there’s BOSH. That’s basically running XMPP over HTTP. With the added bonus of having the server retaining the “connection” for a couple of minutes – that fixes my iPhone problem. Once I relaunch the client in the two minutes window, all the pending messages are delivered.
If your receiver gets only 300 of the messages, the rest were probably offline messages that were dropped once the storage quota was reached. If that is the case, you need to increase the per-user offline message storage limit in your admin panel.
I would suggest going for message archiving and retrieval instead of depending on offline messages.
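If you go the archiving route, here is a minimal sketch using Smack 4.4's MamManager (XEP-0313); the domain, credentials, and contact JID are placeholders, and it assumes your server has message archiving enabled:

    import org.jivesoftware.smack.tcp.XMPPTCPConnection;
    import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;
    import org.jivesoftware.smackx.mam.MamManager;
    import org.jxmpp.jid.impl.JidCreate;

    public class ArchiveFetch {
        public static void main(String[] args) throws Exception {
            // Placeholder account and server; replace with real values.
            XMPPTCPConnection connection = new XMPPTCPConnection(
                    XMPPTCPConnectionConfiguration.builder()
                            .setXmppDomain("example.org")
                            .setUsernameAndPassword("alice", "secret")
                            .build());
            connection.connect().login();

            MamManager mam = MamManager.getInstanceFor(connection);
            if (mam.isSupported()) {
                // Pull the most recent archived messages exchanged with one
                // contact, instead of relying on the offline-message store.
                MamManager.MamQuery query = mam.queryMostRecentPage(
                        JidCreate.entityBareFrom("bob@example.org"), 1000);
                query.getMessages().forEach(m -> System.out.println(m.getBody()));
            }
            connection.disconnect();
        }
    }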
Hope this helps you :)