XMPP Framework maximum messages received - iPhone

I'm making an XMPP client and I would like to know if there is some timer or memory cache for received messages, because I send 1000 messages to my client, the server sends all 1000 correctly, but my client only receives 300.
Possible Solution:
...Overcoming those limits
Every time, HTTP has a solution for “fixing” XMPP.
The first two limits can be fixed by running a WebDAV server: upload the file to the WebDAV server and share the link. That's a workaround anyone can use without XMPP client support. Of course, having a way to do that transparently with client and server support, with signed URLs (à la S3), would greatly improve the process.
For the connected-socket problem, there's BOSH. That's basically running XMPP over HTTP, with the added bonus of having the server retain the “connection” for a couple of minutes; that fixes my iPhone problem. As long as I relaunch the client within the two-minute window, all the pending messages are delivered.

Your receiver getting only 300 messages suggests that they are offline messages. If that is the case, you need to increase the per-user offline message storage limit in your server's admin panel.
I would suggest going for message archiving and retrieval (XEP-0313, Message Archive Management) instead of depending on offline messages (a sketch of such a query follows).
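For reference, a Message Archive Management (XEP-0313) query is just an IQ stanza the client sends after login. The minimal Java sketch below only builds that stanza as a string; how you send it depends on your XMPP library, the example JIDs and IDs are made up, and the MAM namespace version (urn:xmpp:mam:2 here) depends on what the server advertises.

```java
// Minimal sketch of building a MAM (XEP-0313) query stanza. Sending it is left
// to whatever XMPP library the client uses; JIDs and IDs here are examples only.
public final class MamQueryExample {

    /** Builds an IQ asking the archive for messages exchanged with one contact. */
    static String buildMamQuery(String queryId, String withJid) {
        return "<iq type='set' id='" + queryId + "'>"
             +   "<query xmlns='urn:xmpp:mam:2' queryid='" + queryId + "'>"
             +     "<x xmlns='jabber:x:data' type='submit'>"
             +       "<field var='FORM_TYPE' type='hidden'>"
             +         "<value>urn:xmpp:mam:2</value>"
             +       "</field>"
             +       "<field var='with'><value>" + withJid + "</value></field>"
             +     "</x>"
             +   "</query>"
             + "</iq>";
    }

    public static void main(String[] args) {
        // The server answers with one <message> per archived item, then a <fin/>.
        System.out.println(buildMamQuery("q1", "friend@example.com"));
    }
}
```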
Hope this helps you :)

Related

WebSocket communication between clients in a distributed system

I'm trying to build an instant messaging app. Clients will not only send messages but also often send audio. I've decided to use a WebSocket connection to communicate with clients: it is fast and allows sending binary data.
The main idea is to receive a message from client1 and notify client2 about it. But here's the thing: my app will be running on GAE. What if client1's socket is open on server1 and client2's is open on server2? These servers don't know about each other's clients.
I have one idea how to solve it, but I'm sure it's a shitty way. I would use some sort of communication between the servers (for example JMS, or another WebSocket connection between the servers; the exact mechanism doesn't matter right now).
But it will surely lead to disaster. I can't even imagine how often those servers would talk to each other: for each message, server1 has to notify server2, and server2 has to notify client2. Things get even worse when serverN comes into play.
Another way I see this working is Firebase, but it restricts message size to 4 KB, so I can't send audio through it. As a workaround I could notify the client about a new audio file and have it fetch the file from my server.
I hope I explained the problem clearly. Does anyone know how to solve it? Or maybe there are other ways to build such apps?
If you are building a messaging cluster and expect communicating clients to connect to different instances of the server, then server-to-server communication is inevitable. Usually it's not a problem, though.
First, even without any clever load balancing, two communicating clients will end up on the same server 50% of the time on average (in the case of 2 servers).
Second, intra-datacenter links are fast and essentially free in all major public clouds.
Third, you can often do something smart on the frontend to make sure two clients that are likely to communicate connect to the same server, for instance by directing all clients from the same country to the same server using DNS load balancing.
The second part of the question is about passing large media files. The common best practice is to send them out of band: store the file on the server and only pass a reference to it. As someone suggested in the comments, save the audio on the server and just send a message like "audio is available, fetch it from here ...". You don't need to poll the server for that; the receiving client simply fetches the file once, when it wants it. A rough sketch of such a reference message follows.
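As a rough illustration of that out-of-band pattern, the notification pushed over the WebSocket can be a tiny payload carrying only a URL. The field names and the /media/... URL scheme below are assumptions made for this example, and a real implementation would use a proper JSON library instead of string formatting.

```java
// Sketch of the "reference, not payload" message for large media.
// Field names and the /media/... URL scheme are assumptions for illustration.
public record MediaAvailableNotice(String from, String mediaUrl, long sizeBytes) {

    /** Renders the notice as the small JSON text frame sent over the WebSocket. */
    public String toJson() {
        // Naive formatting; real code would use a JSON library for escaping.
        return String.format(
            "{\"type\":\"media\",\"from\":\"%s\",\"url\":\"%s\",\"size\":%d}",
            from, mediaUrl, sizeBytes);
    }

    public static void main(String[] args) {
        // client2 receives this frame and then downloads the audio over plain HTTP.
        MediaAvailableNotice notice = new MediaAvailableNotice(
            "client1", "https://example.com/media/3f2a.ogg", 512_000);
        System.out.println(notice.toJson());
    }
}
```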
In general, it seems like you are trying to reinvent the wheel. Just use something off the shelf.
Let clients connect to any of the multiple servers, and have each server keep metadata about the clients connected to it
A centralized system like ZooKeeper stores the details of the active servers
When a client c1 sends a message to client c2:
the message is received by some server (say s1; we can add a load balancer to distribute incoming requests)
s1 broadcasts a lookup to all other servers to find out which server client c2 is connected to, OR, a better approach, consistent hashing is used to decide which server each client connects to, in which case no broadcast is required (see the sketch at the end of this answer)
the corresponding server (say s2) responds to s1
s1 then forwards the message m to s2, and s2 delivers it to client c2
Cons of the above approach:
Each server keeps a connection to the other n-1 servers, creating a mesh topology
The centralized system (ZooKeeper) becomes a single point of failure (which is solvable)
Apps like WhatsApp and Google Talk use XMPP over TCP/IP.
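To make the consistent-hashing alternative mentioned above concrete, here is a minimal sketch of a hash ring that maps a client ID to a server without any broadcast. The server names are placeholders, and a production ring would add virtual nodes and a stronger hash function.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hashing ring: every server can compute, without any
// broadcast, which server "owns" a given client. Server names are placeholders;
// a real ring would add virtual nodes and a better hash than hashCode().
public final class ConsistentHashRing {
    private final SortedMap<Integer, String> ring = new TreeMap<>();

    public void addServer(String server) {
        ring.put(hash(server), server);
    }

    public void removeServer(String server) {
        ring.remove(hash(server));
    }

    /** Returns the server responsible for the given client ID. */
    public String serverFor(String clientId) {
        int h = hash(clientId);
        SortedMap<Integer, String> tail = ring.tailMap(h);
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    private static int hash(String key) {
        // Spread hashCode() a little; good enough for a sketch.
        return Integer.rotateLeft(key.hashCode() * 0x9E3779B9, 13);
    }

    public static void main(String[] args) {
        ConsistentHashRing ring = new ConsistentHashRing();
        ring.addServer("s1");
        ring.addServer("s2");
        ring.addServer("s3");
        // s1 can now route a message for c2 directly to serverFor("c2").
        System.out.println("client c2 lives on " + ring.serverFor("c2"));
    }
}
```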

Messages lost while the receiver's presence is not updated in the Openfire server

I have browsed this forum searching for a solution to this problem but couldn't find one. My issue is the same as these:
https://vanity-igniterealtime.jiveon.com/message/225504
https://igniterealtime.org/issues/si/jira.issueviews:issue-html/OF-161/OF-161.html
I have configured the server-side ping interval to 30 seconds, but 30 seconds is still a long time: during that window lots of messages get lost.
XEP-0184 is more of a client-side delivery receipt mechanism. Is it possible to get the acknowledgement on the server as well?
Is it possible to store all messages in Openfire until the delivery receipt arrives from the receiver, and to delete a message from Openfire once its delivery receipt is received?
Please suggest how to prevent this message loss.
Right now there is no working solution in Openfire 3.9.3.
What I have done is create a custom plugin:
* It intercepts each message packet and adds it to a custom table until an ack packet is received from the receiver (a rough sketch is shown below).
This way we avoid the message loss.
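To show what such an interceptor can look like, here is a minimal sketch built on Openfire's PacketInterceptor API. The in-memory PendingMessageStore is a stand-in for the custom table, and a real plugin would also need logic to re-deliver the stored messages.

```java
import java.io.File;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.dom4j.Element;
import org.dom4j.QName;
import org.jivesoftware.openfire.container.Plugin;
import org.jivesoftware.openfire.container.PluginManager;
import org.jivesoftware.openfire.interceptor.InterceptorManager;
import org.jivesoftware.openfire.interceptor.PacketInterceptor;
import org.jivesoftware.openfire.session.Session;
import org.xmpp.packet.Message;
import org.xmpp.packet.Packet;

// Rough sketch of the interceptor-based plugin described above. The in-memory
// PendingMessageStore stands in for the custom database table.
public class ReliableDeliveryPlugin implements Plugin, PacketInterceptor {

    private final PendingMessageStore store = new PendingMessageStore();

    @Override
    public void initializePlugin(PluginManager manager, File pluginDirectory) {
        InterceptorManager.getInstance().addInterceptor(this);
    }

    @Override
    public void destroyPlugin() {
        InterceptorManager.getInstance().removeInterceptor(this);
    }

    @Override
    public void interceptPacket(Packet packet, Session session,
                                boolean incoming, boolean processed) {
        if (!(packet instanceof Message) || !incoming || processed) {
            return;
        }
        Message message = (Message) packet;

        // A XEP-0184 receipt coming back from the receiver: drop the stored copy.
        Element received = message.getElement()
                .element(QName.get("received", "urn:xmpp:receipts"));
        if (received != null) {
            // The receipt's 'id' attribute names the original message being acked.
            store.delete(received.attributeValue("id"));
            return;
        }

        // An ordinary chat message entering the server: keep a copy until acked.
        if (message.getBody() != null) {
            store.save(message);
        }
    }

    /** In-memory stand-in for the custom table; a real plugin would persist this. */
    static final class PendingMessageStore {
        private final Map<String, Message> pending = new ConcurrentHashMap<>();

        void save(Message m)   { pending.put(m.getID(), m); }
        void delete(String id) { if (id != null) pending.remove(id); }
    }
}
```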

When two Jabber (XMPP) clients are connected, only one is able to receive messages; both can send

I have a Windows XMPP client (Psi) and an Android one (IMO). I'm connected to the same custom server, using two different resources (the hostname on the desktop; I don't know what IMO uses as a resource). When someone sends me a message, only the desktop client receives it. The Android client can only send.
What do I need to configure in the clients to be able to receive messages on both clients simultaneously?
Figured it out. The XMPP protocol has priorities assigned to resources; see section 11.1 in http://xmpp.org/rfcs/rfc3921.html#rules. The valid range is -128 .. +127.
IMO sends priority 1 (at least in my version). Setting the priority in Psi to -120 made my phone client always receive the messages. I'll play with priorities to take advantage of the auto-away feature that lowers the priority.
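If the clients were built on a library such as Smack instead of off-the-shelf apps, the priority is just a field on the available presence the client broadcasts. A rough sketch in the Smack 4.x style follows; the exact constructors and builders vary between Smack versions, so treat the API details as an assumption.

```java
import org.jivesoftware.smack.SmackException;
import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.packet.Presence;

// Sketch of announcing a resource priority from a Smack-based client.
// API shown is the Smack 4.x style; exact constructors vary by version.
public final class PriorityExample {

    /** Broadcasts available presence with the given priority (-128 .. +127). */
    static void announcePriority(XMPPConnection connection, int priority)
            throws SmackException.NotConnectedException, InterruptedException {
        Presence presence = new Presence(Presence.Type.available);
        presence.setPriority(priority);   // e.g. -120 for a "background" desktop client
        connection.sendStanza(presence);
    }
}
```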
If you've got admin permissions on an Openfire server, setting the system property "route.all-resources" to "true" should allow all connected clients to receive a message sent to a Jabber ID. This worked in my case.

Do I stay in the MUC when I pause() and attach()?

I have a client written using Strophe that is loaded on every page on my website. To minimize latency I save the rid, the jid and the sid at each page change so that I can use Strophe's attach() method.
However, I am unsure whether pausing and attaching keeps me in the MUC. If it does, is there a patch to the Strophe MUC plugin that lets me set handlers without rejoining the MUC?
Yes, you do. BOSH pause and attach leave your stream open; the XMPP server does not even know it happened (since it happens at the BOSH layer).
Pausing is just a graceful way of telling the BOSH connection manager not to expect requests from you for a short period of time. In BOSH it is not necessary to keep an HTTP request open at all times to keep the XMPP stream alive; you only need to make requests often enough for the connection manager to be satisfied that you have not gone offline without warning.

GWT Server Push using Jetty Continuations?

I'm supposed to implement a web application where the user logs in and thereby registers for some sort of events (in this case, alarms). When an alarm happens, the server needs to push it to all of the clients.
At the moment I'm using
GWT on the Client side
Jetty on the Server side
Is implementing the server push by using Jetty Continuations a good idea? My requirements are:
the number of clients will be quite small (<20) but could increase in the future
alarms must not get lost (i.e. if a client is down, it must not miss any alarms)
if a client goes down, other clients need to be informed about it (or at least the admin should receive some sort of notification, e.g. by mail).
The main reason for using Comet (e.g. Jetty Continuations) is that it allows you to reduce the polling frequency. In other words: you can achieve the same thing without Comet by using frequent polling from the client side. Which alternative to choose depends on the characteristics of your application; depending on those, each alternative can be more or less efficient than the other!
In your case, since you need notifications when a client goes down, it makes sense to use frequent polling. Comet (long polling) is not well suited for this task: because of its principle, it can take a long time until a client sends a new request, and receiving a new request is the only way a server can know that a client is still alive (remember that a web server, Comet or not, can never send a request to the client). A sketch of such a liveness check follows.
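To make the liveness argument concrete, a frequent-polling endpoint only has to remember when each client last asked for news. Everything here, the servlet parameter, the 30-second silence threshold, and the notifyAdmin() hook, is an assumption made for this example.

```java
import java.io.IOException;
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of client liveness detection with frequent polling. The request
// parameter, poll period and notifyAdmin() hook are assumptions for this example.
public class PollServlet extends HttpServlet {

    private static final Duration MAX_SILENCE = Duration.ofSeconds(30);
    private final Map<String, Instant> lastSeen = new ConcurrentHashMap<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String clientId = req.getParameter("clientId");
        lastSeen.put(clientId, Instant.now());      // every poll proves the client is alive
        resp.setContentType("application/json");
        resp.getWriter().write("[]");               // pending alarms would go here
    }

    /** Called periodically (e.g. from a scheduled executor) to find dead clients. */
    void checkForDeadClients() {
        Instant cutoff = Instant.now().minus(MAX_SILENCE);
        lastSeen.forEach((clientId, seen) -> {
            if (seen.isBefore(cutoff)) {
                notifyAdmin(clientId);              // hypothetical alerting hook
                lastSeen.remove(clientId);
            }
        });
    }

    private void notifyAdmin(String clientId) {
        System.err.println("client " + clientId + " appears to be down");
    }
}
```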
Your requirement that alarms must not get lost implies a more complicated solution than plain long polling or frequent polling.
Your client should send an acknowledgement message to the server, because the user could close the application just after the alarm message arrives and thereby lose that alarm.
Also, the user should click an alarm message to acknowledge it to the server. You can put a time limit on the acknowledgement: if the client does not send an ack message in time, you can assume that the alarm has been lost.
Long polling with an acknowledgement algorithm would be my choice to solve your problem; a rough sketch of the server-side bookkeeping is below.
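Here is a minimal sketch of that "hold alarms until acked" bookkeeping. How the alarm actually reaches the client (long-poll response, Comet push, ...) is left out; the deliver() method, the 60-second timeout and the periodic redeliverExpired() call are assumptions for illustration.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the "hold alarms until acked" idea. deliver() is a hypothetical
// hook for whatever transport (long poll, Comet) actually reaches the client.
public class AlarmTracker {

    /** An alarm that has been sent but not yet acknowledged. */
    record Pending(String alarmId, String clientId, String payload, Instant sentAt) {}

    private static final Duration ACK_TIMEOUT = Duration.ofSeconds(60);
    private final Map<String, Pending> pending = new ConcurrentHashMap<>();

    /** Sends the alarm and remembers it until the client acknowledges it. */
    public void send(String alarmId, String clientId, String payload) {
        pending.put(alarmId, new Pending(alarmId, clientId, payload, Instant.now()));
        deliver(clientId, payload);
    }

    /** Called when the user clicked the alarm and the client posted an ack. */
    public void acknowledge(String alarmId) {
        pending.remove(alarmId);
    }

    /** Run periodically: anything unacked past the timeout is re-sent (or escalated). */
    public void redeliverExpired() {
        Instant cutoff = Instant.now().minus(ACK_TIMEOUT);
        pending.values().stream()
               .filter(p -> p.sentAt().isBefore(cutoff))
               .forEach(p -> send(p.alarmId(), p.clientId(), p.payload()));
    }

    private void deliver(String clientId, String payload) {
        // Hypothetical transport hook: write into the client's open long-poll request.
        System.out.println("deliver to " + clientId + ": " + payload);
    }
}
```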