I have a chat application (XMPP / MUC) that is going to be served by Apache (we might change to nginx later, but right now that's not easily done). If a user is in 2 rooms, he'll have between 2 and 4 active long-polling connections to the server. So if we have 200 users per room and 5 rooms, what should ServerLimit and MaxClients be set to? For example, these are the default values:
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 4000
Thanks,
Each user should only have a single connection to the XMPP server if you're using something like BOSH to make the Web->XMPP connection. Activity in each MUC room will be sent to the XMPP server with the user's JID as the target, and that will cause the server to send it over the appropriate BOSH connection. The same goes for presence.
The information you need to load balance the Apache setup is the same as for load balancing any long-polling connection - XMPP doesn't add anything exotic to the mix. An XMPP BOSH connection is a long-polling connection to your web server which is then relayed over a persistent connection to the XMPP server - causing 2 sockets to be opened per connected user.
I'm guessing this is a pre-fork Apache....
If you've got 1000 concurrent chat connections, then you need at least 1000 webserver processes (presumably you'll be serving up static content too, and you say that each user's presence results in 2 connections, so you can double the 1000). A ServerLimit/MaxClients of 256 is not going to cut it; you probably need around 2200 to support this (but without hard metrics it's hard to give an exact figure).
This is a ridiculously large number of processes. To support it I'd be looking at 3 boxes, each with about 2 GB of memory free before the webserver is started.
MaxRequestsPerChild is not really relevant here, other than that you want some turnover of processes, particularly if you are using long polling.
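For concreteness, a prefork configuration along those lines might look like this; the numbers are illustrative, not measured:

    <IfModule mpm_prefork_module>
        StartServers            50
        MinSpareServers         25
        MaxSpareServers        100
        ServerLimit           2200
        MaxClients            2200
        MaxRequestsPerChild  10000
    </IfModule>

(ServerLimit has to appear before MaxClients for the raised limit to take effect.)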
All of this is one of the reasons why COMET is not a good idea; plain AJAX polling would be much more efficient. Assuming this is not something you can change, you might want to have a look at a threaded Apache webserver, which is marginally more memory efficient.
C.
I looked online and can't seem to find too many similar stories, which is surprising.
I rented a server from a provider, and I run my server software on it. My iOS app connects to it. It's been up for over a month with no issues, and gets a few dozen connections a day.
Starting about 30 hours ago, someone began connecting to it every 2 seconds, always from the same IP. I doubt he left his phone on for 30 hours. And my app has fewer than 100 daily users, so I have no competitors who would gain from this.
I finally blocked him from my server using iptables. From the software engineering side, what is the common practice for preventing these kinds of things? Am I supposed to keep track on the server side and refuse repeated connections? Do I use some sort of login/handshake, or what do I do?
To clarify, I do not use HTTP or Apache. I wrote a server based on BSD sockets using a custom protocol over TCP. I also use a cryptographic hash and terminate a connection if a message doesn't hash correctly.
EDIT: I did a count in my connection log. 3 IPs connected a total of 10,000 times over the last 30 hours. 2 of them have since stopped. All are from a tiny country in Asia which I won't name.
You need to implement some sort of antirobot protection in your application:
Drop an incoming connection if there are already >= N established connections from the same IP address.
Drop an incoming connection if the total number of connections from that IP address over the last 1, 5, and 30 minutes exceeds predefined thresholds (yes, you will need to keep track of the number of connections made from each IP address); see the sketch below.
More sophisticated antirobots used in high-load environments also have means of detecting and blocking not just single IP addresses but whole ranges of IP addresses (networks). It is also better not to drop connections blindly but to redirect clients to a CAPTCHA.
Anyway, there is no perfect way to distinguish a robot from a normal client: either you drop some good clients, or you let some robots slip through your antirobot.
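A minimal sketch of the per-IP throttling idea from the list above, in Java; the threshold and window size are made up, so tune them for your own traffic:

    import java.net.InetAddress;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ConnectionThrottle {
        private static final int MAX_PER_MINUTE = 30; // assumed threshold
        private final Map<InetAddress, Deque<Long>> history = new ConcurrentHashMap<>();

        /** Returns true if a new connection from this address should be accepted. */
        public boolean allow(InetAddress addr) {
            long now = System.currentTimeMillis();
            Deque<Long> times = history.computeIfAbsent(addr, a -> new ArrayDeque<>());
            synchronized (times) {
                // Forget timestamps older than the one-minute window.
                while (!times.isEmpty() && now - times.peekFirst() > 60_000) {
                    times.pollFirst();
                }
                if (times.size() >= MAX_PER_MINUTE) {
                    return false; // too many recent connections: drop it
                }
                times.addLast(now);
                return true;
            }
        }
    }

Call allow() right after accept() and close the socket when it returns false; the same structure extends to the 5- and 30-minute windows.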
I set up an Active/Passive cluster with Pacemaker/Corosync/DRBD to make an Asterisk server HA. The solution works perfectly, but when the service fails over from one server to the other, all SIP clients registered with the previously active server are lost, and the passive server shows nothing in the output of:
sip show peers
until clients make a call or register again. One solution is to set the registration interval on the clients to 1 minute or so. Are there other options? For example, would integrating Asterisk with a DBMS help to save this kind of state in a database?
First of all, building clusters without expertise is a bad idea.
You can use the Asterisk realtime SIP architecture, which saves state in a database. Complexity: average. Note that "sip show peers" also shows nothing for realtime peers.
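The realtime mapping lives in extconfig.conf; a sketch, where the ODBC connection name ("asterisk") and the table name are assumptions for illustration:

    ; extconfig.conf - store SIP peers in a database via ODBC
    ; format: family => driver,database[,table]
    [settings]
    sippeers => odbc,asterisk,sippeers

With that in place, registrations are written to the sippeers table, so peer state can survive a failover to the other node.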
You can use a memory-duplicating cluster (some solutions exist for Xen) which will copy memory state from one server to the other. Complexity: very high.
I am building a distributed system that consists of potentially millions of clients which all need to keep an open (preferably HTTP) connection to wait for a command from the server (which is running somewhere else). The load of messages/commands will not be very high, maybe one message/sec per 1000 clients, which means 1000 msg/sec at 1 million clients. => it's basically about the concurrent connections.
The requirements are simple too: one-way messaging (server->client), only 1 client per "channel".
I am pretty open in terms of technology (XMPP / WebSockets / Comet / ...). I am using Google App Engine as the server, but unfortunately their "channels" won't work for me (the quotas are too low and there is no Java client). XMPP was an option but is quite expensive. So far I have been using URL Fetch & pubnub, but they just started charging for connections (big time).
So:
Does anyone know of a service out there that can do this for me in an affordable way? Most of the ones I have found restrict or heavily charge for connections.
Any experience with implementing such a server yourself? I have actually done that already and it works pretty well (based on Tomcat & NIO), but I haven't had the time yet to set up a large load-test environment (partially because this is still a fallback solution; I'd prefer a battle-hardened msg server). Any experience of how many users you get per GB? Any hard limits?
My architecture also allows me to fragment the msg servers, but I'd like to maximize the concurrent connections per server, because the msg-processing CPU overhead is minimal.
I have meanwhile implemented my own message server using netty.io. Netty makes use of Java NIO and scales extremely well. For idle connections I get a memory footprint of 500 bytes per connection. I am doing only very simple message forwarding (no caching, storage or other fancy stuff), but even so I easily get 1000-1500 msg/sec (each half a KB) on a small Amazon instance (1 ECU / 1.6 GB).
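A minimal sketch of such a forwarder using the Netty 4 API (not the exact server described above; port and routing are placeholders, and it just echoes to show the shape):

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.*;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    public class ForwardingServer {
        public static void main(String[] args) throws Exception {
            EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
            EventLoopGroup workers = new NioEventLoopGroup(); // I/O for all channels
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(boss, workers)
                 .channel(NioServerSocketChannel.class)
                 .childHandler(new ChannelInitializer<SocketChannel>() {
                     @Override
                     protected void initChannel(SocketChannel ch) {
                         ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                             @Override
                             public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                 // A real forwarder looks up the target channel for
                                 // this message; echoing just demonstrates the shape.
                                 ctx.writeAndFlush(msg);
                             }
                         });
                     }
                 });
                b.bind(8080).sync().channel().closeFuture().sync();
            } finally {
                boss.shutdownGracefully();
                workers.shutdownGracefully();
            }
        }
    }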
Otherwise, if you are looking for a (paid) service, I can recommend spire.io (they do not charge for connections but have a higher price per message) or pubnub (they charge for connections but are cheaper per message).
You have to look more at the architecture of such an environment.
First of all, if you write the socket management yourself, don't use a thread per client socket; use asynchronous methods for receiving and sending data.
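A minimal sketch of that idea in Java NIO (the port number is arbitrary): a single thread and a Selector multiplex accept and read events for all client sockets, instead of one thread per socket:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.*;
    import java.util.Iterator;

    public class SelectorServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(9000));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buf = ByteBuffer.allocate(4096);
            while (true) {
                selector.select(); // block until some channel is ready
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buf.clear();
                        if (client.read(buf) == -1) { // peer closed
                            key.cancel();
                            client.close();
                        }
                        // otherwise: hand the buffer to your message handling
                    }
                }
            }
        }
    }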
WebSockets might be too heavy if your messages are small, because the protocol requires framing, which has to be applied to each message on each socket individually (caching can help across the different versions of the WebSocket protocol). That makes them slower to process in both directions, receive and send, especially because of data masking.
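To illustrate the masking cost: RFC 6455 requires every client-to-server frame payload to be XOR-masked with a 4-byte key, so the server has to touch every byte of every message before it can use it:

    public class WsMask {
        // Unmask a client-to-server WebSocket payload in place (RFC 6455).
        static void unmask(byte[] payload, byte[] maskKey) {
            for (int i = 0; i < payload.length; i++) {
                payload[i] ^= maskKey[i & 3]; // i & 3 == i % 4
            }
        }
    }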
It is possible to create millions of sockets, but only the most advanced technologies are capable of it. Erlang can handle millions of connections and is pretty scalable.
If you would like to have millions of connections using other, higher-level technologies, then you need to think about clustering whatever you are trying to accomplish.
For example, use a gateway server that keeps track of all processing servers and holds data about them (IP, port, load); if it is all one internal network, firewalling and port forwarding might be handy here.
The client software connects to that gateway server; the gateway picks the least-loaded server and sends its IP and port to the client. The client then connects directly to that working server using the provided address.
That way you have a gateway which can also handle authorization and won't hold connections for long, so one gateway might be enough, plus many workers that publish data and hold the client connections.
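For illustration, a toy sketch of the gateway's assignment step; the Worker fields and the load metric are placeholders:

    import java.util.Comparator;
    import java.util.List;

    public class Gateway {
        // host/port/load of one processing server; "load" is however you measure it
        record Worker(String host, int port, int load) {}

        private final List<Worker> workers;

        Gateway(List<Worker> workers) { this.workers = workers; }

        /** Pick the least-loaded worker; the client then connects to it directly. */
        Worker assign() {
            return workers.stream()
                          .min(Comparator.comparingInt(Worker::load))
                          .orElseThrow();
        }
    }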
This all depends strongly on your needs, and might not be suitable for your solution.
I've got some programs that occasionally (anywhere from every few minutes to once an hour) need to send metrics to Graphite. Should I keep the socket to the graphite server open for the duration of my process or make a new connection every time I need to send some metrics? What are the considerations when doing one or the other?
Sounds like you need a TCP connection.
Whether you should keep the connection active or not depends on the answers to questions like:
- Would you like to monitor the "connected" clients on the server at any given time?
- Is there a limit on the server side related to the previous point?
- How many such clients will be "connected" to the server?
- Is it a problem if connection creation takes some time?
If you keep the connection open, just make sure to send keep-alive messages from time to time (application-level keep-alives preferred).
A large number of clients connected to the server, even when not active, may consume memory or other resources (for example, if there is one thread per connection).
On the other hand, keeping the connection open allows the client to detect a connection problem with the server much faster (if that even matters).
It all depends on what is needed.
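For the occasional-metrics case in the question, connect-per-send is usually fine; a sketch using Graphite's plaintext protocol (one "path value timestamp" line per metric on port 2003; the host and metric name are made up):

    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.net.Socket;

    public class GraphiteSend {
        /** Open a connection, send one metric, close. Cheap at a few sends per hour. */
        public static void send(String metric, double value) throws Exception {
            long now = System.currentTimeMillis() / 1000; // Graphite wants Unix seconds
            try (Socket s = new Socket("graphite.example.com", 2003);
                 Writer w = new OutputStreamWriter(s.getOutputStream(), "UTF-8")) {
                w.write(metric + " " + value + " " + now + "\n");
            }
        }

        public static void main(String[] args) throws Exception {
            send("myapp.requests.count", 42.0);
        }
    }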
Is it possible to multiplex a socket connection?
I need to establish multiple connections to Yahoo Messenger, and I am looking for a way to do this efficiently without having to hold a socket open for each client connection.
So far I have to use one socket for each client, and this does not scale well above 50,000 connections.
Oh, and my solution is for a telco, so I need to hit at least 250,000 to 500,000 connections.
I'm planning to bind multiple IP addresses to a single NIC to beat the 65k port restriction per IP address.
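For what it's worth, binding an outgoing socket to a specific local address looks like this in Java; the addresses are made up:

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class BoundConnect {
        public static Socket connectFrom(String localIp, String remoteHost, int remotePort)
                throws Exception {
            Socket s = new Socket();
            s.bind(new InetSocketAddress(localIp, 0)); // 0 = pick any free local port
            s.connect(new InetSocketAddress(remoteHost, remotePort));
            return s;
        }
    }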
I would appreciate any help or insight I can get.
**Most of my other questions on this site have gone unanswered :)**
Thanks
This is an interesting question about scaling in a serious situation.
You are essentially asking, "How do I establish N connections to an internet service, where N is >= 250,000".
The only way to do this effectively and efficiently is to cluster. You cannot do it on a single host, so you will need to fragment and partition your client base across a number of different servers, so that each handles only a subset.
The idea would be for a single server to hold open as few connections as possible (spreading the connectivity out evenly) while holding enough connections to make whatever service you're hosting viable, by keeping inter-server communication to a minimum. This means that any two connections that are related (such as two accounts that talk to each other a lot) will have to be on the same host.
You will need servers and network infrastructure that can handle this. You will need a subnet of IP addresses, and each server will have to have stateless communication with the internet (i.e. your router must not be doing any NAT, so that it does not have to track 250,000+ connections).
You will have to talk to Yahoo. There is no way Yahoo will be able to handle this level of connectivity without considering cutting your connection off. Any service of this scale would have to be negotiated with Yahoo so that both you and they can handle the connectivity.
There are I/O multiplexing technologies that you should investigate; kqueue and epoll come to mind.
In order to write this massively concurrent, telco-grade solution, I would recommend investigating Erlang. Erlang is designed for exactly these situations (multi-server, massively multi-client, massively multithreaded, telecommunications-grade software). It is currently used for running Ericsson telephone exchanges.
While you can listen on a socket for multiple incoming connection requests, each established connection links a unique port on the server to a unique port on the client. To multiplex a connection, you need to control both ends of the pipe and have a protocol that allows you to switch contexts from one virtual connection to another, or use a stateless protocol that doesn't care about the client's identity. In the former case you'd need to implement it in the application layer so that you could reuse existing connections. In the latter case you could get by with a proxy that keeps track of which server response goes to which client. Since you're connecting to Yahoo Messenger, I don't think you'll be able to do either, because it requires an authenticated connection and assumes that each connection corresponds to a single user.
You can only multiplex multiple connections over a single socket if the other end supports such an operation.
In other words it's a function of the protocol - sockets don't have any native support for it.
I doubt the Yahoo Messenger protocol has any support for it.
An alternative (to multiple IPs on a single NIC) is to design your own multiplexing protocol and have satellite servers that convert from the multiplexed protocol to the Yahoo protocol.
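A toy frame format for such a protocol, just to make the idea concrete: each frame carries a virtual connection id and a length-prefixed payload, and the satellite server demultiplexes frames back onto ordinary per-user Yahoo connections:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class MuxFrames {
        static final class Frame {
            final int connId;     // which virtual connection this belongs to
            final byte[] payload;
            Frame(int connId, byte[] payload) { this.connId = connId; this.payload = payload; }
        }

        // Frame layout: [4-byte connId][4-byte length][payload]
        static void writeFrame(DataOutputStream out, int connId, byte[] payload)
                throws IOException {
            out.writeInt(connId);
            out.writeInt(payload.length);
            out.write(payload);
            out.flush();
        }

        static Frame readFrame(DataInputStream in) throws IOException {
            int connId = in.readInt();
            byte[] payload = new byte[in.readInt()];
            in.readFully(payload);
            return new Frame(connId, payload);
        }
    }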
I'll throw in another approach you could consider (depending on how desperate you are).
Note that operating-system TCP/IP implementations need to be general-purpose, while you are only interested in a very specific use-case. So it might make sense to implement a cut-down version of TCP/IP (one which handles only your use-case, but does it very well) in your application code.
For example, if you are using Linux, you could route a couple of IP addresses to a tun interface and have your application handle the IP packets for that tun interface. That way you can implement TCP/IP (optimised for your use-case) entirely in your application and avoid any operating system restriction on the number of open connections.
Of course, it's quite a bit of work doing TCP/IP yourself, but it really depends on how desperate you are - i.e. how much hardware you can afford to throw at the problem.
500,000 arbitrary Yahoo Messenger connections - is your telco doing this on behalf of Yahoo? It seems like whatever solution has been in place for many years now should be scalable with the help of Moore's Law. As far as I know, all the IM clients have been pretty effective for a long time, and there's no pressing increase in demand that I can think of.
Why isn't this a reasonable problem to address with hardware plus traditional solutions?