I am trying to build a realtime notification system, just like Facebook's. After a lot of learning and searching I am still confused, so please explain what is right and what is wrong.
Please assume the site could have roughly the same number of users as Facebook.
Can realtime notifications be built with long polling? If yes, what are the advantages, disadvantages, and limitations?
Can realtime notifications be built with WebSockets, keeping in mind that the number of users could be the same as Facebook's? If yes, what are the advantages, disadvantages, and limitations?
If there is any other method, please explain.
Confusion
From what I have learned so far, WebSockets look good, but there is supposedly a limit (about 5K) on the number of open connections, which would mean a maximum of only 5K concurrent users, far fewer than Facebook has. If I am wrong, please explain.
You're wrong: a WebSocket-based solution is not limited to 5K concurrent connections.
According to the Facebook Newsroom they had about 727 million daily active users on average in September 2013, or about 504k unique users hitting the Facebook page every minute. Given an average visit time of 18 minutes (researched by statisticbrain.com), their notification infrastructure must be able to serve about 9 million (18 * 504k) concurrent TCP connections 24/7. Although this number is a very rough approximation, it gives a fair idea of what they are dealing with and what you have to deal with if you are going to build such a system.
You can use long polling as well as WebSockets to build your realtime notification system. In both cases you face similar problems, which are related to your OS (the explanations below assume a Unix-based system):
port limitation: a TCP connection is identified by the (source IP, source port, destination IP, destination port) tuple, so roughly 2^16 connections are possible between one client IP and one server IP/port; depending on your setup you may therefore need to listen on multiple ports and/or multiple IP addresses.
memory and file descriptors: every open connection consumes kernel memory and at least one file descriptor, so the per-process and system-wide descriptor limits must be raised accordingly (a small sketch after this list shows how to inspect them from a process).
Read more about the limitations in What is the theoretical maximum number of open TCP connections that a modern Linux box can have
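As a rough, Unix-only illustration of the file-descriptor limit mentioned above (a sketch in Go; in practice you would normally raise the limit with ulimit or your service manager rather than in code), a process can inspect and raise its own RLIMIT_NOFILE like this:

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Every open TCP connection consumes at least one file descriptor,
	// so RLIMIT_NOFILE effectively caps the number of concurrent connections.
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		panic(err)
	}
	fmt.Printf("file descriptor limit: soft=%d hard=%d\n", lim.Cur, lim.Max)

	// A process may raise its soft limit up to the hard limit.
	lim.Cur = lim.Max
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		fmt.Println("could not raise soft limit:", err)
	}
}
```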
Long-polling vs. Websockets:
Every poll in your long-poll solution requires a new HTTP request, which needs more bandwidth than keeping a WebSocket connection alive. Moreover, the notification is returned as an HTTP response, which triggers a new poll request. Although the WebSocket solution can be more efficient in terms of bandwidth and consumption of system resources, it has a major drawback: lack of browser support.
Depending on the stats at hand, a WebSocket-only solution ignores about 20-40% of your visitors (stats from statscounter.com). For this reason, various server libraries were developed that abstract the concept of a connection away from the 'physical' underlying transport. As a result, modern browsers create the connection using WebSockets while older browsers fall back to an alternative transport such as HTTP long polling, JSONP polling, or Flash. Prominent examples of such libraries are SockJS and Socket.io.
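To make the long-polling half concrete, here is a minimal sketch in Go (the /notifications endpoint, the port, and the in-memory channel are assumptions made purely for illustration, not part of the answer above): the handler parks each request until a notification arrives or a timeout fires, and the client simply re-requests after every response.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// notifications is a toy stand-in for a real per-user notification queue.
var notifications = make(chan string, 16)

func pollHandler(w http.ResponseWriter, r *http.Request) {
	select {
	case msg := <-notifications:
		// Something happened: answer the parked request immediately.
		fmt.Fprintln(w, msg)
	case <-time.After(25 * time.Second):
		// Nothing happened: let the client reconnect (HTTP 204, no body).
		w.WriteHeader(http.StatusNoContent)
	case <-r.Context().Done():
		// Client went away while we were waiting.
	}
}

func main() {
	http.HandleFunc("/notifications", pollHandler)
	// Demo only: publish a notification every 10 seconds.
	go func() {
		for {
			time.Sleep(10 * time.Second)
			notifications <- "you have a new friend request"
		}
	}()
	http.ListenAndServe(":8080", nil)
}
```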
I have a problem.
I have a client and a server. The client connects to the server over TCP.
Then the client sends some data, split into chunks, and I don't know the total length of the data (it is a TLS handshake). What I do know is that the client sends a batch of data, then stops until it receives a response, and then sends another batch.
I need to read all the chunks until the client stops sending (there can be many chunks). How can I do that?
The only idea I have is a timeout: read data in a loop with a timeout between iterations, and when the timeout expires, consider the data fully collected.
Perhaps there is a more elegant solution?
Based on the information in your comments, you're doing this wrong. The correct way to write an HTTPS proxy is to read the CONNECT line, make the upstream connection, send the appropriate response back to the client, and then, if successful, start copying bytes in both directions simultaneously. You're not in the least concerned with packets or read sizes, and you should certainly not make any attempt to 'collect' packets before retransmission, as that will just add latency to the system.
You can accomplish this either by starting two threads per connection, one in each direction, or via non-blocking sockets and select()/poll()/epoll(), or whatever that looks like in Go.
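Here is a minimal sketch of that approach in Go (illustrative only: the listen port is an assumption and error handling is abbreviated). It parses the CONNECT request, dials upstream, acknowledges the client, and then copies bytes in both directions concurrently.

```go
package main

import (
	"bufio"
	"io"
	"log"
	"net"
	"net/http"
)

func handle(client net.Conn) {
	defer client.Close()

	// Read the "CONNECT host:port HTTP/1.1" request from the client.
	// (A production proxy would also drain any bytes the client pipelined
	// into the bufio.Reader after the request; omitted here for brevity.)
	req, err := http.ReadRequest(bufio.NewReader(client))
	if err != nil || req.Method != http.MethodConnect {
		return
	}

	// Open the upstream connection before acknowledging the client.
	upstream, err := net.Dial("tcp", req.Host)
	if err != nil {
		client.Write([]byte("HTTP/1.1 502 Bad Gateway\r\n\r\n"))
		return
	}
	defer upstream.Close()

	// Tell the client the tunnel is ready.
	client.Write([]byte("HTTP/1.1 200 Connection Established\r\n\r\n"))

	// Copy bytes in both directions simultaneously. There is no framing and
	// no attempt to collect "complete" chunks; bytes are relayed as they arrive.
	done := make(chan struct{}, 2)
	go func() { io.Copy(upstream, client); done <- struct{}{} }()
	go func() { io.Copy(client, upstream); done <- struct{}{} }()
	<-done // when either direction ends, tear everything down
}

func main() {
	ln, err := net.Listen("tcp", ":3128") // assumed proxy listen port
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go handle(conn)
	}
}
```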
BUT I have no idea why you're doing this at all. There are plenty of open-source HTTP proxies already in existence, and as you're dealing with HTTPS there is no value you can possibly add to them. Your claim about 'business logic' is meaningless, or at least unimplementable.
In South America, many gamers use something called a proxy service, which takes their network connection, routes it through the service's own infrastructure, and then exits close to the game server location. For example, they want to ensure that the TCP traffic does not cross the USA, for latency reasons. So how could they manipulate the path taken by a TCP connection?
a) Do they just open up TCP connections at low-traffic times (e.g. 4 in the morning) and then keep those for the rest of the day?
b) Do they keep trying to open TCP connections until they get a lower-latency one and then switch their internal traffic to that connection?
c) Is the only thing they can do to minimize TCP latency over long distances to rent private peerings or choose a host with good ones?
d) Could sending UDP packets over such distances reduce latency if, and only if, you avoid packet loss (e.g. by sending the traffic redundantly, multiple copies of each packet)?
It all boils down to the question of whether you can somehow control what path a TCP connection takes, or whether you can't.
This question is all about the networking part; it is NOT about the end user's computer (Leantrix/TCP optimizations) or the game servers. These services somehow gain additional latency savings, and I'm curious how they do it.
Thank you for the great year I've been on SO so far; it's been a pleasure to talk to experts about this stuff.
If you are referring to things like Battleping and the like, here's what someone wrote in a forum that seems to make sense; I suppose the same holds true for South America. The relevant info is the "SSH tunnel".
The advent of proxy tunnels came from the demand of Oceanic WoW
players. Incase you don't know, the backbone connecting Oceania to
America is a huge piece of shit and once you leave Australia/New
Zealand packets gain an extra 200ms because gaming packets get shaped
leaving our country, and then they get shaped going into America.
Generally you can ping about 200~ to US Servers, but in real-time the
game data will end up getting prioritized to hell and back and you'll
have a latency of around 500.
The way Lowerping, Battleping and Smoothping etc all work are by
establishing an SSH tunnel to a proxy in America and sending the
SC2/WoW data through it. SSH traffic has much higher priority than
gaming traffic does, so instead of being delayed, the data flies
through. Afaik, it also doesn't get shaped as incoming traffic from
the Blizzard serverside, because they're originating from a proxy
inside of the US.
Feel free to correct me, I might be wrong on some things, but that's
what I've picked up from using the very first tunneling service
(Lowerping) since it came out
To my knowledge, proxy servers do not speed up your connection. They usually send your data through a longer path, and the receiver sees the packets as originating from the proxy server.
The computer that sends data cannot determine the path the data takes; it can only determine the endpoints. When connected to a proxy, the endpoint of the first leg is the proxy, and the proxy then retransmits the packets to the final destination. A proxy is simply a server configured for this retransmission.
Think of the internet as a spider web in which each joint is a router. Routers maintain a table called the 'routing table', which says where to forward packets according to their destination, and this table is updated automatically so that packets take the shortest known path.
So if we do not interfere, the data packets already travel along the shortest path.
Now for the exception:
If the proxy service provider has a separate private network connection from the proxy location to the game server location (something like a private highway with no traffic), the packets can be delivered more quickly. But this is highly unlikely, because nobody is going to lay their own private cables around the world instead of using the existing internet backbones.
Let's say
A - Origin
B - Destination
C - Proxy
Then,
Normal way packets go
A -> B (Quickest path determined by the routers)
With Proxy packets go,
A -> C -> B (Usually a longer path)
With a proxy and a high-speed private network C-D (a highly unlikely scenario; hardly anyone has such links)
A -> C --(less traffic)--> D -> B (Can have a speed gain)
Some other ways of increasing the speed of the connection:
You can use UDP instead of TCP. TCP's reliability features (acknowledgements, retransmission, in-order delivery) are enforced by the endpoints and add delay. With UDP this checking is minimal, so the transmitted data might occasionally be lost or arrive with errors, but it gets there with less delay (a tiny sketch follows below).
Standard protocols such as HTTP carry many fields besides the actual payload: checksums, browser information, OS information, etc. If you design a leaner protocol without this overhead, there is less data to transmit, which also speeds up the communication.
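As a toy illustration of the UDP point above (purely a sketch; the address, port, and the idea of sending each datagram twice as crude redundancy are assumptions, not something claimed in the answer): UDP has no handshake and no retransmissions, so the sender simply fires datagrams and accepts that some may be lost.

```go
package main

import (
	"log"
	"net"
)

func main() {
	// Dial performs no handshake for UDP; it only records the destination.
	conn, err := net.Dial("udp", "203.0.113.10:9000") // assumed game-server address
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	payload := []byte("player position update")

	// Send each datagram twice as crude redundancy against packet loss;
	// there are no ACKs and no retransmissions, so latency stays low.
	for i := 0; i < 2; i++ {
		if _, err := conn.Write(payload); err != nil {
			log.Fatal(err)
		}
	}
}
```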
This is probably not the best forum for such a specialized question, but at the moment I don't know of a better one (open to suggestions/recommendations).
I work on a video product which, for the last 10+ years, has been using a proprietary communications protocol (DCOM-based) to send the video across the network. A while ago we recognized the need to standardize, and we are currently almost at the point of ripping out all that DCOM baggage and replacing it with a fully compliant RTP/RTSP client/server framework.
One thing we noticed during testing over the last few months is that when we switch the client to use RTP/RTSP, there's a noticeable increase in start-up latency. The problem is that it's not us but RTSP.
BEFORE (DCOM): we would send one DCOM command and before that command even returned back to the client, the server would already be sending video. -- total latency 1 RTT
NOW (RTSP): This is the sequence of commands, each one being a separate network request: DESCRIBE, SETUP, SETUP, PLAY (assuming the session has audio and video) -- total of 4 RTTs.
Works as designed - unfortunately it feels like a step backwards because prior user experience was actually better.
Can this be improved? If you stay with the standard, the short answer is NO. However, my team fully controls our entire RTP/RTSP stack, and I've been thinking we could introduce a new RTSP command (without touching any of the existing commands, so we stay fully interoperable) as a solution: DESCRIBE_SETUP_PLAY.
We could send this one command, passing in the types of streams we are interested in (typically there is one video stream and 0..1 audio streams). The response would include the full SDP text as well as all the port information, and just like before, the server would start streaming instantly without waiting for anything else from the client.
Would this work? any downside that I may not be seeing? I'm curious why this wasn't considered (or was dropped) from official spec, since latency even in local intranet is definitely noticeable.
FYI, it is possible according to the RTSP 1.0 specification:
9.1 Pipelining
A client that supports persistent connections or connectionless mode
MAY "pipeline" its requests (i.e., send multiple requests without
waiting for each response). A server MUST send its responses to those
requests in the same order that the requests were received.
The RTSP 2.0 draft also contains support for pipelining.
However none of the clients/servers I've used implement it AFAIK.
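To make the pipelining idea concrete, here is an illustrative sketch in Go (the server address, stream URL, and transport parameters are assumptions for the example): the client writes DESCRIBE and SETUP back-to-back on one TCP connection without waiting for the first response, then reads both responses in order, saving a round trip. PLAY still needs the Session header returned by SETUP, so fully pipelined start-up relies on the mechanisms added in RTSP 2.0.

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"log"
	"net"
)

func main() {
	// Assumed server address and stream URL; purely illustrative.
	conn, err := net.Dial("tcp", "camera.example.com:554")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	url := "rtsp://camera.example.com/stream1"

	// Pipelining: write DESCRIBE and SETUP back-to-back without waiting
	// for the DESCRIBE response, saving one round trip.
	fmt.Fprintf(conn, "DESCRIBE %s RTSP/1.0\r\nCSeq: 1\r\nAccept: application/sdp\r\n\r\n", url)
	fmt.Fprintf(conn, "SETUP %s/track1 RTSP/1.0\r\nCSeq: 2\r\nTransport: RTP/AVP;unicast;client_port=8000-8001\r\n\r\n", url)

	// The server MUST answer in order, so read both responses sequentially.
	r := bufio.NewReader(conn)
	for i := 0; i < 2; i++ {
		readResponse(r)
	}

	// PLAY still needs the Session header returned by SETUP, so in plain
	// RTSP 1.0 it cannot easily be pipelined with SETUP.
	// fmt.Fprintf(conn, "PLAY %s RTSP/1.0\r\nCSeq: 3\r\nSession: <from SETUP>\r\n\r\n", url)
}

// readResponse consumes one RTSP response: headers, then a body of
// Content-Length bytes if one is announced (enough for this sketch).
func readResponse(r *bufio.Reader) {
	contentLen := 0
	for {
		line, err := r.ReadString('\n')
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(line)
		if line == "\r\n" || line == "\n" {
			break
		}
		fmt.Sscanf(line, "Content-Length: %d", &contentLen)
	}
	if contentLen > 0 {
		body := make([]byte, contentLen)
		if _, err := io.ReadFull(r, body); err != nil {
			log.Fatal(err)
		}
		fmt.Print(string(body))
	}
}
```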
On this post, I read about the usage of XMPP. Is this sort of thing necessary? And, more importantly, my main question expanded: can a chat server and client be built efficiently using only standard HTTP and browser technologies (such as PHP and JS, or RoR and JS, etc.)? Or is it best to stick with established protocols like XMPP and find a way to integrate them with my application?
I looked into CampFire via LiveHTTPHeaders and Firebug for about 5 minutes, and it appears to use Ajax to send a request which is never answered until another chat happens. Is this just CampFire opening a new thread on the server to listen for an update and then returning a response to the request when the thread hears an update? I noticed that they're requesting on a specific port (8043 if memory serves me) which makes me think that they're doing something more complex than just what I mentioned. Also, the URL requested started with /tcp/ which I found interesting.
Note: I don't expect to ever have more than 150 users live-chatting in all the rooms combined at the same time. I understand that if I was building a hosted pay for chat service like CampFire with thousands of concurrent users, it would behoove me to invest time in researching special technologies vs trying to reinvent the wheel in a simple way in my app.
Also, if you're going to do it with server polling, how often would you personally poll to maximize response without slamming the server?
The technology is broadly called Comet, which is supposedly some hilarious pun on Ajax.[1]
The XMLHttpRequest (XHR) variant seems to be the most popular.
The XHR version isn't strictly polling per se; as you said, the client connects with a long timeout and the server doesn't actually send a response until there is anything to send. Once the response is sent, it drops the connection and the client reconnects. They call it long polling, because the client is initiating the connection, but it differs from classic polling in that the client doesn't constantly connect requesting new content even if nothing has changed (i.e. no "is there a message now? no? how about now? what about now?")
It's more like trying to keep a constantly dropping connection open.
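For illustration, here is a tiny long-poll client loop in Go (the /messages URL, port, and plain-text body are assumptions made for the sketch): it issues a request, waits as long as the server holds the connection open, handles whatever comes back, and immediately reconnects.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Allow the server to hold the request open for up to a minute.
	client := &http.Client{Timeout: 60 * time.Second}

	for {
		resp, err := client.Get("http://chat.example.com/messages") // assumed endpoint
		if err != nil {
			// Network error or timeout: back off briefly, then reconnect.
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()

		if resp.StatusCode == http.StatusOK && len(body) > 0 {
			fmt.Printf("new chat message: %s\n", body)
		}
		// No sleep on success: reconnect immediately so the server can
		// park the next request until something new happens.
	}
}
```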
Yes it can absolutely be built using standard web technologies.
[1] I prefer to think of Ajax as a mighty Greek warrior rather than a cleaning product, so I frown mightily upon this pun.
That depends first on your web server load-balancing strategy. 150 concurrent users publishing data over a stateless medium (HTTP) can certainly be handled efficiently with a bit of scripting (client- and server-side). Remember that chat applications are just a many-clients-to-one-server pattern, which fits the web perfectly.
I'm looking to establish some kind of socket/COMET type functionality from my server(s) to my iPhone application. Essentially, any time a user manages to mark an arbitrary object 'dirty' on the server, say by updating their address, the change should be pushed from the server to any clients keeping a live poll to the server. The buzzword for this is COMET, I suppose. I know there is DWR out there for web browser applications, so I'm thinking maybe it's best to put a hidden UIWebView in each of my controllers just so I can get out-of-the-box COMET from their JavaScript framework. Is there a more elegant approach?
There are a couple of solutions available to use a STOMP client.
STOMP is incredibly simple and lightweight, perfect for the iPhone.
I used this one as my starting point, and found it very good. It has a few object allocation/memory leak problems, but once I got the hang of iPhone programming, these were easy to iron out.
Hope that helps!
Can you use an ordinary TCP/IP socket in your application?
A) If yes, then a raw TCP/IP socket is definitely the more elegant solution. From your iPhone app you just wait for notification events. The socket stays open as long as your application is open. If you want, you can even use the HTTP protocol/headers.
On the server side you can use a framework for writing servers that efficiently handle thousands of open TCP/IP connections, e.g. Twisted, EventMachine, or libevent. Then just bind the server's main socket to the HTTP port (80).
The idea is a server that keeps just a single small data structure per client: it receives an update event from some DB application and then pushes it to the right client.
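A minimal sketch of that push-server idea in Go (the line-based identification protocol, port, and event source are made up purely for illustration): one registry maps a client ID to its open connection, and each update event is written straight to that client's socket.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"strings"
	"sync"
)

// Event is a toy stand-in for an update coming from a DB or application layer.
type Event struct {
	ClientID string
	Payload  string
}

var (
	mu      sync.Mutex
	clients = map[string]net.Conn{} // one small entry per connected client
)

func main() {
	events := make(chan Event, 64) // in a real system, fed by the DB/app layer

	// Push loop: deliver each event straight to the matching client's socket.
	go func() {
		for ev := range events {
			mu.Lock()
			conn, ok := clients[ev.ClientID]
			mu.Unlock()
			if ok {
				fmt.Fprintf(conn, "%s\n", ev.Payload)
			}
		}
	}()

	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(c net.Conn) {
			// The first line from the client identifies it (assumed protocol).
			id, err := bufio.NewReader(c).ReadString('\n')
			if err != nil {
				c.Close()
				return
			}
			mu.Lock()
			clients[strings.TrimSpace(id)] = c
			mu.Unlock()
			// The connection now stays open, waiting for pushed events.
		}(conn)
	}
}
```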
B) If no, you have to use Apache and an HTTP client on the iPhone side. In that case you should know that the whole COMET approach is in fact a workaround for limitations of the HTTP protocol and of Apache/PHP.
Apache was designed to handle many short-lived connections. As far as I know, only newer versions of Apache (the worker MPM) can efficiently handle a large number of open connections; previously Apache kept one process per connection.
Web browsers limit the number of concurrent open connections to one web server (per hostname, e.g. www.foo.com, not per IP address); traditionally that limit was 2 connections. Additionally, a browser will only allow AJAX connections to the same server from which the main HTML page was downloaded.
I wrote a web server for doing exactly this kind of thing. I'm pushing realtime updates through the server with long polling and, as an example, I had safari on the iPhone displaying that data.
A given instance of the server should be able to handle a few thousand concurrent clients without trying too hard. I've got a plan to put them in a hierarchy to allow for more horizontal scaling (should be quite trivial, but doesn't affect my current application).
WebSync has a JavaScript client that works on the iPhone, if that's what you're after.
Would long-polling work for what you want to achieve? You can implement the client-side in a few lines of regular Javascript, which will be lighter than any framework could possibly be.
It would also be trivial to implement it in ObjC (connect, wait for a response or timeout, repeat)
The answers to my question "Simple 'Long Polling' example code?" hopefully explain how extremely simple long polling is.
Basically you would just request a URL as usual - the web-server would accept the connection, but not send any data until it's available. When you receive data, or the connection times-out, you reconnect (and repeat)
The most complicated bit would be the server side, as you cannot use a regular threaded web server like Apache; however, this is also the case with Comet.
StreamHub Comet Server works with the iPhone out of the box, no plugins or anything required. Just browsed to their website on my iPhone and all the examples worked, didn't need to install Flash or anything.
Do you want to, or have to, do the communication for your app over HTTP? If not, you can use the CFNetwork framework to use sockets (TCP/UDP) to let your app and server communicate. From what I have seen of the CFNetwork stack, it is pretty cool and makes it fairly straightforward to read from and write to streams, and it allows for synchronous and asynchronous communication. It also lets you define callbacks on your socket so you get notified of events like data received, connection made, etc. So, in your example, you could send the information over the socket to your server, and then define a callback that listens for incoming data on the stream and updates your app accordingly.
EDIT: Did a little more research: if you go the socket approach, you may also want to look at the NSStream classes. They are Cocoa abstractions built on top of the CFSocket stuff.
You didn't mention what server-side tech you're using, but in case it's Microsoft .NET (or for any other Googlers who come across this), there is a simple option for Comet: http://www.codeplex.com/ncomet.
COMET, LightStreamer, AJAX: all that junk is broken. It is basic TCP that no 'keep-alive' is ever guaranteed without ping traffic, so you can forget long polling if any decent reliability or timely delivery is to be guaranteed.
It's just hype everyone saw through back in 2003 when the mania kicked off.
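For what it's worth, the usual mitigation (a sketch under assumed names, not something the answer above proposes) is to enable OS-level TCP keep-alive and, more importantly, send small application-level heartbeats so a dead connection is detected and re-established promptly. In Go that might look like this:

```go
package main

import (
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("tcp", "push.example.com:8080") // assumed push server
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// OS-level keep-alive probes detect half-dead connections eventually...
	if tcp, ok := conn.(*net.TCPConn); ok {
		tcp.SetKeepAlive(true)
		tcp.SetKeepAlivePeriod(30 * time.Second)
	}

	// ...but an application-level heartbeat gives much tighter guarantees:
	// if a ping cannot be written, reconnect.
	ticker := time.NewTicker(15 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		if _, err := conn.Write([]byte("ping\n")); err != nil {
			log.Println("connection lost, reconnect needed:", err)
			return
		}
	}
}
```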