I am seeing lots of these requests
POST /zkau?dtid=
Is there any way to reduce them? I guess these are the requests that keep the data in sync between the browser and the server.
I am seeing these requests repeated roughly every second for each session.
Background
We are writing a Messenger-like app. We have set up WebSockets for Inbox and Chat.
Question
My question is simple: what are the advantages and disadvantages of sending data from client to server using REST instead of WebSockets? (I am not interested in updates right now.)
We know that REST has higher overhead in terms of message sizes and that a WebSocket is duplex (and thus open all the time). What other things have we not kept in mind?
Here's a summary of the tradeoffs I'm aware of.
Reasons to use webSocket:
You need/want server-push of data.
You are sending lots of small pieces of data from client to server and doing it very regularly. Using webSocket has significantly less overhead per transmission.
Reasons to use REST:
You want to use server-side frameworks or modules that are built for REST, not for webSocket (such as auth, rate limiting, security, streaming, etc...).
You aren't sending data very often from client to server, so the server-side burden of keeping a webSocket connection open all the time may reduce your server's scalability.
You want your client to run in places where a long-connected webSocket during inactive periods of time may not be practical (perhaps mobile).
You want your client to run in old browsers that don't support webSocket.
You want the browser to enforce same-origin restrictions (those are enforced for REST Ajax calls, but not for webSocket connections).
You don't want to have to write code that detects when the webSocket connection has died, auto-reconnects with back-off, deals with mobile battery-usage issues, etc...
You need to run in situations where there are proxies or other network infrastructure that may not support long running webSocket connections.
If you want request/response built in. REST is request/response; webSocket is not - it's message based. A response from a webSocket is just another message sent back; by itself, it is not tied to any specific request. If you want request/response over webSocket, you have to build some infrastructure yourself: tag an id onto each request, and have the response for that particular request carry the same id. Otherwise, if there are ever multiple requests in flight at the same time, you can't tell which response belongs to which request, because all the data travels over the same connection and you would have no way of matching response with request.
If you want other clients to be able to carry out this operation via an Ajax call.
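The id-tagging scheme from the request/response point above can be sketched in a few lines. This is a minimal illustration, not any particular library's API: `RpcChannel` and the message shape are made-up names, and `sendFn` stands in for whatever actually transmits (e.g. `ws.send`).

```javascript
// Minimal request/response layer over a message-based transport.
class RpcChannel {
  constructor(sendFn) {
    this.sendFn = sendFn;
    this.pending = new Map(); // id -> { resolve, reject }
    this.nextId = 1;
  }

  // Tag each outgoing request with a unique id and remember its promise.
  request(payload) {
    const id = this.nextId++;
    const promise = new Promise((resolve, reject) => {
      this.pending.set(id, { resolve, reject });
    });
    this.sendFn(JSON.stringify({ id, payload }));
    return promise;
  }

  // Feed every incoming message through here; match replies back by id.
  handleMessage(raw) {
    const msg = JSON.parse(raw);
    const entry = this.pending.get(msg.id);
    if (!entry) return; // unsolicited server push, not a reply to anything
    this.pending.delete(msg.id);
    entry.resolve(msg.payload);
  }
}
```

In real code you would also reject pending entries on timeout or when the connection dies, which is exactly the extra bookkeeping REST gives you for free.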
So, if you already have a webSocket implementation, don't have any problems with it that would be lessened by REST, and aren't interested in any of the reasons that REST might be better, then stick with your webSocket implementation.
Related references:
websocket vs rest API for real time data?
Ajax vs Socket.io
Adding comments per your request:
It sounds like you're expecting someone to tell you the "right" way to do it. There are reasons to pick one way over the other. If none of those reasons compels you one way or the other, then it's just an architectural choice: you must take in the whole context of what you are doing and decide which choice makes more sense to you. If you already have a reliably established webSocket connection and none of the advantages of REST apply to your situation, then you can optimize for "efficiency" and send your data to the server over the webSocket connection.
On the other hand, if you wanted a simple API on your server that could be reached with an Ajax call from other clients, then you'd want your server to support this operation via REST so it would be simplest for those other clients to carry out this one operation. So it all depends on which direction your requirements drive you, and if there is no particular driving reason to go one way or the other, you just make the architectural choice yourself.
I have used the Sails framework for my web application. Now we are calling services from a mobile app.
I found that Sails executes only 4 requests at a time. Is there any way I can increase that?
Are you sure about that?
I've tested my sails app multiple times with: https://github.com/alexfernandez/loadtest
For example: loadtest http://localhost:1337 --rps 150 -c 20 -k
You can set it to any route of your app, and send GET/POST/PUT/DEL requests. The -c option means concurrent, so you can play with that number and check your app limits, but 5 concurrent is too low.
Most likely your browser or HTTP client is limiting the number of requests per server. Refer to https://stackoverflow.com/a/985704/401025 or look up the maximum number of requests in the manual of your HTTP client.
There are no TCP connection limits imposed by Sails or by nodejs. As a matter of fact, nodejs is known to be able to handle thousands of simultaneous connections.
Without knowing your server/nodejs setup it is hard to say why you are seeing a limit of 4 connections.
Depending on which nodejs version you use, you might check the http module of nodejs and the following attributes:
https://nodejs.org/api/net.html#net_server_maxconnections
Set this property to reject connections when the server's connection count gets high.
https://nodejs.org/api/http.html#http_agent_maxsockets
By default set to Infinity. Determines how many concurrent sockets the
agent can have open per origin. Origin is either a 'host:port' or
'host:port:localAddress' combination.
I am using Lift 3 round trips and I am trying to understand what happens behind the scenes.
Why are there 2 kinds of requests:
GET on comet_request
POST on ajax_request
Lift uses HTTP long polling for asynchronous responses to the browser. I won't go into great detail on why the Lift developers have chosen long polling over other implementations, like WebSockets, but there are well-thought-out reasons, and if you're interested just do a quick search through the Lift mailing list where it's been discussed many times.
The gist of how it works is that the browser makes a request to the server, and the server holds the request open until there is information to send. When information becomes available, it gets pushed down the pipe, the browser processes it, and the browser initiates a new long poll request. Lift uses the servlet container's asynchronous support to hold the connection open with very little resource consumption, and because Javascript is asynchronous by nature, waiting on new information is not resource intensive for the browser either. Since there is a limit on the number of requests a browser can make to the same domain at once, Lift only opens one of these long poll connections at a time and multiplexes responses from what could be many different "responders" through it.
Initially Lift's asynchronous support was added so that data generated by server side events could be pushed to the client as they occurred. With the growth in popularity of client side frameworks, the ability to push asynchronous data initiated by client events became useful, hence the addition of round trips. The idea is that the client makes a request to the server, and rather than respond immediately, the server does some stuff in another thread then sends a response (potentially much) later. To users of the client side API, this is modeled as a promise, but behind the scenes what happens is that Lift receives the request and responds immediately (remember, we can't have too many requests open to the same domain) but will stream the actual data that satisfies the promise through the long polling connection when it becomes available.
So, that's what you're seeing. Your initial request is the ajax POST, which triggers the beginning of a round trip. If you were to look at the data returned by that request, you'd see that it's not the data that satisfies the promise. The actual response data is delivered via Lift's long polling mechanism, and that is what you see with the GET request.
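The long-polling cycle described above, independent of Lift's actual wire protocol, can be sketched like this. The endpoint name and back-off delay are illustrative assumptions, not Lift internals.

```javascript
// One long-poll cycle: the request stays open until the server has data
// (or the server/proxy times it out). fetchFn is injectable for testing.
async function pollOnce(url, fetchFn = fetch) {
  const res = await fetchFn(url);
  if (!res.ok) throw new Error('poll failed: ' + res.status);
  return res.json();
}

// Run cycles back to back: re-open the connection immediately after each
// response arrives, and back off briefly on errors before retrying.
async function longPoll(url, onMessage, fetchFn = fetch) {
  for (;;) {
    try {
      onMessage(await pollOnce(url, fetchFn));
    } catch (e) {
      await new Promise(r => setTimeout(r, 1000));
    }
  }
}

// Usage (hypothetical endpoint): longPoll('/comet_request', msg => render(msg));
```

Because only one such connection is open per tab, responses for many server-side "responders" get multiplexed through it, which is exactly what Lift does with its single comet GET.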
I am consuming a rate-limited web service, which only allows me to make 5 calls per second. I am using my server to proxy these calls to a web client:
Mashery > My Web Server > Client's Browser
I have optimized the usage of this web service, but there are still occasional times when I go over the rate limit. What I would like to do instead is hold the client's request for one second (or longer if warranted) before making the web service call to Mashery.
There are some ways I can solve this by building a simple queue system with a database back-end, but I'd rather avoid that if something already exists. Does something already exist to rate-limit the consuming side of this?
To reliably throttle requests to only 5 per second, I would employ a queue/worker system. But before that I would just request a higher rate limit by contacting API support for that platform. Of course, I would also consider using caching if it's a read-only method and you're pulling down a lot of the same data and it's compliant with the API TOS.
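If you do end up building it yourself, an in-memory queue that dispatches one call per fixed interval is only a few lines. A sketch under assumptions: `Throttle` is a made-up name, and an in-process queue only works if a single Node process makes all the upstream calls; with multiple workers you'd need a shared store (e.g. Redis) instead.

```javascript
// Queue incoming tasks and run at most one every `intervalMs` milliseconds.
// 200 ms per slot ≈ 5 calls per second.
class Throttle {
  constructor(intervalMs) {
    this.intervalMs = intervalMs;
    this.queue = [];
    this.timer = null;
  }

  // Wrap an async task; its promise settles when its turn comes up.
  schedule(task) {
    return new Promise((resolve, reject) => {
      this.queue.push(() => task().then(resolve, reject));
      if (!this.timer) this.drain(); // kick off dispatching if idle
    });
  }

  drain() {
    const next = this.queue.shift();
    if (!next) { this.timer = null; return; } // queue empty: go idle
    next();
    this.timer = setTimeout(() => this.drain(), this.intervalMs);
  }
}

// Hypothetical usage in the proxy route:
// const throttle = new Throttle(200);
// app.get('/proxy', (req, res) =>
//   throttle.schedule(() => callMashery(req.query)).then(r => res.json(r)));
```

The trade-off is latency for the client during bursts, which matches the question's "hold the client's request for one second (or longer if warranted)".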
This is specifically about GWT's RequestBuilder, but it should apply to general XHR as well. My company is having me build a near-realtime chat application over HTTP. Yes, I do realize there are better ways to do chat applications, but this is what they want. Eventually we want it working on the iPad/iPhone as well, so Flash is out, which rules out WebSockets and Comet as well, I think?
Anyway, I'm running into issues where I've set GWT's RequestBuilder timeout to 10 seconds and we get very random and sporadic timeouts. We've got error handling and emailing on the server side and never get any errors, which suggests the underlying XHR request that RequestBuilder is built on never reaches the server and times out after 10 seconds.
We're using these requests to poll the server for new messages rather often, for sending new messages to the server, and also for polling (less frequently) other parts of the application. What I'm afraid of is that we're running into the browser's limit on concurrent connections to the same domain (2 for IE by default?).
Now my question is: if I construct a RequestBuilder and call its send() method, and the browser blocks it from sending until one of the 2 connections per domain is free, does the timeout still start while the request is being blocked, or does it not start until the browser actually releases the underlying XHR?
I hope that's clear, if not please let me know and I'll try to explain more.
On the GWT Incubator doc page is an article explaining server push.
With that technique you hold only one connection open at all times.
Browsers used to allow only 2 connections per hostname; that has now changed. 'Modern' browsers allow up to 6 simultaneous connections - it varies between browsers. See http://www.browserscope.org/ - network tab.
As regards the timer, it starts before GWT invokes xhr.send(), so your suspicion is right. See Request.java and RequestBuilder.java if you want to trace it out.
Seems like half the time, you answer your own question as soon as you post it.
Via: http://google-web-toolkit.googlecode.com/svn/javadoc/1.6/com/google/gwt/http/client/package-summary.html
Pending Request Limit
- Modern web browsers are limited to having only two HTTP requests outstanding at any one time. If your server experiences an error that prevents it from sending a response, it can tie up your outstanding requests. If you are concerned about this, you can always set timeouts for the request via RequestBuilder.setTimeoutMillis(int).