How can I tell when data arrives on an HTTP keep-alive connection? - sockets

I am implementing a simple web server using libuv. Currently I am stuck on the keep-alive connection.
Based on my understanding of keep-alive, I simply do not call uv_close() on the established connection (a TCP socket) after a request is processed, and reuse it later.
I am wondering how I can tell when a new request arrives on that connection.
That is, when should I call uv_read_start() on that alive connection?

When you use keep-alive, the connection will not be closed after the first request. When the client wants to send a new request, it will just reuse the same connection, so your read callback will be called again. You shouldn't even need to call uv_read_start() again.

Immediately after you have finished writing the prior response.

Related

GET REST API fails at client side

Suppose a client calls a server using a GET API; is it possible that the server sends a response but the client misses that response?
If yes, how should such a situation be handled? I want to make sure that the client receives the data. For now I am using a second REST call by the client as an ack of the first.
It is certainly possible. For example, if you send a request to a REST API and your internet connection dies just as the answer is supposed to arrive, it is quite possible that the server has received your request, successfully handled it, and even sent the response, but your computer never received it. The same thing can happen on an intermediate server responsible for transmitting the request. The usual solution is to set a timeout on the request and, if it times out, resend it until it no longer times out (which is safe for idempotent calls such as GET).
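A minimal sketch of that timeout-and-retry approach using Java's built-in HttpClient; the endpoint URL, the timeout values, and the three-attempt limit are illustrative assumptions, not details from the question:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class RetryingGet {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(5))
                    .build();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/api/resource")) // hypothetical endpoint
                    .timeout(Duration.ofSeconds(10))                     // per-request timeout
                    .GET()
                    .build();

            int maxAttempts = 3;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("Got response: " + response.statusCode());
                    break; // success, stop retrying
                } catch (java.io.IOException e) {
                    // Timeout or connection failure: the server may or may not have
                    // processed the request, so only retry calls that are safe to repeat.
                    System.out.println("Attempt " + attempt + " failed: " + e.getMessage());
                }
            }
        }
    }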

What is the difference between Async Response and Server-Sent Events in Jersey?

What is the difference between Async Response and Server-Sent Events in Jersey and when to use them?
They serve different purposes: one lets a client wait for a slow resource (long-polling), the other lets the server send a stream of data over the same TCP connection.
Here's more detail:
AsyncResponse was introduced in JAX-RS 2 in order to support long-polling requests.
The client opens a connection
The client sends the request payload
The server receives the payload, pauses/suspends the connection, and looks up the resource
Then, either:
If the timeout is reached, the server can end the connection
Or, when the resource is ready, the server resumes the connection and sends the resource payload
The connection is closed
As this is part of the JAX-RS specification, you can just use it with the default Jersey dependencies. Note that on a connection that stays open too long without transmitting data, network equipment such as firewalls can close the TCP connection.
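For illustration, here is a minimal JAX-RS 2 long-polling sketch along those lines; the resource path, the 30-second timeout, and the fetchSlowResource() helper are made up for the example:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.container.AsyncResponse;
    import javax.ws.rs.container.Suspended;
    import java.util.concurrent.TimeUnit;

    @Path("/updates")
    public class LongPollResource {

        @GET
        public void poll(@Suspended final AsyncResponse asyncResponse) {
            // Suspend the request; give up after 30 seconds if no data arrives.
            asyncResponse.setTimeout(30, TimeUnit.SECONDS);

            // Do the slow work on another thread so the container thread is released.
            new Thread(() -> {
                String result = fetchSlowResource();  // placeholder for the slow lookup
                asyncResponse.resume(result);         // resumes the connection and sends the payload
            }).start();
        }

        private String fetchSlowResource() {
            // Stand-in for a database call, message-queue poll, etc.
            return "data";
        }
    }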
Server-Sent Events is a specification that allows the server to send messages to the client over the same TCP connection.
The client uses the JavaScript EventSource API to request a resource
Then the server can, at some point in time, send a payload (a message)
Then another
And so on
The connection can be closed programmatically at any time by either the client or the server.
SSE is not part of JAX-RS, so you need to have the Jersey SSE module on your classpath (additionally, in earlier versions of Jersey 2 you had to programmatically enable SseFeature).
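As a rough sketch of what such a Jersey 2 SSE resource can look like (the path, event name, and three-message loop are illustrative assumptions):

    import org.glassfish.jersey.media.sse.EventOutput;
    import org.glassfish.jersey.media.sse.OutboundEvent;
    import org.glassfish.jersey.media.sse.SseFeature;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import java.io.IOException;

    @Path("/events")
    public class SseResource {

        @GET
        @Produces(SseFeature.SERVER_SENT_EVENTS)
        public EventOutput stream() {
            final EventOutput output = new EventOutput();
            new Thread(() -> {
                try {
                    // Push several messages over the same TCP connection.
                    for (int i = 0; i < 3; i++) {
                        output.write(new OutboundEvent.Builder()
                                .name("message")                       // event name seen by EventSource
                                .data(String.class, "payload " + i)
                                .build());
                        Thread.sleep(1000);
                    }
                } catch (IOException | InterruptedException e) {
                    // The client went away or we were interrupted.
                } finally {
                    try { output.close(); } catch (IOException ignored) { }
                }
            }).start();
            return output;
        }
    }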
Other things to consider:
SSE does not allow you to pass custom headers, so no Authorization header. It is possible to use the URL query string instead, but if you are not on HTTPS this is a security issue.
SSE does not allow you to POST data, so data might have to go in the URL query string
The connection can close due to the network (equipment failing, firewalls, a phone leaving its coverage area, etc.)
In my opinion WebSockets are more flexible than SSE, and they even allow the client to send multiple messages. But Jersey does not implement the Java EE specification that supports WebSockets (JSR 356).
But really you should read the documentation of Jersey's SSE implementation; there is additional info there, such as what polling is and what WebSockets are.
AsyncResponse is like AJAX polling with a long waiting time. The client initiates a single AJAX request to check for updates; it does not return until data is received or a timeout occurs, and then another request is triggered. This creates an unnecessary checking loop on the server side, and the load is proportional to the number of connected clients: more clients, more loops initiated, more resources needed.
Server-Sent Events is somewhat similar to long-polling on the server side; both use a loop to check for updates and trigger a response. The main difference is that long-polling keeps sending new requests (after each timeout or piece of data received), whereas SSE only needs to initiate the connection once. This makes SSE more suitable for mobile applications when you consider battery usage.
WebSocket uses a loop as well, not only to check for updates but also to listen for new connections and to upgrade connections to WS/WSS after the handshake. Unlike long-polling and SSE, where the load grows with the number of clients, a WebSocket server constantly runs its loop like a daemon. On top of that constant loop, the load still increases as more clients connect to the socket.
For example, if you are designing a web service for administrative purposes, servers based on long-polling and SSE get to rest after office hours when no one is around, whereas a WebSocket server will keep running and waiting for connections. And did I mention that without proper authentication anyone can create a client and connect to your WebSocket? Most of the time, authenticating and refusing a connection is not done during the handshake but only after the connection has been made.
And should I continue on how to implement WebSockets across multiple tabs?

How Does HTTP Response get sent back to correct Client in TCP?

I am trying to understand how an HTTP server ensures the correct response is sent back to the correct client.
At a very high level:
At the TCP layer of the server implementation, some ServerSocket (listening on the host:port that the request was addressed to) creates a 'client socket' to handle the request
(if we assume it's a threaded server) - a thread is allocated in the application and the work gets done
Questions:
A.) Does the Response have to go back through the same Socket that handled the Request?
B.) If yes, how is the Response mapped to the same socket that handled the Request?
C.) Is it the socket's responsibility to maintain the Client IP/host that the response packets need to get addressed back to, or is it the HTTP Headers which maintain this information and which is then used to address the response back to the correct client?
If the HTTP Header info is used to route the Response back to the calling client, then I assume the Response does not necessarily have to be handled by the same socket that handled the associated Request
Any help is much appreciated.
James
Sockets are bi-directional.
When the ServerSocket receives a new connection, it creates a new Socket and hands it to the thread that will handle the request. This socket is already connected and supports two-way communication. This thread will then send the response back through this socket, which will cause it to be routed back to the connected client. The worker thread does not explicitly need to know the IP/host of the other end because the socket is bi-directional. It just needs to send its response through the socket and close the connection when it is done.
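A minimal Java sketch of that flow; the port number and the echo-style handling are just placeholders for a real HTTP parser and handler:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class SimpleThreadedServer {
        public static void main(String[] args) throws Exception {
            ServerSocket server = new ServerSocket(8080);   // listening socket
            while (true) {
                // accept() returns a Socket that is already connected to one client;
                // the remote IP/port is stored inside that Socket object.
                final Socket client = server.accept();
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            BufferedReader in = new BufferedReader(
                                    new InputStreamReader(client.getInputStream()));
                            PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                            String line = in.readLine();        // read from the client
                            out.println("You sent: " + line);   // reply on the SAME socket
                            client.close();                     // done with this connection
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }).start();
            }
        }
    }

Because the response is written to the same connected Socket returned by accept(), the TCP stack routes it back to the right client; nothing from the HTTP headers is needed for that.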

How to put a breakpoint in a request sent through Fiddler?

In Fiddler, how do I terminate a request before it reaches the host? For example, I send a request and I want to put a breakpoint on that request so that I do not receive the response. Basically, I want to inspect the response before it is returned to the original caller, and see how my service behaves if the connection is lost or the request is otherwise unable to reach the server. Any answer is highly appreciated. Sorry for any flaw, I'm a newbie in using Fiddler. :)
Fiddler offers several mechanisms for interfering with requests. If your goal is simply to kill the request without returning a response, you can create a rule with an Action of *drop or *reset in Fiddler's AutoResponder.
*drop will close the client connection immediately without sending a response. The closure is graceful at the TCP/IP level, returning a FIN to the client.
*reset will close the client connection immediately without sending a response. The closure is abrupt at the TCP/IP level, returning a RST to the client.
Alternatively, you can have Fiddler return any HTTP/4xx or HTTP/5xx to the client.
Lastly, you could use a breakpoint to allow you to manually manipulate the request before it's sent to the server, and/or manipulate the server's response before it's sent to the client. Use the bpu command in the QuickExec box to break on a given URL (e.g. bpu sample.asp).

Communication client-server-client

I have a question about the Communication between a client and a server.
I would like to create a GWT application that can do the following:
Client A fires an event to the server, and the server in turn fires an event to client B.
Here client B has to be able to listen for the event all the time.
I want to send events with a small amount of data in real time to a connected client B.
Is that possible? And if yes, how can I do that?
Thanks
Here client B has to be able to listen for the event all the time.
To let the client wait for data, you can use Comet [1] (long-lived HTTP requests) or WebSockets [2] if the targeted JS runtime supports them.
[1] : http://code.google.com/p/gwt-comet/
[2] : http://code.google.com/p/gwt-ws/
Here is one example. Of course it's possible: for the communication between the client and the server you have to use RPC (Remote Procedure Call). You can send and receive data as serialized objects via RPC.
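A minimal GWT RPC sketch of the client-A-to-server call; the service name, method, and payload are hypothetical, and the server side would be a RemoteServiceServlet implementing EventService:

    import com.google.gwt.core.client.GWT;
    import com.google.gwt.user.client.rpc.AsyncCallback;
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // Shared interface, visible to both client and server code.
    @RemoteServiceRelativePath("events")
    interface EventService extends RemoteService {
        String sendEvent(String payload);   // hypothetical method
    }

    // Client-side async counterpart; GWT generates the proxy for it.
    interface EventServiceAsync {
        void sendEvent(String payload, AsyncCallback<String> callback);
    }

    // Somewhere in client A's code:
    public class ClientA {
        void fireEvent() {
            EventServiceAsync service = GWT.create(EventService.class);
            service.sendEvent("hello", new AsyncCallback<String>() {
                public void onSuccess(String result)    { /* the server received the event */ }
                public void onFailure(Throwable caught) { /* the RPC call failed */ }
            });
        }
    }

Pushing the event from the server on to client B is the part that needs Comet, WebSockets, or polling, as described in the other answer.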
Just store the result of client A's request in a database, and write client-side code that requests the content from the server; the server reads it from the DB, processes it, and gives the result back to the client (in your case, client B).