How to emulate a Timeout property?

I have a call to a remote Datasnap REST method where the server sometimes doesn't respond and I need to detect it and terminate the application instead of leaving it frozen.
I use a DSRestConnection and a DataSnap REST client module, where all the remote methods are defined, attached to the DSRestConnection. I can't find a Timeout property on either of them. AFAIK I can only set timeouts on the DataSnap server, but sometimes the client loses its connection to the server, so I also need to raise a timeout on the client.
How can I emulate a Timeout when I don't have one? Is there a class that helps with this, or do I need to code it from the ground up? In that case, is the way to go to make the remote call on a secondary thread, so the main thread stays responsive, and use something like a Timer to check whether the call has succeeded in time?
I would appreciate any suggestion. Thank you.

The TDSRestConnection should give you access to the underlying TDSHTTP client via its HTTP property. The TDSHTTP client exposes a ConnectTimeout and a ReadTimeout property. Perhaps the latter one is what you are looking for.
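A minimal sketch, assuming the HTTP property is reachable as described above (the procedure name and the millisecond values are illustrative):

    procedure ConfigureClientTimeouts(DSRestConn: TDSRestConnection);
    begin
      // How long to wait for the TCP connection to be established.
      DSRestConn.HTTP.ConnectTimeout := 5000;   // ms, illustrative value
      // How long to wait for the server to answer once connected; this is
      // the one that fires when the server stops responding mid-call.
      DSRestConn.HTTP.ReadTimeout := 10000;     // ms, illustrative value
    end;

With ReadTimeout set, a hung remote call raises an exception on the client instead of blocking forever, which you can catch and use to shut the application down cleanly.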

Related

Reusing connections with reqwest

I need to issue a long sequence of REST calls to the same server (let's call it myapi.com). At the moment, I am using the Rust library reqwest as follows:
I create a reqwest::Client with all default settings.
For each REST call:
I use client.post("https://myapi.com/path/to/the/api") to create a reqwest::RequestBuilder.
I configure the RequestBuilder to obtain a reqwest::Request.
I send() the Request and read the reqwest::Response.
I drop everything except the Client, start again.
I read in the docs that reqwest is supposed to pool connections within the same Client. Given that I always reuse the same Client, I would expect the first API call to take somewhat longer (owing to the initial TCP and TLS handshakes), but I consistently observe quite high latency across all requests. So I am wondering if connections are reused at all, or re-established every time. If they are not, how do I get it to recycle the same connection? I feel that latency would be drastically reduced if I could save myself some round trips.
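In code, the pattern described above looks roughly like this (a minimal sketch using reqwest's blocking API, which requires the `blocking` feature; the loop count and request bodies are placeholders):

    use reqwest::blocking::Client;

    fn main() -> Result<(), reqwest::Error> {
        // One Client for the whole sequence, so reqwest's internal pool can
        // keep the TCP/TLS connection to myapi.com alive between calls.
        let client = Client::new();

        for i in 0..10 {
            let response = client
                .post("https://myapi.com/path/to/the/api")
                .body(format!("request {}", i))
                .send()?;
            println!("status: {}", response.status());
            // Everything except `client` is dropped here; repeat.
        }
        Ok(())
    }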

golang grpc socket tuning

I have a golang client application talking to a server via gRPC. I noticed that while the application is running, the number of sockets accumulated on the client keeps climbing to around 9000, at which point I pause the client. However, after there is no more traffic between the client and the server, the number of sockets stays at that level, even after 8 hours.
Is there any way we can tune gRPC for socket usage, such as closing sockets after a timeout? Is using streaming another way to limit the number of sockets being opened?
Thanks for any help.
I'd start by making sure that your client application cleans up unused connections (grpc.ClientConn) by calling the Close() method on them.
Also, since I don't know exactly what your application does, I'll go ahead and suggest reusing connections for multiple RPCs (you're probably already doing this).
And to answer your question about setting a timeout deadline on connections:
1. You shouldn't have to do this. Feel free to open an issue on https://github.com/grpc/grpc-go about whatever gRPC shortcoming is forcing you to take this route.
2. But if you must know, you can use a custom dialer (https://github.com/grpc/grpc-go/blob/13975c070286c7371aa3a8b3c230e90d7bf029fc/clientconn.go#L333) and set a deadline on the net.Conn that you return from it.
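A minimal sketch of point 2, using the grpc.WithContextDialer option from current grpc-go (the linked code shows an older dialer API); the address, deadline, and insecure transport are illustrative:

    package main

    import (
        "context"
        "log"
        "net"
        "time"

        "google.golang.org/grpc"
    )

    func main() {
        conn, err := grpc.Dial(
            "myserver:50051",
            grpc.WithInsecure(), // illustrative only; use real credentials
            grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
                c, err := net.Dial("tcp", addr)
                if err != nil {
                    return nil, err
                }
                // Any I/O after the deadline fails, so the connection gets
                // torn down instead of lingering forever.
                if err := c.SetDeadline(time.Now().Add(30 * time.Minute)); err != nil {
                    c.Close()
                    return nil, err
                }
                return c, nil
            }),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close() // reuse this one ClientConn for all RPCs, then close it
    }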
Best,
Mak

send data from server to an OSX app

I am building an OSX app that needs to get data from a server. The easy way is to make a GET request at some fixed time interval and process the results. That's not what I want. I want it the other way around: the server should send data to my app when something happens on the server side. That way I don't need to make constant requests from the client side. I don't need the data to be displayed visually, just processed.
Can this be implemented in OSX with Swift?
You have two ways to achieve this:
WebSocket:
WebSocket is a full-duplex communication channel over a single TCP connection. It is established via an HTTP upgrade handshake.
Long Polling:
Same as you described, but the server does not respond immediately: your client makes an HTTP request with a very long timeout, and the server responds only once something has happened.
I would recommend WebSocket, since it was built for exactly this use case. But if you have to implement something quickly, you could go with long polling for now, since the barrier to entry is much lower, and switch to WebSocket later.
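And yes, this can be done in Swift. A minimal sketch of the WebSocket route using URLSessionWebSocketTask (available from macOS 10.15; the URL is a placeholder):

    import Foundation

    // Open a WebSocket to the server; it can now push data whenever it wants.
    let task = URLSession.shared.webSocketTask(with: URL(string: "wss://example.com/updates")!)

    func listen() {
        task.receive { result in
            switch result {
            case .success(.string(let text)):
                print("server pushed: \(text)")   // process the data here
                listen()                          // wait for the next push
            case .success(.data(let bytes)):
                print("server pushed \(bytes.count) bytes")
                listen()
            case .failure(let error):
                print("socket closed: \(error)")  // reconnect logic would go here
            }
        }
    }

    task.resume()
    listen()
    RunLoop.main.run()  // keep this command-line demo alive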

Hello, can anyone show me an example (code) of a remote callback in RMI? Is there standard code for this process?

It is just for some revisions I'm doing on RMI. I've been researching and am finding it difficult to find an example of "a remote callback".
OK, I shall add to this: is the following a good way to describe an example of a callback?
"You have a Server and a Client. Server calls method from Client,
Client has already looked up Server and passed reference to itself."
How's that? Is it better?
Thanks,
Caroline
is this a good way to describe an example of a callback ?
"You have a Server and a Client. Server calls method from Client, Client has already looked up Server and passed reference".
Yes. You've omitted that the client object must be an exported remote object, typically by extending UnicastRemoteObject, and must implement a remote interface. Just like the server.
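A minimal sketch of that description (the interface and class names here are illustrative, not standard RMI classes; only UnicastRemoteObject and the registry come from java.rmi):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.server.UnicastRemoteObject;

    // Remote interface the client exports so the server can call back into it.
    interface ClientCallback extends Remote {
        void notifyEvent(String event) throws RemoteException;
    }

    // Remote interface the server exports for clients to register with.
    interface CallbackServer extends Remote {
        void register(ClientCallback client) throws RemoteException;
    }

    // Client side: an exported remote object, just like the server.
    class CallbackClient extends UnicastRemoteObject implements ClientCallback {
        protected CallbackClient() throws RemoteException { super(); }

        // The server invokes this method remotely -- the "callback".
        public void notifyEvent(String event) throws RemoteException {
            System.out.println("Server called back: " + event);
        }

        public static void main(String[] args) throws Exception {
            // Look up the server and pass it a reference to ourselves.
            CallbackServer server = (CallbackServer)
                    LocateRegistry.getRegistry("localhost", 1099).lookup("CallbackServer");
            server.register(new CallbackClient());
        }
    }

On the server side, register() would simply store the ClientCallback stub and invoke notifyEvent(...) on it whenever the event of interest occurs.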

How to handle timeouts in a REST client when calling methods with side effects

Let's say we have a REST client with some UI that lists items it GETs from the server. The server also exposes some REST methods to manipulate the items (POST / PUT).
Now the user triggers one of those calls that are supposed to change the data on the server side. The UI will reflect the server state change, if the call was successful.
But what are good strategies to handle the situation when the server is not available?
What is a reasonable timeout length (especially in a 3G / cloud setup)?
How do you handle the timeout in the client, considering the fact that the client can't tell whether the operation succeeded or not?
Are there any common patterns to solve that, other than a complete client termination (and subsequent restart)?
This will be application specific. You need to decide what makes the most sense in your usage case.
Perhaps start with a timeout similar to that of the default PHP session, 24 minutes, and adjust as necessary based on testing.
Do you have server and client mixed up here? If so, the server cannot tell whether the client has timed out other than by reaching the end of a session. The client can always query the server for a progress update.
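One way to act on that last point, as a minimal sketch in TypeScript (the endpoints and the 10-second timeout are illustrative): abort the POST after a deadline, then GET the resource to learn whether the side effect actually took place.

    // Hypothetical endpoints; the pattern is: time out, then reconcile.
    async function createItemWithTimeout(item: unknown): Promise<void> {
      const controller = new AbortController();
      const timer = setTimeout(() => controller.abort(), 10_000); // client-side timeout

      try {
        await fetch("https://example.com/items", {
          method: "POST",
          body: JSON.stringify(item),
          signal: controller.signal,
        });
      } catch {
        // Timed out (or the network failed): we cannot know whether the POST
        // took effect, so query the server's state instead of guessing.
        const res = await fetch("https://example.com/items");
        console.log("reconciled server state:", await res.json());
      } finally {
        clearTimeout(timer);
      }
    }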
This one is a little general to provide an answer for.