gRPC Service Type - service

Do I need a streaming RPC if the server needs to send back more than 1 object to the client with each call? In particular, the client will call the server with an ID and the server will respond with a list of objects matching that ID and dump that list to a file. Is streaming required for this purpose? Thanks.

If all the objects are available at the same time, or if you are OK with holding the response until they all are, you can just respond with a single protobuf message that aggregates all the objects you want to return. Otherwise, you need (server) streaming.
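For illustration, here is a minimal sketch of both options on the server side in gRPC Java. The proto and the generated classes (LookupGrpc, IdRequest, ObjectList, MatchedObject) are assumptions made up for this sketch, not part of the original question.

// Sketch only: assumes a hypothetical .proto such as
//
//   service Lookup {
//     // Unary: all matching objects returned in one aggregate message.
//     rpc GetObjects(IdRequest) returns (ObjectList);
//     // Server streaming: objects sent one by one as they become available.
//     rpc StreamObjects(IdRequest) returns (stream MatchedObject);
//   }
//
// The generated classes used below are assumptions derived from that proto.
import io.grpc.stub.StreamObserver;

public class LookupService extends LookupGrpc.LookupImplBase {

    @Override
    public void getObjects(IdRequest request, StreamObserver<ObjectList> responseObserver) {
        // Unary variant: aggregate everything into one response message.
        ObjectList.Builder list = ObjectList.newBuilder();
        for (MatchedObject obj : findById(request.getId())) {
            list.addObjects(obj);
        }
        responseObserver.onNext(list.build());
        responseObserver.onCompleted();
    }

    @Override
    public void streamObjects(IdRequest request, StreamObserver<MatchedObject> responseObserver) {
        // Streaming variant: emit each object as soon as it is ready.
        for (MatchedObject obj : findById(request.getId())) {
            responseObserver.onNext(obj);
        }
        responseObserver.onCompleted();
    }

    private Iterable<MatchedObject> findById(String id) {
        // Placeholder for the actual lookup.
        return java.util.List.of();
    }
}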

Related

Capture incoming request information

I have an application whose endpoints are configured on Gravitee (API gateway). My requirement is to capture all incoming requests on the Gravitee API gateway, including their request headers, URI, request body, etc., and pass them on to a generic REST API which then dumps the request information to Apache Kafka.
I am unable to find the right way to capture the request data.
Any help is appreciated.
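For reference, the downstream piece is straightforward. Here is a minimal sketch of the generic REST API that receives the captured request data and publishes it to Kafka, assuming Spring Boot with spring-kafka; the /capture path and the captured-requests topic are made-up placeholders, and how to hook the capture into Gravitee itself is left open.

// Minimal sketch of the downstream "generic" REST API that receives captured
// request data and forwards it to Kafka. Assumes Spring Boot + spring-kafka;
// the path /capture and the topic name "captured-requests" are illustrative only.
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RequestCaptureController {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @PostMapping("/capture")
    public ResponseEntity<Void> capture(@RequestHeader Map<String, String> headers,
                                        @RequestBody String body) {
        // Serialize whatever the gateway forwarded (headers, URI, body) and
        // publish it; a naive concatenation stands in for real JSON here.
        String event = "headers=" + headers + ";body=" + body;
        kafkaTemplate.send("captured-requests", event);
        return ResponseEntity.accepted().build();
    }
}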

Handle REST API timeout in time consuming operations

How is it possible to handle timeouts for time-consuming operations in a REST API? Let's take the following scenario as an example:
1. A client service sends a request to insert a resource through a REST API.
2. The timeout elapses. The client thinks the insertion failed.
3. The REST API keeps working and finishes the insertion.
4. The client is never notified of the successful insertion, so on its side the status is "Failed".
I can think of a solution with a message broker: send the orders to a queue and wait until they are resolved.
Any other workaround?
EDIT 1: Options suggested so far:
The POST-PUT pattern, as suggested in this thread.
A message broker (adds more complexity to the system).
A callback or webhook: pass a return URL in the request that the server API can call to let the client know the work is completed.
HTTP offers a set of properties for its methods, primarily safety, idempotency and cacheability. The first guarantees the client that no data is modified; the second promises that a request can be reissued when connection issues leave the client not knowing whether the initial request succeeded or only the response got lost midway. PUT, for example, does provide this idempotency property.
A simple POST request to "insert" some data has none of these properties. A server receiving a POST request processes the payload according to its own semantics; the client does not know beforehand whether a resource will be created or whether the server will just ignore the request. If the server did create a resource, it informs the client via the Location HTTP response header, which points to the location from which the client can retrieve the new resource.
PUT is usually used only to "update" a resource, though according to the spec it can also be used to create a new resource if it does not yet exist. As with POST, on a successful resource creation the PUT response should include such a Location HTTP response header to inform the client that a resource was created.
The POST-PUT creation pattern separates the creation of the URI from the actual persistence of the representation: the client first fires off POST requests to the server until a response is received containing a Location HTTP response header. This header is then used in a PUT request to actually send the payload to the server. As PUT is idempotent, the client can simply reissue it until it receives a valid response from the server.
When sending the initial POST request, the client can't be sure whether the request reached the server and only the response got lost, or whether the request never made it to the server at all. As this request is only used to create a new URI (without any content yet), the client may simply reissue it and, in the worst case, just create an extra URI that points to nothing. The server may have a cleanup routine that frees unused URIs after a certain amount of time.
Once the client receives the URI, it can simply use PUT to reliably send the data to the server. As long as the client hasn't received a valid response, it can just reissue the request over and over until it receives one.
I therefore do not see the need to use a message-oriented middleware (MOM) using brokers and queues in order to guarantee reliable messaging.
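A minimal client-side sketch of that POST-then-PUT flow, using java.net.http; the /resources URI and the JSON payload are illustrative assumptions:

// Client-side sketch of the POST-PUT creation pattern described above.
// The /resources URI and JSON payload are illustrative assumptions.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PostPutClient {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: POST to obtain a URI for the new resource. Safe to retry,
        // because at worst it creates an extra empty URI the server can clean up.
        HttpRequest post = HttpRequest.newBuilder(URI.create("https://api.example.com/resources"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> postResponse = client.send(post, HttpResponse.BodyHandlers.ofString());
        String location = postResponse.headers()
                .firstValue("Location")
                .orElseThrow(() -> new IllegalStateException("No Location header returned"));

        // Step 2: PUT the payload to that URI. PUT is idempotent, so the client
        // can keep retrying until it finally sees a successful response.
        HttpRequest put = HttpRequest.newBuilder(URI.create(location))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"example\"}"))
                .build();
        while (true) {
            try {
                HttpResponse<String> putResponse = client.send(put, HttpResponse.BodyHandlers.ofString());
                if (putResponse.statusCode() / 100 == 2) {
                    break; // resource persisted
                }
            } catch (java.io.IOException timeoutOrConnectionError) {
                // Response lost or timed out: reissue the idempotent PUT.
            }
        }
    }
}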
You could also cache the data after a successful insertion, keyed by a previously exchanged request_id or something of that sort. But I believe a message broker with some asynchronous task runner is a much better way to deal with the problem, especially if your request threads are a scarce resource. What I mean by that is: if you are receiving a good amount of requests all the time, it is a good idea to return responses as quickly as possible so that the workers are available for incoming requests.

Rest Communication Design For Callback Mechanism

I have a use case where a server can have any number of sources. Several clients can connect to this server, get the source list, and then subscribe to the server to listen for source add, update and delete operations.
To implement this with REST principles, my idea is that the first time the client connects, the server returns the full source list along with a session id. Then, with this session id, the client polls the URL after a configured time interval and listens for source updates.
The communication will look like:
Client >
GET: /Federation/Sources
Server >
{"sessionId":xyz,"data":{"source1"...........}}
Client >
GET: /Federation/Sources/{sessionId}
Server >
{"sessionId":xyz,"data":{"sourceadded"...........}}
Client >
PUT: /Federation/Sources/{sessionId}
{"data":{"Received"}}
This client call then tells the server that the updates were received, so it can remove the pending source updates corresponding to this session id.
The client then continues polling with the session id.
Could experts please give their feedback or comments on whether this is a good approach, or whether there is a better alternative that follows REST principles?
Instead of passing back IDs for the client to use to build the URL, simply pass back the entire URL to the client, perhaps with more information about what the URL is for. This is the HATEOAS part of REST.
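A rough sketch of that idea; the field names, the "updates" link key and the class names are illustrative assumptions, not from the original post:

// Sketch of the HATEOAS idea: the server hands back full URLs ("links")
// instead of bare ids, so the client never builds URIs itself.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

public class SourcesResponse {
    public String sessionId;
    public Object data;
    // e.g. "updates" -> "https://host/Federation/Sources/xyz"
    public Map<String, String> links;
}

class PollingClient {
    void pollForUpdates(SourcesResponse initial, HttpClient http) throws Exception {
        // Follow the link the server provided instead of concatenating
        // "/Federation/Sources/" + sessionId on the client side.
        HttpRequest poll = HttpRequest.newBuilder(URI.create(initial.links.get("updates")))
                .GET()
                .build();
        HttpResponse<String> response = http.send(poll, HttpResponse.BodyHandlers.ofString());
        // ...handle the returned source updates in response.body()...
    }
}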

Communication client-server-client

I have a question about the Communication between a client and a server.
I would like to create a GWT application that can do the following:
Client A fires an event to the server, and the server in turn fires an event to client B.
Here client B has to be able to listen for the event all the time.
I want to send events with a small amount of data in real time to a connected client B.
Is that possible? And if yes, how can I do that?
Thanks
Here client B has to be able to listen for the event all the time.
To let the client wait for data, you can use Comet [1] (long-lived HTTP requests) or WebSockets [2] if the targeted JS runtime supports them.
[1] : http://code.google.com/p/gwt-comet/
[2] : http://code.google.com/p/gwt-ws/
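I don't know the gwt-comet API off-hand, so here is a generic server-push sketch using the standard Java WebSocket API (JSR 356) instead; the /events endpoint path and the broadcast helper are assumptions:

// Generic server-push sketch using the standard Java WebSocket API (JSR 356),
// not the gwt-comet / gwt-ws libraries linked above. The /events path is an
// illustrative assumption.
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/events")
public class EventEndpoint {

    // All currently connected clients (e.g. client B listening for events).
    private static final Set<Session> sessions = new CopyOnWriteArraySet<>();

    @OnOpen
    public void onOpen(Session session) {
        sessions.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        sessions.remove(session);
    }

    // Called by the server-side code handling client A's request.
    public static void broadcast(String event) {
        for (Session session : sessions) {
            try {
                session.getBasicRemote().sendText(event);
            } catch (IOException e) {
                sessions.remove(session);
            }
        }
    }
}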
Here is one example. Of course it's possible; for the communication between client and server you have to use RPC (Remote Procedure Call). You can send and receive data as serialized objects via RPC.
Just store the result of client A's request in a database, and write client-side code that requests the content from the DB; process it on the server and give the result back to the client (in your case, client B).

http interface for long operation

I have a running system that processes short and long-running operations through a request-response interface based on Agatha-RRSL.
Now we want to change it a little in order to be able to send requests via a website in JSON format, so I'm trying out several REST server implementations that support JSON.
The REST server will be one module (or "shelve") handled by Topshelf, another module will be the processing module, and the last one the NoSQL database runner module.
To talk between the REST module and the processing module I'm thinking about a service bus, but we have two types of requests: short requests that complete in 1-2 seconds and long requests that take about a minute.
Is a service bus the right choice for this work? I'm thinking about returning a "response" for a long-running operation with a token that can be used to request the operation's status and results with a new request. The problem is that a big part of the requests must behave like synchronous requests in order to complete the HTTP response.
I think I also have problems with response size (on the MSMQ message transport) when I have to return a huge list of objects.
Any hint?
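For what it's worth, the token idea from the question could look roughly like the sketch below, assuming a Spring-style controller; the /operations paths, the in-memory status map and the class names are all made up:

// Rough Spring-style sketch of the "token" idea from the question: the POST
// returns immediately with an operation id, and the client polls a status
// endpoint. Paths, class names and the in-memory store are assumptions.
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OperationController {

    private final Map<String, String> status = new ConcurrentHashMap<>();

    @PostMapping("/operations")
    public ResponseEntity<String> submit(@RequestBody String request) {
        String token = UUID.randomUUID().toString();
        status.put(token, "RUNNING");
        // Long-running work happens off the HTTP request thread.
        CompletableFuture.runAsync(() -> {
            process(request);                 // may take a minute
            status.put(token, "COMPLETED");
        });
        // 202 Accepted plus the token the client can poll with.
        return ResponseEntity.accepted().body(token);
    }

    @GetMapping("/operations/{token}")
    public ResponseEntity<String> poll(@PathVariable String token) {
        return ResponseEntity.ok(status.getOrDefault(token, "UNKNOWN"));
    }

    private void process(String request) {
        // Placeholder for the actual long-running operation.
    }
}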
NServiceBus is not really suitable for request-response messaging patterns. It's more suited to asynchronous publish-subscribe.
Edit: In order to implement a kind of request-response, you would need to send messages in both directions, consisting of three logical steps:
1. Your client sends a message requesting the data.
2. The server receives the message, processes it, constructs a return message with the data, and sends it to the client.
3. The client then processes the data.
Because each of these steps takes place in isolation and in an asynchronous manner, there can be no meaningful SLA or timeout enforced between when a client sends a request and when it receives a response. But this works nicely for large processing jobs which may take several minutes to complete.
Additionally, a common value which can be used to tie the request to the response will need to be present in both messages. Otherwise a client could send more than one request, receive multiple responses, and not know which response was for which request.
So you can do this with NServiceBus but it takes a little more thought.
Also, NServiceBus uses MSMQ as the underlying transport, not HTTP.
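Not NServiceBus code (that would be C#), but a plain-Java sketch of the correlation idea: the request carries an id that the handler copies into the reply so the client can match responses to requests. The class names are made up.

// Plain-Java sketch of correlating asynchronous requests and responses.
// This illustrates the idea only; it is not NServiceBus's API.
import java.io.Serializable;
import java.util.UUID;

class DataRequest implements Serializable {
    final String correlationId = UUID.randomUUID().toString();
    final String query;
    DataRequest(String query) { this.query = query; }
}

class DataResponse implements Serializable {
    final String correlationId;   // copied from the request
    final String payload;
    DataResponse(String correlationId, String payload) {
        this.correlationId = correlationId;
        this.payload = payload;
    }
}

class Server {
    // The handler always echoes the correlation id back, so a client with
    // several requests in flight can tell which response belongs to which.
    DataResponse handle(DataRequest request) {
        String result = "result for " + request.query;   // the long-running work
        return new DataResponse(request.correlationId, result);
    }
}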