gRPC context.interceptCall() and SimpleForwardingServerCallListener - threadpool

Can someone explain, or point me in the right direction to understand, when to use ForwardingServerCallListener.SimpleForwardingServerCallListener versus Contexts.interceptCall in a server-side gRPC interceptor?
In REST, a thread is typically associated with each request. Does the same happen in gRPC, or is a new thread created when a request gets delegated?
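The two are not alternatives so much as two hooks for different jobs: Contexts.interceptCall attaches values to the per-call Context so the service handler can read them later (on whatever executor thread runs it), while SimpleForwardingServerCallListener lets you override individual call-lifecycle callbacks (onMessage, onComplete, and so on). A minimal sketch combining both, assuming grpc-java on the classpath; the USER_KEY context key and the "user" header are made up for illustration:

```java
import io.grpc.Context;
import io.grpc.Contexts;
import io.grpc.ForwardingServerCallListener.SimpleForwardingServerCallListener;
import io.grpc.Metadata;
import io.grpc.ServerCall;
import io.grpc.ServerCallHandler;
import io.grpc.ServerInterceptor;

public class AuthInterceptor implements ServerInterceptor {

  // Hypothetical context key, for illustration only.
  static final Context.Key<String> USER_KEY = Context.key("user");
  static final Metadata.Key<String> USER_HEADER =
      Metadata.Key.of("user", Metadata.ASCII_STRING_MARSHALLER);

  @Override
  public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
      ServerCall<ReqT, RespT> call, Metadata headers, ServerCallHandler<ReqT, RespT> next) {

    // Contexts.interceptCall: attach a per-call value that the service
    // implementation can later read via USER_KEY.get(), regardless of
    // which pool thread ends up running the handler callbacks.
    Context ctx = Context.current().withValue(USER_KEY, headers.get(USER_HEADER));
    ServerCall.Listener<ReqT> delegate = Contexts.interceptCall(ctx, call, headers, next);

    // SimpleForwardingServerCallListener: hook individual lifecycle
    // events while delegating everything else to the wrapped listener.
    return new SimpleForwardingServerCallListener<ReqT>(delegate) {
      @Override
      public void onMessage(ReqT message) {
        // e.g. log or validate the inbound message here
        super.onMessage(message);
      }
    };
  }
}
```

On threading: grpc-java does not dedicate a thread per request the way a classic servlet container does; callbacks run on the server's executor pool, which is exactly why per-call state belongs in the Context rather than in a ThreadLocal.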

Related

Handle multiple guzzle request in proxy for REST API (local server crashes)

I have the following case: I have a REST API that can only be accessed with credentials. I need the frontend to make requests directly to the API to get the data. Because I don't want to hide the credentials somewhere in the frontend, I set up a proxy server that forwards my requests using Guzzle (http://docs.guzzlephp.org/en/stable/index.html) but adds the necessary authentication.
Now, that worked neatly for some time, but then I added a new view where I need to fetch from one more endpoint. (So far it was 3 requests locally (MAMP).)
Whenever I add a fourth API request, which all are being executed right on page load, my local server crashes.
I assume it is linked to this topic here:
Guzzle async requests not really async?, specifically because I make a new request for every fetch.
First: Do you think that could be the case? Could my local server really crash because of only 3 (probably simultaneous) requests?
Second: How could I approach this problem?
I don't really see a way to group the requests, because they just come in to the proxy URL, and every call of the proxy URL creates a new Guzzle client with its own request...
(I mean, how many things can a simple PHP server execute at the same time? And why would it not just add requests to the call stack and execute them in order?)
Thanks for any help on this issue.
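One usual way out is to stop firing one proxy round-trip per fetch and instead batch the upstream calls concurrently with a bounded worker pool, the idea behind Guzzle's Pool/promises. Here is that pattern sketched in Java with CompletableFuture; fetchFromApi is a hypothetical stand-in for the authenticated upstream call:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class ConcurrentFetch {

  // Hypothetical stand-in for the authenticated upstream API call.
  static String fetchFromApi(String endpoint) {
    return "response-from-" + endpoint;
  }

  public static List<String> fetchAll(List<String> endpoints) {
    // Bounded pool: at most 4 upstream requests in flight at once,
    // so the backend is never hit with an unbounded burst.
    ExecutorService pool = Executors.newFixedThreadPool(4);
    try {
      List<CompletableFuture<String>> futures = endpoints.stream()
          .map(e -> CompletableFuture.supplyAsync(() -> fetchFromApi(e), pool))
          .collect(Collectors.toList());
      // Wait for every request and collect results in the original order.
      return futures.stream()
          .map(CompletableFuture::join)
          .collect(Collectors.toList());
    } finally {
      pool.shutdown();
    }
  }
}
```

The analogous Guzzle construct would be `GuzzleHttp\Pool` with a `concurrency` option, which avoids one-client-per-request and caps how many sockets your local PHP server has to juggle at once.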

Microservices - handling asynchronous data creation in frontend application

I have a problem that is pretty crucial, but I couldn't find a good answer to it for a while.
I have a microservice-based backend with the gateway, a few other microservices, and Kafka brokers.
Gateway offers synchronous REST API for reads/queries and asynchronous for writes.
Write scenario looks as follows. The gateway returns 202 Accepted status and publishes event e.g. CreateItem to Kafka. Item service subscribes to this kind of event, creates an item and emits ItemCreated event.
My problem is how to handle such scenario on frontend side.
The most basic approach I thought about is to route to the items list page and poll for items, so the newly created items show up there eventually (maybe with some kind of indicator that item creation is being processed), but it's kinda stupid.
I also thought about pushing writes from frontend over WebSocket to the gateway and on ItemCreated event gateway would push that info back to the client, but it doesn't resolve the problem - what to show to the user?
On the other hand, I can use the WebSocket solution and show some loading screen with an indeterminate progress bar, when waiting for a response over a socket, but that would make the write effectively synchronous - at least on the frontend side. Just as well I could make the write HTTP POST endpoint synchronous on the Gateway side and return the response only after the ItemCreated event has been received.
So, what would be the best solution to that problem? Are some of these I listed any good?
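The WebSocket variant usually hinges on a correlation id: the gateway returns it with the 202, and the ItemCreated event carries it back so the push can be matched to the original write. A minimal in-memory sketch of that matching, with the Kafka and WebSocket wiring assumed and omitted:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class PendingWrites {

  // correlationId -> future completed once ItemCreated arrives
  private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

  // Gateway accepts the write (HTTP 202): register the correlation id
  // the frontend will use to match the later push notification.
  public CompletableFuture<String> accept(String correlationId) {
    return pending.computeIfAbsent(correlationId, k -> new CompletableFuture<>());
  }

  // ItemCreated event arrives (e.g. from a Kafka consumer): complete the
  // future so the WebSocket layer can notify the waiting client.
  public void onItemCreated(String correlationId, String itemId) {
    accept(correlationId).complete(itemId);
  }
}
```

On the UI side this pairs naturally with an optimistic update: render the new item immediately in a "pending" state keyed by the correlation id, then flip it to "created" (or roll it back) when the push arrives.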

Two channels for one API

We have a SaaS. It consists of Single Page application (client), Gateway, Data Service 1, Data Service 2 and Notification Service.
The client talks to the Gateway (using REST), and the Gateway routes each request to the appropriate Data Service (1 or 2) or does its own calculations.
One request from the client can be split into multiple requests at the Gateway. The result is an aggregation of the responses from the sub-services.
The Notification Service pushes information about changes made by other users to the client, using MQ and a WebSocket connection. Notifications can be published by any service.
With the engineers, we had a discussion about how the process can be optimized.
Currently, the problem is that the Gateway spends a lot of time just waiting for responses from the Data Services.
One of the proposals is to let the Gateway respond 200 OK as soon as the message is pushed to the Data Service, and let the client wait for operation progress through the Notification channel (WebSocket connection).
This means the client always sends an HTTP request for an operation and gets confirmation that the operation has executed via WebSocket from a different endpoint.
This scheme can be hidden behind a JS client library that conceals all of this internal complexity.
I think something is wrong with this approach; I have never seen such a design. But I don't have strong arguments against it, except the complexity and the two points of failure (instead of one).
What do you think about this design approach?
Do you see any potential problems with it?
Do you know any public solutions with such an approach?
Since your service is slow, it might make sense to treat it more like a batch job.
Client sends a job request to Gateway.
Gateway returns a job ID immediately after accepting it from the Client.
Client periodically polls the Gateway for results for that job ID.
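The three steps above reduce to a small amount of server-side state: a job store the Gateway writes to on submission, the Data Service writes to on completion, and the client reads via polling. A minimal in-memory sketch, assuming the `GET /jobs/{id}` endpoint name; a real Gateway would back this with a shared store rather than a map:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class JobStore {
  enum Status { PENDING, DONE }

  record Job(Status status, String result) {}

  private final Map<String, Job> jobs = new ConcurrentHashMap<>();

  // Gateway: accept the work and return a job id immediately.
  public String submit() {
    String id = UUID.randomUUID().toString();
    jobs.put(id, new Job(Status.PENDING, null));
    return id;
  }

  // Data Service: record the finished result when it is ready.
  public void complete(String id, String result) {
    jobs.put(id, new Job(Status.DONE, result));
  }

  // Client: a GET /jobs/{id} poll maps to this lookup.
  public Job poll(String id) {
    return jobs.get(id);
  }
}
```

A nice property of this shape is that the poll endpoint is a plain idempotent GET, so it caches and retries cleanly, unlike the always-open WebSocket channel.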

GET rest api fails at client side

Suppose a client calls a server using a GET API. Is it possible that the server sends a response but the client misses it?
If so, how do I handle such a situation? I want to make sure that the client receives the data. For now I am using a second REST call by the client as an ack of the first.
It is certainly possible. For example, if you send a request to a REST API and your internet connection dies just when the answer is supposed to arrive, it is quite possible that the server received your request, handled it successfully, and even sent the response, but your computer never received it. It could also be an issue on an intermediate server responsible for transmitting the request. The solution to this kind of issue is to use a timeout and, if a request times out, resend it until it no longer does.
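Because GET is idempotent, resending on timeout is safe: a duplicate request cannot change server state. A sketch of that timeout-and-retry loop, where the Supplier stands in for the actual HTTP call:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class RetryingGet {

  // Retry `call` until one attempt returns within `timeout`.
  // A lost response and a failed request are treated the same way:
  // give up on the attempt and resend, which is safe for idempotent GETs.
  public static <T> T getWithRetry(Supplier<T> call, Duration timeout, int maxAttempts)
      throws Exception {
    ExecutorService pool = Executors.newCachedThreadPool();
    try {
      for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        Future<T> f = pool.submit(call::get);
        try {
          return f.get(timeout.toMillis(), TimeUnit.MILLISECONDS);
        } catch (TimeoutException | ExecutionException e) {
          f.cancel(true); // abandon this attempt, loop resends
        }
      }
      throw new TimeoutException("no response after " + maxAttempts + " attempts");
    } finally {
      pool.shutdownNow();
    }
  }
}
```

For non-idempotent calls the same loop needs an extra ingredient (an idempotency key or the explicit ack scheme the question describes), since blindly resending a write could apply it twice.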

Retrieve data from webservice, with side effects - what method to use

I am in the process of writing a webservice that sends data to the client, but it has side effects.
This webservice will be called periodically, and any data that is being sent to the client will be marked as such and will not be sent again.
The client is 100% stateless, I can't expect it to send something like a timestamp of the last request. The administration of state lies with the web service.
I am a firm believer that GET requests must be idempotent, so I cannot use that as the method. POST and PUT on the other hand are used to create/update resources, which is not the case here.
What http method would you choose and why?
I finally went with POST.
Mostly the argument
If the client can not be expected to implement basic HTTP measures such as a conditional GET with an If-Modified-Since or sth. like that ... then the other end is probably not the one to be puristic about HTTP either.
is what persuaded me.
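The POST handler then behaves like a destructive read: each call returns the not-yet-delivered items and marks them as delivered, so the same batch is never sent twice and the client stays fully stateless. A minimal in-memory sketch of that server-side bookkeeping (names are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class Outbox {

  // item id -> delivered flag; the server holds all delivery state.
  private final Map<String, Boolean> items = new LinkedHashMap<>();

  public synchronized void add(String id) {
    items.put(id, false);
  }

  // Handler body for the periodic POST: POST fits because every call
  // has a side effect (marking items delivered) and is not idempotent,
  // whereas a GET with this behavior would violate safe-method semantics.
  public synchronized List<String> deliver() {
    List<String> batch = new ArrayList<>();
    for (Map.Entry<String, Boolean> e : items.entrySet()) {
      if (!e.getValue()) {
        batch.add(e.getKey());
        e.setValue(true); // marked as sent; never returned again
      }
    }
    return batch;
  }
}
```

Calling deliver() twice in a row returns the pending batch first and an empty list second, which is exactly the "will not be sent again" contract described above.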