HAProxy: Is there a way to queue incoming requests?

Is there a way in HAProxy to queue incoming requests?
I am trying to throttle requests based on certain rate-limiting criteria, but my current implementation denies the request or blocks any new connection.
This is a crude implementation at the moment.
Instead of denying the requests, I want to queue them until my backend recovers.
Also, is there a way in HAProxy to rate-limit or queue requests based on IP?
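A minimal sketch of how this could look in haproxy.cfg (assuming HAProxy 1.8+; server names, addresses and limits are made up): the per-server maxconn makes excess requests wait in HAProxy's backend queue, instead of being refused, until a slot frees up or timeout queue expires, while a stick table tracks the per-IP request rate so over-limit clients can be handled separately.

    frontend fe_api
        bind :80
        # Track each client IP's request rate over a 10-second window.
        stick-table type ip size 100k expire 30s store http_req_rate(10s)
        http-request track-sc0 src
        # Example per-IP policy: reject clients above 20 requests per 10s.
        http-request deny deny_status 429 if { sc_http_req_rate(0) gt 20 }
        default_backend be_api

    backend be_api
        balance leastconn
        timeout queue 30s                              # how long a request may wait in the queue
        server app1 10.0.0.11:8080 maxconn 50 check    # at most 50 in-flight requests, the rest queue
        server app2 10.0.0.12:8080 maxconn 50 check

There is no built-in way to queue only the clients that exceed the rate limit: queuing in HAProxy is driven by the per-server maxconn, while the stick-table rule decides what to do with over-limit clients (deny, tarpit, or route them to a slower backend).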

Related

Reduce client wait time during traffic surge

I am trying to understand how client response time behaves during a traffic surge. Let's say we have a concert ticket website (nginx, HTTP/2, and a CDN provider for handling requests) which releases special tickets at a certain (pre-defined) time of day. Naturally, at that point in time there would be a huge traffic surge from ticket buyers. Is there a way a client (ticket buyer) can always ensure it is first in the queue and gets the ticket? Wouldn't constantly requesting the ticket webpage constitute a DoS (denial-of-service) attack?
I tried to read about the M/M/1 queue, but I don't think it is used in practice.

Rate Limiting by Request in Haproxy

My goal is to limit the number of requests (not connections) per backend API server, so that no server gets overloaded with too many requests.
I could do this in middleware on each server, but the problem is that if a server gets stuck and cannot act on a request, the request will sit waiting, and that will impact the client.
So I want to do this with HAProxy: based on the requests in flight on each server, HAProxy should send the request to the next available node.
I read the HAProxy documentation.
But it only has connection-based rate limiting per server, or a total request-rate limit on the frontend. The problem with the latter is that if the number of servers increases, the number of allowed requests should increase too, and it does not limit requests per server; it limits them for the whole service.
https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/
Any help will be appreciated
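A rough sketch of the per-server approach, with made-up addresses and limits: balance leastconn plus a per-server maxconn means a server that already has maxconn requests in flight receives no new ones, and traffic goes to the next server with free slots; if every server is full, requests wait in the backend queue for up to timeout queue. Note that maxconn strictly caps concurrent connections to the server, which in plain HTTP mode corresponds to concurrent requests.

    backend be_api
        balance leastconn
        timeout queue 10s          # stop waiting and fail the request after 10 seconds
        server app1 192.168.1.10:8080 maxconn 30 check
        server app2 192.168.1.11:8080 maxconn 30 check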

Should users await a response after the HTTP request in a saga pattern architecture?

I am designing a microservice architecture, using a database per service pattern.
Following the example of an Order Service and a Shipping Service: when a user makes an HTTP REST request to the Order Service, it fires an event to notify the Shipping Service. All this happens asynchronously. So, what happens with the user experience? I mean, the user needs an immediate response from the HTTP request. How can I handle this scenario?
Respond as soon as you have stored the request.
Part of the point of microservices is that you have a system composed of independently deployable elements that do not require coordination.
If you want a system that is reliable even though the services don't have 100% uptime, then you need to have some form of durable message storage so that the sender and the receiver don't need to be running at the same time.
Therefore, your basic pattern for data from the outside is that the information from the incoming HTTP request is copied, not directly into a running service, but instead into the message store, to be processed by the service at some later time.
In other words, your REST API is a facade in front of your storage, not in front of the service itself.
The actor model may be a useful analogy; information moves around by copying messages into different inboxes, and those messages are later consumed by the subscribing actor.
From the perspective of the client, the HTTP response is an acknowledgement that the request has been received and recognized as valid. Think "thank you for your order, we'll send you an email when your purchase is ready for pick up."
On the web, we would include in the response links to other useful resources; click here to see the status of your order, click there to see your history of recent orders, and so on.
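A minimal sketch of the "respond as soon as you have stored the request" idea, in Python with Flask; the route, the in-memory outbox list (standing in for a durable message store) and the response shape are all invented for illustration:

    # The POST handler only records the incoming order as a message and answers
    # immediately; a separate worker or the Shipping Service consumes it later.
    import uuid
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    outbox = []  # stand-in for a durable message store (outbox table, Kafka topic, ...)

    @app.post("/orders")
    def create_order():
        order_id = str(uuid.uuid4())
        # Copy the request into the message store instead of calling Shipping directly.
        outbox.append({"order_id": order_id, "payload": request.get_json()})
        # Acknowledge receipt; the actual processing happens asynchronously.
        return jsonify({
            "order_id": order_id,
            "status": "accepted",
            "links": {"status": f"/orders/{order_id}"},
        }), 202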

Handle REST API timeout in time consuming operations

How is it possible to handle timeouts for time-consuming operations in a REST API? Let's take the following scenario as an example:
A client service sends a request to insert a resource through a REST API.
The timeout elapses. The client thinks the insertion failed.
The REST API keeps working and finishes the insertion.
The client is never notified of the resource insertion, so on its side the status is "Failed".
I can think of a solution with a message broker: send the orders to a queue and wait until they are resolved.
Any other workaround?
EDIT 1:
The POST-PUT pattern, as has been suggested in this thread.
A message broker (adds more complexity to the system).
A callback or webhook: pass a return URL in the request that the server API can call to let the client know that the work is completed.
HTTP offers a set of properties for its methods, primarily safety, idempotency and cacheability. While the first guarantees the client that no data is modified, the second promises that a request can be reissued in case of connection issues, when the client does not know whether the initial request succeeded and only the response got lost midway. PUT, for example, provides that property.
A simple POST request to "insert" some data has none of these properties. Furthermore, a server receiving a POST request processes the payload according to its own semantics. The client does not know beforehand whether a resource will be created or whether the server will just ignore the request. In case the server did create a resource, it will inform the client via the Location HTTP response header, which points to the actual location the client can retrieve information from.
PUT is usually used only to "update" a resource, though according to the spec it can also be used to create a new resource if it does not yet exist. As with POST, on a successful resource creation the PUT response should include such a Location HTTP response header to inform the client that a resource was created.
The POST-PUT creation pattern separates the creation of the URI from the actual persistence of the representation: the client first fires off POST requests until it receives a response containing a Location HTTP response header. That header is then used in a PUT request to actually send the payload to the server. As PUT is idempotent, the client can simply reissue the request until it receives a valid response from the server.
On sending the initial POST request, the client can't be sure whether the request reached the server and only the response got lost, or whether the request didn't make it to the server at all. As that request is only used to create a new URI (without any content yet), the client may simply reissue it and, in the worst case, just create an extra URI that points to nothing. The server may have a cleanup routine that frees unused URIs after a certain amount of time.
Once the client has received the URI, it can use PUT to reliably send the data to the server. As long as the client doesn't receive a valid response, it can just reissue the request over and over until it gets one.
I therefore do not see the need to use a message-oriented middleware (MOM) using brokers and queues in order to guarantee reliable messaging.
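A sketch of what the pattern can look like from the client side, in Python with the requests library; the endpoint path, accepted status codes and retry counts are assumptions for illustration:

    import requests

    def create_resource(base_url, payload, max_retries=5):
        # Step 1: POST only to obtain a URI. Repeating this is harmless: at worst
        # it leaves behind an extra empty URI the server can garbage-collect later.
        location = None
        for _ in range(max_retries):
            try:
                resp = requests.post(f"{base_url}/orders", timeout=5)
            except requests.RequestException:
                continue  # timeout or connection error: safe to retry, nothing persisted yet
            if resp.status_code in (201, 202) and "Location" in resp.headers:
                location = resp.headers["Location"]
                break
        if location is None:
            raise RuntimeError("could not obtain a resource URI")

        # Step 2: PUT the payload to that URI. PUT is idempotent, so reissuing it
        # after a timeout cannot create duplicates.
        for _ in range(max_retries):
            try:
                resp = requests.put(location, json=payload, timeout=5)
            except requests.RequestException:
                continue
            if resp.ok:
                return location
        raise RuntimeError("payload upload did not complete")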
You could also cache the data after a successful insertion, keyed by a previously exchanged request_id or something of that sort. But I believe a message broker with some asynchronous task runner is a much better way to deal with the problem, especially if your request threads are a scarce resource. What I mean by that is: if you are receiving a good amount of requests all the time, then it is a good idea to return your responses as quickly as possible so that the workers are available for any requests to come.

Two channels for one API

We have a SaaS. It consists of a Single Page Application (client), a Gateway, Data Service 1, Data Service 2 and a Notification Service.
The client talks to the Gateway (using REST), and the Gateway routes the request to the appropriate Data Service (1 or 2) or does its own calculations.
One request from the client can be split into multiple requests at the Gateway. The result is an aggregation of the responses from the sub-services.
The Notification Service pushes information about changes made by other users to the client, using MQ and a WebSocket connection. Notifications can be published by any service.
We had a discussion with the engineers about how the process could be optimized.
Currently, the problem is that the Gateway spends a lot of time just waiting for responses from the Data Services.
One of the proposals is to let the Gateway respond 200 OK as soon as the message is pushed to the Data Service, and to let the client track the operation's progress through the Notification channel (the WebSocket connection).
That means the client always sends an HTTP request for an operation and gets the confirmation that the operation was executed via WebSocket from a different endpoint.
This scheme could be hidden by providing a JS client library that hides all this internal complexity.
I think something is wrong with this approach. I have never seen such a design, but I don't have solid arguments against it, except the complexity and two points of failure (instead of one).
What do you think about this design approach?
Do you see any potential problems with it?
Do you know any public solutions with such an approach?
Since your service is slow, it might make sense to treat it more like a batch job, as sketched below:
The client sends a job request to the Gateway.
The Gateway returns a job ID immediately after accepting it from the client.
The client periodically polls the Gateway for the results for that job ID.
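A minimal sketch of that flow in Python with Flask; the routes, the in-memory jobs dict (a real Gateway would persist this and hand the work to the Data Services via a queue) and the payload shapes are assumptions for illustration:

    import uuid
    from flask import Flask, jsonify

    app = Flask(__name__)
    jobs = {}  # job_id -> {"status": ..., "result": ...}

    @app.post("/jobs")
    def submit_job():
        # Accept the job and answer immediately with an ID and a status URL.
        job_id = str(uuid.uuid4())
        jobs[job_id] = {"status": "pending", "result": None}
        # Here the Gateway would push the work to the Data Services asynchronously.
        return jsonify({"job_id": job_id, "status_url": f"/jobs/{job_id}"}), 202

    @app.get("/jobs/<job_id>")
    def job_status(job_id):
        # The client polls this endpoint until the status becomes "done".
        job = jobs.get(job_id)
        if job is None:
            return jsonify({"error": "unknown job"}), 404
        return jsonify(job), 200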