How does one create a REST-like pattern with ZeroMQ? Specifically, I have the following architecture:
A user sends a POST request to Server A, which forwards the request to Server B for further processing. Server B then passes the request on to Server C, does not wait for C's response, and replies to Server A with a message indicating that the request has been enqueued. I want Server A to block until it receives this response from Server B.
Initially, I connected Server A and Server B through a DEALER/ROUTER pair. But when multiple users hit the POST route at the same time, there is no guarantee that the ROUTER's response will correspond to the correct request.
For example, say John sends a request that takes 60 seconds to process, and shortly afterwards Jane sends a different request that takes only 30 seconds. Even though John sent his request first, Server A's first recv on the DEALER socket will return Jane's response, since it finished first.
How do I make sure that the responses are sent to the correct "client"? I technically have only one client (Server A), but multiple requests can be made at the same time.
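One common way around this, which the answers to the later questions here also point at, is to tag every request with a correlation ID that the other side echoes back, and to match replies by that ID rather than by arrival order. Below is a minimal sketch using pyzmq; the endpoint, the [request-id, payload] frame layout, and running Server B as an in-process thread are all assumptions made for illustration, not part of the question.

    import threading
    import uuid
    import zmq

    ctx = zmq.Context.instance()

    def server_b():
        """Server B: a ROUTER that echoes the request id back with every reply."""
        router = ctx.socket(zmq.ROUTER)
        router.bind("tcp://*:5555")
        while True:
            # ROUTER prepends the peer identity to the [req_id, payload] frames sent below.
            identity, req_id, payload = router.recv_multipart()
            # ... hand the work to Server C here without waiting for it ...
            router.send_multipart([identity, req_id, b"enqueued"])

    def request(dealer, payload, stash):
        """Server A: send one request and wait for the reply carrying its id."""
        req_id = uuid.uuid4().bytes
        dealer.send_multipart([req_id, payload])
        while req_id not in stash:
            rid, reply = dealer.recv_multipart()
            stash[rid] = reply          # replies to other requests are stashed, not lost
        return stash.pop(req_id)

    threading.Thread(target=server_b, daemon=True).start()
    dealer = ctx.socket(zmq.DEALER)
    dealer.connect("tcp://localhost:5555")
    print(request(dealer, b"john's order", {}))   # prints b'enqueued'

If Server A serves many users concurrently, the cleaner variant is a single receive loop that dispatches replies to waiting handlers by ID, much like the approach described in the last answer further down.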
A friend of mine mentioned that I could send TLS application data (HTTP messages) in a batch inside a single TCP packet instead of sending them one by one.
I've researched his claim using Google but couldn't find anything about it.
Is this possible? Wouldn't it cause parsing issues for browsers?
HTTP is a request-response protocol, i.e. each request expects a response. The traditional way with HTTP/1 was to send a request, wait for the response, then send another request (if HTTP keep-alive was supported and the connection stayed open), wait for its response, and so on. With this procedure it is not possible for multiple requests to end up in the same packet, since one had to wait for the response to the current request before sending the next one. Similarly, multiple responses could usually not end up in the same packet, since there was only ever a single response outstanding. A special case is a preliminary response (status code 100), which is followed by the "real" response; these two could be sent together.
With HTTP pipelining it is possible to send multiple requests at once and then wait for the responses to come back in the same order as the requests were sent. In this case multiple requests can end up in the same packet, and so can multiple responses. But the server is allowed to close the connection after any response; in that case the remaining requests are left unanswered and might need to be resent. It can also happen that the server has processed a request but the connection was closed before the response was sent, so replaying should only be done if it does not change the semantics (i.e. only for idempotent requests).
HTTP/2 then supports sending requests in parallel, i.e. requests can be interleaved with one another, and the same is true for responses. This way, too, multiple requests or responses might end up in the same packet.
Apart from that, a clarification might be needed: a normal application does not send TCP packets, it sends data. How these data are packetized for transport is up to the kernel and depends, for example, on the MTU. A single send might result in multiple packets, and multiple sends issued shortly after each other might be combined into a single packet.
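To make the pipelining case concrete, here is a minimal sketch that writes two HTTP/1.1 requests with a single send and then reads both responses off the same byte stream. It assumes the server (example.com is used only as a placeholder) keeps the connection open and answers pipelined requests in order; note that most browsers deliberately avoid pipelining.

    import socket

    # Two requests concatenated into one buffer; the second one closes the connection.
    reqs = (
        "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
        "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    ).encode("ascii")

    with socket.create_connection(("example.com", 80)) as s:
        # One send() call; how this is split into TCP segments is up to the kernel,
        # so both requests may well travel in a single packet.
        s.sendall(reqs)
        data = b""
        while chunk := s.recv(4096):   # both responses arrive back-to-back, in request order
            data += chunk

    print(data.count(b"HTTP/1.1 "), "status lines received")   # crude check: expect 2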
I'm trying very hard to understand the flow of a web request to a server which has a queue or message broker in the middle, but I can't find information about when and where the reply is given.
Imagine this use case:
Client A:
sends an invoice order request
the invoice is enqueued
the request is processed and dequeued.
At what point will the client receive a response?
right after the message is received by the queue?
right after the message is processed and dequeued? Or something else?
I'm asking because if the reply only comes after the message is processed, the client might wait a long time. Imagine the message takes 3 minutes to process: would the client need to keep polling the server to see whether it has been processed, or is a connection maintained using something like long polling?
I'm interested in scenarios using RabbitMQ and Kafka.
The advantage of having a messaging system is that the frontend web server and the backend processing are decoupled. Best practice is for the web server to publish the message and wait only for the messaging system to acknowledge that it has received it; in other words, the client gets its response right after the message is accepted by the queue, not after it has been processed.
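As a rough illustration of that practice with RabbitMQ, the web handler below publishes the order and answers as soon as the broker confirms it has the message. The queue name, broker address, and handler shape are assumptions made here; pika's publisher confirms are used so that "acknowledge receiving the message" is explicit.

    import json
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="invoices", durable=True)
    channel.confirm_delivery()   # basic_publish now waits for the broker's confirm

    def handle_invoice_request(order):
        """Web handler: publish, then respond as soon as the broker has the message."""
        channel.basic_publish(
            exchange="",
            routing_key="invoices",
            body=json.dumps(order),
            properties=pika.BasicProperties(delivery_mode=2),  # persist the message
        )
        # Respond immediately (e.g. HTTP 202 Accepted). The 3-minute processing runs
        # later in a separate consumer, and its result reaches the client out of band.
        return {"status": "enqueued"}, 202

    print(handle_invoice_request({"invoice_id": 42, "amount": 99.5}))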
We have a UDP client that communicates with a server.
The server gives a single response on each request.
The client sends a request, and waits 5 seconds for the response.
If the server's response is not received within 5 seconds, the client assumes the packet was lost in the network (this is UDP...), writes a report to a log, and sends the next request.
But sometimes there is a delay in the network, and the server's response arrives after more than 5 seconds.
Let's describe the scenario:
The client sends a packet named "X".
The 5-second timeout expires, and the client reports "X" as a lost packet.
The client sends another packet named "Y".
The server's response to "X" now arrives at the client.
The client sees that the response does not match the request, and reports it to the log.
The client sends another packet named "Z".
The server's response to "Y" now arrives at the client.
The client sees that the response does not match the request, and reports it to the log.
And this is an infinite loop!
What can we do?
Many UDP-based protocols include an identifier to indicate which request a given response belongs to. The client chooses the identifier and sends it to the server as part of the request, and then the server echoes it back in the response. This allows the client to match responses to requests, especially in situations like the one you describe. If the client received the response to X after having moved on, it would be able to simply ignore that response.
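A minimal sketch of that idea over UDP, assuming a made-up wire format where the first 4 bytes of every datagram carry the request id and the server echoes them back; the address is a placeholder:

    import socket
    import struct

    HOST, PORT = "203.0.113.10", 9000   # placeholder server address
    TIMEOUT = 5.0

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT)

    def request(req_id, payload):
        sock.sendto(struct.pack("!I", req_id) + payload, (HOST, PORT))
        while True:
            try:
                data, _addr = sock.recvfrom(65535)
            except socket.timeout:
                print(f"request {req_id}: no response within {TIMEOUT}s, giving up")
                return None
            got_id, = struct.unpack("!I", data[:4])
            if got_id == req_id:
                return data[4:]
            # A late response to an earlier request: log it and keep waiting,
            # instead of treating it as the answer to the current request.
            print(f"ignoring late response for request {got_id}")

    # reply = request(1, b"X")   # returns None after 5 s against the placeholder address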
According to the SIP protocol, when the first INVITE is sent, the server returns a Proxy Authentication Required message (if a proxy server is involved), and the client then sends an ACK. But what happens if the ACK fails to reach the SIP server? The server returns Forbidden after some time and ignores all new INVITEs carrying the authentication header. Also, when the SIP server gets multiple ACK messages, it immediately sends Forbidden.
If your question is what the correct behaviour would be for a SIP server that has issued a 407 and not received an ACK for it, please see RFC 3261, section 17.2.1, for the description of the INVITE server transaction.
Sending the 407 moves the state machine into the "Completed" state, at which point the G and H timers have to be set. When timer G fires, the 407 response is retransmitted. If all the ACK messages get lost, timer H will eventually make the server transaction give up. But if the second ACK reaches the server, then that's it: you will have seen two 407 responses, one with a lost ACK, the second one with a successful ACK.
The handling of the subsequent INVITE with the credentials should be entirely independent of the previously described process. The INVITE message carrying the credentials constitutes a separate dialog-forming transaction.
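For illustration only, the retransmission logic of that Completed state might look roughly like the sketch below. The timer values are the RFC 3261 defaults, and send_407/ack_received are placeholder callbacks (ack_received is expected to block for up to the given timeout), so this is a reading of the RFC rather than working SIP code.

    import time

    T1, T2 = 0.5, 4.0      # RFC 3261 default timer values, in seconds
    TIMER_H = 64 * T1      # how long to keep waiting for an ACK before giving up

    def completed_state(send_407, ack_received):
        """Sketch of the INVITE server transaction's Completed state (RFC 3261 17.2.1)."""
        deadline = time.monotonic() + TIMER_H
        timer_g = T1
        send_407()                              # the 407 that moved us into Completed
        while time.monotonic() < deadline:
            if ack_received(timeout=timer_g):   # an ACK absorbs further retransmissions
                return "confirmed"
            send_407()                          # timer G fired: retransmit the 407
            timer_g = min(2 * timer_g, T2)      # back off, capped at T2
        return "terminated"                     # timer H fired: give up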
I have a client application that repeatedly sends commands over a socket connection to a server and receives corresponding responses. This socket connection has a send/recv timeout set.
If the server is slow to respond for some reason, the client receives EAGAIN. On a timeout, I want the client to ignore that response and proceed with sending the next request.
However, currently when I ignore EAGAIN and send the next request, I receive the response from a previous request.
What is the best way to ignore/discard the response on an EAGAIN?
You can't. You have to read it. There is no mechanism to ignore bytes in a TCP byte stream.
EAGAIN may indicate that a timeout elapsed (you need to handle EWOULDBLOCK as well). If you are using TCP, you must read the pending data before you can read any subsequent data. If you get an EAGAIN on a read, you have to perform the same read again, using the same parameters. Just because the server is slow to respond (or the network is slow to deliver the response) does not mean the response will not arrive at all, unless the connection is closed/lost.
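In other words, on a timeout you repeat the read rather than moving on. A small sketch of that in Python, where the EAGAIN/EWOULDBLOCK case shows up as socket.timeout or BlockingIOError rather than as an errno check (the function name is just for illustration):

    import socket

    def recv_exactly(sock, n):
        """Keep re-issuing the same read until the n pending bytes have arrived."""
        buf = b""
        while len(buf) < n:
            try:
                chunk = sock.recv(n - len(buf))
            except (socket.timeout, BlockingIOError):
                continue   # timeout elapsed: retry the same read, do not skip ahead
            if not chunk:
                raise ConnectionError("connection closed by peer")
            buf += chunk
        return buf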
If you really want to be able to receive responses out of order, you need to design your communication protocol to support that in the first place. Give each request a unique ID that is echoed in its response. Send the request but do not wait for the response to arrive. That will allow the client to have multiple requests in flight at a time, and allow the server to send back responses in any order. The client will have to read each response as it arrives (which means you have to do the reading asynchronously, typically using a separate thread or some other parallel signaling mechanism) and match up each response's ID to its original request so you know how to then process it.
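A minimal sketch of that design, assuming an invented framing of a 4-byte request id plus a 4-byte length in front of each message, and an already-connected blocking TCP socket with no timeout set; a single reader thread matches every response to its request by id:

    import struct
    import threading

    class AsyncClient:
        def __init__(self, sock):
            self.sock = sock
            self.next_id = 0
            self.pending = {}            # request id -> [Event, reply bytes]
            self.lock = threading.Lock()
            threading.Thread(target=self._reader, daemon=True).start()

        def send_request(self, payload):
            with self.lock:
                self.next_id += 1
                req_id = self.next_id
                self.pending[req_id] = [threading.Event(), None]
            self.sock.sendall(struct.pack("!II", req_id, len(payload)) + payload)
            return req_id

        def wait_for(self, req_id, timeout=None):
            event, _ = self.pending[req_id]
            if not event.wait(timeout):
                return None              # timed out; the reply stays pending and can be fetched later
            return self.pending.pop(req_id)[1]

        def _reader(self):
            """Single reader: pull each response as it arrives and match it by id."""
            while True:
                req_id, length = struct.unpack("!II", self._read_exactly(8))
                body = self._read_exactly(length)
                entry = self.pending.get(req_id)
                if entry:
                    entry[1] = body
                    entry[0].set()

        def _read_exactly(self, n):
            buf = b""
            while len(buf) < n:
                chunk = self.sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("connection closed")
                buf += chunk
            return buf

    # Usage (with some connected socket `sock`):
    #   client = AsyncClient(sock)
    #   rid = client.send_request(b"do something")
    #   reply = client.wait_for(rid, timeout=5)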