Consider the following definition on the server:
f:{show "Received ",string x; neg[.z.w] (`mycallback; x+1)}
and on the client side:
q)mycallback:{show "Returned ",string x;}
q)neg[h] (`f; 42)
q)"Returned 43"
In Q for Mortals, the tip says:
When performing asynchronous messaging, always use neg[.z.w] to ensure
that all messages are asynchronous. Otherwise you will get a deadlock
as each process waits for the other.
Therefore I changed the definition on the server to:
f:{show "Received ",string x; .z.w (`mycallback; x+1)}
Everything works fine, and I haven't seen any deadlocks.
Can anyone give me an example to show why I should always use neg[.z.w]?
If I understand your question correctly, I think you're asking how sync and async messages work. The issue with the example you have provided is that x+1 is a very simple query that can be evaluated almost instantaneously. For a more illustrative example, consider changing this to a sleep (or a more strenuous calculation, e.g. a large database query).
On your server side define:
f:{show "Received ",string x;system "sleep 10"; neg[.z.w] (`mycallback; x+1)}
Then on your client side you can send the synchronous query:
h(`f; 42)
multiple times. Doing this, you will see there is no longer a q prompt on the client side, as it must wait for a response. These requests can be queued and thus block both the client and the server for a significant amount of time.
Alternatively, if you were to call:
(neg h)(`f; 42)
on the client side, you will see the q prompt remain, as the client is not waiting for a response. This is an asynchronous call.
Now, in your server-side function you are looking at using either .z.w or neg .z.w. This follows the exact same principle, only from the server's perspective. If the response to the query is large enough, the messaging can take a significant amount of time. Consequently, by using neg, this response can be sent asynchronously, so the server is not blocked during this process.
NOTE: If you are working on a Windows machine, you will need to swap out sleep for timeout (or perhaps a while loop) if you are following my examples.
Update: I suppose one way to cause such a deadlock would be to have two dependent processes attempting to synchronously call each other. For example:
q)\p 10002
q)h:hopen 10003
q)g:{h (`f1;`)}
q)h (`f;`)
on one side and
q)\p 10003
q)h:hopen 10002
q)f:{h (`g;`)}
q)f1:{show "test"}
on the other. This would result in both processes being stuck, and thus "test" never being shown.
Joe's answer covers pretty much everything, but to your specific example, a deadlock happens if the client calls
h (`f; 42)
The client is waiting for a response from the server before processing the next request, but the server is also waiting for a response from the client before it completes the client's request.
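In message-sequence form (restating the above), the sync-inside-sync flow looks like this:

client                                server
  |-------- sync (`f;42) --------->|    client blocks, waiting for a reply on this handle
  |                                |    f runs and calls .z.w (`mycallback;43), itself a sync call
  |<------- sync callback ---------|    server now blocks, waiting for the client's reply
  |                                |    each side waits for the other: deadlock

With neg[.z.w], the callback is fire-and-forget, so the server can immediately return the result of f and unblock the client.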
Related
I'm using 0MQ to let multiple processes talk to each other (IPC sockets, but should also work via TCP across different nodes). My code is similar to a client/server pattern, but REQ/REP sockets are not enough. Here is a sample conversation. See below for further details.
Process A                  Process B
open socket                not started
start process B            -
-                          open socket, connect to A
-                          send hello (successful start, socket information)
request work               -
-                          do work
-                          send response (work result 1)
-                          send response (work result 2)
-                          send unsolicited message
-                          send response (work finished)
request termination        -
Actually, A is (even though it is doing all the requests) closer to being the server component, since it is constantly running. Based on external triggers, A starts a sort of plugin process, B.
Every request needs to be answered by a finished response. Before that, N (between 0 and an arbitrary upper bound) responses can be sent from B.
A new request can be sent from A even when the current request is still ongoing (no finished message received). If relevant, the code could be updated to buffer the requests.
B sends an initial message which is not preceded by a request from A.
B can send other messages (logging) anywhere in between, also not preceded by a request.
Optional: A single socket in A should handle multiple plugin processes B, C, D...
A DEALER/ROUTER combination would probably match all requirements, but might be a bit too much. Process B will only ever connect to a single peer, and without the optional requirement above, the same would be true for process A as well. So I'm a bit hesitant to use DEALER and ROUTER sockets, which are both able to handle multiple peers.
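That said, DEALER/ROUTER does cover the conversation above with little ceremony. Here is a minimal sketch using pyzmq; the endpoint and message strings are made up for illustration:

import zmq

def process_a():
    ctx = zmq.Context.instance()
    router = ctx.socket(zmq.ROUTER)
    router.bind("ipc:///tmp/plugins")         # the one socket; B, C, D... can all connect
    ident, msg = router.recv_multipart()      # B's unsolicited hello
    router.send_multipart([ident, b"request work"])
    while True:
        ident, msg = router.recv_multipart()  # results, logs, or the finished marker
        if msg == b"finished":
            break
    router.send_multipart([ident, b"terminate"])

def process_b():
    ctx = zmq.Context.instance()
    dealer = ctx.socket(zmq.DEALER)
    dealer.connect("ipc:///tmp/plugins")
    dealer.send(b"hello")                     # unsolicited: no preceding request
    assert dealer.recv() == b"request work"
    dealer.send(b"work result 1")
    dealer.send(b"log: halfway there")        # unsolicited log message
    dealer.send(b"work result 2")
    dealer.send(b"finished")
    assert dealer.recv() == b"terminate"

Neither side is forced into lock-step request/reply, which is exactly what the hello, the N responses per request, and the interleaved log messages need; and since ROUTER addresses peers by identity, the optional multi-plugin requirement comes essentially for free.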
I can't find any documentation that clarifies how Postgres handles request-response for asynchronous, non-blocking requests at the socket protocol level.
As an example, suppose there are just two SQL queries in a pg-client that uses one socket connection to the pg-server. I suppose there are two ways to handle the async operation:
The client can't send the second command before the first request has received its response:
client    socket1    server
  -----query1---------->
  <----response1--------
  ------query2--------->
  <----response2--------
The socket sends two queries at the same time and distinguishes the responses by a unique flag:
client socket1 server
-------query1 + uid:msg1----->
-------query2 + uid:msg2----->
<------response2 + uid:msg2---
<------response1 + uid:msg1---
I think it should be the second way, handling requests as a real async process, but I can't find anything about this in the documentation.
Question:
Which way does Postgres handle async socket operations?
If it uses the first way, why?
On the network protocol level, there is no distinction between synchronous and asynchronous mode. The difference is only in the way the client API works.
There is always at most one statement active at any given time. See for example the documentation for the simple protocol, but it is the same for the extended protocol.
The difference is in the way the client API works:
In synchronous mode, the client thread is blocked until the query result is complete.
In asynchronous mode, control is returned to the client thread immediately after sending the query, and the client can go and do something else while it waits for the server response. It has to poll the socket regularly to check if the result has arrived. Then it can read and process the result.
If you want to run two statements concurrently, you have to use two database sessions.
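To make the polling in asynchronous mode concrete, here is a minimal sketch using psycopg2 (the DSN and query are made up; the wait loop follows the poll/select pattern psycopg2 documents for async connections):

import select
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=test", async_=1)   # hypothetical DSN

def wait(conn):
    # Poll until psycopg2 reports the pending operation is complete,
    # sleeping in select() whenever the socket is not ready.
    while True:
        state = conn.poll()
        if state == psycopg2.extensions.POLL_OK:
            return
        elif state == psycopg2.extensions.POLL_READ:
            select.select([conn.fileno()], [], [])
        elif state == psycopg2.extensions.POLL_WRITE:
            select.select([], [conn.fileno()], [])

wait(conn)                         # even connecting is asynchronous
cur = conn.cursor()
cur.execute("SELECT pg_sleep(2)")  # returns immediately; the query is now in flight
# ... the client thread is free to do other work here ...
wait(conn)                         # poll until the result has arrived
print(cur.fetchall())

Note that only one statement can be in flight per connection, matching the protocol behaviour described above; concurrency requires a second session.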
I don't get whether max transactions refers to the client side or the server side of CoAP. For instance, if COAP_MAX_OPEN_TRANSACTIONS is 4, does it mean that a CoAP client can send 4 parallel requests to different servers, or that a CoAP server can process at most 4 requests in parallel?
From the code, I see that it initiates a blocking request from the client side, which will not allow looping for another transaction.
So I need clarification here. If multiple CoAP transactions are possible from the client side, then please mention how. Thank you.
According to the paper dunkels.com/adam/kovatsch11low-power.pdf,
Section III-F: "CoAP Clients provide a blocking function call implemented with protothreads to issue a request. This linear programming model can also hide blockwise transfers, as it continues first when all data were received." Based on this, I am guessing the client can generate one transaction at a time and blocks to wait for the ACK (or a timeout).
Here is a code reference: https://github.com/contiki-os/contiki/blob/master/apps/er-coap/er-coap-engine.c#L370.
In contrast, the server can respond to multiple transactions simultaneously, because there are transactions that wait for a response (from, say, sensors) and need to save state. This is my understanding of the question posted. If I am wrong, please correct me.
According to these links:
https://github.com/contiki-os/contiki/blob/bc2e445817aa546c0bb93a9900093ec276005e2a/apps/er-coap/er-coap-conf.h#L51
https://github.com/contiki-ng/contiki-ng/wiki/Documentation:-CoAP#configuration
I guess it's just the maximum number of confirmable requests (which have not yet received an ACK) that can be stored simultaneously for retransmission.
It is used for reserving memory for that maximum number of requests:
https://github.com/contiki-os/contiki/blob/3f4436bac9a9f6da0df188372d4374693eab8a52/apps/er-coap/er-coap-transactions.c#L57
MEMB(transactions_memb, coap_transaction_t, COAP_MAX_OPEN_TRANSACTIONS); /* statically reserves one coap_transaction_t slot per open transaction */
How come every site explains that in SSE a single connection stays open between the client and server ("With SSE, a client sends a standard HTTP request asking for an event stream, and the server responds initially with a standard HTTP response and holds the connection open"), and then the server decides when to send data to the client, while in my attempt to implement SSE I see in Fiddler requests being sent every couple of seconds?
To me it feels like long polling, not one single connection kept open.
Moreover, it is not that the server decides to send data to the client and sends it; it sends data only when the client sends the next request.
If I respond with "retry: 10000", then even if something happens that the server wants to report right now, it will reach the client only on the next request (10 seconds from now), which to me does not really look like a connection that is kept open with the server sending data as soon as it wants to.
Your server is closing the connection immediately. SSE has a built-in retry function for when the connection is lost, so what you are seeing is:
Client connects to server
Server mysteriously dies
Client waits two seconds then auto-reconnects
Server mysteriously dies
Client waits two seconds then auto-reconnects
...
To fix the server-side script, you want to go against everything your parents taught you about right and wrong, and deliberately create an infinite loop. So, it will end up looking something like this:
validate user, set up database connection, etc.
while(true){
get next bit of data
send it to client
flush
sleep 2 seconds
}
Where get next bit of data might be polling a DB table for new records since the last poll, scanning a file system directory for new files, etc.
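As a concrete sketch of that loop, here is a minimal version using only Python's standard library (the port, the payload, and the two-second interval are made up for illustration):

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class SSEHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Standard HTTP response whose body is an event stream that never ends.
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        count = 0
        while True:  # the deliberate infinite loop keeps the connection open
            count += 1
            self.wfile.write(f"data: tick {count}\n\n".encode())
            self.wfile.flush()  # push the event to the client immediately
            time.sleep(2)       # in real code, poll a DB or directory here
            # (a production script would also catch BrokenPipeError for
            # when the client goes away)

HTTPServer(("", 8000), SSEHandler).serve_forever()

The essential parts are the text/event-stream content type, the data: ... framing terminated by a blank line, the flush after each event, and the loop that never returns, so the response (and therefore the connection) stays open.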
Alternatively, if the server-side process is a long-running data analysis, your script might instead look like this:
validate user, set-up, etc.
while(true){
calculate next 1000 digits of pi
send them to client
flush
}
This assumes that the calculate line takes at least half a second to run; any more frequently and you will start to clog up the socket with lots of small packets of data for no benefit (the user won't notice that they are getting 10 updates/second instead of 2 updates/second).
I have a strange problem on one of my client's workstations. I have a simple application that exchanges some data over the network between two endpoints.
Basically the transaction goes like this:
Client A listens for an incoming connection
Client B connects to A and sends some data
Client A reads this data for further processing
Now the strange part is that client A does not receive the whole data (sometimes it is part of the buffer, sometimes it is empty).
Client A uses the WSAEventSelect function and waits for FD_READ to read data sent by B, and for FD_CLOSE to detect disconnection.
Usually (every time except with this one particular client) FD_READ is signaled, the data is processed, and after that FD_CLOSE is signaled and all is fine; but here, instead of FD_READ, I receive FD_CLOSE.
Can someone tell me how this is possible? Another thing is that the program was working fine for about a year and suddenly it crashed.
Now the strange part is that client A does not receive the whole data (sometimes it is part of the buffer, sometimes it is empty).
There's nothing strange about that; that's how TCP works, except that you will never receive zero bytes in blocking mode.
Usually (every time except with this one particular client) FD_READ is signaled, the data is processed, and after that FD_CLOSE is signaled and all is fine; but here, instead of FD_READ, I receive FD_CLOSE.
Note that FD_READ can be signalled any number of times, not just once. You're not guaranteed to receive an entire message in a single read.
Can someone tell me how this is possible?
The client has closed the connection.
Quoting http://msdn.microsoft.com/en-us/library/windows/desktop/ms741576%28v=vs.85%29.aspx
"An application should check for remaining data upon receipt of FD_CLOSE to avoid any possibility of losing data."
So if the error code associated with the FD_CLOSE notification is 0, you should check whether you still have data to read; that might be where your missing data is.
If the error code is NOT 0, then there was an error and the missing data is probably lost.
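To make the "keep reading" advice concrete, here is a minimal sketch of a drain loop in Python (the 4096-byte chunk size is arbitrary); the same structure applies to a WinSock recv loop driven by FD_READ and FD_CLOSE:

import socket

def recv_all(sock: socket.socket) -> bytes:
    # A single recv may return only part of the data, so loop until the
    # peer closes the connection (recv returning b"" is the FD_CLOSE analogue).
    chunks = []
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        chunks.append(chunk)
    return b"".join(chunks)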