I have a process with several open handles to gateways, RDBs, HDBs, etc.
These handles are maintained in a simple in-memory table.
I'd like to find a way to query a remote function while imposing a timeout on my side. Is that possible?
e.g.
h: .conn.getHandle[`someAlias]; / this literally returns the handle number
h ({system "sleep 10"}; ())
Can I somehow impose a timeout such that after 5 seconds the above call throws an error, or add some sort of retry logic?
Add
\T 5
before the query, or when starting the q process use:
q -T 5
You could use the timeout functionality via the \T system command if you want the timeout applied to all remote calls:
https://code.kx.com/q/basics/syscmds/#t-timeout
You would use this option on your backend process (RDB/HDB); any queries sent to these processes will then be killed after the specified timeout.
I can't find any documentation clarifying how Postgres handles request-response for asynchronous, non-blocking requests at the socket protocol level.
As an example, suppose a pg client sends just two SQL queries over one socket connection to the pg server. I assume there are two possible ways to handle the async operation:
The client can't send the second command before the first request has received its response:
client socket1 server
-----query1---------->
<----response1--------
------query2--------->
<-----response2-------
The socket sends two queries at the same time, and the responses are distinguished by a unique flag:
client socket1 server
-------query1 + uid:msg1----->
-------query2 + uid:msg2----->
<------response2 + uid:msg2---
<------response1 + uid:msg1---
I think the second way is how a real async process should handle requests, but I can't find any resource in the documentation.
Question:
Which way does Postgres use to handle async socket operations?
If it uses the first way, why?
On the network protocol level, there is no distinction between synchronous and asynchronous mode. The difference is only in the way the client API works.
There is always at most one statement active at any given time. See for example the documentation for the simple protocol, but it is the same for the extended protocol.
The difference is in the way the client API works:
In synchronous mode, the client thread is blocked until the query result is complete.
In asynchronous mode, control is returned to the client thread immediately after sending the query, and the client can go and do something else while it waits for the server response. It has to poll the socket regularly to check if the result has arrived. Then it can read and process the result.
If you want to run two statements concurrently, you have to use two database sessions.
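For illustration, here is a minimal sketch of that asynchronous client pattern using libpq in C (the conninfo string and the query are placeholders; compile with -lpq):

/* Send a query without blocking, do other work, then poll for the result. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");  /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* Control returns immediately after the query is sent. */
    if (!PQsendQuery(conn, "SELECT pg_sleep(5)")) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* The client is free to do other work here. To check for the result,
       consume any pending input and test PQisBusy(). */
    do {
        PQconsumeInput(conn);
    } while (PQisBusy(conn));

    /* Note: a second PQsendQuery before draining the results would fail,
       because at most one statement is active per connection. */
    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);

    PQfinish(conn);
    return 0;
}

The do/while loop is the "poll the socket regularly" step described above; a real client would typically select() or poll() on PQsocket(conn) rather than spin.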
Consider the following definition on server:
f:{show "Received ",string x; neg[.z.w] (`mycallback; x+1)}
on client side:
q)mycallback:{show "Returned ",string x;}
q)neg[h] (`f; 42)
q)"Returned 43"
In Q for Mortals, the tip says:
When performing asynchronous messaging, always use neg[.z.w] to ensure
that all messages are asynchronous. Otherwise you will get a deadlock
as each process waits for the other.
Therefore I changed the definition on the server to:
f:{show "Received ",string x; .z.w (`mycallback; x+1)}
Everything goes fine, and I haven't seen any deadlocks.
Can anyone give me an example to show why I should always use neg[.z.w]?
If I understand your question correctly, I think you're asking how sync and async messages work. The issue with the example you have provided is that x+1 is a very simple query that can be evaluated almost instantaneously. For a more illustrative example, consider changing this to a sleep (or a more strenuous calculation, e.g. a large database query).
On your server side define:
f:{show "Received ",string x;system "sleep 10"; neg[.z.w] (`mycallback; x+1)}
Then on your client side you can send the synchronous query:
h(`f; 42)
multiple times. Doing this, you will see there is no longer a q prompt on the client side, as it must wait for a response. These requests are queued and can thus block both the client and server for a significant amount of time.
Alternatively, if you were to call:
(neg h)(`f; 42)
on the client side, you will see the q prompt remain, as the client is not waiting for a response. This is an asynchronous call.
Now, in your server-side function you are looking at using either .z.w or neg .z.w. This follows the exact same principle, but from the server's perspective. If the response to a query is large enough, the messaging can take a significant amount of time. Consequently, by using neg this response can be sent asynchronously, so the server is not blocked during this process.
NOTE: If you are working on a Windows machine you will need to swap out sleep for timeout, or perhaps a while loop, if you are following my examples.
Update: I suppose one way to cause such a deadlock would be to have two dependent processes attempting to synchronously call each other. For example:
q)\p 10002
q)h:hopen 10003
q)g:{h (`f1;`)}
q)h (`f;`)
on one side and
q)\p 10003
q)h:hopen 10002
q)f:{h (`g;`)}
q)f1:{show "test"}
on the other. This would result in both processes being stuck and thus test never being shown.
Joe's answer covers pretty much everything, but to your specific example, a deadlock happens if the client calls
h (`f; 42)
The client is waiting for a response from the server before processing the next request, but the server is also waiting for a response from the client before it can complete the client's request.
I am using libpq v9.6.8 for my application (running 24/7), which inserts data into the Postgres database. I also run PQexecParams to get the table columns. But randomly (sometimes just once a week, but then twice a weekend) this blocking PQexecParams call somehow returns after about 2 hours. Within these two hours my application just hangs... The inserts are done via async PQsendQueryParams.
Is there a way to configure the timeout for PQexecParams (I cannot find any appropriate timeout setting in the library; maybe there is one on the Postgres server)? Is there a better way to perform the select synchronously?
Thank you in advance
The two hours suggest TCP keepalive kicking in and determining that the connection has gone bad.
You can set the keepalives_idle connection parameter so that the timeout happens earlier and you are not stalled for two hours.
But you probably also want to know what aborts the network connection. Your first look should be at the PostgreSQL server log; you should see an error message that matches the one on the client side. Probably a network component is at fault – look for firewalls in particular.
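If you want to try that, the keepalive parameters can go straight into the libpq conninfo string; a sketch in C (the host, database and values are illustrative, not recommendations):

#include <libpq-fe.h>

PGconn *connect_with_keepalives(void)
{
    /* Probe an idle connection after 60 s, retry every 10 s, and give up
       after 5 failed probes, so a dead connection is detected in minutes
       rather than hours. */
    return PQconnectdb(
        "host=dbhost dbname=mydb "   /* placeholder host/database */
        "keepalives=1 "
        "keepalives_idle=60 "
        "keepalives_interval=10 "
        "keepalives_count=5");
}

keepalives_interval and keepalives_count are the companion parameters to keepalives_idle; it usually makes sense to tune all three together.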
We have a ticker plant, and sometimes someone mistakenly runs queries against the HDB without a date, or against the RDB without a time, or with some other processing logic that may kill kdb+. How can we find and kill such a query without restarting the kdb+ instance?
You can set a client query timeout in your service:
parameter: -T
reference: http://code.kx.com/q4m3/13_Commands_and_System_Variables/#13121-timeout-t
From the wiki: the timeout parameter (note the upper case) is an int that specifies the number of seconds any call from a client may execute before it is timed out and terminated. The default value is 0, which means no timeout. This parameter corresponds to the command \T.
Ex: start your q process as:
q -T 40
This sets the client query timeout to 40 seconds.
As Rahul says, you can use -T for the timeout.
If you're on a Unix system you can also kill -SIGINT <pid>, which interrupts the currently executing query. In multithreaded mode you might get mixed results, though.
I'm using Perl sockets on AIX 5.3, Perl version 5.8.2.
I have a server written with Perl sockets. There is an option called "Blocking", which can be set to 0 or 1. When I use Blocking => 0, run the server, and the client sends data (5000 bytes), I am able to receive only 2902 bytes in one call. When I use Blocking => 1, I am able to receive all the bytes in one call.
Is this how sockets work or is it a bug?
This is a fundamental part of sockets - or rather, TCP, which is stream-oriented. (UDP is packet-oriented.)
You should never assume that you'll get back as much data as you ask for, nor that there isn't more data available. Basically, more data can come at any time while the connection is open. (The read/recv/whatever call will probably return a specific value to mean "the other end closed the connection.")
This means you have to design your protocol to handle this - if you're effectively trying to pass discrete messages from A to B, two common ways of doing this are:
Prefix each message with a length. The reader first reads the length, then keeps reading the data until it has read as much as it needs (see the sketch after this list).
Have some sort of message terminator/delimiter. This is trickier, as depending on what you're doing you may need to be aware of the possibility of reading the start of the next message while you're reading the first one. It also means "understanding" the data itself in the "reading" code, rather than just reading bytes arbitrarily. However, it does mean that the sender doesn't need to know how long the message is before starting to send.
(The other alternative is to have just one message for the whole connection - i.e. you read until the connection is closed.)
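As an illustration of the length-prefix approach (sketched in C rather than Perl, but the idea is identical; the 4-byte big-endian header is this sketch's own convention, not something the protocol mandates):

#include <stdint.h>
#include <stdlib.h>
#include <arpa/inet.h>   /* ntohl */
#include <sys/types.h>
#include <sys/socket.h>  /* recv */

/* Keep calling recv() until exactly len bytes have arrived; a single
   recv() may return fewer bytes than requested. */
static int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)              /* 0 = peer closed, <0 = error */
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Read one framed message: a 4-byte length header, then the payload.
   The caller frees the returned buffer. */
char *read_message(int fd, uint32_t *out_len)
{
    uint32_t netlen;
    if (recv_all(fd, &netlen, sizeof netlen) < 0)
        return NULL;
    *out_len = ntohl(netlen);

    char *payload = malloc(*out_len);
    if (payload != NULL && recv_all(fd, payload, *out_len) < 0) {
        free(payload);
        return NULL;
    }
    return payload;
}

The recv_all loop is the crucial part: it absorbs exactly the kind of partial read (2902 of 5000 bytes) that a stream socket is allowed to produce.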
Blocking means that the socket waits until there is data before returning from a receive function. It's entirely possible there's a tiny wait at the end as well, to try to fill the buffer before returning, or it could just be a timing issue. It's also entirely possible that the non-blocking implementation returns one packet at a time, whether or not more are available. In short, no, it's not a bug, but the specific 'why' of it is the old cop-out: "it's implementation specific".
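To make the difference concrete, here is a sketch in C of what a non-blocking read looks like at the system-call level (fd is assumed to be an already-connected TCP socket):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

void demo_nonblocking_read(int fd)
{
    /* Switch the socket to non-blocking mode. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    char buf[5000];
    ssize_t n = recv(fd, buf, sizeof buf, 0);
    if (n > 0)
        printf("got %zd bytes (may be only part of the message)\n", n);
    else if (n == 0)
        printf("peer closed the connection\n");
    else if (errno == EWOULDBLOCK || errno == EAGAIN)
        printf("no data available yet\n");
    else
        perror("recv");
}

A blocking socket would instead sleep inside recv() until at least one byte arrived (not necessarily all 5000), which is why the blocking call above happened to return the full message while the non-blocking one returned 2902 bytes.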