We have a ticker plant, and sometimes someone mistakenly runs a query against the HDB without a date, or against the RDB without a time, or with some other processing logic that can kill the kdb+ process. How can we find and kill such a query without restarting the instance?
You can set a client query timeout on your service:
Parameter: -T
Reference: http://code.kx.com/q4m3/13_Commands_and_System_Variables/#13121-timeout-t
From wiki: The timeout parameter (note upper case) is an int that specifies the number of seconds any call from a client will execute before it is timed out and terminated. The default value is 0 which means no timeout. This parameter corresponds to the command \T.
Example: start your q process as:
q -T 40
This sets the client query timeout to 40 seconds.
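Since the question asks how to do this without restarting the instance: the same limit can also be set on an already-running q process via the corresponding system command:
\T           / show the current client-query timeout in seconds (0 = no timeout)
\T 40        / set it to 40 seconds on the live process, no restart required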
As Rahul says, you can use -T for the timeout.
If you're on a Unix system you can also run kill -SIGINT <pid>, which interrupts the currently executing query. In multithreaded mode you might get mixed results, though.
Related
I have a program on the same machine as the Postgres server. The program issues JDBC requests to fetch tuples from the underlying Postgres and then performs some processing. For a JDBC request resultSet = statement.executeQuery();, how can I measure the time spent by Postgres itself, i.e., the time for Postgres to receive the request, fetch the tuples from disk, and return them? In other words, is there a more fine-grained way than measuring the time spent around executing the code below, e.g.,
// timer start
resultSet = statement.executeQuery();
// timer end
Thanks!
To measure the time spent on the server, set the PostgreSQL parameter log_min_duration_statement to a small value. Then every statement whose execution time exceeds the value will be logged, along with the time it took to process on the server.
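For example, a minimal sketch (PostgreSQL 9.4+ syntax; a threshold of 0 ms logs every statement, so pick a higher value on a busy server):
-- log every statement slower than the threshold, together with its server-side duration
ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();   -- pick up the change without restarting the server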
I have a process with several open handles to gateways, RDBs, HDBs, etc.
These handles are maintained in a simple in-memory table.
I'd like to find a way to call a remote function while imposing a timeout on my side. Is that possible?
e.g.
h: .conn.getHandle[`someAlias]; / this is literally returning the handle number
h ({system "sleep 10"}; ())
Can I somehow impose a timeout so that after 5 seconds the above call throws an error, or add some sort of retry logic?
Add
\T 5
before the query (on the process that will serve it), or when starting that q process use:
q -T 5
You could use the timeout functionality via the \T system command if you want the timeout applied to all remote calls:
https://code.kx.com/q/basics/syscmds/#t-timeout
You would set this option on your backend process (RDB/HDB); any queries sent to those processes will then be killed after the specified timeout.
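If you also want the retry logic mentioned in the question: the timed-out call comes back to the client as an error that protected evaluation can trap, so something along these lines is possible. This is a minimal sketch, not library code; runWithRetry is a hypothetical helper, and it assumes the backend has \T 5 (or -T 5) set so the server aborts the call.
h:.conn.getHandle[`someAlias];                      / handle lookup as in the question
runWithRetry:{[h;qry;n]
  r:@[{(`ok;x y)}[h]; qry; {(`error;x)}];           / protected sync call, tag the outcome
  $[(`error~first r) and n>0; .z.s[h;qry;n-1]; r]}  / on error, retry up to n more times
runWithRetry[h; ({system "sleep 10"}; ()); 2]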
According to this response, you can set a time limit for a query operation via a find() parameter or a cursor method:
cursor = db.collection.find(max_time_ms=1)
or
cursor = db.collection.find().max_time_ms(1)
The doc says:
max_time_ms (optional): Specifies a time limit for a query operation. If the specified time is exceeded, the operation will be aborted and ExecutionTimeout is raised. Pass this as an alternative to calling max_time_ms() on the cursor.
We're currently experiencing a problem where a query runs for ~30 minutes before it eats all the RAM and the server dies. I hope this parameter puts a hard limit on the query, so that after the given time the server gives up.
Since our app is full of finds and cursors: is there a way to set this parameter directly in the MongoClient constructor?
The doc says:
socketTimeoutMS: (integer or None) Controls how long (in milliseconds) the driver will wait for a response after sending an ordinary (non-monitoring) database operation before concluding that a network error has occurred. Defaults to None (no timeout).
connectTimeoutMS: (integer or None) Controls how long (in milliseconds) the driver will wait during server monitoring when connecting a new socket to a server before concluding the server is unavailable. Defaults to 20000 (20 seconds).
serverSelectionTimeoutMS: (integer) Controls how long (in milliseconds) the driver will wait to find an available, appropriate server to carry out a database operation; while it is waiting, multiple server monitoring operations may be carried out, each controlled by connectTimeoutMS. Defaults to 30000 (30 seconds).
I couldn't find any other timeout, and none of these seem to be the equivalent of max_time_ms. Am I missing something?
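For what it's worth, here is a sketch of how those constructor options are passed and how they differ from max_time_ms (database and collection names are illustrative): the constructor settings are socket/connection-level timeouts, while the server-side query limit still has to be requested per cursor.
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout

client = MongoClient(
    "mongodb://localhost:27017",
    socketTimeoutMS=60000,            # network read timeout per operation
    connectTimeoutMS=20000,           # timeout for establishing a new socket
    serverSelectionTimeoutMS=30000,   # how long to wait for a usable server
)

# the per-query, server-side limit is still set per cursor:
try:
    docs = list(client.mydb.mycollection.find().max_time_ms(1000))
except ExecutionTimeout:
    print("query exceeded its max_time_ms limit and was aborted by the server")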
I know there is a WLM timeout that fires when a query executes for longer than that time. But can I set a timeout on the amount of time a query waits in the queue?
You can indirectly control the amount of time a query spends waiting in the queue by setting the statement_timeout configuration parameter at the session or cluster level, in addition to the max_execution_time parameter at the WLM level. If both the WLM timeout (max_execution_time) and statement_timeout are specified, the shorter one is used. In that case the maximum time a query can wait in the queue is statement_timeout minus max_execution_time.
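A sketch of the session-level side of that (the 300000 ms value is illustrative):
SET statement_timeout TO 300000;   -- cap the whole statement, queue wait included, at 5 minutes
-- run the query here
RESET statement_timeout;           -- clear the cap if the session is reused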
You can modify your WLM configuration to create separate queues for queries based on how long they need to run, and at runtime you can route queries to those queues by user group or query group. Hope that is what you want.
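If you go the separate-queues route, a session can be steered to a queue at runtime, e.g. (the query group name is hypothetical and must match one configured in the WLM JSON):
SET query_group TO 'short_running';   -- route to the queue configured with a short max_execution_time
-- run the ad-hoc query here
RESET query_group;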
I have two locks in my database that I cannot kill.
When I type
KILL '8A551D5D-887D-4776-AEB3-F603A4CDF0E0'
I get an error saying:
Distributed transaction with UOW {8A551D5D-887D-4776-AEB3-F603A4CDF0E0} is rolling back: estimated rollback completion: 0%, estimated time left 0 seconds.
It's been like that for 24 hours now, so I believe that the completion level will not change.
When I try to kill the other one, I get an error saying:
There is a connection associated with the distributed transaction with UOW {6B820CA6-5836-4CCB-BBA6-C3ED615EA933}. First, kill the connection using KILL SPID syntax.
I tried to kill all the connections using the following script:
USE master
GO
ALTER DATABASE YourDatabaseName
SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
But the script never ends.
How do I kill these locks?
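Not a verified fix, but a sketch of how the SPID mentioned in the second error message might be located so it can be passed to KILL (the UOW below is the one from that error):
-- map the distributed transaction's UOW back to the owning session_id
SELECT st.session_id, at.transaction_uow, at.transaction_state, at.dtc_state
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
    ON at.transaction_id = st.transaction_id
WHERE at.transaction_uow = '6B820CA6-5836-4CCB-BBA6-C3ED615EA933';
-- KILL <session_id>;   -- only after confirming which session owns the transaction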