I'm trying to connect to an RDB from a client and get a row count for one of the tables every x seconds using the .z.ts timer function. So assuming my RDB is listening on port 5011 then my q code looks something like this:
h:hopen 5011
.z.ts:{h"count table"}[]
\t 1000
However, I get the following error, suggesting the count returned can't be written to the handle as it's invalid:
Cannot write to handle 183701. OS reports: The handle is invalid.
Any insight would be appreciated.
Because you're passing an empty argument list to the lambda while defining .z.ts, you're evaluating it immediately: .z.ts gets assigned the result of 'count table' (an integer), not the function itself. Then, when the timer ticks, q applies .z.ts to the current timestamp, which means it tries to send the timestamp across whatever integer is stored in .z.ts, and that integer isn't a valid connection handle.
Remove the empty argument list and it'll work.
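A minimal corrected version of the snippet from the question (with show added so the count is actually visible on each tick):

h:hopen 5011                  / handle to the RDB listening on 5011
.z.ts:{show h"count table"}   / no [] - assign the function itself, not its result
\t 1000                       / invoke .z.ts every 1000 ms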
I need to understand how to enable logging in an Ingres stored procedure. I have read a lot about "printqry", DBMS Server query tracing, and security auditing. My requirement is: does the Ingres db give an option for custom logging where I can log custom messages? Something like:
db.trace("The value for x is ", x)
You can use the MESSAGE statement to write an arbitrary message. The message can go to the current SESSION (meaning the calling program has to run INQUIRE_SQL to get the text), to the security audit log, or to the errlog. I suspect the latter would be most useful.
It takes an optional error number and/or message text. If you want to write messages involving values other than a constant string, you'll need to assign the text to a variable first, e.g.
msg_txt = 'The value for x is ' + VARCHAR(:x);
MESSAGE :msg_txt WITH DESTINATION = (ERROR_LOG);
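In context, a rough sketch of a complete procedure (the procedure name and parameter are invented for illustration; check the Ingres documentation for the exact MESSAGE syntax on your version):

CREATE PROCEDURE log_demo (x INTEGER) AS
DECLARE
    msg_txt VARCHAR(200) NOT NULL;
BEGIN
    /* build the text, then route it to the error log */
    msg_txt = 'The value for x is ' + VARCHAR(:x);
    MESSAGE :msg_txt WITH DESTINATION = (ERROR_LOG);
END;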
HTH
I am struggling to read Postgres NOTICE messages in my C++ API. I can only read EXCEPTION messages using the function PQresultErrorMessage(PGresult), but not lower-level messages.
PQresultErrorField(res, PG_DIAG_SEVERITY) returns null pointer.
How do I read NOTICE and other low level messages?
(Using PostgreSQL 9.2)
Set up a notice receiver or notice processor using PQsetNoticeReceiver / PQsetNoticeProcessor. Both set up callbacks that are invoked when notice messages arrive. Note that this may happen before, during, or after processing of query data.
It's safe to assume that after all query results are returned (PQexec or whatever has returned) and you've called PQconsumeInput to make sure there's nothing else waiting, all notices for the last command have been received. The PQconsumeInput call shouldn't really be necessary; it's just being cautious.
See the documentation for libpq.
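A minimal sketch of the receiver approach (the empty connection string and the DO block are placeholders for demonstration; fill in your own):

#include <stdio.h>
#include <libpq-fe.h>

/* Called once per NOTICE/WARNING the server sends on this connection. */
static void notice_receiver(void *arg, const PGresult *res)
{
    const char *severity = PQresultErrorField(res, PG_DIAG_SEVERITY);
    const char *message  = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);
    fprintf(stderr, "[%s] %s\n",
            severity ? severity : "NOTICE",
            message  ? message  : PQresultErrorMessage(res));
    (void) arg;   /* unused user-data pointer */
}

int main(void)
{
    PGconn *conn = PQconnectdb("");   /* assumption: connection params come from the environment */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Install the callback before running anything that can raise notices. */
    PQsetNoticeReceiver(conn, notice_receiver, NULL);

    PGresult *res = PQexec(conn, "DO $$ BEGIN RAISE NOTICE 'hello'; END $$;");
    PQclear(res);
    PQfinish(conn);
    return 0;
}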
Considering the setup of kdb+ tick, how do the tables get pushed through the sockets?
In tick, it's possible to subscribe a process (let's call it 'a') to the tickerplant, which will then proceed to push the data for the subscribed 'tickers' to 'a' as new data arrives.
I would like to do the same, but I was wondering how. As far as I know, inter-process communication between q processes is just the ability to transport commands from one process to the other, such that the commands are executed on the other.
So how is it then possible to transport a complete table between processes?
I know the method which does this in tick is .u.pub and .u.sub, but it's not clear to me how the tables are transported between the processes.
So I have two questions:
How does kdb+ tick do this?
How can I push a table from one process to the other in general?
Let's walk through a simplified version of the process:
We have one server 'S' and one client 'C'. When 'C' calls the .u.sub function, that function connects to 'S' using its host and port and calls a specific function on 'S' (let's say 'request') with the subscription parameters.
On getting this request, the 'request' function on 'S' adds the following entries to the subscription table it maintains:
-> Host and port of the client (from the incoming request)
-> Subscription params (for example, the client sends sym `VOD.L for subscription)
Now when 'S' gets any data update from the feed, it goes through its subscription table and checks for entries whose subscription-param column value (sym in our case) matches the incoming data. It then makes a connection to each matching subscriber using the host and port from the table and calls its 'upd' function with the new data, as the sketch below illustrates.
The only requirement is that the client has an 'upd' function defined on its side.
This is a very basic version of the process. kdb+ tick uses it with extra optimizations and features: for example, a more optimized structure for the subscription table, log maintenance, log replay, unsubscription, recovery logic, a timer for publishing, and a lot more.
For more details, you can check the definitions of the functions in the .u namespace.
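To make the mechanism concrete, here is a hand-rolled pub/sub sketch in q. The names subs, sub, and pub are invented for illustration; the real .u implementation also keeps the open handle (.z.w) rather than reconnecting by host and port:

/ server, started with: q -p 5011
subs:([]handle:`int$();sym:`$())          / who is subscribed to what
sub:{[s] `subs upsert (.z.w;s);}          / .z.w is the calling client's handle
pub:{[s;data]
  hs:exec handle from subs where sym=s;   / handles interested in sym s
  (neg hs)@\:(`upd;s;data);}              / async: run upd[s;data] on each subscriber
.z.pc:{delete from `subs where handle=x}  / drop subscribers whose connection closed

/ client
h:hopen 5011
upd:{[s;data] show (s;data)}              / invoked when the server pushes
h(`sub;`VOD.L)                            / register interest synchronously

Note that data here can be any q object, including a whole table: q IPC serializes arbitrary q objects, not just command strings, which is also the answer to the second question.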
Is there a command to know whether a kdb server is busy running a query? Even better, is there a way to know the percentage completion of the query being run?
So far I've been looking at top on Linux to decide which server to use...
Unfortunately, not directly. The reason is the single-threaded nature of a kdb process. In practice this is easily worked around by adding some basic logging to your server: whenever a query comes in, just log to a file the time the query arrived and the time the result was returned to the user.
Take a look at the .z.pg and .z.ps functions, which are called to handle synchronous and asynchronous requests, respectively. By default they are just set to "value", which means evaluate the string and return the result. Just replace this with your own function that logs events to a file or a log server.
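For instance, a minimal logging wrapper for the synchronous handler might look like this (the log file name is an assumption):

logh:hopen `:queries.log                     / append handle to a text log file
.z.pg:{[m]
  neg[logh] string[.z.p],"  in: ",.Q.s1 m;   / timestamp the incoming query
  r:value m;                                 / evaluate, as the default does
  neg[logh] string[.z.p]," out";             / timestamp when the result is ready
  r}

With this in place, the gap between the "in" and "out" lines tells you what the server is chewing on and for how long; percentage completion, however, is still not knowable.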
Besides the above solution, a simpler way is to keep checking the port.
Normally all queries run against a port, and a kdb server can open multiple ports for different purposes.
Details:
Use the code below to test a port. If the port is busy, a null result is returned, and you can then kill the process and restart it, or whatever the requirement is.
The code attempts to open a connection to the port with a 3-second timeout; a busy (single-threaded) server will not accept the connection in time.
.server.testQuery:{[inPort]
  / trap: try to open a handle with a 3000 ms timeout; return null on failure
  res:@[{hopen(x;3000)};`$"::",string inPort;0N];
  if[not null res;hclose res];   / server responded - close the test handle
  :res
  };
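Usage, assuming the server listens on 5011: if .server.testQuery[5011] comes back null, the server did not accept a connection within 3 seconds, which for a single-threaded kdb process usually means it is busy (or down).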
I have an operation that takes data from an MS-Access 2007 database (via OleDbConnection) and uses SqlBulkCopy to transfer that data to a SQL Server database. This has previously been working, and continues to work for one MS-Access database but not the other.
I get the error message:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
It is hard to believe it is a timeout, as oledbCommand.CommandTimeout = 0, sqlBulkCopy.BulkCopyTimeout = 0, and on either side (MS-Access and SQL Server) the timeouts have now been set to 0.
Are there other issues/exceptions that the above error message could be hiding? Is there a way to determine the root cause of a sqlBulkCopy.WriteToServer exception? (There don't appear to be any inner exceptions, etc.)
So the issue was that there were dates being transferred, and some of those dates were invalid for SQL Server but valid in Access (Access accepts dates back to the year 100, whereas SQL Server's datetime type only covers 1753-01-01 onwards). For whatever reason this presented as a timeout rather than as "invalid date/time", though if you reduce the data being transferred to a handful of rows (200) rather than the full transfer (500,000), it reports as invalid date/time... curious.
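As a quick illustration of the range mismatch (T-SQL; the literal uses the unambiguous YYYYMMDD form):

-- year 1000 is a legal Access date but below datetime's 1753-01-01 minimum
SELECT CAST('10000101' AS datetime);    -- fails with an out-of-range conversion error
SELECT CAST('10000101' AS datetime2);   -- succeeds: datetime2 covers 0001-01-01 onwards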