Callback on table update in kdb

I want to implement a client-server mechanism in kdb where clients can register themselves to receive a callback when some table is updated.
I know how callbacks work in kdb, but I was not able to figure out how to bind table updates on the server to a function from which I can invoke the client's callback.

Basically you want to implement a 'publish-subscribe' mechanism. kdb+ already ships a script, 'u.q', in the tick library which provides that:
https://code.kx.com/q/cookbook/publish-subscribe/
On the server, it maintains a list of clients along with their handles, subscribed tables and callback functions. You will have to change the function on the server which handles data inserts/updates so that it also publishes the data:
q) .u.pub[table name; table data]
This will take care of calling each client's callback function registered for this table.
On the client side, create a connection to the publisher and call the library function for subscription:
q) .u.sub[tablename;list_of_symbols_to_subscribe_to]
You can also look into the example publisher and subscriber code: https://github.com/KxSystems/cookbook/tree/master/pubsub
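As a rough sketch of how the pieces fit together (the load path, port and table name below are assumptions for illustration, not part of the question):

/ server.q - minimal publisher, loosely mirroring kdb+tick
\l tick/u.q                            / assumes u.q from kdb+tick is at this path; adjust as needed
\p 5010                                / listen on port 5010
trade:([]time:`timespan$();sym:`$();price:`float$())
.u.init[]                              / builds .u.w, the registry of subscriber handles
upd:{[t;x] t insert x; .u.pub[t;x]}    / on every insert/update, also publish to subscribers

/ client.q - minimal subscriber
h:hopen `::5010
h(".u.sub";`trade;`)                   / ` = all syms; the server records our handle
upd:{[t;x] show (t;x)}                 / callback the server invokes with (table name; new rows)

With this in place, every call to upd on the server both stores the new rows and pushes them to each registered client, which is exactly the callback-on-update behaviour you describe.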

Related

Is it possible to write a generic trigger for all tables in Postgres?

I want to listen to every transaction that is recorded in the database, and I noticed that Postgres has a concept called LISTEN and NOTIFY (though I am not sure this is the right approach).
Now I want to write a trigger that sends a notification to the channel whenever any operation occurs on any table.
Is it possible? Is my approach right?
Thanks
Postgres TRIGGER to call NOTIFY with a JSON payload

How to create HTTP GET and POST methods in kdb

What is the best way to set up HTTP GET and POST methods with a kdb database?
I'd like to be able to extract the column names from a kdb table to create a simple form with fillable fields in the browser, allow users to input text into the fields, and then upsert and save that text to my table.
For example if I had the following table...
t:([employeeID:`$()]fName:`$(); mName:`$(); lName:`$())
So far I know how to open a port (\p 9999) and then view that table in a browser by connecting to the local host http://localhost:9999, and I know how to get only the column names: cols t.
However, I'm unsure how to build a useful REST API from this table that achieves the above objective, mainly updating the table with the inputted data. I'm aware of .Q.hg and .Q.hp from this blog post and the Kx reference, but there is little information and I'm still unsure how to get them to work for my particular purpose.
Depending upon your front-end (client) technology, you can use either HTTP requests or WebSockets. Using HTTP requests will require extra work to customize the output of the request, as by default it returns HTML data.
If your client supports WebSockets (for example JavaScript in a browser), then it is easy to use them.
Basically, you need to do 2 things to set up WebSockets:
1) Start your kdb server and set up a handler function for WebSocket requests. The handler for that is .z.ws. For example, a simple handler would be something like below (it evaluates the incoming message as a q expression, traps any error, and sends the console-formatted result back asynchronously over the WebSocket handle):
q) .z.ws:{neg[.z.w].Q.s @[value;x;{`$ "'",x}]}
2) Set up a message handler function on the client side, open a WebSocket connection from the client and send a request to the kdb server.
Details: https://code.kx.com/v2/wp/websockets/
Example: https://code.kx.com/v2/wp/websockets/#a-simpledemohtml
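If you stay with WebSockets, the server side for the table in the question could look like the sketch below. The message format (a semicolon-delimited string such as "E001;John;Q;Smith") and the field order are assumptions for illustration:

/ table from the question
t:([employeeID:`$()] fName:`$(); mName:`$(); lName:`$())

.z.ws:{[x]
  f:`$";" vs x;                                   / "E001;John;Q;Smith" -> `E001`John`Q`Smith
  `t upsert enlist `employeeID`fName`mName`lName!f;
  neg[.z.w] .Q.s t }                              / send the updated table back as text

On the browser side this pairs with the standard JavaScript WebSocket API (new WebSocket("ws://localhost:9999/"), ws.send(...), and an onmessage handler), as covered in the KX white paper linked above.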

Model parametrized API Call in Activity Diagram

I have an activity diagram with two swimlanes (Client and Server). I want to model a request call from the Client to the Server.
Is it correct to use the Signal notation for calls between systems? Are there alternatives?
The call is parametrized: the Client wants to send something which was created before. How do I model this?
Thankful for any hint! Here's my example:
My answer has to be improved, but here is a first step.
The norm/spec says: "A SendSignalAction is an InvocationAction that creates a Signal instance and transmits the instance to the object given on its target InputPin. A SendSignalAction must have argument InputPins corresponding, in order, to each of the (owned and inherited) Properties of the Signal being sent, with the same type, ordering and multiplicity as the corresponding attribute."
So a SendSignalAction has an association to a target object, which is an InputPin.
For your question about Request:item, I would therefore use InputPins: one for the object from which the Signal is created and one to define the target. (In the schema the target comes from an OutputPin, but a data store may be used.) Then, after sending the request, the client waits for the answer. The AcceptEventAction is linked to a trigger (not shown on the schema) which is a signal, the one created by the server. But you cannot link SendRequest of the Client directly to ReceiveRequest of the Server, because this is not how it runs.
For the server, you can apply similar reasoning.
Concerning the parametrization of the call, I would use InputPins to model the arguments of the call, i.e. the object sent by the call, as shown below.
The Signal and Call notations are correct to me, but I am not used to having the sending and receiving actions in the same diagram, so I will suggest two alternatives:
1) First remove them...
2) Separate Client and Server Modelling
Let me know what you think about that and whether it is clear to you...
I also recognize the tool you used so please find my project at:
https://www.dropbox.com/s/s1mx46cb3linop0/Project1.zip?dl=0
As I see it, it should be modeled like this:
The server runs in an independent loop and starts by waiting for a request. There is an object flow between Create request and Query result set. This symbolizes data placed in a queue (or whatever is appropriate). The receipt of the result set would be done below in a similar way; I just left that out for brevity.
You can also draw an object for the query set instead of the ActionPins.

How to push tables through socket

Considering the setup of kdb+ tick, how do the tables get pushed through the sockets?
In tick, it's possible to subscribe a process (let's say a) to the tickerplant, which will then push the data for the subscribed 'tickers' to a as new data arrives.
I would like to do the same, but I was wondering how. As far as I know, inter-process communication between q processes is just the ability to send commands from one process to the other, such that the commands are executed on the other.
So how is it then possible to transport a complete table between processes?
I know the functions which do this in tick are .u.pub and .u.sub, but it's not clear to me how the tables are transported between the processes.
So I have two questions:
How does kdb+ tick do this?
How can I push a table from one process to another in general?
Let's walk through a simple version of this process:
We have one server 'S' and one client 'C'. When 'C' calls the .u.sub function, that function connects to 'S' using its host and port and calls a specific function on 'S' (let's say 'request') with the subscription parameters.
On getting this request, the 'request' function on 'S' adds the following entries to the subscription table it maintains:
-> the client's connection handle (taken from .z.w of the incoming request)
-> the subscription params (for example, the client sends sym `VOD.L for subscription)
Now when 'S' gets any data update from the feed, it goes through its subscription table and checks for entries whose subscription-param column value (sym in our case) matches the incoming data. It then uses each matching client's stored handle to call their 'upd' function with the new data.
The only requirement is that the client has an 'upd' function defined on its side.
This is a very basic process. kdb+ tick uses this with extra optimizations and features: for example, a more optimized structure for the subscription table, log maintenance, replaying logs, unsubscription, recovery logic, a timer for publishing and a lot more.
For more details, you can check the definitions of the functions in the 'u' namespace.
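To answer the second question directly: pushing a table is just an asynchronous remote call whose argument happens to be a table, since kdb+ IPC serializes any q object you pass, tables included. A minimal sketch (the port, table and function names are illustrative):

/ --- subscriber, started with: q -p 5011 ---
trade:([]time:`timespan$();sym:`$();price:`float$())
upd:{[t;x] t insert x}                 / callback the publisher will invoke

/ --- publisher ---
h:hopen `::5011                        / open a handle to the subscriber
data:([]time:3#.z.n;sym:`VOD.L`IBM`MSFT;price:3?100f)
neg[h](`upd;`trade;data)               / async message: the whole table travels as an argument

This is essentially what .u.pub does in tick: it loops over the handles recorded by .u.sub and sends (`upd;tablename;data) down each one.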

How can a Java client app "catch" (via JDBC) the result produced by a trigger procedure's query?

I'm trying to understand how a Java (client) application that communicates, through JDBC, with a PostgreSQL database (server) can "catch" the result produced by a query that will be fired (using a trigger) whenever a record is inserted into a table.
So, to clarify: via JDBC I install a trigger procedure prepared to execute a query whenever a record is inserted into a given database table, and this query's execution will produce an output (wrapped in a ResultSet, I suppose). My problem is that I have no idea how the client will become aware of those results, which are produced asynchronously.
I wonder if JDBC supports any "callback" mechanism able to catch the results produced by a query that is fired through a trigger procedure under the "INSERT INTO table" condition. And if there is no such "callback" mechanism, what is the best approach to achieve this result?
Thank you in advance :)
Triggers can't return a resultset.
There's no way to send such a result to the JDBC driver.
There are a few dirty hacks you can use to get results from a trigger to the client, but they're all exactly that. Things like:
DECLARE a cursor for the result set, then send the cursor name as a NOTIFY payload, so the app can FETCH ALL FROM <cursorname>;
Create a TEMPORARY table and report its name via NOTIFY.
It is more typical to append anything the trigger needs to communicate to the app to a table that exists for that purpose, and have the app SELECT from it after the operation that fired the trigger has run.
In most cases, if you need to do this, you're probably using a trigger where a regular function would be a better fit.