change the table name that is under subscription - kdb

I have an engine that subscribes to the TP.
The table name in TP is called TradeTab.
When I subscribe in the engine I would like the table to be called TradeRec.
How could this be done?
h(`.u.sub;`TradeTab;`long$til 10)

You would need to update the upd function in your subscriber process, which the tickerplant calls to insert data into your tables.
You can do this in various ways, e.g. use a dictionary where TradeTab is mapped to TradeRec, though you would need a mapping for each table being subscribed to.
upd:{
  / x is the table name sent by the tickerplant, y is the data
  / map each incoming table name to the local name you want
  d:enlist[`TradeTab]!enlist`TradeRec;
  d[x] insert y
  }
Or you could use a conditional; the following would only map TradeTab to TradeRec, and anything else would be inserted under the table name given by the tickerplant.
upd:{
  / rename only TradeTab; any other table is inserted under its own name
  $[`TradeTab=x;
    `TradeRec insert y;
    x insert y]
  }
On top of this, your subscriber process would need a table named TradeRec whose schema matches that of TradeTab.
If you are using the default .u.sub to create your empty table (i.e. setting the empty table it returns), e.g.
(set) . h(`.u.sub;`TradeTab;`)
You can change the name of the table using something like
(set) . `TradeRec,1_h(`.u.sub;`TradeTab;`)
Here, I am replacing the first item (the table name) in the pair returned by the tickerplant with the table name you wish to use instead.

Related

How can generic functions be used for computed fields in Hasura?

I have a logs table which contains all the actions (created, updated) taken by operators (admin users).
Two of its columns (indexed as hash), target_entity and target_id, respectively store:
target_entity: the name of the table the action was taken on.
target_id: the id of the added or updated record in the target table.
So, what I am trying to achieve:
I would like to add a computed field named e.g. logs which depends on a function:
FUNCTION public."fetchLogs"(
    referenceId integer,
    referenceName TEXT
)
The first parameter is the current table's primary key.
I'm not sure if I can automatically send the primary key as the first argument, so it should probably be something like table_row table instead.
The second parameter is a static value, the table's name, which I plan to pass in statically as an argument.
This function returns a JSON object:
RETURNS json LANGUAGE plpgsql STABLE
AS $function$
It should return the log records related to this record.
At this point there are two things I need to tackle:
Since the first parameter is the reference (primary) key, I don't know if I can just use the primary key as an argument. I'm guessing I need to declare it as table_row anytable (if that's a thing) and then use table_row.id.
In the Hasura console, the Add Computed Field > Function Name selector does not list this function, I'm guessing because the function does not explicitly indicate which table it is for.
The answer I'm looking for: is this achievable, or is there a better practice for this kind of thing?
Maybe I need an encapsulating function for each table that needs this computed column, but I'm not sure if that's possible or how it could be done.
P.S. In case you are wondering: yes, all primary keys have the same name and type. All tables (that will use this computed column) have a primary key named id of type integer.
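A minimal sketch of that per-table wrapper idea, using a hypothetical table public.orders: Hasura's function selector generally only lists functions whose first argument is a table's row type, which is what ties a computed field to a table, so a thin wrapper per table can delegate to the generic function.
CREATE OR REPLACE FUNCTION public."ordersLogs"(order_row public.orders)
RETURNS json LANGUAGE sql STABLE AS $function$
    -- pass this row's primary key and the table name as a literal
    SELECT public."fetchLogs"(order_row.id, 'orders');
$function$;
Each table that needs the computed field would get one such wrapper, matching the "encapsulating function per table" guess above.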

Storing duplicate data as a column in Postgres?

In some database project, I have a users table which somehow has a computed value avg_service_rating, and another table called services with all the services associated with the user and the ratings for each service. Is there a computationally light way in which I can maintain the avg_service_rating without updating it every time an INSERT is done on the services table? Perhaps something like a generated column, but with a function call instead? Any direct advice or links to resources will be greatly appreciated as well!
CREATE TABLE users (
    username VARCHAR PRIMARY KEY,
    avg_service_ratings NUMERIC, -- is it possible to store some function call for this column?
    ...
);
CREATE TABLE service (
    username VARCHAR NOT NULL REFERENCES users (username),
    service_date DATE NOT NULL,
    rating INTEGER,
    PRIMARY KEY (username, service_date)
);
If the values should be consistent, a generated column won't fit the bill, since it is only recomputed if the row itself is modified.
I see two solutions:
have a trigger on the service table that updates the users table whenever a rating is added or modified (see the sketch after this list). That slows down data modifications, but not your queries.
Turn users into a view. The original users table would be renamed, and it loses the avg_service_rating column, which is computed on the fly by the view.
To make the illusion perfect, create an INSTEAD OF INSERT OR UPDATE OR DELETE trigger on the view that modifies the underlying table. Then your application does not need to be changed.
With this solution you pay a certain price both on SELECT and on data modifications, but the latter price will be lower, since you don't have to modify two tables (and users might receive fewer modifications than services). An added advantage is that you avoid data duplication.
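A minimal sketch of the trigger solution, assuming the users/service schema from the question and an average over all of that user's ratings (an UPDATE that changes the username would need to refresh both the old and new users, which this sketch skips):
CREATE OR REPLACE FUNCTION refresh_avg_rating() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
    u varchar := CASE WHEN TG_OP = 'DELETE' THEN OLD.username ELSE NEW.username END;
BEGIN
    UPDATE users
       SET avg_service_ratings = (SELECT avg(rating) FROM service WHERE username = u)
     WHERE username = u;
    RETURN NULL; -- the return value is ignored for AFTER row triggers
END;
$$;

CREATE TRIGGER service_avg_rating
AFTER INSERT OR UPDATE OR DELETE ON service
FOR EACH ROW EXECUTE FUNCTION refresh_avg_rating(); -- EXECUTE PROCEDURE before PostgreSQL 11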
A generated column would only be useful if the source data is in the same table row.
Otherwise your options are a view (where you could call a function or calculate the value via a subquery), or an AFTER UPDATE OR INSERT trigger on the service table, which updates users.avg_service_ratings. With a trigger, if you get a lot of updates on the service table you'd need to consider possible concurrency issues, but it would mean the figure doesn't need to be calculated every time a row in the users table is accessed.
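A sketch of the view variant, assuming the original table was renamed to users_base and no longer stores the average:
CREATE VIEW users AS
SELECT u.username, -- plus the other users columns
       (SELECT avg(s.rating)
          FROM service s
         WHERE s.username = u.username) AS avg_service_ratings
  FROM users_base u;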

Table of tablenames

I'm using a platform called CKAN which saves datasets. When a dataset is added, it creates a table with a (seemingly) random name. There are certain datasets whose data I want to use, so I want to store the mapping between the table name and the data inside it in another table.
I would like to use this mapped variable (the table name) in a SELECT query as the FROM clause.
SELECT * FROM (SELECT tablename FROM mappingtable WHERE id=1)
How do I do this?
Edit: as what kind of data type do I store the table name?
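A sketch of one way to do this, assuming PostgreSQL (which CKAN's datastore runs on): plain SQL cannot take the table name from a subquery, so the query has to be built dynamically, e.g. in a PL/pgSQL function. The table name itself can be stored as text (or as regclass, which validates that the table exists):
CREATE OR REPLACE FUNCTION select_mapped(mapping_id integer)
RETURNS SETOF json LANGUAGE plpgsql AS $$
DECLARE
    tbl text;
BEGIN
    SELECT tablename INTO tbl FROM mappingtable WHERE id = mapping_id;
    -- format's %I quotes the identifier safely; row_to_json copes with
    -- each dataset table having its own columns
    RETURN QUERY EXECUTE format('SELECT row_to_json(t) FROM %I t', tbl);
END;
$$;

SELECT * FROM select_mapped(1);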

How does one call a function from a postgresql rule that has access to NEW and OLD?

I'm new to PostgreSQL (and therefore rules) and I've looked around, but can't really find an example calling a 'global' function.
I am using a normalized database where rows will be flagged as deleted rather than actually deleted. However, I would like to retain the DELETE FROM ... functionality for the end user by using an ON DELETE ... DO INSTEAD rule to update the table's delete_time column. Each table should therefore be able to use a common function, but I am not sure how this would be called in this context, or how it would have access to NEW and OLD.
CREATE OR REPLACE RULE rule_tablename_delete AS ON DELETE
TO tablename DO INSTEAD (
    -- call function here to update the table's delete_time column
);
Is this even the correct approach? (I note that INSTEAD OF triggers are restricted to views only)
Just use an UPDATE statement:
create rule rule_tablename_delete as
on delete to tablename
do instead
update tablename
set delete_time = current_timestamp
where id = old.id
and delete_time is null;
Assuming that the id column is the primary key of that table.
Some more examples are in the manual: http://www.postgresql.org/docs/9.1/static/rules-update.html
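If the goal really is one shared function across all tables, a hedged alternative to rules is a BEFORE DELETE trigger: trigger functions receive OLD, NEW and TG_TABLE_NAME automatically, and returning NULL from a BEFORE row trigger suppresses the actual DELETE. A minimal sketch, assuming every table has id and delete_time columns:
CREATE OR REPLACE FUNCTION soft_delete() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- TG_TABLE_NAME lets a single function serve every table
    EXECUTE format(
        'UPDATE %I SET delete_time = current_timestamp
          WHERE id = $1 AND delete_time IS NULL',
        TG_TABLE_NAME)
    USING OLD.id;
    RETURN NULL; -- cancels the original DELETE
END;
$$;

CREATE TRIGGER tablename_soft_delete
BEFORE DELETE ON tablename
FOR EACH ROW EXECUTE PROCEDURE soft_delete();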

PostgreSQL: dynamic row values (?)

Oh helloes!
I have two tables: the first one (let's call it NameTable) is preset with a set of values (id, name), and the second one (ListTable) is empty but has the same columns.
The question is: how can I insert into ListTable a value that comes from NameTable, so that if I change a name in NameTable then the values in ListTable are automagically updated as well?
Is there an INSERT for this, or do the tables have to be created in some special manner?
Tried browsing the manual but without success :(
The suggestion for using INSERT...SELECT is the best method for moving data between tables in the same database.
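For the initial copy, a minimal INSERT ... SELECT, assuming both tables have (id, name) columns:
INSERT INTO ListTable (id, name)
SELECT id, name FROM NameTable;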
However, there's another way to deal with the auto-update requirement.
It sounds like these are your criteria:
Table A is defined with columns (x,y)
(x,y) is unique
Table B is also defined with columns (x,y)
Table A is a superset of Table B
Table B is to be loaded with data from Table A and needs to remain in sync with UPDATEs on Table A.
This is a job for a FOREIGN KEY with the option ON UPDATE CASCADE:
ALTER TABLE B ADD FOREIGN KEY (x,y) REFERENCES A (x,y) ON UPDATE CASCADE;
Now, not only will Table B auto-update when Table A is updated, Table B is also protected against containing (x,y) pairs that do not exist in Table A. If you want records to auto-delete from Table B when they are deleted from Table A, add "ON DELETE CASCADE."
Hmmm... I'm a bit confused about exactly what you want to do or why, but here are a couple of pointers towards things you might want to take a look at: table inheritance, triggers and rules.
Table inheritance in PostgreSQL allows a table to share the data of another table. If you add a row to the base table, it won't show up in the inheriting table, but if you add a row to the inheriting table, it will show up in queries on both tables, and updates in either place will be reflected in both.
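A quick illustration with hypothetical tables:
CREATE TABLE base (id integer, name text);
CREATE TABLE child () INHERITS (base);

INSERT INTO child VALUES (1, 'added to child');
SELECT * FROM base;  -- returns the child row too
SELECT * FROM child; -- returns only rows added to child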
Triggers allow you to set up code that will be run when insert, update or delete operations happen on a table. This would allow you to implement the behavior you describe manually.
Rules allow you to set up a rule that will replace a matching query with an alternative query when a specific condition is met.
If you describe your problem further, as in why you want this behavior, it might be easier to suggest the right way to go about things :-)