Construct q command to get metadata for all tables - kdb

I'd like to construct a query to retrieve table metadata for each table.
I can get metadata for a single table with the meta function. I can combine that with tables `., which returns all of the tables in the . namespace, to construct (meta')tables `..
This is almost what I want, as it returns a list of metadata tables. The problem is that I don't know which metadata table belongs to which kdb table.
Ideally, I could construct a query which returns a table where each row is tablename + results of meta tablename. Any advice for constructing such a query?

One approach is to tag each table's meta with its name and raze the results into a single table. With some sample tables:
q)trade:([] sym: 10?`4; time:10?.z.t; prx:10?100f; sz:10?10000);
q)quote:([] sym: 10?`4; time:10?.z.t; bPrx:10?100f; aPrx:10?100f; bSz:10?10000; aSz:10?10000);
q)testTable:update `s#a from ([] a:til 10; b: 10?`3; c:10?.z.p);
q)raze {update table:x from 0!meta x}'[tables[]]
c    t f a table
--------------------
sym  s     quote
time t     quote
bPrx f     quote
aPrx f     quote
bSz  j     quote
aSz  j     quote
a    j   s testTable
b    s     testTable
c    p     testTable
sym  s     trade
time t     trade
prx  f     trade
sz   j     trade
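If you'd rather have the table name as the leading column, xcols reorders it; the same query with one extra step:
q)`table xcols raze {update table:x from 0!meta x}'[tables[]]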
I could construct a query which returns a table where each row is tablename + results of "meta tablename". Any advice for constructing such a query?
If you did want to do it in this manner, there are many ways. One example:
q)update tableMeta:meta'[table] from ([] table:tables[])
table     tableMeta
--------------------------------------------------------------------------------
quote     (+(,`c)!,`sym`time`bPrx`aPrx`bSz`aSz)!+`t`f`a!("stffjj";``````;``````)
testTable (+(,`c)!,`a`b`c)!+`t`f`a!("jsp";```;`s``)
trade     (+(,`c)!,`sym`time`prx`sz)!+`t`f`a!("stfj";````;````)
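If a flat table isn't required, a dictionary keyed by table name is another option, and each meta can then be looked up by name. A sketch:
q)d:tables[]!meta'[tables[]]
q)d`trade
c   | t f a
----| -----
sym | s
time| t
prx | f
sz  | j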

Related

How to reference a column in the select clause in the order clause in SQLAlchemy like you do in Postgres instead of repeating the expression twice

In Postgres, if one of your columns is a big complicated expression, you can just say ORDER BY 3 DESC, where 3 is the position of the complicated expression in the select list. Is there any way to do this in SQLAlchemy?
As Gord Thompson observes in this comment, you can pass the column index as a text object to group_by or order_by:
q = sa.select(sa.func.count(), tbl.c.user_id).group_by(sa.text('2')).order_by(sa.text('2'))
serialises to
SELECT count(*) AS count_1, posts.user_id
FROM posts GROUP BY 2 ORDER BY 2
There are other techniques that don't require re-typing the expression.
You could use the selected_columns property:
q = sa.select(tbl.c.col1, tbl.c.col2, tbl.c.col3)
q = q.order_by(q.selected_columns[2]) # order by col3
You could also order by a label (but this will affect the names of result columns):
q = sa.select(tbl.c.col1, tbl.c.col2, tbl.c.col3.label('c')).order_by('c')
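For context, a minimal runnable sketch of the label approach; the posts table and the sum() expression here are illustrative, not from the original question:
import sqlalchemy as sa

# hypothetical table, purely for illustration
metadata = sa.MetaData()
posts = sa.Table(
    "posts", metadata,
    sa.Column("user_id", sa.Integer),
    sa.Column("score", sa.Integer),
)

# label the expression once, then reference the label in ORDER BY
q = (
    sa.select(posts.c.user_id, sa.func.sum(posts.c.score).label("total"))
    .group_by(posts.c.user_id)
    .order_by(sa.desc("total"))
)
print(q)
# SELECT posts.user_id, sum(posts.score) AS total
# FROM posts GROUP BY posts.user_id ORDER BY total DESC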

Convert jsonb column to a user-defined type

I'm trying to convert each row in a jsonb column to a type that I've defined, and I can't quite seem to get there.
I have an app that scrapes articles from The Guardian Open Platform and dumps the responses (as jsonb) in an ingestion table, into a column called 'body'. Other columns are a sequential ID, and a timestamp extracted from the response payload that helps my app only scrape new data.
I'd like to move the response dump data into a properly-defined table, and as I know the schema of the response, I've defined a type (my_type).
I've been referring to section 9.16, JSON Functions and Operators, in the Postgres docs. I can get a single record as my type:
select * from jsonb_populate_record(null::my_type, (select body from data_ingestion limit 1));
produces
 id         | type         | sectionId          | ...
------------+--------------+--------------------+-----
 example_id | example_type | example_section_id | ...
(abbreviated for concision)
If I remove the limit, I get an error, which makes sense: the subquery would be providing multiple rows to jsonb_populate_record which only expects one.
I can get it to do multiple rows, but the result isn't broken into columns:
select jsonb_populate_record(null::my_type, body) from data_ingestion limit 3;
produces:
jsonb_populate_record
(example_id_1,example_type_1,example_section_id_1,...)
(example_id_2,example_type_2,example_section_id_2,...)
(example_id_3,example_type_3,example_section_id_3,...)
This is a bit odd: I would have expected to see column names; that, after all, is the point of providing the type.
I'm aware I can do this by using Postgres JSON querying functionality, e.g.
select
body -> 'id' as id,
body -> 'type' as type,
body -> 'sectionId' as section_id,
...
from data_ingestion;
This works, but it seems quite inelegant, and I lose the data types.
I've also considered aggregating all rows in the body column into a JSON array so as to supply it to jsonb_populate_recordset, but this seems a silly approach and is unlikely to be performant.
Is there a way to achieve what I want, using Postgres functions?
Maybe you need this, to break the my_type record into columns:
select (jsonb_populate_record(null::my_type, body)).*
from data_ingestion
limit 3;
-- or whatever other query clauses here
i.e. select all from these my_type records. All column names and types are in place.
Here is an illustration. My custom type is delmet, and CTE t loosely mimics data_ingestion.
create type delmet as (x integer, y text, z boolean);
with t(i, j, k) as
(
values
(1, '{"x":10, "y":"Nope", "z":true}'::jsonb, 'cats'),
(2, '{"x":11, "y":"Yep", "z":false}', 'dogs'),
(3, '{"x":12, "y":null, "z":true}', 'parrots')
)
select i, (jsonb_populate_record(null::delmet, j)).*, k
from t;
Result:
 i | x  | y    | z     | k
---+----+------+-------+---------
 1 | 10 | Nope | true  | cats
 2 | 11 | Yep  | false | dogs
 3 | 12 |      | true  | parrots
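The same expansion can also be written with a set-returning function call in the FROM clause, which avoids the (...).* syntax. A sketch using the question's names:
select r.*
from data_ingestion,
     lateral jsonb_populate_record(null::my_type, body) as r;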

Postgresql: how to query hstore dynamically

I have the following tables
ORDER (idOrder int, idCustomer int) [PK: idOrder]
ORDERLINE (idOrder int, idProduct int) [PK: idOrder, idProduct]
PRODUCT (idProduct int, rating hstore) [PK: idProduct]
In the PRODUCT table, 'rating' is a key/value column where the key is an idCustomer, and the value is an integer rating.
The query to count the orders containing a product on which the customer has given a good rating looks like this:
select count(distinct o.idOrder)
from order o, orderline l, product p
where o.idorder = l.idorder and l.idproduct = p.idproduct
and (p.rating -> o.idcustomer::varchar)::int > 4;
The query plan seems correct, but this query takes forever. So I tried a different query, where I explode all the records in the hstore:
select count(distinct o.idOrder)
from order o, orderline l,
(select idproduct, skeys(rating)::int idcustomer, svals(rating)::int intrating from product) as p
where o.idorder = l.idorder and l.idproduct = p.idproduct
and o.idcustomer = p.idcustomer and p.intrating > 4;
This query takes only a few seconds. How is this possible? I assumed that exploding all values of an hstore would be quite inefficient, but it seems to be the opposite. Is it possible that I am not writing the first query correctly?
I suspect it is because in the first query you are evaluating:
(p.rating -> o.idcustomer::varchar)::int
a row at a time as the query iterates over the rest of the operations, whereas in the second query the hstore values are expanded once in a single set-returning pass. If you want more insight, use EXPLAIN ANALYZE:
https://www.postgresql.org/docs/12/sql-explain.html
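For example, prefixing the first query (a sketch; ORDER is a reserved word, so the table name is quoted here):
explain analyze
select count(distinct o.idorder)
from "order" o, orderline l, product p
where o.idorder = l.idorder and l.idproduct = p.idproduct
and (p.rating -> o.idcustomer::varchar)::int > 4;
Comparing actual row counts and timings between the two plans shows where the time goes.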

Postgresql Select all columns and column names with a specific value for a row

I have a table with many (1000+) columns and many rows (~1M). The columns have either the value 1 or are NULL.
I want to be able to select, for a specific row (user), the names of the columns that have a value of 1.
Since there are many columns in the table, listing them all would yield an extremely long query.
You're doing something SQL is quite bad at: dynamic access to columns, or treating a row as a set. It'd be nice if this were easier, but it doesn't work well with SQL's typed nature and the concept of a relation. Working with your data set in its current form is going to be frustrating; consider storing an array, json, or hstore of values instead.
Actually, for this particular data model, you could probably use a bitfield. See bit(n) and bit varying(n).
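A sketch of that idea; the user_flags table here is hypothetical, not your schema:
CREATE TABLE user_flags (id serial primary key, flags bit varying(1000));
INSERT INTO user_flags(flags) VALUES (B'101'), (B'011');
-- get_bit reads the bit at a 0-based position from the left
SELECT id, get_bit(flags, 0) AS col1, get_bit(flags, 2) AS col3 FROM user_flags;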
It's still possible to make a working query with your current model using PostgreSQL extensions, though.
Given sample:
CREATE TABLE blah (id serial primary key, a integer, b integer, c integer);
INSERT INTO blah(a,b,c) VALUES (NULL, NULL, 1), (1, NULL, 1), (NULL, NULL, NULL), (1, 1, 1);
I would unpivot each row into a key/value set using hstore (or, in newer PostgreSQL versions, the json functions; a jsonb sketch appears after the result below). SQL itself provides no way to dynamically access columns, so we have to use an extension. So:
SELECT id, hs FROM blah, LATERAL hstore(blah) hs;
then extract the hstores to sets:
SELECT id, k, v FROM blah, LATERAL each(hstore(blah)) kv(k,v);
... at which point you can filter for values matching the criteria. Note that all columns have been converted to text, so you may want to cast them back:
SELECT id, k FROM blah, LATERAL each(hstore(blah)) kv(k,v) WHERE v::integer = 1;
You also need to exclude id from matching, so:
regress=> SELECT id, k FROM blah, LATERAL each(hstore(blah)) kv(k,v) WHERE v::integer = 1 AND
k <> 'id';
id | k
----+---
1 | c
2 | a
2 | c
4 | a
4 | b
4 | c
(6 rows)
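As mentioned above, newer PostgreSQL versions can do the same unpivoting with the built-in jsonb machinery instead of the hstore extension. A sketch against the same sample table (to_jsonb requires PostgreSQL 9.5+):
SELECT id, k
FROM blah, LATERAL jsonb_each_text(to_jsonb(blah)) kv(k, v)
WHERE v::integer = 1 AND k <> 'id';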

use two .nextval in an insert statement

I'm using an Oracle database and facing a problem where using two id_product.nextval calls in one statement raises an error: ORA-00001: unique constraint (SYSTEM.SYS_C004166) violated
It is a primary key. Using INSERT ALL is a requirement. Can I use two .nextval calls in one statement?
insert all
into sale_product values (id_product.nextval, id.currval, 'hello', 123, 1)
into sale_product values (id_product.nextval, id.currval, 'hi', 123, 1)
select * from dual;
insert into sale_product
select id_product.nextval, id.currval, a, b, c
from
(
select 'hello' a, 123 b, 1 c from dual union all
select 'hi' a, 123 b, 1 c from dual
);
This doesn't use the insert all syntax, but it works the same way if you are only inserting into the same table.
The value of id_product.NEXTVAL in the first INSERT is the same as in the second INSERT, hence the unique constraint violation. If you remove the constraint and perform the insert, you'll notice the duplicate values!
The only way around it is to perform two separate INSERTs in sequence (a sketch follows after the demonstration below) or to have two separate sequences with different ranges; the latter would require an awful lot of coding and checking.
create table temp(id number, id2 number);
insert all
into temp values (supplier_seq.nextval, supplier_seq.currval)
into temp values (supplier_seq.nextval, supplier_seq.currval)
select * from dual;
        ID        ID2
---------- ----------
         2          2
         2          2
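The two-statement alternative mentioned above is just a sketch like the following, reusing the question's tables; each statement advances the sequence once:
insert into sale_product values (id_product.nextval, id.currval, 'hello', 123, 1);
insert into sale_product values (id_product.nextval, id.currval, 'hi', 123, 1);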
Reference
The subquery of the multitable insert statement cannot use a sequence
http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_9014.htm#i2080134