How can generic functions be used for computed fields in Hasura? - postgresql

I have a logs table which contains all the actions (create, update) taken by operators (admin users).
Two of its columns (indexed as hash), target_entity and target_id, respectively store:
target_entity: the name of the table the action was taken on.
target_id: the id of the record that was added or updated in the target table.
So, what I am trying to achieve:
I would like to add a computed field named e.g. logs which depends on a function:
FUNCTION public."fetchLogs"(
referenceId integer,
referenceName TEXT
)
The first parameter is the current table's primary key.
I'm not sure if I can automatically send the primary key as the first argument, so it should probably be something like table_row table instead.
The second parameter is a static value, the table's name, which I plan to pass statically as an argument.
This function returns a JSON object:
RETURNS json LANGUAGE plpgsql STABLE
AS $function$
It should return log records related to this record.
At this point there are two things that need to be tackled:
Since the first parameter is the reference (primary) key, I don't know if I can just use the primary key as an argument. I'm guessing I need to declare it as table_row anytable (if that's a thing) and then use table_row.id.
In the Hasura console, the Add Computed Field > Function Name selector does not list this function, I'm guessing because the function does not explicitly indicate which table it is for.
The answer I'm looking for: is this achievable, or is there a better practice for this kind of thing?
Maybe I need a wrapper function for each table that needs this computed column, but I'm not sure whether that is possible or how it could be done; a sketch of that idea follows below.
P.S. In case you are wondering: yes, all primary keys have the same name and type. Every table that will use this computed column has a primary key named id of type integer.
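For illustration, here is an untested sketch of what such a per-table wrapper might look like, assuming Hasura only lists functions that take the table's row type as an input argument (the operators table here is just an example):
CREATE OR REPLACE FUNCTION public."operatorLogs"(table_row operators)
RETURNS json LANGUAGE sql STABLE AS $$
  -- aggregate all log rows for this record into a JSON array
  SELECT coalesce(json_agg(l), '[]'::json)
  FROM logs l
  WHERE l.target_entity = 'operators'  -- the table name, baked into the wrapper
    AND l.target_id = table_row.id;    -- the row's primary key
$$;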


Is it bad for columns in composite keys to have mismatching types?

Problem:
I'd like to make a composite primary key from columns id and user_id for a postgres database table. Column user_id is a foreign key with an integer type, whereas id is a string. Will this cause a conflict because the types are different?
Edit: Also, are there combinations of types that would cause problems?
Context:
I obviously should match the type of the User.id field for its foreign key. And, the id for my table will be derived from a uuid to prevent data leaks. So I would prefer not to change the types of either field I want in this table.
Research:
I am using sqlalchemy. Their documentation mentions how to create a composite primary key, but it doesn't discuss dealing with different types for each column.
No, this won't be a problem.
Your question seems to indicate that you think the values of the indexed columns are somehow concatenated and then stored in the index as a single value. This is not the case. Each column value is stored independently, but together, similar to the way the column values are stored in the actual table.
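To make this concrete, a composite primary key over a text column and an integer foreign key is perfectly legal; each column keeps its own type in the index. A small sketch (table and column names are hypothetical, modeled on the question):
CREATE TABLE users (
    id integer PRIMARY KEY
);
CREATE TABLE items (
    id text NOT NULL,                               -- e.g. derived from a uuid
    user_id integer NOT NULL REFERENCES users (id), -- integer foreign key
    PRIMARY KEY (id, user_id)                       -- mixed types: not a problem
);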

How to create tsvector_update_trigger on non-character type data

How can we create a tsvector update trigger on a non-character data type?
For example, I have created a tsvector trigger like this:
CREATE TRIGGER tr_doc_id_col
BEFORE INSERT OR UPDATE
ON document
FOR EACH ROW
EXECUTE PROCEDURE tsvector_update_trigger('tsv_doc_id', 'pg_catalog.english', 'doc_id');
Here,
tr_doc_id_col -- Name of the trigger
document -- Name of the table
tsv_doc_id -- tsvector form of the doc_id
doc_id -- Name of the column. Its data type is integer
This trigger should update tsv_doc_id whenever an insert or update happens on the doc_id column.
But the trigger throws an error saying that doc_id is not of character type (i.e. it cannot build the tsvector from a non-character column).
I have tried creating the same kind of trigger on columns like title and body, which are of text data type; in that case it works very well, but not in the earlier case.
Can any of you tell me how to do this for a non-character data type like doc_id?
tsvector_update_trigger is just a convenience function, and it does what it does. You can always write your own implementation of it with whatever embellishments you want (in this case, casting to text), but you can't change the behavior of the existing one. But it is also rather obsolete. The modern way to do it is with GENERATED columns.
create table document (
doc_id int,
tsv_doc_id tsvector generated always as (to_tsvector('english',doc_id::text)) stored
);
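If generated columns are not available (they require PostgreSQL 12 or later), the hand-rolled replacement mentioned above could look like this sketch, with tsv_doc_id as a plain tsvector column and a hypothetical trigger function doing the cast:
CREATE FUNCTION document_tsv_update() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- cast the integer to text before building the tsvector
    NEW.tsv_doc_id := to_tsvector('pg_catalog.english', NEW.doc_id::text);
    RETURN NEW;
END
$$;

CREATE TRIGGER tr_doc_id_col
BEFORE INSERT OR UPDATE ON document
FOR EACH ROW
EXECUTE PROCEDURE document_tsv_update();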
Although I would question the utility of a tsvector which always contains exactly one integer.

Way to migrate a create table with sequence from postgres to DB2

I need to migrate a DDL from Postgres to DB2, and I need it to work the same way as it does in Postgres. There is a table that generates values from a sequence, but the values can also be given explicitly.
Postgres
create sequence hist_id_seq;
create table benchmarksql.history (
hist_id integer not null default nextval('hist_id_seq') primary key,
h_c_id integer,
h_c_d_id integer,
h_c_w_id integer,
h_d_id integer,
h_w_id integer,
h_date timestamp,
h_amount decimal(6,2),
h_data varchar(24)
);
(Note the sequence call in the hist_id column's default, which defines the value of the primary key.)
The business logic inserts into the table by explicitly providing an ID in some cases, and in other cases it leaves the database to choose the number.
If I change this in DB2 to GENERATED ALWAYS, it will throw errors because some values are provided explicitly. On the other hand, if I create the table with GENERATED BY DEFAULT, DB2 throws an error (SQL0803N) when trying to insert an already-used value, because the internal sequence does not take the explicitly inserted values into account and does not retry with the next value.
And I do not want to restart the sequence every time an explicitly provided ID is inserted.
This is the problem in BenchmarkSQL when trying to port it to DB2: https://sourceforge.net/projects/benchmarksql/ (File sqlTableCreates)
How can I implement the same database logic in DB2 as it does in Postgres (and apparently in Oracle)?
You're operating under a misconception: that sources external to the db get to dictate its internal keys. Ideally/conceptually, autogenerated ids never need to be seen outside of the db, as conceptually there should be unique natural keys for export or reporting. Still, there are times when applications need to manage some ids, often when setting up related entities (e.g., JPA seems to want to work this way).
However, if you add an id value that you generated from a different source, the db won't be able to manage it. How could it? There is no efficient way: attempting to do so would do one of the following
Be unsafe in the face of multiple clients (attempt to add duplicate keys)
Serialize access to the table (for a potentially slow query, too)
(This usually shows up when people attempt something like: SELECT MAX(id) + 1, which would require locking the entire table for thread safety, likely including statements that don't even touch that column. If you try to find any "first-unused" id - trying to fill gaps - this gets more complicated and problematic)
Neither is ideal, so it's best not to have the problem in the first place. This is usually done by having id columns be autogenerated, but (as pointed out earlier) there are situations where we may need to know what the id will be before we insert the row into the table. Fortunately, there's a standard SQL object for this: SEQUENCE. This provides a db-managed, thread-safe, fast way to get ids. In PostgreSQL you can use a sequence in the DEFAULT clause for a column, but DB2 doesn't allow that. If you don't want to specify an id every time (it should be autogenerated some of the time), you'll need another way; this is the perfect time to use a BEFORE INSERT trigger:
CREATE TRIGGER Add_Generated_Id NO CASCADE BEFORE INSERT ON benchmarksql.history
REFERENCING NEW AS Incoming_Entity
FOR EACH ROW
WHEN (Incoming_Entity.hist_id IS NULL)
SET Incoming_Entity.hist_id = NEXT VALUE FOR hist_id_seq
(something like this - not tested. You didn't specify where in the project this would belong)
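(For context, the DB2 side would then create the sequence separately and declare hist_id as a plain NOT NULL column rather than an identity. An untested sketch, mirroring the Postgres DDL above:
CREATE SEQUENCE hist_id_seq;
CREATE TABLE benchmarksql.history (
    hist_id integer NOT NULL PRIMARY KEY, -- filled by the trigger when NULL is inserted
    h_c_id integer,
    h_c_d_id integer,
    h_c_w_id integer,
    h_d_id integer,
    h_w_id integer,
    h_date timestamp,
    h_amount decimal(6,2),
    h_data varchar(24)
);
The BEFORE trigger fires before constraint checking, so an explicit NULL for hist_id, as in the examples below, is replaced before the NOT NULL constraint is evaluated.)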
So, if you then add a row with something like:
INSERT INTO benchmarksql.history (hist_id, h_data) VALUES(null, 'a')
or
INSERT INTO benchmarksql.history (h_data) VALUES('a')
an id will be generated and attached automatically. Note that ALL ids added to the table must come from the given sequence (as #mustaccio pointed out, this appears to be true even in PostgreSQL), or any UNIQUE CONSTRAINT on the column will start throwing duplicate-key errors. So any time your application needs an id before inserting a row in the table, you'll need some form of
SELECT NEXT VALUE FOR hist_id_seq
FROM sysibm.sysdummy1
... and that's it, pretty much. This is completely thread and concurrency safe, will not maintain/require long-term locks, nor require serialized access to the table.

Using the serial datatype as a foreign key

Let's say that I have two tables.
The first is: table lists, with list_id SERIAL, list_name TEXT
The second table is, trivially, a table which says if the list is public: list_id INT, is_public INT
Obviously a bit of a contrived case, but I am planning out some tables and this seems to be an issue. If I insert a new list_name into table lists, then it'll give me a new serial number... but now I will need to use that serial number in the second table. Obviously in this case you could simply add is_public to the first table, but in the case of a linking table where you have a compound key, you'll need to know the serial value that was returned.
How do people usually handle this? Do they get the returned value from the insert using whatever system they're using to interact with the database?
One approach to this sort of thing is:
INSERT...
SELECT lastval()
INSERT...
INSERT into the first table, use lastval() to get the "value most recently obtained with nextval for any sequence" (in the current session), and then use that value to build your next INSERT.
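Concretely, using the tables from the question (the second table is unnamed there, so list_flags below is a hypothetical name):
INSERT INTO lists (list_name) VALUES ('groceries');
-- lastval() returns the value nextval() just produced for the serial column:
INSERT INTO list_flags (list_id, is_public) VALUES (lastval(), 1);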
There's also INSERT ... RETURNING:
The optional RETURNING clause causes INSERT to compute and return value(s) based on each row actually inserted. This is primarily useful for obtaining values that were supplied by defaults, such as a serial sequence number.
Using INSERT ... RETURNING id basically combines the first two steps above into one so you'd do:
INSERT ... RETURNING id
INSERT ...
where the second INSERT would use the id returned from the first INSERT.
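If everything happens in SQL, the two statements can also be combined into one with a data-modifying CTE (same hypothetical list_flags table as above):
WITH new_list AS (
    INSERT INTO lists (list_name) VALUES ('errands')
    RETURNING list_id
)
INSERT INTO list_flags (list_id, is_public)
SELECT list_id, 1 FROM new_list;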

auto-increment column in PostgreSQL on the fly?

I was wondering if it is possible to add an auto-increment integer field on the fly, i.e. without defining it in a CREATE TABLE statement?
For example, I have a statement:
SELECT 1 AS id, t.type FROM t;
and I am wondering whether I can change it to
SELECT some_nextval_magic AS id, t.type FROM t;
I need to create the auto-increment field on the fly in the some_nextval_magic part because the result relation is a temporary one used during the construction of a bigger SQL statement. The value of the id field is not really important as long as it is unique.
I searched around here, and the answers to related questions (e.g. PostgreSQL Autoincrement) mostly involve specifying SERIAL or using nextval in CREATE TABLE. But I don't necessarily want to use CREATE TABLE or VIEW (unless I have to). There are also some discussions of generate_series(), but I am not sure whether it applies here.
-- Update --
My motivation is illustrated in this GIS.SE answer regarding the PostGIS extension. The original query was:
CREATE VIEW buffer40units AS
SELECT
g.path[1] as gid,
g.geom::geometry(Polygon, 31492) as geom
FROM
(SELECT
(ST_Dump(ST_UNION(ST_Buffer(geom, 40)))).*
FROM point
) as g;
where g.path[1] as gid is an id field "required for visualization in QGIS". I believe the only requirement is that it is an integer and unique across the table. I encountered some errors when running the above query because the g.path[] array was empty.
While trying to fix the array in the above query, this thought came to me:
Since the gid value does not matter anyway, is there an auto-increment function that can be used here instead?
If you wish to have an id field that assigns a unique integer to each row in the output, then use the row_number() window function:
select
row_number() over () as id,
t.type
from t;
The generated id will only be unique within each execution of the query. Multiple executions will not generate new unique values for id.
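Applied to the PostGIS view from the question, that would look something like this (an untested sketch, same table and SRID as above):
CREATE VIEW buffer40units AS
SELECT
    row_number() over () as gid, -- replaces g.path[1]; unique within each result
    g.geom::geometry(Polygon, 31492) as geom
FROM
    (SELECT (ST_Dump(ST_UNION(ST_Buffer(geom, 40)))).geom
     FROM point
    ) as g;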