Is it possible to update a column (automatically) with "current_timestamp" in PostgreSQL using "Generated Columns"?

Is it possible to update a column (automatically) with "current_timestamp" in PostgreSQL using "Generated Columns", whenever the row gets updated?
At present, I am using a trigger to update the audit field last_update_date, but I am planning to switch to a generated column:
ALTER TABLE test ADD COLUMN last_update_date timestamp without time zone
GENERATED ALWAYS AS (current_timestamp) STORED;
Getting this error while altering the column:
ERROR: generation expression is not immutable

No, that won't work, for the reason specified in the error.
Functions used in generated columns must always return the same value for the same arguments, that is, depend on nothing but the current database row. current_timestamp obviously is not of that kind.
If PostgreSQL did allow such functions to be used in generated columns, then the value of the column would change if the database is restored from a pg_dump, for example.
Use a BEFORE INSERT OR UPDATE trigger for this purpose.
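For example, a minimal sketch of such a trigger for the test table above (function and trigger names are placeholders; on PostgreSQL versions before 11, write EXECUTE PROCEDURE instead of EXECUTE FUNCTION):
CREATE FUNCTION set_last_update_date() RETURNS trigger AS $$
BEGIN
    NEW.last_update_date := current_timestamp;  -- overwrite on every insert and update
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_last_update_date
BEFORE INSERT OR UPDATE ON test
FOR EACH ROW EXECUTE FUNCTION set_last_update_date();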

Related

CockroachDB: Why do I get the error "lastval is not yet defined in this session"?

If I have a SERIAL column on my table and insert a value, the column gets automatically populated, but if I then call SELECT lastval() to retrieve the value, even though it's the same session, I get the error "lastval is not yet defined in this session". This works in Postgres but is an error in CockroachDB. Why is that, and how do I fix it?
lastval() itself works the same in CockroachDB and Postgres: it returns the most recent value generated by nextval() in the same SQL session, and raises that error if nextval() has never been called. The difference is CockroachDB's default implementation of the SERIAL keyword. Postgres implements SERIAL by creating a sequence and implicitly calling nextval() on it whenever you insert into the table. CockroachDB instead calls unique_rowid(), which is more performant but doesn't populate lastval. You can get compatible behavior by setting the serial_normalization variable to virtual_sequence before creating tables with SERIAL columns, and/or by modifying existing SERIAL columns to use a virtual sequence.
For example,
CREATE SEQUENCE dummy_seq VIRTUAL;
ALTER TABLE users ALTER COLUMN id SET DEFAULT nextval('dummy_seq');
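To get the same behavior for tables you have yet to create, you can set the session variable before issuing CREATE TABLE (the table and column names below are illustrative):
SET serial_normalization = virtual_sequence;
CREATE TABLE accounts (id SERIAL PRIMARY KEY, name STRING);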
Or you can avoid the extra trip to the database entirely by using a RETURNING clause on your insert.
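For example, with a hypothetical users table whose id is a SERIAL column, the generated value comes back with the insert itself, so no second statement is needed:
INSERT INTO users (name) VALUES ('Alice') RETURNING id;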

How to add a column to a table on production PostgreSQL with zero downtime?

Here https://stackoverflow.com/a/53016193/10894456 is an answer provided for Oracle 11g. My question is the same:
What is the best approach to add a not null column with a default value in a production Oracle database when that table contains one million records and is live? Does it create any locks if we do the column creation, adding the default value, and making it not null in a single statement?
but for PostgreSQL?
This prior answer essentially answers your query.
Cross-referencing the relevant PostgreSQL doc with the PostgreSQL source code for AlterTableGetLockLevel mentioned in the above answer shows that ALTER TABLE ... ADD COLUMN will always obtain an ACCESS EXCLUSIVE table lock, precluding any other transaction from accessing the table for the duration of the ADD COLUMN operation.
This same exclusive lock is obtained for any ADD COLUMN variation; i.e., it doesn't matter whether you add a NULL column (with or without a DEFAULT) or a NOT NULL column with a default.
However, as mentioned in the linked answer above, adding a NULL column with no DEFAULT should be very quick as this operation simply updates the catalog.
In contrast, adding a column with a DEFAULT specifier necessitates a rewrite of the entire table on PostgreSQL 10 or earlier. This operation is likely to take considerable time on your 1M-record table.
According to the linked answer, PostgreSQL >= 11 does not require such a rewrite when adding such a column, so it should perform more like the no-DEFAULT case.
I should add that for PostgreSQL 11 and above, the ALTER TABLE docs note that table rewrites are only avoided for non-volatile DEFAULT specifiers:
When a column is added with ADD COLUMN and a non-volatile DEFAULT is specified, the default is evaluated at the time of the statement and the result stored in the table's metadata. That value will be used for the column for all existing rows. If no DEFAULT is specified, NULL is used. In neither case is a rewrite of the table required.
Adding a column with a volatile DEFAULT [...] will require the entire table and its indexes to be rewritten. [...] Table and/or index rebuilds may take a significant amount of time for a large table; and will temporarily require as much as double the disk space.
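To illustrate on PostgreSQL 11 or later, with a hypothetical table t: the first statement below only touches the catalog, while the second forces a full rewrite because random() is volatile:
-- non-volatile DEFAULT: evaluated once, stored in the table's metadata, no rewrite
ALTER TABLE t ADD COLUMN flag boolean NOT NULL DEFAULT false;
-- volatile DEFAULT: each row needs its own value, so the entire table is rewritten
ALTER TABLE t ADD COLUMN token double precision DEFAULT random();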

Appending array during Postgresql upsert & getting ambiguous column error

When trying to do a query like below:
INSERT INTO employee_channels (employee_id, channels)
VALUES ('46356699-bed1-4ec4-9ac1-76f124b32184', '{a159d680-2f2e-4ba7-9498-484271ad0834}')
ON CONFLICT (employee_id)
DO UPDATE SET channels = array_append(channels, 'a159d680-2f2e-4ba7-9498-484271ad0834')
WHERE employee_id = '46356699-bed1-4ec4-9ac1-76f124b32184'
AND NOT lower(channels::text)::text[] #> ARRAY['a159d680-2f2e-4ba7-9498-484271ad0834'];
I get the following error
[42702] ERROR: column reference "channels" is ambiguous Position: 245
The specific channels reference it's complaining about is the one inside array_append.
channels is a CITEXT[] data type
You may need to specify the EXCLUDED table in your SET clause.
SET channels = array_append(EXCLUDED.channels, 'a159d680-2f2e-4ba7-9498-484271ad0834')
When using the ON CONFLICT DO UPDATE clause, the values that aren't inserted because of the conflict are available in the EXCLUDED table. It's an ephemeral table you don't have to actually create, the way NEW and OLD are in triggers.
From the PostgreSQL Manual:
conflict_action specifies an alternative ON CONFLICT action. It can be either DO NOTHING, or a DO UPDATE clause specifying the exact details of the UPDATE action to be performed in case of a conflict. The SET and WHERE clauses in ON CONFLICT DO UPDATE have access to the existing row using the table's name (or an alias), and to rows proposed for insertion using the special excluded table. SELECT privilege is required on any column in the target table where corresponding excluded columns are read.
Note that the effects of all per-row BEFORE INSERT triggers are reflected in excluded values, since those effects may have contributed to the row being excluded from insertion.
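Putting it together, a sketch of the full statement with every channels reference qualified: the SET uses EXCLUDED as suggested above, while the WHERE uses the table name to reach the existing row, per the manual excerpt (note also that array containment is the @> operator; the #> in the question is a JSON operator):
INSERT INTO employee_channels (employee_id, channels)
VALUES ('46356699-bed1-4ec4-9ac1-76f124b32184', '{a159d680-2f2e-4ba7-9498-484271ad0834}')
ON CONFLICT (employee_id)
DO UPDATE SET channels = array_append(EXCLUDED.channels, 'a159d680-2f2e-4ba7-9498-484271ad0834')
WHERE employee_channels.employee_id = '46356699-bed1-4ec4-9ac1-76f124b32184'
AND NOT lower(employee_channels.channels::text)::text[] @> ARRAY['a159d680-2f2e-4ba7-9498-484271ad0834'];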

Scala Slick auto update a column when the row is updated

I want to add an updated_at column to every table.
The column should update its value to the current time whenever the row is updated.
How can I update its value automatically, without writing verbose code?
Thanks
You can do that directly in the database; there is no need to modify your Slick code. If you define the column like this, it will automatically be updated whenever any value in the row is modified:
alter table xx add column updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
(This is MySQL syntax; PostgreSQL has no ON UPDATE clause, so there you would use a BEFORE UPDATE trigger instead, as in the first answer above.)

Postgres pg_dump now stored procedure fails because of boolean

I have a stored procedure that has started to fail for no apparent reason. Well, there must be one, but I can't find it!
This is the process I have followed a number of times before with no problem.
The source server works fine!
I do a pg_dump of the database on the source server and import it onto another server. This is fine: I can see all the data and do updates.
Then I run a stored procedure on the imported database, which has 2 identical schemas. It does the following:
For each table in schema1
Truncate table in schema2
INSERT INTO schema2."table" SELECT * FROM schema1."table" WHERE "Status" in ('A','N');
Next
However, this now gives me an error when it did not before. The error is:
*** Error ***
ERROR: column "HBA" is of type boolean but expression is of type integer
SQL state: 42804
Hint: You will need to rewrite or cast the expression.
Why am I getting this? The only difference since the last time I followed this procedure is that the table in question now has an extra column, so the "HBA" boolean column is no longer the last field. But then why does it still work in the original database?
I have tried removing all the data and dropping and rebuilding the table; these all fail.
However, if I drop the column and add it back in, it works. Is there something about boolean fields that means they need to be the last field?
Any help greatly appreciated.
Using Postgres 9.1
The problem here: the tables in the two schemas have different column order.
If you do not explicitly specify the column list in INSERT INTO table (...), or if you use SELECT *, you are relying on the physical column order of the tables (and now you see why that is a bad thing).
You were trying to do something like
INSERT INTO schema2.table1(id, bool_column, int_column) -- based on the order of columns in schema2.table1
select id, int_column, bool_column -- based on the order of columns in schema1.table1
from schema1.table1;
Such a query causes a cast error because the column types do not match.
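The fix is to name the columns explicitly on both sides, so the mapping is by name rather than by physical position; using the hypothetical columns above:
INSERT INTO schema2.table1 (id, bool_column, int_column)
SELECT id, bool_column, int_column
FROM schema1.table1;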