PostgreSQL create generated column syntax error, why?

I have a Postgres table with two columns (an identifier and a date) that form a composite primary key. I would like to store a hash of their concatenation in another column, generated every time a new record is inserted. To that end I'm trying to alter my table to create a generated column:
ALTER TABLE my_table ADD COLUMN hash_id_date VARCHAR(50)
GENERATED ALWAYS AS (MD5(my_table.original_id||'-'||my_table.time))
STORED;
This raises the following error:
ERROR: syntax error at or near "("
LINE 4: GENERATED ALWAYS AS (MD5(my_table.original_id,'-',my_table.t...
^
SQL state: 42601
Character: 178
I'm going mad trying to find the syntax error... I've read about STABLE and IMMUTABLE functions, and generated columns should always use an IMMUTABLE expression. As far as I know MD5 is IMMUTABLE, but the error message doesn't even get that far.
Any help?

Assuming the basic functionality for calculating the MD5 is common, you can create a function for the calculation. Use this function wherever it's needed, including updating your current rows, and invoke it from a trigger on your table. If the particular MD5 calculation is not all that common, you can just put the calculation in the trigger function and also use it in an independent update for the current rows. See here for an example, with the assumption that it is common in your app.
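For instance, a minimal sketch of the trigger approach, using the column names from the question (the function and trigger names here are made up), for servers that predate generated columns (PostgreSQL 12):

CREATE OR REPLACE FUNCTION set_hash_id_date() RETURNS trigger AS $$
BEGIN
    -- compute the hash from the composite key before the row is written
    NEW.hash_id_date := MD5(NEW.original_id || '-' || NEW.time);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_hash_id_date
    BEFORE INSERT OR UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE set_hash_id_date();

-- one-off backfill for rows that already exist
UPDATE my_table SET hash_id_date = MD5(original_id || '-' || time);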

Related

CockroachDB: Why do I get the error "lastval is not yet defined in this session"?

I have a SERIAL column on my table, and when I insert a value the column gets automatically populated. But if I then call SELECT lastval() to get the value, even though it's the same session, I get the error "lastval is not yet defined in this session". This works in Postgres but is an error in CockroachDB. Why is that, and how do I fix it?
lastval() itself works the same in CockroachDB and Postgres: it returns the most recent value generated by nextval() in the same SQL session, and raises that error if nextval() was never called. The difference is CockroachDB's default implementation of the SERIAL keyword. Postgres implements SERIAL by creating a sequence and implicitly calling nextval() on it whenever you insert into the table. CockroachDB instead calls unique_rowid(), which is more performant but doesn't populate lastval. You can get compatible behavior by setting the serial_normalization session variable to virtual_sequence before creating tables with SERIAL columns, and/or by modifying existing SERIAL columns to use a virtual sequence.
For example,
CREATE SEQUENCE dummy_seq VIRTUAL;
ALTER TABLE users ALTER COLUMN id SET DEFAULT nextval('dummy_seq');
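For new tables, a one-line sketch of the session-variable route mentioned above (run before CREATE TABLE):
SET serial_normalization = virtual_sequence;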
Or you can avoid the extra trip to the database entirely by using a RETURNING clause on your insert.
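For example, a minimal sketch of the RETURNING approach (assuming a hypothetical users table with a SERIAL id and a name column):
INSERT INTO users (name) VALUES ('alice') RETURNING id;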

Ambiguous column reference "ctid" in SELECT with more than one table

I'm using CRecordset to query one table, but I use a second table to filter the data. If, in my GetDefaultSQL override, I return a table list with more than one table, I get this: ERROR: column reference "ctid" is ambiguous. I know what a "ctid" column is, but I don't use it in my code; it's inserted into the original SQL statement by the ODBC driver. How do I fix this? How do I tell the ODBC driver not to insert the "ctid" column?
I tried calling CRecordset::Open with the readOnly parameter, since I assume ODBC needs ctid to update rows and I don't need to update them. But the error remains.
I also tried adding a primary key to the second table, which was missing one, thinking that if a table has a primary key the driver could use that instead of "ctid", but again no luck. That makes sense, though, because I don't fetch any columns of the second table; it is used just for filtering.
If I make a DB view to work around the issue, I get ERROR: column "ctid" does not exist.
You have to call CRecordset::Open with two parameters changed:
m_pSet->Open(CRecordset::snapshot, NULL, CRecordset::readOnly);
Then you can fetch both joined tables and views without errors, and no "ctid" is requested.

How to determine which column is implicated in "value too long for type character varying"?

I'm programmatically adding data to a PostgreSQL table using Python and psycopg - this is working fine.
Occasionally though, a text value is too long for the containing column, so I get the message:
ERROR: value too long for type character varying(1000)
where the number is the width of the offending column.
Is there a way to determine which column caused the error? (Aside from checking each value's length to see whether it exceeds 1000.)
Many thanks to @Tometzky, whose comment pointed me in the right direction.
Rather than trying to determine which column caused the problem after the fact, I modified my Python script to ensure that the value was truncated before inserting into the database:
1) access the table's schema using select column_name, data_type, character_maximum_length from information_schema.columns where table_name='test'
2) when building the INSERT statement, use the schema definition to identify character fields and truncate them if necessary (see the sketch below)
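For illustration, the same idea in plain SQL - a minimal sketch, assuming the test table with a varchar(5) column t from the session below; left() truncates the value to the column's declared width:

INSERT INTO test (t)
VALUES (left('123456',
             (SELECT character_maximum_length::int
              FROM information_schema.columns
              WHERE table_name = 'test' AND column_name = 't')));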
I don't think there's an easy way.
I tried setting VERBOSITY in psql, as I assumed this would help, but unfortunately it doesn't (on 9.4):
psql
\set VERBOSITY verbose
dbname=> create temporary table test (t varchar(5));
CREATE TABLE
dbname=> insert into test values ('123456');
ERROR: 22001: value too long for type character varying(5)
LOCATION: varchar, varchar.c:623
This might be something that warrants discussion on the mailing list, as you are not the only one with this problem.

DB2 column constraint for inserting values within restricted length

Is there a constraint available in DB2 such that, when a column is restricted to a particular length, values are trimmed to that length before insertion? For example, if a column is specified to be of length 5, inserting the value 'overflow' would store 'overf'.
Can a CHECK constraint be used here? My understanding of CHECK constraints is that they allow or reject insertions, but they cannot modify values to satisfy the condition.
A constraint isn't going to be able to do this.
A before insert trigger is normally the mechanism you'd use to modify data during an insert before it is actually placed in the table.
However, I'm reasonably sure it won't work in this case. You'd get an SQLCODE -404 (SQLSTATE 22001) "The SQL statement specified contains a string that is too long." thrown before the trigger gets fired.
I see two possible options:
1) Create a view over the table where the column is cast to a larger size. Then create an INSTEAD OF trigger on the view to substring the data during write.
2) Create and use a stored procedure that accepts a larger size and substrings the data then inserts it.
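A rough sketch of option 1 in DB2 SQL - all names here are hypothetical, assuming a table t with a VARCHAR(5) column c:

CREATE VIEW t_v (c) AS
    SELECT CAST(c AS VARCHAR(100)) FROM t;

CREATE TRIGGER t_v_ins
    INSTEAD OF INSERT ON t_v
    REFERENCING NEW AS n
    FOR EACH ROW
    -- trim to the real column width before writing to the base table
    INSERT INTO t (c) VALUES (SUBSTR(n.c, 1, 5));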

Postgres: after pg_dump, stored procedure fails because of boolean

I have a stored procedure that has started to fail for no reason. Well, there must be one, but I can't find it!
This is the process I have followed a number of times before with no problem.
The source server works fine!
I do a pg_dump of the database on the source server and import it onto another server - this is fine, I can see all the data and run updates.
Then I run a stored procedure on the imported database, which has two identical schemas, that does the following:
For each table in schema1
Truncate table in schema2
INSERT INTO schema2."table" SELECT * FROM schema1."table" WHERE "Status" in ('A','N');
Next
However, this now gives me an error when it did not before.
The error is:
*** Error ***
ERROR: column "HBA" is of type boolean but expression is of type integer
SQL state: 42804
Hint: You will need to rewrite or cast the expression.
Why am I getting this? The only difference between the last time I followed this procedure and this time is that the table in question now has an extra column, so the "HBA" boolean column is no longer the last field. But then why would it work in the original database?
I have tried removing all the data and dropping and rebuilding the table; these all fail.
However, if I drop the column and add it back in, it works. Is there something about boolean fields that means they need to be the last field?
Any help greatly appreciated.
Using Postgres 9.1
The problem here: the tables in the two schemas had different column order.
If you do not explicitly specify the column list in INSERT INTO table(...), or if you use SELECT *, you are relying on the column order of the table (and now you see why that is a bad thing).
You were effectively trying to do something like:
INSERT INTO schema2.table1(id, bool_column, int_column) -- based on the order of columns in schema2.table1
select id, int_column, bool_column -- based on the order of columns in schema1.table1
from schema1.table1;
Such a query fails with a cast error because the column types don't match.
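The fix is to name the columns explicitly on both sides, so the column order of each table no longer matters (column names as in the sketch above; the WHERE clause is from the question):

INSERT INTO schema2.table1 (id, bool_column, int_column)
SELECT id, bool_column, int_column
FROM schema1.table1
WHERE "Status" IN ('A', 'N');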