I have a stored procedure that has started to fail for no reason. Well, there must be one, but I can't find it!
This is the process I have followed a number of times before with no problem.
The source server works fine!
I do a pg_dump of the database on the source server and import it onto another server. This is fine; I can see all the data and do updates.
Then I run a stored procedure on the imported database, which has 2 identical schemas. For each table it does the following:
For each table in schema1
Truncate table in schema2
INSERT INTO schema2."table" SELECT * FROM schema1."table" WHERE "Status" in ('A','N');
Next
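In PL/pgSQL terms the procedure has roughly this shape (a simplified sketch; the real procedure differs):

DO $$
DECLARE
    tbl text;
BEGIN
    FOR tbl IN
        SELECT table_name
        FROM information_schema.tables
        WHERE table_schema = 'schema1'
          AND table_type = 'BASE TABLE'
    LOOP
        -- empty the target table, then copy the active rows across
        EXECUTE format('TRUNCATE TABLE schema2.%I', tbl);
        EXECUTE format(
            'INSERT INTO schema2.%I SELECT * FROM schema1.%I WHERE "Status" IN (''A'',''N'')',
            tbl, tbl);
    END LOOP;
END
$$;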
However, this now gives me an error when it did not before.
The error is
*** Error ***
ERROR: column "HBA" is of type boolean but expression is of type integer
SQL state: 42804
Hint: You will need to rewrite or cast the expression.
Why am I getting this? The only difference between the last time I followed this procedure and this time is that the table in question now has an extra column, so the "HBA" boolean column is no longer the last field. But then why would it work in the original database?
I have tried removing all the data, and dropping and rebuilding the table; these all fail.
However, if I drop the column and add it back in, it works. Is there something about boolean fields that means they need to be the last field?
Any help greatly appreciated.
Using Postgres 9.1
The problem here: the tables in the two schemas had different column order.
If you do not explicitly specify the column list and order in INSERT INTO table (...), or if you use SELECT *, you are relying on the column order of the table (and now you see why that is a bad thing).
You were trying to do something like
INSERT INTO schema2.table1(id, bool_column, int_column) -- based on the order of columns in schema2.table1
select id, int_column, bool_column -- based on the order of columns in schema1.table1
from schema1.table1;
Such a query causes a cast error because the column types do not match.
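The fix is to name the columns explicitly on both sides of the INSERT, so the mapping no longer depends on each table's column order (column names as in the example above):

INSERT INTO schema2.table1 (id, bool_column, int_column)
SELECT id, bool_column, int_column
FROM schema1.table1;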
I have to write an SQL script to modify the types of a lot of columns in my Db2 database.
Everything goes well except for one specific table (the script used is the same as for the other tables), and Db2 always returns an error I don't understand.
Here is my script:
ALTER TABLE "TEST"."CLIENT"
ALTER COLUMN C_CODE
SET DATA TYPE CHAR(16 OCTETS);
and the error:
SQL Error [42997]: Function not supported (Reason code = "21")..
SQLCODE=-270, SQLSTATE=42997, DRIVER=4.26.14
I tried to modify some other columns on the same table, but I always receive the same error.
Do you, by any chance, have an idea?
Thanks in advance
The error SQL0270N (sqlcode = -270) has many possible causes, and the specific cause is indicated by the "reason code".
In this case the "reason code 21" means:
A column cannot be dropped or have its length, data type, security, nullability, or hidden attribute altered on a table that is a base table for a materialized query table.
The documentation for this sqlcode on Db2-LUW is at:
https://www.ibm.com/docs/en/db2/11.5?topic=messages-sql0250-sql0499#sql0270n
Search for SQL0270N on that page, and notice the suggested user response:
To drop or alter a column in a table that is a base table for a materialized query table, perform the following steps:
1. Drop the dependent materialized query table.
2. Drop the column of the base table, or alter the length, data type, nullability, or hidden attribute of this column.
3. Re-create the materialized query table.
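A sketch of those steps, assuming a hypothetical dependent MQT named TEST.CLIENT_MQT (your MQT name and definition will differ):

-- 1. Drop the dependent materialized query table
DROP TABLE TEST.CLIENT_MQT;

-- 2. Alter the column on the base table
ALTER TABLE "TEST"."CLIENT"
    ALTER COLUMN C_CODE
    SET DATA TYPE CHAR(16 OCTETS);

-- 3. Re-create and repopulate the materialized query table
CREATE TABLE TEST.CLIENT_MQT AS
    (SELECT C_CODE FROM TEST.CLIENT)
    DATA INITIALLY DEFERRED REFRESH DEFERRED;
REFRESH TABLE TEST.CLIENT_MQT;

Note that after a data-type change Db2 may also place the base table in reorg-pending state, in which case a REORG TABLE is needed before the table can be used again.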
I'm using CRecordset to query one table, but I use a second table to filter the data. If, in my GetDefaultSQL override, I return a table list with more than one table, I get ERROR: column reference "ctid" is ambiguous. I know what the "ctid" column is, but I don't use it in my code; it's inserted into the original SQL statement by the ODBC driver. How do I fix this? How do I tell the ODBC driver not to insert the "ctid" column?
I tried to call CRecordset::Open with the readOnly parameter, as I assume that ODBC needs ctid to update rows, and I don't need to update them. But the error remains.
I also tried adding a primary key to the second table, which was missing one, thinking that if a table has a primary key then ODBC can use that instead of "ctid", but again no luck. That makes sense, though, because I don't fetch any columns of the second table; it is used just for filtering.
If I make a DB view to work around the issue, I get ERROR: column "ctid" does not exist.
You have to call CRecordset::Open with two parameters changed:
m_pSet->Open(CRecordset::snapshot, NULL, CRecordset::readOnly);
Then you can fetch both the joined tables and the view without errors, and no "ctid" is injected.
Is it possible to automatically update a column with current_timestamp in PostgreSQL using generated columns, whenever the row gets updated?
At present, I am using a trigger to update the audit field last_update_date, but I am planning to switch to a generated column:
ALTER TABLE test ADD COLUMN last_update_date timestamp without time zone
GENERATED ALWAYS AS (current_timestamp) STORED;
I am getting an error while altering the column:
ERROR: generation expression is not immutable
No, that won't work, for the reason specified in the error.
Functions used in generated columns must always return the same value for the same arguments, that is, depend on nothing but the current database row. current_timestamp obviously is not of that kind.
If PostgreSQL did allow such functions to be used in generated columns, then the value of the column would change if the database is restored from a pg_dump, for example.
Use a BEFORE INSERT OR UPDATE trigger for this purpose.
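A minimal sketch of such a trigger for the table from the question (the function and trigger names are illustrative):

CREATE FUNCTION set_last_update_date() RETURNS trigger AS
$$
BEGIN
    -- overwrite the audit column on every insert or update
    NEW.last_update_date := current_timestamp;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_last_update_date
BEFORE INSERT OR UPDATE ON test
FOR EACH ROW
EXECUTE FUNCTION set_last_update_date();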
I'm programmatically adding data to a PostgreSQL table using Python and psycopg; this is working fine.
Occasionally, though, a text value is too long for the containing column, so I get the message:
ERROR: value too long for type character varying(1000)
where the number is the width of the offending column.
Is there a way to determine which column caused the error? (Aside from comparing each value's length against its column's declared width.)
Many thanks to @Tometzky, whose comment pointed me in the right direction.
Rather than trying to determine which column caused the problem after the fact, I modified my Python script to ensure that the value was truncated before inserting into the database.
1. Access the table's schema using:
select column_name, data_type, character_maximum_length from information_schema.columns where table_name = 'test';
2. When building the INSERT statement, use the schema definition to identify character fields and truncate the values if necessary.
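Alternatively, rather than truncating in Python, the truncation can be done server-side in the INSERT itself with left(), using the width read from information_schema (the column name and width here are illustrative; %s is the psycopg placeholder for the value):

INSERT INTO test (t) VALUES (left(%s, 1000));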
I don't think there's an easy way.
I tried to set VERBOSITY in psql, as I assumed this would help, but unfortunately it does not (this is on 9.4):
psql
\set VERBOSITY verbose
dbname=> create temporary table test (t varchar(5));
CREATE TABLE
dbname=> insert into test values ('123456');
ERROR: 22001: value too long for type character varying(5)
LOCATION: varchar, varchar.c:623
This might be something that warrants discussion on the mailing list, as you are not the only one with this problem.
I've got a table with 4 rows in it in a non-production database used for development. There are 2 varchar columns that I want to convert to bytea. I don't care about the contents so I could of course drop the columns and then add them back, but I became confused when I tried to just change the type:
alter table whatever
alter column col_1 set data type bytea using null,
alter column col_2 set data type bytea using null;
When I try that, the psql client just hangs. By that I mean that it just sits there giving no feedback until I eventually hit ^C and it aborts. I've tried that with a little test table and it works fine, but for some reason it doesn't work on the real table (which, really, is also just a "little test table").
The using clause doesn't seem to make a difference one way or the other; I can leave it out or give other values, and the command does the same thing.
I don't get an error, I just don't get anything. Is that what I should expect?
I'm on 9.1 on Ubuntu 14.10, if it matters.
I don't care about the contents
In that case, this works on an empty table (note that there is no direct cast from varchar to bytea, so a USING expression such as convert_to is needed; since the contents don't matter, USING NULL would also do):
ALTER TABLE tablename
ALTER COLUMN colname TYPE bytea USING convert_to(colname, 'UTF8');
Simple:
Get the active locks from pg_locks:
select t.relname, l.locktype, page, virtualtransaction, pid, mode, granted
from pg_locks l, pg_stat_all_tables t
where l.relation = t.relid
order by relation asc;
Copy the pid (e.g. 14210) from the above result and substitute it in the command below:
SELECT pg_terminate_backend(14210);
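If you would rather cancel just the query holding the lock than terminate the whole session, pg_cancel_backend is the gentler variant:

SELECT pg_cancel_backend(14210);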