DB2: SQL Error [42997]: Function not supported (Reason code = "21").. SQLCODE=-270, SQLSTATE=42997

I have to write an SQL script to modify the types of many columns in my DB2 database.
Everything goes well except for one specific table (the script used is the same as for the other tables), and DB2 always returns an error I don't understand.
Here is my script:
ALTER TABLE "TEST"."CLIENT"
ALTER COLUMN C_CODE
SET DATA TYPE CHAR(16 OCTETS);
and the error:
SQL Error [42997]: Function not supported (Reason code = "21")..
SQLCODE=-270, SQLSTATE=42997, DRIVER=4.26.14
I tried to modify some other columns on the same table, but I always receive the same error.
Do you, by any chance, have an idea?
Thanks in advance

The error SQL0270N (sqlcode = -270) has many possible causes, and the specific cause is indicated by the "reason code".
In this case the "reason code 21" means:
A column cannot be dropped or have its length, data type, security,
nullability, or hidden attribute altered on a table that is a base
table for a materialized query table.
The documentation for this sqlcode on Db2-LUW is at:
https://www.ibm.com/docs/en/db2/11.5?topic=messages-sql0250-sql0499#sql0270n
Search for SQL0270N on that page, and notice the suggested user response:
To drop or alter a column in a table that is a base table for a materialized query table, perform the following steps:
1. Drop the dependent materialized query table.
2. Drop the column of the base table, or alter the length, data type, nullability, or hidden attribute of this column.
3. Re-create the materialized query table.
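As a hedged sketch of those three steps for this case (the MQT name MQT_CLIENT and its defining query are assumptions, since the question does not show the dependent MQT; the first query finds the real one in the Db2 catalog):

-- Find the MQT(s) that depend on the base table ('S' = summary/MQT).
SELECT TABSCHEMA, TABNAME
FROM SYSCAT.TABDEP
WHERE BSCHEMA = 'TEST' AND BNAME = 'CLIENT' AND DTYPE = 'S';

-- 1. Drop the dependent MQT (name assumed for illustration).
DROP TABLE "TEST"."MQT_CLIENT";

-- 2. Alter the base table column.
ALTER TABLE "TEST"."CLIENT"
ALTER COLUMN C_CODE
SET DATA TYPE CHAR(16 OCTETS);

-- 3. Re-create the MQT with its original definition (placeholder query).
CREATE TABLE "TEST"."MQT_CLIENT" AS
    (SELECT C_CODE, COUNT(*) AS CNT FROM "TEST"."CLIENT" GROUP BY C_CODE)
    DATA INITIALLY DEFERRED REFRESH DEFERRED;

-- Repopulate the re-created MQT.
REFRESH TABLE "TEST"."MQT_CLIENT";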

Related

Ambiguous column reference "ctid" in SELECT with more than one table

I'm using CRecordset to query one table, but I use a second table to filter data. If in my GetDefaultSQL override method I return a table list with more than one table then I get this ERROR: column reference "ctid" is ambiguous. I know what a "ctid" column is, but I don't use it in my code. It's inserted into the original SQL statement by ODBC driver. How to fix this? How to tell the ODBC driver not to insert the "ctid" column?
I tried to call CRecordset::Open with the readOnly parameter, as I assume that ODBC needs ctid to update rows, and I don't need to update them. But the error remains.
I also tried to add a primary key to the second table, which was missing one, thinking that if a table has a primary key, ODBC could use that instead of 'ctid', but again no luck. That makes sense though, because I don't fetch any column of that second table; it is used just for filtering.
If I make a DB view to work around the issue, I get ERROR: column "ctid" does not exist.
You have to call CRecordset::Open with two parameters changed:
m_pSet->Open(CRecordset::snapshot, NULL, CRecordset::readOnly);
Then you can fetch both the joined tables and the view without errors, and no "ctid" column is injected.

Alter the column type over several tables

In a PostgreSQL DB I'm working on, half of the tables have one particular column, always named the same, of type varchar(5). The size became a bit too restrictive and I want to change it to varchar(10).
In my particular case the number of tables is actually manageable enough to do it by hand, but I was wondering how one could script this with a query for larger DBs. It should generally be possible in just a few steps:
1. Identify all the tables in the schema, then filter them by whether the column is present.
2. Create an ALTER TABLE statement for each table found.
I have some idea of how to write a query that identifies all tables in the schema, but I wouldn't know how to filter them. And if I didn't filter them, I assume the generated ALTER TABLE statements would break.
Would be great if someone could share their knowledge on this.
Thanks to Abelisto for providing some guidance. Eventually, this is how I did it.
First, I created a query that in turn creates the ALTER TABLE statements. MyDB and MyColumn need to reflect actual values.
SELECT
'ALTER TABLE '||columns.table_name||' ALTER COLUMN '||columns.column_name||' TYPE varchar(20);'
FROM
information_schema.columns
WHERE
columns.table_catalog = 'MyDB' AND
columns.table_schema = 'public' AND
columns.column_name = 'MyColumn';
Then it was just a matter of executing the output as a new query. All done.
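If you want to skip the copy-and-paste step, the generated statements can also be executed directly in one pass with an anonymous DO block (a minimal sketch, assuming the same 'public' schema and 'MyColumn' name as above):

DO $$
DECLARE
    stmt text;
BEGIN
    FOR stmt IN
        SELECT format('ALTER TABLE %I.%I ALTER COLUMN %I TYPE varchar(20)',
                      table_schema, table_name, column_name)
        FROM information_schema.columns
        WHERE table_schema = 'public'
          AND column_name = 'MyColumn'
    LOOP
        EXECUTE stmt;  -- run each generated ALTER TABLE
    END LOOP;
END $$;

As a side benefit, format() with %I quotes identifiers safely, which the string-concatenation version above does not do.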

Creating a summary table violates a restriction

I'm trying to UNION two MQTs together inside another MQT.
CREATE SUMMARY TABLE MYUNIONEDTABLE
AS (
SELECT * FROM MQT1
UNION ALL
SELECT * FROM MQT2
)
DATA INITIALLY DEFERRED REFRESH DEFERRED
ORGANIZE BY ROW;
Which leads to the following error:
The statement failed because the fullselect specified for the materialized query table "MYUNIONEDTABLE" violates a restriction. Reason code = "2".. SQLCODE=-20058, SQLSTATE=428EC, DRIVER=4.18.60
The SELECT statement itself works fine.
Did you check the Info Center article on this error?
SQL20058N
The statement failed because the fullselect specified for the
materialized query table table-name violates a restriction. Reason
code = reason-code.
Explanation
Restrictions apply to the contents of a fullselect used in the
definition of a materialized query table. Some restrictions are based
on the materialized query table options, such as REFRESH DEFERRED or
REFRESH IMMEDIATE. Other restrictions are based on whether or not the
table is replicated. The fullselect in the statement that returned
this condition violates at least one of these restrictions.
If this message is returned during the creation of a staging table,
the error applies to the query used in the definition of the
materialized query table with which the staging table is associated.
Reason code 2, as your error message shows, means:
The fullselect referenced an unsupported object type.
Take a look at the article for MQT restrictions to see what statement you are using that is unsupported.
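For reason code 2, the unsupported object type here is most likely the MQTs themselves: the fullselect that defines an MQT cannot reference another materialized query table. If you only need a queryable union rather than a third materialized table, a plain view over the two MQTs is a possible workaround (a hedged sketch, not a materialized object):

CREATE VIEW MYUNIONEDVIEW AS
    SELECT * FROM MQT1
    UNION ALL
    SELECT * FROM MQT2;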

Postgres pg_dump now stored procedure fails because of boolean

I have a stored procedure that has started to fail for no reason. Well, there must be one, but I can't find it!
This is the process I have followed a number of times before with no problem.
The source server works fine!
I am doing a pg_dump of the database on the source server and importing it onto another server. This is fine; I can see all the data and do updates.
Then I run a stored procedure on the imported database, which has 2 identical schemas, that does the following:
For each table in schema1
Truncate table in schema2
INSERT INTO schema2."table" SELECT * FROM schema1."table" WHERE "Status" in ('A','N');
Next
However, this now gives me an error when it did not before.
The error is
*** Error ***
ERROR: column "HBA" is of type boolean but expression is of type integer
SQL state: 42804
Hint: You will need to rewrite or cast the expression.
Why am I getting this? The only difference between the last time I followed this procedure and this time is that the table in question now has an extra column added to it, so the "HBA" boolean column is no longer the last field. But then why would it work in the original database?
I have tried removing all data and dropping and rebuilding the table; these all fail.
However, if I drop the column and add it back in, it works. Is there something about boolean fields that means they need to be the last field?
Any help greatly appreciated.
Using Postgres 9.1
The problem here: the tables in the two schemas have a different column order.
If you do not explicitly specify the column list and order in INSERT INTO table(...), or you use SELECT *, you are relying on the column order of the table (and now you see why that is a bad thing).
You were trying to do something like
INSERT INTO schema2.table1(id, bool_column, int_column) -- based on the order of columns in schema2.table1
select id, int_column, bool_column -- based on the order of columns in schema1.table1
from schema1.table1;
And such a query causes a cast error because of the column type mismatch.
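The fix is to list the columns explicitly on both sides, so the physical column order in either schema no longer matters (a sketch using the hypothetical columns from the example above):

INSERT INTO schema2.table1 (id, bool_column, int_column)
SELECT id, bool_column, int_column   -- names, not positions, decide the mapping
FROM schema1.table1;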

PostgreSQL bulk insert with ActiveRecord

I have a lot of records that originally came from MySQL. I massaged the data so it will insert successfully into PostgreSQL using ActiveRecord. I can easily do this with insertions on a row basis, i.e. one row at a time. This is very slow; I want to do a bulk insert, but that fails if any of the rows contains invalid data. Is there any way I can achieve a bulk insert with only the invalid rows failing, instead of the whole bulk?
COPY
When using SQL COPY for bulk insert (or its equivalent \copy in the psql client), failure is not an option. COPY cannot skip illegal lines. You have to match your input format to the table you import to.
If data itself (not decorators) is violating your table definition, there are ways to make this a lot more tolerant though. For instance: create a temporary staging table with all columns of type text. COPY to it, then fix offending rows with SQL commands before converting to the actual data type and inserting into the actual target table.
Consider this related answer:
How to bulk insert only new rows in PostgreSQL
Or this more advanced case:
"ERROR: extra data after last expected column" when using PostgreSQL COPY
If NULL values are offending, remove the NOT NULL constraint from your target table temporarily. Fix the rows after COPY, then reinstate the constraint. Or take the route with the staging table, if you cannot afford to soften your rules temporarily.
Sample code:
ALTER TABLE tbl ALTER COLUMN col DROP NOT NULL;
COPY ...
-- repair, like ..
-- UPDATE tbl SET col = 0 WHERE col IS NULL;
ALTER TABLE tbl ALTER COLUMN col SET NOT NULL;
Or you just fix the source table. COPY tells you the number of the offending line. Use an editor of your preference and fix it, then retry. I like to use vim for that.
INSERT
For an INSERT (as mentioned in the comments) the check for NULL values is trivial:
To skip a row with a NULL value:
INSERT INTO tbl (col1, ...)
SELECT col1, ...
FROM   ...
WHERE  col1 IS NOT NULL;
To insert something else instead of a NULL value (an empty string in this example):
INSERT INTO tbl (col1, ...)
SELECT COALESCE(col1, ''), ...
FROM   ...;
A common workaround for this is to import the data into a TEMPORARY or UNLOGGED table with no constraints and, where data in the input is sufficiently bogus, text-typed columns.
You can then run INSERT INTO ... SELECT queries against that data to populate the real table with a big query that cleans up the data during import. You can use a lot of CASE expressions for this. The idea is to transform the data in one pass, as sketched below.
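A minimal sketch of that staging approach under assumed table and column names (the real cleanup rules depend on your data):

-- Staging table: no constraints, everything text, so COPY cannot fail on bad values.
CREATE UNLOGGED TABLE staging (
    id    text,
    price text,
    flag  text
);

-- Server-side path; from psql, use \copy for a client-side file.
COPY staging FROM '/tmp/data.csv' WITH (FORMAT csv);

-- One pass that cleans and casts while populating the real table.
INSERT INTO target (id, price, flag)
SELECT id::int,
       CASE WHEN price ~ '^[0-9]+(\.[0-9]+)?$' THEN price::numeric ELSE 0 END,
       CASE WHEN flag IN ('t', 'true', '1', 'y') THEN true ELSE false END
FROM staging
WHERE id ~ '^[0-9]+$';  -- skip rows whose id is not a valid integer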
You might be able to do many of the fixes in Ruby as you read the data in, then push the data to PostgreSQL using COPY ... FROM STDIN. This is possible with Ruby's Pg gem; see e.g. https://bitbucket.org/ged/ruby-pg/src/tip/sample/copyfrom.rb.
For more complicated cases, look at Pentaho Kettle or Talend Studio ETL tools.