It's been a while.
Using DB2 10 for z/OS, I've been asked to change a specific column in a table from decimal(7,2) to decimal(7,4). Sounds easy, right?
alter table MySchema.MyTable
alter column myColumn
set data type decimal(7,4);
But, DB2 responds with this error: "Attributes specified for column 'MYCOLUMN' are incompatible with existing column definition."
I had thought that converting from decimal(7,2) to decimal(7,4) would be pretty straightforward, but DB2 disagrees.
Outside of dropping the table and recreating it from scratch, what alternatives do I have?
Thanks in advance!
Dave
The reason Db2 doesn't like that change is that you're going from 99999.99 to 999.9999.
Is that really what you want? Going from (7,2) to (9,4) would just add two more decimal places without losing any data and should be allowed by the DB.
Db2 for i gives a warning, but allows you to ignore the warning...
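If the intent is only to gain two more decimal places, a widening change along these lines might be accepted (a sketch reusing the table and column names from the question; the exact rules vary by Db2 platform and version):
-- widen precision so the extra scale does not cost integer digits (sketch)
alter table MySchema.MyTable
  alter column myColumn
  set data type decimal(9,4);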
Create a new column with ALTER ADD COLUMN of the right type, use an UPDATE to populate it, then ALTER DROP COLUMN the old column, and finally RENAME COLUMN to give the new column the original name.
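A sketch of that sequence, reusing the table and column names from the question (the temporary column name myColumn_new is made up for illustration, and the exact ALTER/RENAME syntax and availability vary by Db2 platform and version):
-- 1. add a column with the target type (temporary name is illustrative)
alter table MySchema.MyTable add column myColumn_new decimal(7,4);
-- 2. copy the existing values; rows with values of 1000 or more would overflow decimal(7,4)
update MySchema.MyTable set myColumn_new = myColumn;
-- 3. drop the original column
alter table MySchema.MyTable drop column myColumn;
-- 4. give the new column the original name
alter table MySchema.MyTable rename column myColumn_new to myColumn;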
I'm using CRecordset to query one table, but I use a second table to filter data. If in my GetDefaultSQL override method I return a table list with more than one table then I get this ERROR: column reference "ctid" is ambiguous. I know what a "ctid" column is, but I don't use it in my code. It's inserted into the original SQL statement by ODBC driver. How to fix this? How to tell the ODBC driver not to insert the "ctid" column?
I tried to call CRecordset::Open with readOnly parameter, as I assume that ODBC needs ctid to update the row, and I don't need to update them. But the error remains.
Also tried to add a primary key to the second table that was missing it, thinking if a table has a primary key then ODBC can use that instead of 'ctid', but again no luck. Makes sense though, because I don't fetch any column of that second table, and the second table is used just for filtering.
If I make a DB view to work around the issue, I get ERROR: column "ctid" does not exist.
You have to call CRecordset::Open with two parameters changed:
m_pSet->Open(CRecordset::snapshot, NULL, CRecordset::readOnly);
Then you can fetch both the joined tables and the view without errors. No "ctid" then.
Since Firebird 3, I can't modify a column type.
Before, I used this kind of update:
update RDB$RELATION_FIELDS set
RDB$FIELD_SOURCE = 'MYTEXT'
where (RDB$FIELD_NAME = 'JXML') and
(RDB$RELATION_NAME = 'XMLTABLE')
but since Firebird 3 this fails with ISC error 335545030 ("UPDATE operation is not allowed for system table RDB$RELATION_FIELDS").
Maybe there is another way in Firebird 3?
Firebird 3 no longer allows direct updates to the system tables, as that was a way to potentially corrupt a database. See also System Tables are Now Read-only in the release notes. You will need to use DDL statements to do the modification.
It looks like you want to change the data type of a column to a domain. You will need to use alter table ... alter column ... for that. Specifically you will need to do:
alter table XMLTABLE
alter column JXML type MYTEXT;
This does come with some restrictions:
Changing the Data Type of a Column: the TYPE Keyword
The keyword TYPE changes the data type of an existing column to
another, allowable type. A type change that might result in data loss
will be disallowed. As an example, the number of characters in the new
type for a CHAR or VARCHAR column cannot be smaller than the existing
specification for it.
If the column was declared as an array, no change to its type or its
number of dimensions is permitted.
The data type of a column that is involved in a foreign key, primary
key or unique constraint cannot be changed at all.
This statement has been available since before Firebird 1 (InterBase 6.0).
Firebird 2.5 manual, chapter Data Definition (DDL) Statement, section TABLE:
ALTER TABLE tabname ALTER COLUMN colname TYPE typename
I have a table with 200 columns, and I have to drop 96 columns from this table.
Using the statement
alter table XYZ drop column a,b,c,d...................
is taking forever to drop the columns on SQL Server 2000.
Can anyone help or give me some idea how this can be done efficiently?
thanks
This is not a trivial change to ask of any RDBMS and IS likely to take a while. Depending on the platform, it might also be exacerbated if there are a lot of rows and the columns you're dropping contain non-null data.
Perhaps it would be more feasible to SELECT the columns you want to keep into a new table, drop this one, and rename the new table to the original table name.
Answer intentionally written generically
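A rough sketch of that approach in T-SQL, assuming the columns to keep are col1, col2, and so on (names are illustrative) and ignoring constraints, indexes and permissions, which would need to be recreated on the new table:
-- copy only the columns you want to keep into a new table
SELECT col1, col2, col3   -- list the roughly 104 columns to keep (illustrative names)
INTO XYZ_new
FROM XYZ;
-- remove the original table, then give the new table the original name
DROP TABLE XYZ;
EXEC sp_rename 'XYZ_new', 'XYZ';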
I need to increase the size of a character varying(60) field in a postgres database table without data loss.
I have this command
alter table client_details alter column name set character varying(200);
will this command increase the field size from 60 to 200 without data loss?
The correct query to change the data type limit of the particular column:
ALTER TABLE client_details ALTER COLUMN name TYPE character varying(200);
Referring to this documentation, there would be no data loss, alter column only casts old data to new data so a cast between character data should be fine. But I don't think your syntax is correct, see the documentation I mentioned earlier. I think you should be using this syntax :
ALTER [ COLUMN ] column TYPE type [ USING expression ]
And as a note, wouldn't it be easier to just create a table, populate it and test :)
Yes. But it will rewrite the table and lock it exclusively for the duration of the rewrite; any query trying to access the table will wait until the rewrite finishes.
Consider changing the type to text and using a check constraint to limit the size; changing the constraint later would not rewrite or lock the table.
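A minimal sketch of that alternative, reusing the table and column from the question (the constraint name name_length_chk is illustrative):
-- varchar to text is binary-compatible, so on recent versions this does not rewrite the table
ALTER TABLE client_details ALTER COLUMN name TYPE text;
-- enforce the length with a constraint instead; adding or changing it
-- validates existing rows but does not rewrite the table
ALTER TABLE client_details
  ADD CONSTRAINT name_length_chk CHECK (char_length(name) <= 200);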
You can use the SQL command below:
ALTER TABLE client_details
ALTER COLUMN name TYPE varchar(200)
From the PostgreSQL 9.2 Release Notes, E.15.3.4.2:
Increasing the length limit for a varchar or varbit column, or removing the limit altogether, no longer requires a table rewrite.
Changing the column size in PostgreSQL 9.1:
In 9.1, changing a varchar column to a larger size requires a table rewrite. An exclusive lock is held on the table during the rewrite, and users cannot access the table until the rewrite is done.
Table Name :- userdata
Column Name:- acc_no
ALTER TABLE userdata ALTER COLUMN acc_no TYPE varchar(250);
I have a need to change the length of CHAR columns in tables in a PostgreSQL v7.4 database. This version did not support the ability to directly change the column type or size using the ALTER TABLE statement. So, directly altering a column from a CHAR(10) to CHAR(20) for instance isn't possible (yeah, I know, "use varchars", but that's not an option in my current circumstance). Anyone have any advice/tricks on how to best accomplish this? My initial thoughts:
-- Save the table's data in a new "save" table.
CREATE TABLE save_data AS SELECT * FROM table_to_change;
-- Drop the columns from the first column to be changed on down.
ALTER TABLE table_to_change DROP column_name1; -- for each column starting with the first one that needs to be modified
ALTER TABLE table_to_change DROP column_name2;
...
-- Add the columns back, using the new size for the CHAR column
ALTER TABLE table_to_change ADD column_name1 CHAR(new_size); -- for each column dropped above
ALTER TABLE table_to_change ADD column_name2...
-- Copy the data back from the "save" table
UPDATE table_to_change
SET column_name1=save_data.column_name1, -- for each column dropped/readded above
    column_name2=save_data.column_name2,
    ...
FROM save_data
WHERE table_to_change.primary_key=save_data.primary_key;
Yuck! Hopefully there's a better way? Any suggestions appreciated. Thanks!
Not PostgreSQL, but in Oracle I have changed a column's type by:
Add a new column with a temporary name (ie: TMP_COL) and the new data type (ie: CHAR(20))
run an update query: UPDATE TBL SET TMP_COL = OLD_COL;
Drop OLD_COL
Rename TMP_COL to OLD_COL
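A sketch of those steps in Oracle SQL, using the placeholder names from the list above (TBL, TMP_COL and OLD_COL are illustrative):
-- 1. add a column with the new type (placeholder names)
ALTER TABLE TBL ADD (TMP_COL CHAR(20));
-- 2. copy the existing values
UPDATE TBL SET TMP_COL = OLD_COL;
-- 3. drop the old column
ALTER TABLE TBL DROP COLUMN OLD_COL;
-- 4. rename the new column to the original name
ALTER TABLE TBL RENAME COLUMN TMP_COL TO OLD_COL;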
I would dump the table contents to a flat file with COPY, drop the table, recreate it with the correct column setup, and then reload (with COPY again).
http://www.postgresql.org/docs/7.4/static/sql-copy.html
Is it acceptable to have downtime while performing this operation? Obviously what I've just described requires making the table unusable for a period of time, how long depends on the data size and hardware you're working with.
Edit: But COPY is quite a bit faster than INSERTs and UPDATEs. According to the docs you can make it even faster by using BINARY mode. BINARY makes it less compatible with other PGSQL installs but you won't care about that because you only want to load the data to the same instance that you dumped it from.
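A rough sketch of the dump/recreate/reload sequence on 7.4, reusing the table and column names from the question (the file path is illustrative, and the CREATE TABLE would list the real columns with their new sizes):
-- dump the current contents to a flat file (path is illustrative)
COPY table_to_change TO '/tmp/table_to_change.dat';
-- drop and recreate the table with the widened CHAR columns
DROP TABLE table_to_change;
CREATE TABLE table_to_change (
    primary_key   integer PRIMARY KEY,
    column_name1  CHAR(20),  -- new size
    column_name2  CHAR(20)   -- new size
    -- ... remaining columns ...
);
-- reload the saved data
COPY table_to_change FROM '/tmp/table_to_change.dat';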
The best approach to your problem is to upgrade pg to something less archaic :)
Seriously. 7.4 is going to be removed from "supported versions" pretty soon, so I wouldn't wait for it to happen with 7.4 in production.