In my database I have several columns of type INTEGER. I need to change some of them to BIGINT.
So my question is: can I just use the following command?
ALTER TABLE MyTable ALTER COLUMN MyIntegerColumn TYPE BIGINT;
Will the contained data be converted correctly? After the conversion, is this column a "real" BIGINT column?
I know this is not possible if there are dependencies on the column (triggers, foreign keys, ...). But if there are no such constraints, is it possible to do it this way?
Or is it better to convert it via a helper column:
MyIntegerColumn -> MyIntegerColumnBac -> MyBigIntColumn
When you execute
ALTER TABLE MyTable ALTER COLUMN MyIntegerColumn TYPE BIGINT;
Firebird will not convert existing data from INTEGER to BIGINT; instead, it will create a new format version for the table.
When inserting new rows or updating existing rows, the value will be stored as a BIGINT, but when reading, Firebird will convert 'old' rows on the fly from INTEGER to BIGINT. This happens transparently to you as the user, and it avoids having to rewrite all existing rows, which could be costly (I/O, garbage collection of old row versions, etc).
So please, do use ALTER TABLE .. ALTER COLUMN; do not do MyIntegerColumn -> MyIntegerColumnBac -> MyBigIntColumn. There are some exceptions to this rule: for example, (potentially) lossy character set transformations are better done that way, to prevent transliteration errors on select if a character does not exist in the new character set, as is shortening a (var)char column (which can't be done with ALTER COLUMN).
To be a little more specific: when a row is written to the database, it contains a format version (aka version count). The format version points to a description of the row (data types, etc.) that tells Firebird how to read that row. An ALTER TABLE will create a new format version, and that format will be applied when writing new rows or updating existing rows. When reading an old row, Firebird applies the necessary transformations to present that row in the new format (for example, adding new columns with their default values, or transforming the data type of a column).
These format versions are also the reason why the number of ALTER TABLEs is restricted: if you apply more than 255 ALTER TABLEs to a single table, you must back up and restore the database (the format version is a single byte) before further changes to that table are allowed.
I'm using Firebird 4.0 and I would like to convert a column from SMALLINT 0|1 to BOOLEAN.
So I have this kind of domain:
CREATE DOMAIN D_BOOL
AS SMALLINT
DEFAULT 0
NOT NULL
CHECK (VALUE IN (0,1))
;
This domain is used in my test table:
CREATE TABLE TBOOL
(
ID INTEGER,
INTVAL D_BOOL
);
How can I convert the column INTVAL to BOOLEAN?
I tried this query but I got an error:
alter table tbool
alter column INTVAL TYPE BOOLEAN,
alter column INTVAL SET DEFAULT FALSE
Error:
Error: *** IBPP::SQLException ***
Context: Statement::Execute( alter table tbool
alter column INTVAL TYPE BOOLEAN,
alter column INTVAL SET DEFAULT FALSE )
Message: isc_dsql_execute2 failed
SQL Message : -607
This operation is not defined for system tables.
Engine Code : 335544351
Engine Message :
unsuccessful metadata update
ALTER TABLE TBOOL failed
MODIFY RDB$RELATION_FIELDS failed
Unfortunately, this is an incompatible column change, because there is no conversion defined from SMALLINT to BOOLEAN. Altering the type of a column only works for a limited combination of types (and there is no combination that allows modification to or from BOOLEAN).
The only real option is to add a new column, populate it based on the value of the old column, drop the old column and rename the new column. This can have a huge impact if this column is used in triggers, procedures and/or views.
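A minimal sketch of that approach for the TBOOL table above, assuming nothing else depends on INTVAL (the name INTVAL_NEW is just a placeholder, and depending on your tool you may need to commit between the DDL and DML steps):
-- add a BOOLEAN column, copy the data over, then swap the names
ALTER TABLE TBOOL ADD INTVAL_NEW BOOLEAN DEFAULT FALSE NOT NULL;
COMMIT;
UPDATE TBOOL SET INTVAL_NEW = IIF(INTVAL = 1, TRUE, FALSE);
COMMIT;
ALTER TABLE TBOOL DROP INTVAL;
ALTER TABLE TBOOL ALTER COLUMN INTVAL_NEW TO INTVAL;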
Your options are basically:
Keep existing columns as-is, and only use BOOLEAN moving forward for new columns
Do a very invasive change to convert all your columns.
If you have a lot of columns that need to change, this is likely easier to do by creating a new database from scratch and pumping the data over, than by changing the database in-place.
The background of this limitation is that Firebird doesn't actually modify existing values when changing the type of a column. Instead, it will convert values on the fly when reading rows created with an older "format version" (inserts and updates will write the new "format version").
This makes for fast DDL, but all conversions must be known to succeed. This basically means only "widening" conversions between similar types are allowed (e.g. longer (VAR)CHAR, longer integer types, etc).
Since Firebird 3, I can't modify a column type.
Previously, I used this kind of update:
update RDB$RELATION_FIELDS set
RDB$FIELD_SOURCE = 'MYTEXT'
where (RDB$FIELD_NAME = 'JXML') and
(RDB$RELATION_NAME = 'XMLTABLE')
Now this fails with ISC error 335545030 ("UPDATE operation is not allowed for system table RDB$RELATION_FIELDS").
Is there another way to do this in Firebird 3?
Firebird 3 no longer allows direct updates to the system tables, as that was a way to potentially corrupt a database. See also System Tables are Now Read-only in the release notes. You will need to use DDL statements to do the modification.
It looks like you want to change the data type of a column to a domain. You will need to use alter table ... alter column ... for that. Specifically you will need to do:
alter table XMLTABLE
alter column JXML type MYTEXT;
This does come with some restrictions:
Changing the Data Type of a Column: the TYPE Keyword
The keyword TYPE changes the data type of an existing column to
another, allowable type. A type change that might result in data loss
will be disallowed. As an example, the number of characters in the new
type for a CHAR or VARCHAR column cannot be smaller than the existing
specification for it.
If the column was declared as an array, no change to its type or its
number of dimensions is permitted.
The data type of a column that is involved in a foreign key, primary
key or unique constraint cannot be changed at all.
This statement has been available since before Firebird 1 (InterBase 6.0).
Firebird 2.5 manual, chapter Data Definition (DDL) Statement, section TABLE:
ALTER TABLE tabname ALTER COLUMN colname TYPE typename
I'm switching from MongoDB to PostgreSQL and was wondering how I can implement the same concept used in MongoDB for uniquely identifying each row by a MongoId.
After migration, the already existing unique fields in our database are saved as a character type. I am looking for minimal source code changes.
Is there any way in PostgreSQL to generate an auto-incrementing unique ID for each row inserted into a table?
The closest thing to MongoDB's ObjectId in PostgreSQL is the uuid type. Note that ObjectId has only 12 bytes, while UUIDs have 128 bits (16 bytes).
You can convert your existing IDs by appending (or prepending) for example '00000000' to them.
alter table some_table
alter id_column
type uuid
using (id_column || '00000000')::uuid;
It would be best, though, if you could do this while migrating the schema + data. If you can't do it during the migration, you need to update your IDs (while they are still varchars; this way the referenced columns will propagate the change), drop the foreign keys, do the ALTER ... TYPE, and then re-apply the foreign keys.
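A minimal sketch of that workflow, with hypothetical table, column and constraint names (referencing_table, ref_column, some_fk), assuming the foreign key is declared ON UPDATE CASCADE so the padding propagates to the referencing rows:
update some_table set id_column = id_column || '00000000';
alter table referencing_table drop constraint some_fk;
alter table some_table alter id_column type uuid using id_column::uuid;
alter table referencing_table alter ref_column type uuid using ref_column::uuid;
alter table referencing_table add constraint some_fk
  foreign key (ref_column) references some_table (id_column);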
You can generate various UUIDs (for default values of the column) with the uuid-ossp module.
create extension "uuid-ossp";
alter table some_table
alter id_column
set default uuid_generate_v4();
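With that default in place, an insert that omits id_column gets a generated UUID automatically; for example (some_other_column is a placeholder):
insert into some_table (some_other_column) values ('test')
returning id_column;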
Use a sequence as a default for the column:
create sequence some_id_sequence
start with 100000
owned by some_table.id_column;
The start with value should be bigger than your current maximum number.
Then use that sequence as a default for your column:
alter table some_table
alter id_column set default nextval('some_id_sequence')::text;
The better solution would be to change the column to an integer column. Storing numbers in a text (or varchar) column is a really bad idea.
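A hypothetical sketch of that conversion, assuming every existing value in id_column is numeric and using the sequence from above:
alter table some_table alter id_column drop default;
alter table some_table alter id_column type bigint using id_column::bigint;
alter table some_table alter id_column set default nextval('some_id_sequence');
(The default has to be dropped and re-created because its ::text cast no longer matches the new column type.)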
Assume I have a table named tracker with columns (issue_id,ingest_date,verb,priority)
I would like to add 50 columns to this table.
Columns being (string_ch_01,string_ch_02,.....,string_ch_50) of datatype varchar.
Is there any better way to add the columns in a single statement, rather than executing the following ALTER command 50 times?
ALTER TABLE tracker ADD COLUMN string_ch_01 varchar(1020);
Yes, a better way is to issue a single ALTER TABLE with all the columns at once:
ALTER TABLE tracker
ADD COLUMN string_ch_01 varchar(1020),
ADD COLUMN string_ch_02 varchar(1020),
...
ADD COLUMN string_ch_50 varchar(1020)
;
It's especially better when there are DEFAULT non-null clauses for the new columns (on PostgreSQL versions before 11, where adding such a column rewrites the table): each separate ALTER TABLE would rewrite the entire table, as opposed to rewriting it only once if they're grouped in a single ALTER TABLE.
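If writing out all 50 clauses by hand is tedious, the statement can also be generated dynamically; a minimal sketch using a PL/pgSQL DO block, assuming the tracker table and the string_ch_NN naming from the question:
do $$
declare
  cols text;
begin
  -- build 'ADD COLUMN string_ch_01 varchar(1020), ...' for 01..50
  select string_agg(format('ADD COLUMN string_ch_%s varchar(1020)',
                           lpad(i::text, 2, '0')), ', ')
    into cols
    from generate_series(1, 50) as i;
  execute format('ALTER TABLE tracker %s', cols);
end
$$;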
I need to increase the size of a character varying(60) field in a postgres database table without data loss.
I have this command
alter table client_details alter column name set character varying(200);
Will this command increase the field size from 60 to 200 without data loss?
The correct query to change the data type limit of the particular column:
ALTER TABLE client_details ALTER COLUMN name TYPE character varying(200);
Referring to this documentation, there would be no data loss: ALTER COLUMN only casts old data to the new type, and a cast between character types should be fine. But I don't think your syntax is correct; see the documentation I mentioned earlier. I think you should be using this syntax:
ALTER [ COLUMN ] column TYPE type [ USING expression ]
And as a note, wouldn't it be easier to just create a table, populate it and test :)
Yes. But it will rewrite the table and lock it exclusively for the duration of the rewrite; any query trying to access the table will wait until the rewrite finishes.
Consider changing the type to text and using a check constraint to limit the size instead; changing the constraint later will not rewrite or lock the table.
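A minimal sketch of that approach (the constraint name name_length_chk is just a placeholder):
alter table client_details alter column name type text;
alter table client_details
  add constraint name_length_chk check (length(name) <= 200);
Raising the limit later is then just a matter of dropping and re-adding the constraint.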
You can use the SQL command below:
ALTER TABLE client_details
ALTER COLUMN name TYPE varchar(200)
From the PostgreSQL 9.2 Release Notes, E.15.3.4.2:
Increasing the length limit for a varchar or varbit column, or removing the limit altogether, no longer requires a table rewrite.
Changing the column size in PostgreSQL 9.1: when changing a varchar column to a higher size, a table rewrite is required. During the rewrite an exclusive lock is held on the table, and the table cannot be accessed by users until the rewrite is done.
Table name: userdata
Column name: acc_no
ALTER TABLE userdata ALTER COLUMN acc_no TYPE varchar(250);