Disable default value of a column in PostgreSQL

I know how to remove or set the default value of a column in PostgreSQL.
ALTER TABLE products ALTER COLUMN price DROP DEFAULT;
ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;
However, is there a way to temporarily disable the default value of a column?
My scenario: I need to ingest data into a table, but (I think) the default value of a column is preventing the ingestion. I want to either disable the default value for the ingestion and then re-enable it, or drop the default values on all the tables (keeping a record of the dropped values) and then set them again afterwards.
Is there an efficient way of doing this?
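One way to keep a record of the existing defaults before dropping them is to read them from information_schema.columns; the sketch below assumes the defaults are simple expressions that can be re-applied verbatim afterwards (the schema name 'public' is illustrative).
-- record current defaults so they can be restored after the ingestion
SELECT table_name, column_name, column_default
FROM information_schema.columns
WHERE table_schema = 'public'
  AND column_default IS NOT NULL;
-- then drop, ingest, and restore per column, e.g.:
ALTER TABLE products ALTER COLUMN price DROP DEFAULT;
-- ... ingest the data ...
ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;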

Related

db2 change column from null to not null

I have a Location column in the XYZ table in Db2, and I now want to change it to NOT NULL using the command below:
ALTER table xyz ALTER COLUMN LOCATIONID set not null
But Db2 asks me to give a default value. How should I change the command for that?
As you are making a previously optional column into a mandatory column, if there is already at least one row in the table that contains a NULL in LOCATIONID then Db2 may prevent the alteration (SQL0407N).
If the table has no rows, or if no rows have null in LOCATIONID column, then Db2-LUW will allow the alteration. You may need to REORG the table before/after the alteration in some cases.
If the table already has rows with a null LOCATIONID, you must either set those rows' LOCATIONID to some non-null value before doing the alteration, or you must recreate the table.
When recreating the table, consider specifying a default value via 'NOT NULL WITH DEFAULT ...' if that makes sense for the data concerned.
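A minimal sketch of the update-then-alter path described above, assuming LOCATIONID is a character column and using an illustrative placeholder value:
-- replace NULLs with a placeholder so the column can become NOT NULL
UPDATE xyz SET LOCATIONID = 'UNKNOWN' WHERE LOCATIONID IS NULL;
ALTER TABLE xyz ALTER COLUMN LOCATIONID SET NOT NULL;
-- on Db2-LUW a REORG may be required afterwards, e.g.:
-- CALL SYSPROC.ADMIN_CMD('REORG TABLE xyz')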

Alter a Column from INTEGER To BIGINT

In my database I have several fields of type INTEGER. I need to change some of them to BIGINT.
So my question is, can I just use the following command?
ALTER TABLE MyTable ALTER COLUMN MyIntegerColumn TYPE BIGINT;
Will the contained data be converted the correct way? After the conversion, is this column a "real" BIGINT column?
I know this is not possible if there are constraints on this column (trigger, foreign key, ...). But if there are no constraints, is it possible to do it this way?
Or is it better to convert it via a helper column:
MyIntegerColumn -> MyIntegerColumnBac -> MyBigIntColumn
When you execute
ALTER TABLE MyTable ALTER COLUMN MyIntegerColumn TYPE BIGINT;
Firebird will not convert existing data from INTEGER to BIGINT; instead, it will create a new format version for the table.
When inserting new rows or updating existing rows, the value will be stored as a BIGINT, but when reading Firebird will convert 'old' rows on the fly from INTEGER to BIGINT. This happens transparently for you as the user. This is to prevent needing to rewrite all existing rows, which could be costly (IO, garbage collection of old versions of rows, etc).
So please, do use ALTER TABLE .. ALTER COLUMN; do not do MyIntegerColumn -> MyIntegerColumnBac -> MyBigIntColumn. There are some exceptions to this rule, e.g. (potentially) lossy character set transformations are better done that way to prevent transliteration errors on select if a character does not exist in the new character set, or changing a (var)char column to be shorter (which can't be done with alter column).
To be a little more specific: when a row is written to the database it contains a format version (aka version count) of that row. The format version points to a description of the row (data types, etc.) telling Firebird how to read that row. An ALTER TABLE will create a new format version, and that format will be applied when writing new rows or updating existing rows. When reading an old row, Firebird will apply the necessary transformations to present that row in the new format (for example, adding new columns with their default values, or transforming the data type of a column).
These format versions are also the reason why the number of ALTER TABLEs is restricted: if you apply more than 255 ALTER TABLEs to a single table, you must back up and restore the database (the format version is a single byte) before further changes are allowed to that table.
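If you want to see which format number a table has already reached, the current format is recorded in Firebird's system tables; a small sketch, assuming the standard RDB$RELATIONS layout:
-- current format number of the table (a single byte, so at most 255)
SELECT RDB$FORMAT
FROM RDB$RELATIONS
WHERE RDB$RELATION_NAME = 'MYTABLE';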

PostgreSQL: autoincrement for varchar type field

I'm switching from MongoDB to PostgreSQL and was wondering how I can implement the same concept as used in MongoDB for uniquely identifying each row by a MongoId.
After the migration, the already existing unique fields in our database are stored as a character type. I am looking for minimal source code changes.
So, is there any way in PostgreSQL to generate an auto-incrementing unique id for each row inserted into a table?
The closest thing to MongoDB's ObjectId in PostgreSQL is the uuid type. Note that ObjectId has only 12 bytes, while UUIDs have 128 bits (16 bytes).
You can convert your existing IDs by appending (or prepending) e.g. '00000000' to them.
alter table some_table
alter id_column
type uuid
using (id_column || '00000000')::uuid;
It would be best if you could do this while migrating the schema + data. If you can't do it during the migration, you need to update your IDs (while they are still varchars: this way the referenced columns will propagate the change), drop the foreign keys, do the ALTER ... TYPE and then re-apply the foreign keys.
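A rough sketch of that sequence, assuming a hypothetical child table that references some_table through a column parent_id and a constraint child_parent_fk (all of these names are illustrative):
-- pad the varchar IDs first; with ON UPDATE CASCADE the referencing columns
-- follow automatically, otherwise pad the child side as well
update some_table set id_column = id_column || '00000000';
-- drop the foreign key, change both sides to uuid, then re-create the key
alter table child drop constraint child_parent_fk;
alter table some_table alter id_column type uuid using id_column::uuid;
alter table child alter parent_id type uuid using parent_id::uuid;
alter table child add constraint child_parent_fk
  foreign key (parent_id) references some_table (id_column);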
You can generate various UUIDs (for default values of the column) with the uuid-ossp module.
create extension "uuid-ossp";
alter table some_table
alter id_column
set default uuid_generate_v4();
Use a sequence as a default for the column:
create sequence some_id_sequence
start with 100000
owned by some_table.id_column;
The start with value should be bigger than your current maximum number.
Then use that sequence as a default for your column:
alter table some_table
alter id_column set default nextval('some_id_sequence')::text;
The better solution would be to change the column to an integer column. Storing numbers in a text (or varchar) column is a really bad idea.
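If you do switch to a numeric column, a minimal sketch of the conversion (assuming every existing id is a numeric string, and dropping the text-typed default first so it does not block the type change):
alter table some_table alter id_column drop default;
alter table some_table
  alter id_column type bigint using id_column::bigint;
alter table some_table
  alter id_column set default nextval('some_id_sequence');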

ADD COLUMN with DEFAULT value to a huge table

I have a PostgreSQL DB and a table with almost a billion rows.
When I try to add a new column with a default value:
ALTER TABLE big_table
ADD COLUMN some_flag integer NOT NULL DEFAULT 0;
The transaction goes on for 30+ minutes ... and the DB log starts to shoot warnings.
Is there any way to optimize the query?
Besides doing it in batches (which will still take a while):
You could dump the table as COPY statements and write a script to edit the contents of the COPY statements to insert another column (COPY can be CSV IIRC).
Then you just reload your altered COPY dump and it should in theory be faster than the ALTER because COPY will not log transactions.
The other option is to turn off fsync while you run the command... just remember to turn it back on.
You can also do both of the above in batches.
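A rough sketch of that dump-and-reload flow, assuming the edited CSV (with the new column's value appended to every row) is saved under a hypothetical path:
-- export the existing rows as CSV (use \copy in psql for a client-side file)
COPY big_table TO '/tmp/big_table.csv' WITH (FORMAT csv);
-- edit the file to append the flag value, recreate big_table with the new
-- column, then reload the edited dump
COPY big_table FROM '/tmp/big_table_with_flag.csv' WITH (FORMAT csv);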
Starting from PostgreSQL 11, this behaviour will change.
Waiting for PostgreSQL 11 – Fast ALTER TABLE ADD COLUMN with a non-NULL default:
So, for the longest time, when you did:
alter table x add column z text;
it was virtually instantaneous. Get a lock on table, add information about new column to system catalogs, and it's done.
But when you tried:
alter table x add column z text default 'some value';
then it took a long time. How long depended on the size of the table.
This was because PostgreSQL was actually rewriting the whole table, adding the column to each row and filling it with the default value.
"What happens if you want to set the column to NOT NULL also? Are we back to the slow version in that case or does this handle that as well?"
NOT NULL doesn't change anything; it is a constraint for new rows. So adding a column with "not null default 'xxx'" will be fast.
I'd consider creating the column without the default and manually updating the rows in batches with intermittent commits to apply the default.
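A minimal sketch of that batched approach, assuming a primary key column named id (illustrative only):
-- adding a nullable column without a default is immediate
ALTER TABLE big_table ADD COLUMN some_flag integer;
-- backfill in batches, committing between batches
UPDATE big_table SET some_flag = 0 WHERE id BETWEEN 1 AND 1000000;
UPDATE big_table SET some_flag = 0 WHERE id BETWEEN 1000001 AND 2000000;
-- ... repeat until every row is filled ...
-- finally attach the default (and NOT NULL, if desired) for future rows
ALTER TABLE big_table ALTER COLUMN some_flag SET DEFAULT 0;
ALTER TABLE big_table ALTER COLUMN some_flag SET NOT NULL;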

Increasing the size of character varying type in postgres without data loss

I need to increase the size of a character varying(60) field in a postgres database table without data loss.
I have this command
alter table client_details alter column name set character varying(200);
will this command increase the field size from 60 to 200 without data loss?
The correct query to change the data type limit of the particular column:
ALTER TABLE client_details ALTER COLUMN name TYPE character varying(200);
Referring to this documentation, there would be no data loss; ALTER COLUMN only casts old data to the new type, so a cast between character types should be fine. But I don't think your syntax is correct; see the documentation I mentioned earlier. I think you should be using this syntax:
ALTER [ COLUMN ] column TYPE type [ USING expression ]
And as a note, wouldn't it be easier to just create a table, populate it and test :)
Yes. But it will rewrite this table and lock it exclusively for the duration of the rewrite; any query trying to access this table will wait until the rewrite finishes.
Consider changing the type to text and using a check constraint to limit the size; changing a constraint would not rewrite or lock the table.
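A minimal sketch of that text-plus-constraint approach (the constraint name is illustrative):
ALTER TABLE client_details ALTER COLUMN name TYPE text;
ALTER TABLE client_details
  ADD CONSTRAINT name_max_length CHECK (length(name) <= 200);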
You can use the SQL command below:
ALTER TABLE client_details
ALTER COLUMN name TYPE varchar(200)
From the PostgreSQL 9.2 Release Notes, E.15.3.4.2:
Increasing the length limit for a varchar or varbit column, or removing the limit altogether, no longer requires a table rewrite.
Changing the column size in PostgreSQL 9.1
When changing a varchar column to a larger size, a table rewrite is required; during the rewrite a lock is held on the table, and users cannot access the table until the rewrite is done.
Table name: userdata
Column name: acc_no
ALTER TABLE userdata ALTER COLUMN acc_no TYPE varchar(250);