I did some research but can't find the exact answer I'm looking for. Currently I have a primary key column 'id' which is set to serial, but I want to change it to bigserial to map to Long in the Java layer. What is the best way to achieve this, considering this is an existing table? I think my Postgres version is 10.5. Also, I am aware that serial and bigserial are not actual data types.
In Postgres 9.6 or earlier the sequence created by a serial column already returns bigint. You can check this using psql:
drop table if exists my_table;
create table my_table(id serial primary key, str text);
\d my_table
Table "public.my_table"
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+--------------------------------------
id | integer | | not null | nextval('my_table_id_seq'::regclass)
str | text | | |
Indexes:
"my_table_pkey" PRIMARY KEY, btree (id)
\d my_table_id_seq
Sequence "public.my_table_id_seq"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
Owned by: public.my_table.id
So you only need to alter the type of the serial column:
alter table my_table alter id type bigint;
The behavior has changed in Postgres 10:
Also, sequences created for SERIAL columns now generate positive 32-bit wide values, whereas previous versions generated 64-bit wide values. This has no visible effect if the values are only stored in a column.
Hence in Postgres 10+:
alter sequence my_table_id_seq as bigint;
alter table my_table alter id type bigint;
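On a table that's already in use you may want to run both statements in a single transaction. Note that changing the column from integer to bigint rewrites the table and holds an exclusive lock for the duration, so plan for a maintenance window on large tables. A minimal sketch, using the example table from above:
begin;
alter sequence my_table_id_seq as bigint;
alter table my_table alter id type bigint;
commit;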
An alternative is to replace the column with a new bigserial column:
-- backup table first
CREATE TABLE tablenamebackup as select * from tablename ;
--add new column idx
alter table tablename add column idx bigserial not null;
-- copy id to idx
update tablename set idx = id ;
-- drop id column
alter table tablename drop column id ;
-- rename idx to id
alter table tablename rename column idx to id ;
-- Reset Sequence to max + 1
SELECT setval(pg_get_serial_sequence('tablename', 'id'), coalesce(max(id)+1, 1), false) FROM tablename ;
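One caveat with this approach: dropping the old id column also drops the primary key constraint that was defined on it, so re-create the key afterwards (same placeholder table name as above):
-- restore the primary key on the rebuilt column
alter table tablename add primary key (id);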
Is there a way to change existing primary key type from int to serial without dropping the table? I already have a lot of data in the table and I don't want to delete it.
Converting an int to a serial more or less only means adding a sequence default to the column. To make it a serial:
Pick a starting value for the serial, greater than any existing value in the table
SELECT MAX(id)+1 FROM mytable
Create a sequence for the serial (tablename_columnname_seq is a good name)
CREATE SEQUENCE test_id_seq MINVALUE 3; -- assuming you want to start at 3
Alter the default of the column to use the sequence
ALTER TABLE test ALTER id SET DEFAULT nextval('test_id_seq')
Alter the sequence to be owned by the table/column
ALTER SEQUENCE test_id_seq OWNED BY test.id
And as always, make a habit of running a full backup before running schema-altering SQL queries from random people on the Internet ;-)
-- temp schema for testing
-- ----------------------------
DROP SCHEMA IF EXISTS tmp CASCADE;
CREATE SCHEMA tmp ;
SET search_path=tmp;
CREATE TABLE bagger
( id INTEGER NOT NULL PRIMARY KEY
, tralala varchar
);
INSERT INTO bagger(id,tralala)
SELECT gs, 'zzz_' || gs::text
FROM generate_series(1,100) gs
;
DELETE FROM bagger WHERE random() <0.9;
-- SELECT * FROM bagger;
-- CREATE A sequence and tie it to bagger.id
-- -------------------------------------------
CREATE SEQUENCE bagger_id_seq;
ALTER TABLE bagger
ALTER COLUMN id SET NOT NULL
, ALTER COLUMN id SET DEFAULT nextval('bagger_id_seq')
;
ALTER SEQUENCE bagger_id_seq
OWNED BY bagger.id
;
SELECT setval('bagger_id_seq', MAX(ba.id))
FROM bagger ba
;
-- Check the result
-- ------------------
SELECT * FROM bagger;
\d bagger
\d bagger_id_seq
The problem: in PostgreSQL, if table temp_person_two inherits from temp_person, default column values on the child table are ignored if the parent table is altered.
How to replicate:
First, create a table and a child table. The child table should have one column that has a default value.
CREATE TEMPORARY TABLE temp_person (
person_id SERIAL,
name VARCHAR
);
CREATE TEMPORARY TABLE temp_person_two (
has_default character varying(4) DEFAULT 'en'::character varying NOT NULL
) INHERITS (temp_person);
Next, create a trigger on the parent table that copies its data to the child table (I know this appears like bad design, but this is a minimal test case to show the problem).
CREATE FUNCTION temp_person_insert() RETURNS trigger
LANGUAGE plpgsql
AS '
BEGIN
INSERT INTO temp_person_two VALUES ( NEW.* );
RETURN NULL;
END;
';
CREATE TRIGGER temp_person_insert_trigger
BEFORE INSERT ON temp_person
FOR EACH ROW
EXECUTE PROCEDURE temp_person_insert();
Then insert data into parent and select data from child. The data should be correct.
INSERT INTO temp_person (name) VALUES ('ovid');
SELECT * FROM temp_person_two;
person_id | name | has_default
-----------+------+-------------
1 | ovid | en
(1 row)
Finally, alter the parent table by adding a new, unrelated column. Then attempt to insert data and watch a not-null constraint violation occur:
ALTER TABLE temp_person ADD column foo text;
INSERT INTO temp_person(name) VALUES ('Corinna');
ERROR: null value in column "has_default" violates not-null constraint
CONTEXT: SQL statement "INSERT INTO temp_person_two VALUES ( $1 .* )"
PL/pgSQL function "temp_person_insert" line 2 at SQL statement
My version:
testing=# select version();
version
-------------------------------------------------------------------------------------------------------
PostgreSQL 8.4.17 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit
(1 row)
It's there all the way to 9.3, but it's going to be tricky to fix, and I'm not sure if it's just undesirable behaviour rather than a bug.
The constraint is still there, but look at the column-order.
Table "pg_temp_2.temp_person"
Column | Type | Modifiers
-----------+-------------------+-----------------------------------------------------------------
person_id | integer | not null default nextval('temp_person_person_id_seq'::regclass)
name | character varying |
Number of child tables: 1 (Use \d+ to list them.)
Table "pg_temp_2.temp_person_two"
Column | Type | Modifiers
-------------+----------------------+-----------------------------------------------------------------
person_id | integer | not null default nextval('temp_person_person_id_seq'::regclass)
name | character varying |
has_default | character varying(4) | not null default 'en'::character varying
Inherits: temp_person
And after running the ALTER TABLE ... ADD COLUMN foo from above:
Table "pg_temp_2.temp_person_two"
Column | Type | Modifiers
-------------+----------------------+-----------------------------------------------------------------
person_id | integer | not null default nextval('temp_person_person_id_seq'::regclass)
name | character varying |
has_default | character varying(4) | not null default 'en'::character varying
foo | text |
Inherits: temp_person
It works in your first example because you are effectively doing:
INSERT INTO temp_person_two (person_id,name)
VALUES (person_id, name)
BUT look where your new column is added in the child table - at the end! So you end up with
INSERT INTO temp_person_two (person_id,name,has_default)
VALUES (person_id, name, foo)
rather than what you hoped for:
INSERT INTO temp_person_two (person_id,name,foo)...
So - what's the correct behaviour here? If PostgreSQL shuffled the columns in the child table that could break code. If it doesn't, that can also break code. As it happens, I don't think the first option is do-able without substantial PG code changes, so it's unlikely to do that in the medium term.
Moral of the story: explicitly list your INSERT column-names.
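Applied to the trigger function above (after foo has been added to the parent), the fix is a sketch like this, naming every copied column and leaving has_default to its default:
CREATE OR REPLACE FUNCTION temp_person_insert() RETURNS trigger
LANGUAGE plpgsql
AS '
BEGIN
  -- name the columns instead of relying on positional expansion of NEW.*
  INSERT INTO temp_person_two (person_id, name, foo)
  VALUES ( NEW.person_id, NEW.name, NEW.foo );
  RETURN NULL;
END;
';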
Could take a while by hand. You know any languages with regexes? ;-)
It's not a bug. NEW.* expands to the values of each column in the new row, so you're doing INSERT INTO temp_person_two VALUES ( NEW.person_id, NEW.name, NEW.foo ), the last of which is indeed NULL if you didn't specify it (and wrong if you did).
I'm surprised it even works before you added the new column, since the number of values doesn't match the number of fields in the child table. Presumably it assumes the default for missing trailing values.
Is there a way I can remove a constraint based on column names?
I have Postgres 8.4, and when I upgrade my project the upgrade fails because the constraint is named differently in a different version.
Basically, I need to remove a constraint if it exists or I can just remove the constraint using the column names.
The name of the constraint is the only thing that has changed. Any idea if that's possible?
In this case, I need to remove "patron_username_key"
discovery=# \d patron
Table "public.patron"
Column | Type | Modifiers
--------------------------+-----------------------------+-----------
patron_id | integer | not null
create_date | timestamp without time zone | not null
row_version | integer | not null
display_name | character varying(255) | not null
username | character varying(255) | not null
authentication_server_id | integer |
Indexes:
"patron_pkey" PRIMARY KEY, btree (patron_id)
"patron_username_key" UNIQUE, btree (username, authentication_server_id)
You can use the system catalogs to find information about constraints. Some constraints, like keys, appear in the separate pg_constraint table, while others, like NOT NULL, are essentially column properties in the pg_attribute table.
For the keys, you can use this query to get a list of constraint definitions:
SELECT pg_get_constraintdef(c.oid) AS def
FROM pg_class t
JOIN pg_constraint c ON c.conrelid=t.oid
WHERE t.relkind='r' AND t.relname = 'table';
You can then filter out the ones that reference your column and dynamically construct ALTER TABLE ... DROP CONSTRAINT ... statements.
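For example, here is a sketch that narrows the list down to constraints whose definition mentions a given column (a simple textual filter; substitute your own table and column names):
SELECT c.conname, pg_get_constraintdef(c.oid) AS def
FROM pg_class t
JOIN pg_constraint c ON c.conrelid = t.oid
WHERE t.relkind = 'r'
  AND t.relname = 'patron'
  AND pg_get_constraintdef(c.oid) LIKE '%username%';
The conname value is what goes into the ALTER TABLE ... DROP CONSTRAINT statement.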
Assuming that unique index is the result of adding a unique constraint, you can use the following SQL statement to remove that constraint:
do $$
declare
cons_name text;
begin
select constraint_name
into cons_name
from information_schema.constraint_column_usage
where constraint_schema = current_schema()
and column_name in ('authentication_server_id', 'username')
and table_name = 'patron'
group by constraint_name
having count(*) = 2;
execute 'alter table patron drop constraint '||cons_name;
end;
$$;
I'm not sure if this will work if you have "only" added a unique index (instead of a unique constraint).
If you need to do that for more than 2 columns you also need to adjust the having count(*) = 2 part to match the number of columns in the column_name in .. condition.
(As you did not specify your PostgreSQL version I'm assuming the current version)
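As an aside: on PostgreSQL 9.0 and later (so not on the asker's 8.4), the "remove it if it exists" part can be done directly when you know the constraint name:
alter table patron drop constraint if exists patron_username_key;
The DO block above is still needed when only the column names are known.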
I'm switching from MySQL to PostgreSQL and I was wondering how I can have an INT column with AUTO_INCREMENT. I saw a datatype called SERIAL in the PostgreSQL docs, but I get syntax errors when using it.
Yes, SERIAL is the equivalent feature.
CREATE TABLE foo (
id SERIAL,
bar varchar
);
INSERT INTO foo (bar) VALUES ('blah');
INSERT INTO foo (bar) VALUES ('blah');
SELECT * FROM foo;
 id | bar
----+------
  1 | blah
  2 | blah
(2 rows)
SERIAL is just a CREATE TABLE-time macro around sequences. You cannot alter SERIAL onto an existing column.
You can use any other integer data type, such as smallint.
Example:
CREATE SEQUENCE user_id_seq;
-- note: user is a reserved word in PostgreSQL, so the table name has to be quoted
CREATE TABLE "user" (
    user_id smallint NOT NULL DEFAULT nextval('user_id_seq')
);
ALTER SEQUENCE user_id_seq OWNED BY "user".user_id;
Better to pick your own data type rather than using the serial data type.
If you want to add a sequence to an id column in a table that already exists, you can use:
CREATE SEQUENCE user_id_seq;
ALTER TABLE "user" ALTER user_id SET DEFAULT NEXTVAL('user_id_seq');
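As in the first example, you can also tie the sequence to the column so that dropping the column (or table) drops the sequence as well:
ALTER SEQUENCE user_id_seq OWNED BY "user".user_id;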
Starting with Postgres 10, identity columns as defined by the SQL standard are also supported:
create table foo
(
id integer generated always as identity
);
creates an identity column that can't be overridden unless explicitly asked for. The following insert will fail with a column defined as generated always:
insert into foo (id)
values (1);
This can however be overruled:
insert into foo (id) overriding system value
values (1);
When using the option generated by default this is essentially the same behaviour as the existing serial implementation:
create table foo
(
id integer generated by default as identity
);
When a value is supplied manually, the underlying sequence needs to be adjusted manually as well - the same as with a serial column.
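The adjustment can reuse the setval() pattern shown earlier in this thread; a sketch for the table above (pg_get_serial_sequence works for identity columns as well, despite its name):
select setval(pg_get_serial_sequence('foo', 'id'), coalesce(max(id) + 1, 1), false)
from foo;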
An identity column is not a primary key by default (just like a serial column). If it should be one, a primary key constraint needs to be defined manually.
Whilst it looks like sequences are the equivalent of MySQL's auto_increment, there are some subtle but important differences:
1. Failed Queries Increment The Sequence/Serial
The serial column gets incremented on failed queries. This leads to fragmentation from failed queries, not just row deletions. For example, run the following queries on your PostgreSQL database:
CREATE TABLE table1 (
uid serial NOT NULL PRIMARY KEY,
col_b integer NOT NULL,
CHECK (col_b>=0)
);
INSERT INTO table1 (col_b) VALUES(1);
INSERT INTO table1 (col_b) VALUES(-1);
INSERT INTO table1 (col_b) VALUES(2);
SELECT * FROM table1;
You should get the following output:
uid | col_b
-----+-------
1 | 1
3 | 2
(2 rows)
Notice how uid goes from 1 to 3 instead of 1 to 2.
This still occurs if you were to manually create your own sequence with:
CREATE SEQUENCE table1_seq;
CREATE TABLE table1 (
col_a smallint NOT NULL DEFAULT nextval('table1_seq'),
col_b integer NOT NULL,
CHECK (col_b>=0)
);
ALTER SEQUENCE table1_seq OWNED BY table1.col_a;
If you wish to test how MySQL is different, run the following on a MySQL database:
CREATE TABLE table1 (
uid int unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
col_b int unsigned NOT NULL
);
INSERT INTO table1 (col_b) VALUES(1);
INSERT INTO table1 (col_b) VALUES(-1);
INSERT INTO table1 (col_b) VALUES(2);
You should get the following, with no fragmentation:
+-----+-------+
| uid | col_b |
+-----+-------+
| 1 | 1 |
| 2 | 2 |
+-----+-------+
2 rows in set (0.00 sec)
2. Manually Setting the Serial Column Value Can Cause Future Queries to Fail.
This was pointed out by @trev in a previous answer.
To simulate this, manually set the uid to 5, which will "clash" later.
INSERT INTO table1 (uid, col_b) VALUES(5, 5);
Table data:
uid | col_b
-----+-------
1 | 1
3 | 2
5 | 5
(3 rows)
Run another insert:
INSERT INTO table1 (col_b) VALUES(6);
Table data:
uid | col_b
-----+-------
1 | 1
3 | 2
5 | 5
4 | 6
(4 rows)
Now if you run another insert:
INSERT INTO table1 (col_b) VALUES(7);
It will fail with the following error message:
ERROR: duplicate key value violates unique constraint "table1_pkey"
DETAIL: Key (uid)=(5) already exists.
In contrast, MySQL will handle this gracefully as shown below:
INSERT INTO table1 (uid, col_b) VALUES(4, 4);
Now insert another row without setting uid
INSERT INTO table1 (col_b) VALUES(3);
The query doesn't fail, uid just jumps to 5:
+-----+-------+
| uid | col_b |
+-----+-------+
| 1 | 1 |
| 2 | 2 |
| 4 | 4 |
| 5 | 3 |
+-----+-------+
Testing was performed on MySQL 5.6.33, for Linux (x86_64) and PostgreSQL 9.4.9
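If you do hit the duplicate key error from point 2, the usual remedy is to resync the sequence with the current maximum, as sketched below for the table above:
SELECT setval(pg_get_serial_sequence('table1', 'uid'), (SELECT max(uid) FROM table1));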
Sorry to rehash an old question, but this was the first Stack Overflow question/answer that popped up on Google.
This post (which came up first on Google) talks about using the more updated syntax for PostgreSQL 10:
https://blog.2ndquadrant.com/postgresql-10-identity-columns/
which happens to be:
CREATE TABLE test_new (
    id int GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY
);
Hope that helps :)
You have to be careful not to insert directly into your SERIAL or sequence field, otherwise your write will fail when the sequence reaches the inserted value:
-- Table: "test"
-- DROP TABLE test;
CREATE TABLE test
(
  "ID" SERIAL,
  "Rank" integer NOT NULL,
  "GermanHeadword" text[] NOT NULL,
  "PartOfSpeech" text NOT NULL,
  "ExampleSentence" text NOT NULL,
  "EnglishGloss" text[] NOT NULL,
  CONSTRAINT "PKey" PRIMARY KEY ("ID", "Rank")
)
WITH (
  OIDS=FALSE
);
-- ALTER TABLE test OWNER TO postgres;
INSERT INTO test("Rank", "GermanHeadword", "PartOfSpeech", "ExampleSentence", "EnglishGloss")
VALUES (1, '{"der", "die", "das", "den", "dem", "des"}', 'art', 'Der Mann küsst die Frau und das Kind schaut zu', '{"the", "of the" }');
INSERT INTO test("ID", "Rank", "GermanHeadword", "PartOfSpeech", "ExampleSentence", "EnglishGloss")
VALUES (2, 1, '{"der", "die", "das"}', 'pron', 'Das ist mein Fahrrad', '{"that", "those"}');
INSERT INTO test("Rank", "GermanHeadword", "PartOfSpeech", "ExampleSentence", "EnglishGloss")
VALUES (1, '{"der", "die", "das"}', 'pron', 'Die Frau, die nebenen wohnt, heißt Renate', '{"that", "who"}');
SELECT * from test;
In the context of the asked question, and in reply to the comment by @sereja1c: creating SERIAL implicitly creates sequences, so for the above example:
CREATE TABLE foo (id SERIAL,bar varchar);
CREATE TABLE would implicitly create the sequence foo_id_seq for the serial column foo.id. Hence, SERIAL [4 bytes] is good for its ease of use unless you need a specific datatype for your id.
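You can confirm which sequence backs the column with a quick check:
SELECT pg_get_serial_sequence('foo', 'id');
-- returns public.foo_id_seq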
Since PostgreSQL 10:
CREATE TABLE test_new (
id int GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
payload text
);
This way will work for sure; I hope it helps:
CREATE TABLE fruits(
id SERIAL PRIMARY KEY,
name VARCHAR NOT NULL
);
INSERT INTO fruits(id,name) VALUES(DEFAULT,'apple');
or
INSERT INTO fruits VALUES(DEFAULT,'apple');
You can check the details at the following link:
http://www.postgresqltutorial.com/postgresql-serial/
Create a sequence:
CREATE SEQUENCE user_role_id_seq
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 3
CACHE 1;
ALTER SEQUENCE user_role_id_seq
  OWNER TO postgres;
and alter the table:
ALTER TABLE user_roles ALTER COLUMN user_role_id SET DEFAULT nextval('user_role_id_seq'::regclass);
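As in the earlier answers, you can also make the sequence owned by the column so their lifecycles are tied together:
ALTER SEQUENCE user_role_id_seq OWNED BY user_roles.user_role_id;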