I did some research but can't find the exact answer I'm looking for. Currently I have a primary key column 'id' which is set to serial, but I want to change it to bigserial to map to Long in the Java layer. What is the best way to achieve this, considering this is an existing table? I think my Postgres version is 10.5. I am also aware that serial and bigserial are not actual data types.
In Postgres 9.6 or earlier the sequence created by a serial column already returns bigint. You can check this using psql:
drop table if exists my_table;
create table my_table(id serial primary key, str text);
\d my_table
Table "public.my_table"
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+--------------------------------------
id | integer | | not null | nextval('my_table_id_seq'::regclass)
str | text | | |
Indexes:
"my_table_pkey" PRIMARY KEY, btree (id)
\d my_table_id_seq
Sequence "public.my_table_id_seq"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
Owned by: public.my_table.id
So you only need to alter the type of the serial column:
alter table my_table alter id type bigint;
The behavior has changed in Postgres 10:
Also, sequences created for SERIAL columns now generate positive 32-bit wide values, whereas previous versions generated 64-bit wide values. This has no visible effect if the values are only stored in a column.
Hence in Postgres 10+:
alter sequence my_table_id_seq as bigint;
alter table my_table alter id type bigint;
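You can verify the change with the same psql commands as above; the column should now show as bigint and the sequence type should be bigint as well:
\d my_table
\d my_table_id_seq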
-- backup table first
CREATE TABLE tablenamebackup as select * from tablename ;
--add new column idx
alter table tablename add column idx bigserial not null;
-- copy id to idx
update tablename set idx = id ;
-- drop id column
alter table tablename drop column id ;
-- rename idx to id
alter table tablename rename column idx to id ;
-- Reset Sequence to max + 1
SELECT setval(pg_get_serial_sequence('tablename', 'id'), coalesce(max(id)+1, 1), false) FROM tablename ;
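Note that dropping the original id column also drops the primary key constraint that was defined on it, so you will likely want to re-add it after the rename - a minimal sketch using the same tablename/id names:
-- re-add the primary key lost together with the old id column
alter table tablename add primary key (id);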
I am working on an ETL where we get data from Hive and dump it to Postgres. Just to ensure the data is not corrupt, I first store the data in a temp table (created like the main table, with all the indexes and constraints) and, if the data is validated, copy it to the main table.
But it has been taking too long as the data is huge.
Once the data is validated I am now thinking of dropping the main table and then renaming the temp table to the main table.
Will renaming a table in Postgres drop the indexes, constraints and defaults defined on it?
In a word - no, it will not drop any indexes, constraints or defaults. Here's a quick demo:
db=> CREATE TABLE mytab (
id INT PRIMARY KEY,
col_uniq INT UNIQUE,
col_not_null INT NOT NULL DEFAULT 123
);
CREATE TABLE
db=> \d mytab
Table "public.mytab"
Column | Type | Modifiers
--------------+---------+----------------------
id | integer | not null
col_uniq | integer |
col_not_null | integer | not null default 123
Indexes:
"mytab_pkey" PRIMARY KEY, btree (id)
"mytab_col_uniq_key" UNIQUE CONSTRAINT, btree (col_uniq)
db=> ALTER TABLE mytab RENAME TO mytab_renamed;
ALTER TABLE
db=> \d mytab_renamed
Table "public.mytab_renamed"
Column | Type | Modifiers
--------------+---------+----------------------
id | integer | not null
col_uniq | integer |
col_not_null | integer | not null default 123
Indexes:
"mytab_pkey" PRIMARY KEY, btree (id)
"mytab_col_uniq_key" UNIQUE CONSTRAINT, btree (col_uniq)
I'm going through 7 Databases in 7 Weeks.
In PostgreSQL, I created a venues table that has a SERIAL venue_id column.
Output of \d venues:
Table "public.venues"
Column | Type | Modifiers
----------------+------------------------+-----------------------------------------------------------
venue_id | integer | not null default nextval('venues_venue_id_seq'::regclass)
name | character varying(255) |
street_address | text |
type | character(7) | default 'public'::bpchar
postal_code | character varying(9) |
country_code | character(2) |
Indexes:
"venues_pkey" PRIMARY KEY, btree (venue_id)
Check constraints:
"venues_type_check" CHECK (type = ANY (ARRAY['public'::bpchar, 'private'::bpchar]))
Foreign-key constraints:
"venues_country_code_fkey" FOREIGN KEY (country_code, postal_code) REFERENCES cities(country_code, postal_code) MATCH FULL
The next step is to create an events table that references venue_id with a foreign key.
I'm trying this:
CREATE TABLE events (
event_id SERIAL PRIMARY KEY,
title text,
starts timestamp,
ends timestamp,
FOREIGN KEY (venue_id) REFERENCES venues (venue_id));
And I get this error:
ERROR: column "venue_id" referenced in forgein key not found
What's wrong?
You need to declare the foreign key column too. See here: http://www.postgresql.org/docs/current/static/ddl-constraints.html#DDL-CONSTRAINTS-FK
source & credit from #mu is too short
I'm going through the second edition of this book, so things might have changed slightly.
To create the table, you explicitly have to declare the venues_id as a column in your table, just like the rest of your columns:
CREATE TABLE events (
event_id SERIAL PRIMARY KEY,
title text,
starts timestamp,
ends timestamp,
venue_id integer, -- this is the line you're missing!
FOREIGN KEY (venue_id)
REFERENCES venues (venue_id) MATCH FULL
);
Once you have executed that, the table is created:
7dbs=# \dt
List of relations
Schema | Name | Type | Owner
--------+-----------+-------+----------
public | cities | table | postgres
public | countries | table | postgres
public | events | table | postgres
public | venues | table | postgres
How can I see all existing indexes for a table? For example, given a table mytable, how can I see all of its indexes together with the columns they cover?
Try this SQL
SELECT * FROM pg_indexes WHERE tablename = 'mytable';
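If you only need the index names and their definitions (and the table is in the public schema), you can narrow it down to the relevant columns of pg_indexes:
SELECT indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'public'
  AND tablename = 'mytable';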
In psql use the \d command:
postgres=> create table foo (id integer not null primary key, some_data varchar(20));
CREATE TABLE
postgres=> create index foo_data_idx on foo (some_data);
CREATE INDEX
postgres=> \d+ foo
Table "public.foo"
Column | Type | Modifiers | Storage | Stats target | Description
-----------+-----------------------+-----------+----------+--------------+------------
id | integer | not null | plain | |
some_data | character varying(20) | | extended | |
Indexes:
"foo_pkey" PRIMARY KEY, btree (id)
"foo_data_idx" btree (some_data)
Has OIDs: no
postgres=>
Other SQL tools have other means of displaying this information.
I'm switching from MySQL to PostgreSQL and I was wondering how can I have an INT column with AUTO INCREMENT. I saw in the PostgreSQL docs a datatype called SERIAL, but I get syntax errors when using it.
Yes, SERIAL is the equivalent.
CREATE TABLE foo (
id SERIAL,
bar varchar
);
INSERT INTO foo (bar) VALUES ('blah');
INSERT INTO foo (bar) VALUES ('blah');
SELECT * FROM foo;
 id | bar
----+------
  1 | blah
  2 | blah
(2 rows)
SERIAL is just a CREATE TABLE-time macro around sequences. You cannot alter SERIAL onto an existing column.
You can use any other integer data type, such as smallint.
Example:
CREATE SEQUENCE user_id_seq;
CREATE TABLE users (
    user_id smallint NOT NULL DEFAULT nextval('user_id_seq')
);
ALTER SEQUENCE user_id_seq OWNED BY users.user_id;
It is better to use your own data type rather than the serial data type.
If you want to add a sequence to an id column in a table that already exists, you can use:
CREATE SEQUENCE user_id_seq;
ALTER TABLE users ALTER user_id SET DEFAULT NEXTVAL('user_id_seq');
Starting with Postgres 10, identity columns as defined by the SQL standard are also supported:
create table foo
(
id integer generated always as identity
);
This creates an identity column that can't be overridden unless explicitly asked for. The following insert will fail for a column defined as generated always:
insert into foo (id)
values (1);
This can, however, be overridden:
insert into foo (id) overriding system value
values (1);
When using the option generated by default this is essentially the same behaviour as the existing serial implementation:
create table foo
(
id integer generated by default as identity
);
When a value is supplied manually, the underlying sequence needs to be adjusted manually as well - the same as with a serial column.
An identity column is not a primary key by default (just like a serial column). If it should be one, a primary key constraint needs to be defined manually.
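For example, a minimal sketch of an identity column that is also the primary key:
create table foo
(
    id integer generated by default as identity primary key
);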
Whilst it looks like sequences are the equivalent of MySQL's auto_increment, there are some subtle but important differences:
1. Failed Queries Increment The Sequence/Serial
The underlying sequence gets incremented even when an insert fails. This leads to gaps in the numbering from failed queries, not just from row deletions. For example, run the following queries on your PostgreSQL database:
CREATE TABLE table1 (
uid serial NOT NULL PRIMARY KEY,
col_b integer NOT NULL,
CHECK (col_b>=0)
);
INSERT INTO table1 (col_b) VALUES(1);
INSERT INTO table1 (col_b) VALUES(-1);
INSERT INTO table1 (col_b) VALUES(2);
SELECT * FROM table1;
You should get the following output:
uid | col_b
-----+-------
1 | 1
3 | 2
(2 rows)
Notice how uid goes from 1 to 3 instead of 1 to 2.
This still occurs if you were to manually create your own sequence with:
CREATE SEQUENCE table1_seq;
CREATE TABLE table1 (
col_a smallint NOT NULL DEFAULT nextval('table1_seq'),
col_b integer NOT NULL,
CHECK (col_b>=0)
);
ALTER SEQUENCE table1_seq OWNED BY table1.col_a;
If you wish to test how MySQL is different, run the following on a MySQL database:
CREATE TABLE table1 (
uid int unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
col_b int unsigned NOT NULL
);
INSERT INTO table1 (col_b) VALUES(1);
INSERT INTO table1 (col_b) VALUES(-1);
INSERT INTO table1 (col_b) VALUES(2);
You should get the following, with no gap in uid:
+-----+-------+
| uid | col_b |
+-----+-------+
| 1 | 1 |
| 2 | 2 |
+-----+-------+
2 rows in set (0.00 sec)
2. Manually Setting the Serial Column Value Can Cause Future Queries to Fail.
This was pointed out by @trev in a previous answer.
To simulate this, manually set the uid to 5, which will "clash" later.
INSERT INTO table1 (uid, col_b) VALUES(5, 5);
Table data:
uid | col_b
-----+-------
1 | 1
3 | 2
5 | 5
(3 rows)
Run another insert:
INSERT INTO table1 (col_b) VALUES(6);
Table data:
uid | col_b
-----+-------
1 | 1
3 | 2
5 | 5
4 | 6
Now if you run another insert:
INSERT INTO table1 (col_b) VALUES(7);
It will fail with the following error message:
ERROR: duplicate key value violates unique constraint "table1_pkey"
DETAIL: Key (uid)=(5) already exists.
In contrast, MySQL will handle this gracefully as shown below:
INSERT INTO table1 (uid, col_b) VALUES(4, 4);
Now insert another row without setting uid
INSERT INTO table1 (col_b) VALUES(3);
The query doesn't fail, uid just jumps to 5:
+-----+-------+
| uid | col_b |
+-----+-------+
| 1 | 1 |
| 2 | 2 |
| 4 | 4 |
| 5 | 3 |
+-----+-------+
Testing was performed on MySQL 5.6.33, for Linux (x86_64) and PostgreSQL 9.4.9
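If you do hit the duplicate key error above in PostgreSQL, one way to recover is to re-sync the sequence with the current maximum of the column - a sketch, assuming the table1/uid names from the example:
SELECT setval(pg_get_serial_sequence('table1', 'uid'), (SELECT max(uid) FROM table1));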
Sorry to rehash an old question, but this was the first Stack Overflow question/answer that popped up on Google.
This post (which came up first on Google) talks about using the more updated syntax for PostgreSQL 10:
https://blog.2ndquadrant.com/postgresql-10-identity-columns/
which happens to be:
CREATE TABLE test_new (
  id int GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY
);
Hope that helps :)
You have to be careful not to insert directly into your SERIAL or sequence field, otherwise your write will fail when the sequence reaches the inserted value:
-- Table: "test"
-- DROP TABLE test;
CREATE TABLE test
(
"ID" SERIAL,
"Rank" integer NOT NULL,
"GermanHeadword" "text" [] NOT NULL,
"PartOfSpeech" "text" NOT NULL,
"ExampleSentence" "text" NOT NULL,
"EnglishGloss" "text"[] NOT NULL,
CONSTRAINT "PKey" PRIMARY KEY ("ID", "Rank")
)
WITH (
OIDS=FALSE
);
-- ALTER TABLE test OWNER TO postgres;
INSERT INTO test("Rank", "GermanHeadword", "PartOfSpeech", "ExampleSentence", "EnglishGloss")
VALUES (1, '{"der", "die", "das", "den", "dem", "des"}', 'art', 'Der Mann küsst die Frau und das Kind schaut zu', '{"the", "of the" }');
INSERT INTO test("ID", "Rank", "GermanHeadword", "PartOfSpeech", "ExampleSentence", "EnglishGloss")
VALUES (2, 1, '{"der", "die", "das"}', 'pron', 'Das ist mein Fahrrad', '{"that", "those"}');
INSERT INTO test("Rank", "GermanHeadword", "PartOfSpeech", "ExampleSentence", "EnglishGloss")
VALUES (1, '{"der", "die", "das"}', 'pron', 'Die Frau, die nebenen wohnt, heißt Renate', '{"that", "who"}');
SELECT * from test;
In the context of the asked question, and in reply to the comment by @sereja1c, creating a SERIAL column implicitly creates a sequence, so for the above example:
CREATE TABLE foo (id SERIAL,bar varchar);
CREATE TABLE would implicitly create the sequence foo_id_seq for the serial column foo.id. Hence, SERIAL (4 bytes) is good for its ease of use unless you need a specific data type for your id.
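You can look up the implicitly created sequence if needed, for example:
SELECT pg_get_serial_sequence('foo', 'id');
-- typically returns public.foo_id_seq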
Since PostgreSQL 10:
CREATE TABLE test_new (
id int GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
payload text
);
This approach will work for sure; I hope it helps:
CREATE TABLE fruits(
id SERIAL PRIMARY KEY,
name VARCHAR NOT NULL
);
INSERT INTO fruits(id,name) VALUES(DEFAULT,'apple');
or
INSERT INTO fruits VALUES(DEFAULT,'apple');
You can check the details at the following link:
http://www.postgresqltutorial.com/postgresql-serial/
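If you also need the generated id back in the same statement, INSERT ... RETURNING works with serial columns - a small sketch using the fruits table above:
INSERT INTO fruits(name) VALUES('banana') RETURNING id;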
Create the sequence:
CREATE SEQUENCE user_role_id_seq
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 3
CACHE 1;
ALTER SEQUENCE user_role_id_seq
    OWNER TO postgres;
and then alter the table:
ALTER TABLE user_roles ALTER COLUMN user_role_id SET DEFAULT nextval('user_role_id_seq'::regclass);