Determine next auto_increment value before an INSERT in Postgres - postgresql

I am in the process of switching from MariaDB to Postgres and have run into a small issue. There are times when I need to establish the next AUTO_INCREMENT value prior to making an INSERT. This is because the INSERT affects a few other tables that would be quite messy to repair after the INSERT itself. In MySQL/MariaDB this was easy. I simply did
"SELECT AUTO_INCREMENT
FROM information_schema.tables
WHERE table_name = 'users'
AND table_schema = DATABASE( ) ;";
and used the returned value to pre-correct the other tables prior to making the actual INSERT. I am aware that with pgSQL one can use RETURNING with INSERT, UPDATE and DELETE statements. However, this would require a post-INSERT correction to the other tables, which in turn would involve breaking code that has been tested and proven to work. I imagine that there is a way to find the next AUTO_INCREMENT, but I have been unable to find it. Amongst other things I tried nextval('users_id_seq'), which did not do anything useful.
To port my original MariaDB schema over to Postgres I edited the SQL emitted by Adminer for the MariaDB version to ensure it works with Postgres. This mostly involved changing INT(11) to INTEGER, TINYINT(3) to SMALLINT, VARCHAR to CHARACTER VARYING, etc. For the auto-increment columns I read up a bit and concluded that I needed to use SERIAL instead. So the typical SQL I fed to Postgres looked like this
CREATE TABLE "users"
(
"id" SERIAL NOT NULL,
"bid" INTEGER NOT NULL DEFAULT 0,
"gid" INTEGER NOT NULL DEFAULT 0,
"sid" INTEGER NOT NULL DEFAULT 0,
"s1" character varying(64)NOT NULL,
"s2" character varying(64)NOT NULL,
"name" character varying(64)NOT NULL,
"apik" character varying(128)NOT NULL,
"email" character varying(192)NOT NULL,
"gsm" character varying(64)NOT NULL,
"rights" character varying(64)NOT NULL,
"managed" character varying(256)NOT NULL DEFAULT
'M_BepHJXALYpLyOjHxVGWJnlAMqxv0KNENmcYA,,',
"senior" SMALLINT NOT NULL DEFAULT 0,
"refs" INTEGER NOT NULL DEFAULT 0,
"verified" SMALLINT NOT NULL DEFAULT 0,
"vkey" character varying(64)NOT NULL,
"lang" SMALLINT NOT NULL DEFAULT 0,
"leader" INTEGER NOT NULL
);
This SQL run from Adminer works correctly. However, when I then try to get Adminer to export the new users table in Postgres it gives me
CREATE TABLE "public"."users"
(
"id" integer DEFAULT nextval('users_id_seq') NOT NULL,
"bid" integer DEFAULT 0 NOT NULL,
It is perhaps possible that I have gone about things incorrectly when porting over the AUTO_INCREMENT columns - in which case there is still time to correct the error.

If you used serial in the column definition then you have a sequence named TABLE_COLUMN_seq in the same namespace as the table (where TABLE and COLUMN are, respectively, the names of the table and the column). You can just do:
SELECT nextval('TABLE_COLUMN_seq');
I see you have tried that; can you show your CREATE TABLE statement so that we can check that all the names are OK?

As documented in the manual, serial is not a "real" data type; it's just a shortcut for a column that takes its default value from a sequence.
If you need the generated value in your code before inserting, use nextval(), then use the value you got in your INSERT statement:
In PL/pgSQL this would be something like the following. The exact syntax obviously depends on the programming language you use:
declare
    l_userid integer;
begin
    l_userid := nextval('users_id_seq');
    -- do something with that value
    insert into users (id, ...)
    values (l_userid, ...);
end;
It is important that you never pass a value to the insert statement that was not generated by the sequence. Postgres will not automagically sync the sequence values with "manually" provided values.
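If such a value ever does slip in, you can resynchronize the sequence yourself with setval(). A minimal sketch, assuming the users table and users_id_seq sequence from the question:
SELECT setval('users_id_seq', (SELECT max(id) FROM users));
After this, the next nextval('users_id_seq') call returns max(id) + 1.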

You can select last_value+1 from the sequence itself, e.g.:
t=# create table so109(i serial,n int);
CREATE TABLE
Time: 2.585 ms
t=# insert into so109(n) select i from generate_series(1,22,1) i;
INSERT 0 22
Time: 1.236 ms
t=# select * from so109_i_seq ;
sequence_name | last_value | start_value | increment_by | max_value | min_value | cache_value | log_cnt | is_cycled | is_called
---------------+------------+-------------+--------------+---------------------+-----------+-------------+---------+-----------+-----------
so109_i_seq | 22 | 1 | 1 | 9223372036854775807 | 1 | 1 | 11 | f | t
(1 row)
or use currval, eg:
t=# select currval('so109_i_seq')+1;
?column?
----------
23
(1 row)
UPDATE
While this answer gives an idea of how to determine the next auto_increment value before an INSERT in Postgres (which is the title), the proposed methods would not fit the needs of the post itself. If you are looking for a "replacement" for the RETURNING directive in an INSERT statement, the better way is actually "reserving" the value with nextval, just as @fog proposed, so that concurrent transactions cannot get the same value twice...
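For completeness, that "reservation" at the SQL level is just the following. A sketch, reusing the users table from the question:
SELECT nextval('users_id_seq') AS reserved_id;
-- pre-correct the other tables using reserved_id, then:
-- INSERT INTO users (id, ...) VALUES (reserved_id, ...);
Since nextval() never hands the same value to two sessions, concurrent transactions each get their own id.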

Related

Weird now() time difference with Postgres triggers

In a Postgres 10.10 database, I have a table table1, and an AFTER INSERT trigger on table1 for table2:
CREATE TABLE table1 (
id SERIAL PRIMARY KEY,
-- other cols
created_at timestamp with time zone NOT NULL,
updated_at timestamp with time zone NOT NULL
);
CREATE UNIQUE INDEX table1_pkey ON table1(id int4_ops);
CREATE TABLE table2 (
id SERIAL PRIMARY KEY,
table1_id integer NOT NULL REFERENCES table1(id) ON UPDATE CASCADE,
-- other cols (not used in query)
created_at timestamp with time zone NOT NULL,
updated_at timestamp with time zone NOT NULL
);
CREATE UNIQUE INDEX table2_pkey ON table2(id int4_ops);
This query is executed on application start:
CREATE OR REPLACE FUNCTION after_insert_table1()
RETURNS trigger AS
$$
BEGIN
INSERT INTO table2 (table1_id, ..., created_at, updated_at)
VALUES (NEW.id, ..., 'now', 'now');
RETURN NEW;
END;
$$
LANGUAGE 'plpgsql';
DROP TRIGGER IF EXISTS after_insert_table1 ON "table1";
CREATE TRIGGER after_insert_table1
AFTER INSERT ON "table1"
FOR EACH ROW
EXECUTE PROCEDURE after_insert_table1();
I noticed some created_at and updated_at values on table2 are different to table1. In fact, table2 has mostly older values.
Here are 10 sequential entries, which show the difference jumping around a huge amount within a few minutes:
|table1_id|table1_created |table2_created |diff |
|---------|--------------------------|-----------------------------|----------------|
|2000 |2019-11-07 22:29:47.245+00|2019-11-07 19:51:09.727021+00|-02:38:37.517979|
|2001 |2019-11-07 22:30:02.256+00|2019-11-07 13:18:29.45962+00 |-09:11:32.79638 |
|2002 |2019-11-07 22:30:43.021+00|2019-11-07 13:44:12.099577+00|-08:46:30.921423|
|2003 |2019-11-07 22:31:00.794+00|2019-11-07 19:51:09.727021+00|-02:39:51.066979|
|2004 |2019-11-07 22:31:11.315+00|2019-11-07 13:18:29.45962+00 |-09:12:41.85538 |
|2005 |2019-11-07 22:31:27.234+00|2019-11-07 13:44:12.099577+00|-08:47:15.134423|
|2006 |2019-11-07 22:31:47.436+00|2019-11-07 13:18:29.45962+00 |-09:13:17.97638 |
|2007 |2019-11-07 22:33:19.484+00|2019-11-07 17:22:48.129063+00|-05:10:31.354937|
|2008 |2019-11-07 22:33:51.607+00|2019-11-07 19:51:09.727021+00|-02:42:41.879979|
|2009 |2019-11-07 22:34:28.786+00|2019-11-07 13:18:29.45962+00 |-09:15:59.32638 |
|2010 |2019-11-07 22:36:50.242+00|2019-11-07 13:18:29.45962+00 |-09:18:20.78238 |
Sequential entries have similar differences (mostly negative/mostly positive) and similar orders of magnitude (mostly minutes vs. mostly hours) within a sequence, though there are exceptions.
Here are the top 5 largest positive differences:
|table1_id|table1_created |table2_created |diff |
|---------|--------------------------|-----------------------------|----------------|
|1630 |2019-10-25 21:12:14.971+00|2019-10-26 00:52:09.376+00 |03:39:54.405 |
|950 |2019-09-16 12:36:07.185+00|2019-09-16 14:07:35.504+00 |01:31:28.319 |
|1677 |2019-10-26 22:19:12.087+00|2019-10-26 23:38:34.102+00 |01:19:22.015 |
|58 |2018-12-08 20:11:20.306+00|2018-12-08 21:06:42.246+00 |00:55:21.94 |
|171 |2018-12-17 22:24:57.691+00|2018-12-17 23:16:05.992+00 |00:51:08.301 |
Here are the top 5 largest negative differences:
|table1_id|table1_created |table2_created |diff |
|---------|--------------------------|-----------------------------|----------------|
|1427 |2019-10-15 16:03:43.641+00|2019-10-14 17:59:41.57749+00 |-22:04:02.06351 |
|1426 |2019-10-15 13:26:07.314+00|2019-10-14 18:00:50.930513+00|-19:25:16.383487|
|1424 |2019-10-15 13:13:44.092+00|2019-10-14 18:00:50.930513+00|-19:12:53.161487|
|4416 |2020-01-11 00:15:03.751+00|2020-01-10 08:43:19.668399+00|-15:31:44.082601|
|4420 |2020-01-11 01:58:32.541+00|2020-01-10 11:04:19.288023+00|-14:54:13.252977|
Negative differences outnumber positive differences 10x. The database timezone is UTC.
table2.table1_id is a foreign key, so it should be impossible to insert before insert on table1 completes.
table1.created_at is set by Sequelize, using option timestamps: true on the model.
When a row is inserted into table1, it's done inside a transaction. From the documentation I can find, triggers are executed inside the same transaction, so I can't think of a reason for this.
I can fix the issue by changing my trigger to use NEW.created_at instead of 'now', but I'm curious if anyone has any idea what the cause of this bug is?
Here is the query used to produce the above difference tables:
SELECT
table1.id AS table1_id,
table1.created_at AS table1_created,
table2.created_at AS table2_created,
(table2.created_at - table1.created_at) AS diff
FROM table1
INNER JOIN table2 ON
table2.table1_id = table1.id AND (
(table2.created_at - table1.created_at) > '2 min' OR
(table1.created_at - table2.created_at) > '2 min')
ORDER BY diff;
While 'now' is not a plain string, it is also not a function in this context, but a special date/time input. The manual:
... simply notational shorthands that will be converted to ordinary date/time values when read. (In particular, now and related strings are converted to a specific time value as soon as they are read.)
The body of a PL/pgSQL function is stored as a string; each nested SQL command is parsed and prepared when control first reaches it within a session. The manual:
The PL/pgSQL interpreter parses the function's source text and
produces an internal binary instruction tree the first time the
function is called (within each session). The instruction tree fully
translates the PL/pgSQL statement structure, but individual SQL
expressions and SQL commands used in the function are not translated
immediately.
As each expression and SQL command is first executed in the function,
the PL/pgSQL interpreter parses and analyzes the command to create a
prepared statement, using the SPI manager's SPI_prepare function.
Subsequent visits to that expression or command reuse the prepared statement.
There is more. Read on. But that's enough for our case:
The first time the trigger is executed per session, 'now' is translated to the current timestamp (the transaction timestamp). While doing more inserts in that same transaction, there won't be any difference to transaction_timestamp() because that is stable within a transaction by design.
But every subsequent transaction in the same session will insert the same, constant timestamp in table2, while values for table1 may be anything (not sure what Sequelize does there). If new values in table1 are the then current timestamp, that results in a "negative" diff in your test. (Timestamps in table2 will be older.)
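You can watch this happen in an ordinary session with a prepared statement. A sketch (the 'now' literal is converted to a constant when the statement is prepared, just as in the cached trigger plan):
PREPARE demo AS
SELECT 'now'::timestamptz AS quoted_now, -- frozen at prepare time
       now() AS func_now;                -- evaluated on every execution
EXECUTE demo; -- both values match right after the PREPARE
-- in any later transaction of the same session:
EXECUTE demo; -- quoted_now is unchanged, func_now has moved on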
Solution
Situations where you actually want 'now' are few and far between. Typically, you want the function now() (without single quotes!) - which is equivalent to CURRENT_TIMESTAMP (standard SQL) and transaction_timestamp(). Related (recommended reading!):
Difference between now() and current_timestamp
In your particular case I suggest column defaults instead of doing additional work in triggers. If you set the same default now() in table1 and table2, you also eliminate any nonsense the INSERT to table1 might add. And you never have to even mention these columns in inserts any more:
CREATE TABLE table1 (
id SERIAL PRIMARY KEY,
-- other cols
created_at timestamptz NOT NULL DEFAULT now(),
updated_at timestamptz NOT NULL DEFAULT now() -- or leave this one NULL?
);
CREATE TABLE table2 (
id SERIAL PRIMARY KEY,
table1_id integer NOT NULL REFERENCES table1(id) ON UPDATE CASCADE,
-- other cols (not used in query)
created_at timestamptz NOT NULL DEFAULT now(), -- not 'now'!
updated_at timestamptz NOT NULL DEFAULT now() -- or leave this one NULL?
);
CREATE OR REPLACE FUNCTION after_insert_table1()
RETURNS trigger LANGUAGE plpgsql AS
$$
BEGIN
INSERT INTO table2 (table1_id) -- more columns? but not: created_at, updated_at
VALUES (NEW.id); -- more columns?
RETURN NULL; -- can be NULL for AFTER trigger
END
$$;
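With those defaults in place, an insert into table1 no longer has to mention the timestamp columns at all. A sketch (assuming the omitted "other cols" also have defaults or allow NULL):
INSERT INTO table1 DEFAULT VALUES;
-- id comes from its sequence, created_at and updated_at from now()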

Set the value of a column to its default value

I have a few existing tables in which I have to modify various columns to have a default value.
How can I apply the default value to old records which are NULL, so that the old records will be consistent with the new ones?
ALTER TABLE "mytable" ALTER COLUMN "my_column" SET DEFAULT NOW();
After modifying table looks something like this ...
Table "public.mytable"
Column | Type | Modifiers
-------------+-----------------------------+-----------------------------------------------
id | integer | not null default nextval('mytable_id_seq'::regclass)
....
my_column | timestamp(0) with time zone | default now()
Indexes:
"mytable_pkey" PRIMARY KEY, btree (id)
Is there a simple way to set all columns which are currently NULL, and which have a default value, to that default value?
Deriving from the documentation for INSERT INTO:
For clarity, you can also request default values explicitly, for individual columns or for the entire row:
INSERT INTO products (product_no, name, price) VALUES (1, 'Cheese', DEFAULT);
INSERT INTO products DEFAULT VALUES;
I just tried this, and it is as simple as:
update mytable
set my_column = default
where my_column is null;
See sqlfiddle
Edit: Olaf's answer is the easiest and correct way of doing this; however, the below is also a viable solution for most cases.
For each column it is easy to use information_schema to get the default value of a column and then use that in an UPDATE statement:
UPDATE mytable set my_column = (
SELECT column_default
FROM information_schema.columns
WHERE (table_schema, table_name, column_name) = ('public', 'mytable','my_column')
)::timestamp
WHERE my_column IS NULL;
Note that the sub-query must be typecast to the corresponding column data type.
Also, this statement will not evaluate expressions: column_default is of type character varying, so the cast will work for NOW() but not for expressions like, say, (NOW() + interval '7 days').
It is better to get the expression, validate it, and then apply it manually.
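For instance, a sketch of applying the default expression dynamically with EXECUTE, assuming the same public.mytable/my_column as above (the expression is spliced into dynamic SQL, so only run this against catalogs you trust):
DO $$
DECLARE
    expr text;
BEGIN
    -- look up the stored default expression, e.g. 'now()' or '(now() + interval ''7 days'')'
    SELECT column_default INTO expr
    FROM information_schema.columns
    WHERE (table_schema, table_name, column_name) = ('public', 'mytable', 'my_column');
    -- splice the expression into the UPDATE so Postgres evaluates it natively
    EXECUTE format('UPDATE mytable SET my_column = %s WHERE my_column IS NULL', expr);
END
$$;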

Altering a parent table in Postgresql 8.4 breaks child table defaults

The problem: in PostgreSQL, if table temp_person_two inherits from temp_person, default column values on the child table are ignored if the parent table is altered.
How to replicate:
First, create a table and a child table. The child table should have one column that has a default value.
CREATE TEMPORARY TABLE temp_person (
person_id SERIAL,
name VARCHAR
);
CREATE TEMPORARY TABLE temp_person_two (
has_default character varying(4) DEFAULT 'en'::character varying NOT NULL
) INHERITS (temp_person);
Next, create a trigger on the parent table that copies its data to the child table (I know this appears like bad design, but this is a minimal test case to show the problem).
CREATE FUNCTION temp_person_insert() RETURNS trigger
LANGUAGE plpgsql
AS '
BEGIN
INSERT INTO temp_person_two VALUES ( NEW.* );
RETURN NULL;
END;
';
CREATE TRIGGER temp_person_insert_trigger
BEFORE INSERT ON temp_person
FOR EACH ROW
EXECUTE PROCEDURE temp_person_insert();
Then insert data into parent and select data from child. The data should be correct.
INSERT INTO temp_person (name) VALUES ('ovid');
SELECT * FROM temp_person_two;
person_id | name | has_default
-----------+------+-------------
1 | ovid | en
(1 row)
Finally, alter parent table by adding a new, unrelated column. Attempt to insert data and watch a "not-null constraint" violation occur:
ALTER TABLE temp_person ADD column foo text;
INSERT INTO temp_person(name) VALUES ('Corinna');
ERROR: null value in column "has_default" violates not-null constraint
CONTEXT: SQL statement "INSERT INTO temp_person_two VALUES ( $1 .* )"
PL/pgSQL function "temp_person_insert" line 2 at SQL statement
My version:
testing=# select version();
version
-------------------------------------------------------------------------------------------------------
PostgreSQL 8.4.17 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit
(1 row)
It's there all the way to 9.3, but it's going to be tricky to fix, and I'm not sure if it's just undesirable behaviour rather than a bug.
The constraint is still there, but look at the column-order.
Table "pg_temp_2.temp_person"
Column | Type | Modifiers
-----------+-------------------+-----------------------------------------------------------------
person_id | integer | not null default nextval('temp_person_person_id_seq'::regclass)
name | character varying |
Number of child tables: 1 (Use \d+ to list them.)
Table "pg_temp_2.temp_person_two"
Column | Type | Modifiers
-------------+----------------------+-----------------------------------------------------------------
person_id | integer | not null default nextval('temp_person_person_id_seq'::regclass)
name | character varying |
has_default | character varying(4) | not null default 'en'::character varying
Inherits: temp_person
ALTER TABLE
Table "pg_temp_2.temp_person_two"
Column | Type | Modifiers
-------------+----------------------+-----------------------------------------------------------------
person_id | integer | not null default nextval('temp_person_person_id_seq'::regclass)
name | character varying |
has_default | character varying(4) | not null default 'en'::character varying
foo | text |
Inherits: temp_person
It works in your first example because you are effectively doing:
INSERT INTO temp_person_two (person_id,name)
VALUES (person_id, name)
BUT look where your new column is added in the child table - at the end! So you end up with
INSERT INTO temp_person_two (person_id,name,has_default)
VALUES (person_id, name, foo)
rather than what you hoped for:
INSERT INTO temp_person_two (person_id,name,foo)...
So - what's the correct behaviour here? If PostgreSQL shuffled the columns in the child table, that could break code. If it doesn't, that can also break code. As it happens, I don't think the first option is do-able without substantial PG code changes, so it's unlikely to happen in the medium term.
Moral of the story: explicitly list your INSERT column-names.
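For this test case that would mean rewriting the trigger function along these lines (a sketch; with the columns named, has_default is free to fall back to its default):
CREATE OR REPLACE FUNCTION temp_person_insert() RETURNS trigger
LANGUAGE plpgsql
AS '
BEGIN
    -- name the columns explicitly so new parent columns cannot shift positions
    INSERT INTO temp_person_two (person_id, name)
    VALUES ( NEW.person_id, NEW.name );
    RETURN NULL;
END;
';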
Could take a while by hand. You know any languages with regexes? ;-)
It's not a bug. NEW.* expands to the values of each column in the new row, so you're doing INSERT INTO temp_person_two VALUES ( NEW.person_id, NEW.name, NEW.foo ), the last of which is indeed NULL if you didn't specify it (and wrong if you did).
I'm surprised it even works before you added the new column, since the number of values doesn't match the number of fields in the child table. Presumably it assumes the default for missing trailing values.

Inserting self-referential records in Postgresql

Given the following table in PostgreSQL, how do I insert a record which refers to itself?
CREATE TABLE refers (
id SERIAL PRIMARY KEY,
name VARCHAR(255) NOT NULL,
parent_id INTEGER NOT NULL,
FOREIGN KEY (parent_id) REFERENCES refers(id)
);
The examples I'm finding on the Web have allowed the parent_id to be NULL and then used a trigger to update it. I'd rather do it in one shot, if possible.
You can select last_value from the sequence that is automatically created when you use the type serial:
create table test (
id serial primary key,
parent integer not null,
foreign key (parent) references test(id)
);
insert into test values(default, (select last_value from test_id_seq));
insert into test values(default, (select last_value from test_id_seq));
insert into test values(default, (select last_value from test_id_seq));
select * from test;
id | parent
----+--------
1 | 1
2 | 2
3 | 3
(3 rows)
And the following, even simpler, seems to work as well:
insert into test values(default, lastval());
Though I didn't know how this would work when using multiple sequences, so I looked it up: lastval() returns the value most recently returned or set by a nextval or setval call to any sequence, so the following would get you in trouble:
create table test (
id serial primary key,
foo serial not null,
parent integer not null,
foreign key (parent) references test(id)
);
select setval('test_foo_seq', 100);
insert into test values(default, default, lastval());
ERROR: insert or update on table "test" violates foreign key constraint "test_parent_fkey"
DETAIL: Key (parent)=(101) is not present in table "test".
However the following would be okay:
insert into test values(default, default, currval('test_id_seq'));
select * from test;
id | foo | parent
----+-----+--------
2 | 102 | 2
(1 row)
The main question is: why would you want to insert a record which refers to itself?
The schema looks like a standard adjacency list - one of the methods of implementing trees in a relational database.
The thing is that in most cases you simply have a NULL parent_id for the top-level element. This is actually much simpler to handle.
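A sketch of that simpler shape, using a hypothetical tree table (parent_id is nullable, so no self-reference trick is needed):
CREATE TABLE tree (
    id SERIAL PRIMARY KEY,
    parent_id INTEGER REFERENCES tree(id), -- NULL marks a top-level element
    name VARCHAR(255) NOT NULL
);
INSERT INTO tree (parent_id, name) VALUES (NULL, 'root');
INSERT INTO tree (parent_id, name) VALUES (1, 'child of root'); -- assuming the root row got id 1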

What's the PostgreSQL datatype equivalent to MySQL AUTO INCREMENT?

I'm switching from MySQL to PostgreSQL and I was wondering how can I have an INT column with AUTO INCREMENT. I saw in the PostgreSQL docs a datatype called SERIAL, but I get syntax errors when using it.
Yes, SERIAL is the equivalent functionality.
CREATE TABLE foo (
id SERIAL,
bar varchar
);
INSERT INTO foo (bar) VALUES ('blah');
INSERT INTO foo (bar) VALUES ('blah');
SELECT * FROM foo;
+----------+
| 1 | blah |
+----------+
| 2 | blah |
+----------+
SERIAL is just a create-table-time macro around sequences. You cannot alter SERIAL onto an existing column.
You can use any other integer data type, such as smallint.
Example (note that user is a reserved word in Postgres, so the table name has to be quoted):
CREATE SEQUENCE user_id_seq;
CREATE TABLE "user" (
user_id smallint NOT NULL DEFAULT nextval('user_id_seq')
);
ALTER SEQUENCE user_id_seq OWNED BY "user".user_id;
Better to use your own data type, rather than the serial data type.
If you want to add a sequence to an id column in a table which already exists, you can use:
CREATE SEQUENCE user_id_seq;
ALTER TABLE "user" ALTER user_id SET DEFAULT NEXTVAL('user_id_seq');
Starting with Postgres 10, identity columns as defined by the SQL standard are also supported:
create table foo
(
id integer generated always as identity
);
creates an identity column that can't be overridden unless explicitly asked for. The following insert will fail with a column defined as generated always:
insert into foo (id)
values (1);
This can however be overruled:
insert into foo (id) overriding system value
values (1);
When using the option generated by default this is essentially the same behaviour as the existing serial implementation:
create table foo
(
id integer generated by default as identity
);
When a value is supplied manually, the underlying sequence needs to be adjusted manually as well - the same as with a serial column.
An identity column is not a primary key by default (just like a serial column). If it should be one, a primary key constraint needs to be defined manually.
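A sketch of that manual adjustment after values have been supplied by hand (pg_get_serial_sequence also resolves the implicit sequence behind an identity column):
SELECT setval(pg_get_serial_sequence('foo', 'id'),
              (SELECT max(id) FROM foo));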
Whilst it looks like sequences are the equivalent of MySQL's auto_increment, there are some subtle but important differences:
1. Failed Queries Increment The Sequence/Serial
The serial column gets incremented on failed queries. This leads to fragmentation from failed queries, not just row deletions. For example, run the following queries on your PostgreSQL database:
CREATE TABLE table1 (
uid serial NOT NULL PRIMARY KEY,
col_b integer NOT NULL,
CHECK (col_b>=0)
);
INSERT INTO table1 (col_b) VALUES(1);
INSERT INTO table1 (col_b) VALUES(-1);
INSERT INTO table1 (col_b) VALUES(2);
SELECT * FROM table1;
You should get the following output:
uid | col_b
-----+-------
1 | 1
3 | 2
(2 rows)
Notice how uid goes from 1 to 3 instead of 1 to 2.
This still occurs if you were to manually create your own sequence with:
CREATE SEQUENCE table1_seq;
CREATE TABLE table1 (
col_a smallint NOT NULL DEFAULT nextval('table1_seq'),
col_b integer NOT NULL,
CHECK (col_b>=0)
);
ALTER SEQUENCE table1_seq OWNED BY table1.col_a;
If you wish to test how MySQL is different, run the following on a MySQL database:
CREATE TABLE table1 (
uid int unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
col_b int unsigned NOT NULL
);
INSERT INTO table1 (col_b) VALUES(1);
INSERT INTO table1 (col_b) VALUES(-1);
INSERT INTO table1 (col_b) VALUES(2);
You should get the following, with no fragmentation:
+-----+-------+
| uid | col_b |
+-----+-------+
| 1 | 1 |
| 2 | 2 |
+-----+-------+
2 rows in set (0.00 sec)
2. Manually Setting the Serial Column Value Can Cause Future Queries to Fail.
This was pointed out by @trev in a previous answer.
To simulate this, manually set the uid to 5, which will "clash" later.
INSERT INTO table1 (uid, col_b) VALUES(5, 5);
Table data:
uid | col_b
-----+-------
1 | 1
3 | 2
5 | 5
(3 rows)
Run another insert:
INSERT INTO table1 (col_b) VALUES(6);
Table data:
uid | col_b
-----+-------
1 | 1
3 | 2
5 | 5
4 | 6
Now if you run another insert:
INSERT INTO table1 (col_b) VALUES(7);
It will fail with the following error message:
ERROR: duplicate key value violates unique constraint "table1_pkey"
DETAIL: Key (uid)=(5) already exists.
In contrast, MySQL will handle this gracefully as shown below:
INSERT INTO table1 (uid, col_b) VALUES(4, 4);
Now insert another row without setting uid:
INSERT INTO table1 (col_b) VALUES(3);
The query doesn't fail, uid just jumps to 5:
+-----+-------+
| uid | col_b |
+-----+-------+
| 1 | 1 |
| 2 | 2 |
| 4 | 4 |
| 5 | 3 |
+-----+-------+
Testing was performed on MySQL 5.6.33, for Linux (x86_64) and PostgreSQL 9.4.9
Sorry to rehash an old question, but this was the first Stack Overflow question/answer that popped up on Google.
This post (which came up first on Google) talks about using the more updated syntax for PostgreSQL 10:
https://blog.2ndquadrant.com/postgresql-10-identity-columns/
which happens to be:
CREATE TABLE test_new (
id int GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY
);
Hope that helps :)
You have to be careful not to insert directly into your SERIAL or sequence field, otherwise your write will fail when the sequence reaches the inserted value:
-- Table: "test"
-- DROP TABLE test;
CREATE TABLE test
(
"ID" SERIAL,
"Rank" integer NOT NULL,
"GermanHeadword" "text" [] NOT NULL,
"PartOfSpeech" "text" NOT NULL,
"ExampleSentence" "text" NOT NULL,
"EnglishGloss" "text"[] NOT NULL,
CONSTRAINT "PKey" PRIMARY KEY ("ID", "Rank")
)
WITH (
OIDS=FALSE
);
-- ALTER TABLE test OWNER TO postgres;
INSERT INTO test("Rank", "GermanHeadword", "PartOfSpeech", "ExampleSentence", "EnglishGloss")
VALUES (1, '{"der", "die", "das", "den", "dem", "des"}', 'art', 'Der Mann küsst die Frau und das Kind schaut zu', '{"the", "of the" }');
INSERT INTO test("ID", "Rank", "GermanHeadword", "PartOfSpeech", "ExampleSentence", "EnglishGloss")
VALUES (2, 1, '{"der", "die", "das"}', 'pron', 'Das ist mein Fahrrad', '{"that", "those"}');
INSERT INTO test("Rank", "GermanHeadword", "PartOfSpeech", "ExampleSentence", "EnglishGloss")
VALUES (1, '{"der", "die", "das"}', 'pron', 'Die Frau, die nebenen wohnt, heißt Renate', '{"that", "who"}');
SELECT * from test;
In the context of the asked question, and in reply to the comment by @sereja1c, creating a SERIAL column implicitly creates a sequence, so for the above example:
CREATE TABLE foo (id SERIAL,bar varchar);
CREATE TABLE would implicitly create sequence foo_id_seq for serial column foo.id. Hence, SERIAL [4 Bytes] is good for its ease of use unless you need a specific datatype for your id.
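That single line expands, roughly, to the following. A sketch of what the serial pseudo-type does behind the scenes:
CREATE SEQUENCE foo_id_seq;
CREATE TABLE foo (
    id integer NOT NULL DEFAULT nextval('foo_id_seq'),
    bar varchar
);
ALTER SEQUENCE foo_id_seq OWNED BY foo.id;
The OWNED BY link is what makes the sequence drop automatically when the column is dropped.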
Since PostgreSQL 10
CREATE TABLE test_new (
id int GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
payload text
);
This way will work for sure; I hope it helps:
CREATE TABLE fruits(
id SERIAL PRIMARY KEY,
name VARCHAR NOT NULL
);
INSERT INTO fruits(id,name) VALUES(DEFAULT,'apple');
or
INSERT INTO fruits VALUES(DEFAULT,'apple');
You can check the details at the following link:
http://www.postgresqltutorial.com/postgresql-serial/
Create Sequence.
CREATE SEQUENCE user_role_id_seq
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 3
CACHE 1;
ALTER SEQUENCE user_role_id_seq
OWNER TO postgres;
and alter the table:
ALTER TABLE user_roles ALTER COLUMN user_role_id SET DEFAULT nextval('user_role_id_seq'::regclass);
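After that, inserts that omit user_role_id draw their values from the sequence. A quick sketch, with a hypothetical role_name column:
INSERT INTO user_roles (role_name) VALUES ('admin');
SELECT currval('user_role_id_seq'); -- the id just assigned in this session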