getting nextval for identity column - postgresql

In my PostgreSQL DB applications, I sometimes need to retrieve the next value of a sequence BEFORE running an insert.
I used to do this by granting the "usage" privilege on such sequences to my users and calling the "nextval" function.
I recently began to use "GENERATED BY DEFAULT AS IDENTITY" columns as primary keys. I am still able to retrieve nextval as superuser, but I cannot grant that privilege to other users. Where's my mistake?
Here's an example:
-- <sequence>
CREATE SEQUENCE public.apps_apps_id_seq
INCREMENT 1
START 1
MINVALUE 1
MAXVALUE 9223372036854775807
CACHE 1;
ALTER SEQUENCE public.apps_apps_id_seq
OWNER TO postgres;
GRANT USAGE ON SEQUENCE public.apps_apps_id_seq TO udocma;
GRANT ALL ON SEQUENCE public.apps_apps_id_seq TO postgres;
-- </sequence>
-- <table>
CREATE TABLE public.apps
(
apps_id integer NOT NULL DEFAULT nextval('apps_apps_id_seq'::regclass),
apps_born timestamp without time zone NOT NULL DEFAULT now(),
apps_vrsn character varying(50) COLLATE pg_catalog."default",
apps_ipad character varying(200) COLLATE pg_catalog."default",
apps_dscr character varying(500) COLLATE pg_catalog."default",
apps_date timestamp without time zone DEFAULT now(),
CONSTRAINT apps_id_pkey PRIMARY KEY (apps_id)
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE public.apps
OWNER to postgres;
GRANT INSERT, SELECT, UPDATE, DELETE ON TABLE public.apps TO udocma;
-- </table>
The client application connects as 'udocma' and can use the "nextval" function to retrieve the next key of the sequence.
If I use the identity column instead, I can still do this if I log in as postgres, but if I log in as udocma I don't have the privilege to execute nextval on the "hidden" sequence that generates values for the identity column.
Thank you. I realized that the statements
GRANT USAGE ON SEQUENCE public.apps_apps_id_seq TO udocma;
and
select nextval('apps_apps_id_seq'::regclass);
still work if I define apps.apps_id as an identity column instead of serial. So I guess that a column named 'somefield' defined as an identity column in a table named 'sometable' has some 'hidden' underlying sequence named 'sometable_somefield_seq'. Is that right?
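If that guess is right, the hidden name never has to be hard-coded: pg_get_serial_sequence resolves the sequence behind both serial and identity columns. A minimal sketch using the table and role from the question:

```sql
-- Resolve the sequence behind an identity (or serial) column
-- without guessing its name:
SELECT pg_get_serial_sequence('public.apps', 'apps_id');
-- e.g. returns public.apps_apps_id_seq

-- Grant on the resolved name so the client role can call nextval:
GRANT USAGE ON SEQUENCE public.apps_apps_id_seq TO udocma;

-- The client can then fetch the next key before the insert:
SELECT nextval(pg_get_serial_sequence('public.apps', 'apps_id'));
```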

Related

convert incoming text timestamp from rsyslog to timestamp for PostgreSQL

I have logs from various linux servers being fed by rsyslog to a PostgreSQL database. The incoming timestamp is an rsyslog'd RFC3339 formatted time like so: 2020-10-12T12:01:18.162329+02:00.
In the original test setup of the database logging table, I created that timestamp field as 'text'. Most things I need parsed are working right, so I was hoping to convert that timestamp table column from text to a timestamp datatype (and retain the subseconds and timezone if possible).
The end result should be a timestamp datatype so that I can do date-range queries using PostgreSQL data functions.
Is this doable in PostgreSQL 11? Or is it just better to re-create the table with the correct timestamp column datatype to begin with?
Thanks in advance for any pointers, advice, places to look, or snippets of code.
Relevant rsyslog config:
$template CustomFormat,"%timegenerated:::date-rfc3339% %syslogseverity-text:::uppercase% %hostname% %syslogtag% %msg%\n"
$ActionFileDefaultTemplate CustomFormat
...
template(name="rsyslog" type="list" option.sql="on") {
constant(value="INSERT INTO log (timestamp, severity, hostname, syslogtag, message)
values ('")
property(name="timegenerated" dateFormat="rfc3339") constant(value="','")
property(name="syslogseverity-text" caseConversion="upper") constant(value="','")
property(name="hostname") constant(value="','")
property(name="syslogtag") constant(value="','")
property(name="msg") constant(value="')")
}
and the log table structure:
CREATE TABLE public.log
(
id integer NOT NULL DEFAULT nextval('log_id_seq'::regclass),
"timestamp" text COLLATE pg_catalog."default" DEFAULT timezone('UTC'::text, CURRENT_TIMESTAMP),
severity character varying(10) COLLATE pg_catalog."default",
hostname character varying(20) COLLATE pg_catalog."default",
syslogtag character varying(24) COLLATE pg_catalog."default",
program character varying(24) COLLATE pg_catalog."default",
process text COLLATE pg_catalog."default",
message text COLLATE pg_catalog."default",
CONSTRAINT log_pkey PRIMARY KEY (id)
)
some sample data already fed into the table (ignore the timestamps in the message, they are done with an independent handmade logging system by my predecessor):
You can in theory convert the TEXT column to TIMESTAMP WITH TIME ZONE with ALTER TABLE .. ALTER COLUMN ... SET DATA TYPE ... USING, e.g.:
postgres=# CREATE TABLE tstest (tsval TEXT NOT NULL);
CREATE TABLE
postgres=# INSERT INTO tstest values('2020-10-12T12:01:18.162329+02:00');
INSERT 0 1
postgres=# ALTER TABLE tstest
ALTER COLUMN tsval SET DATA TYPE TIMESTAMP WITH TIME ZONE
USING tsval::TIMESTAMPTZ;
ALTER TABLE
postgres=# \d tstest
Table "public.tstest"
Column | Type | Collation | Nullable | Default
--------+--------------------------+-----------+----------+---------
tsval | timestamp with time zone | | not null |
postgres=# SELECT * FROM tstest ;
tsval
-------------------------------
2020-10-12 12:01:18.162329+02
(1 row)
PostgreSQL can parse the RFC3339 format, so subsequent inserts should just work:
postgres=# INSERT INTO tstest values('2020-10-12T12:01:18.162329+02:00');
INSERT 0 1
postgres=# SELECT * FROM tstest ;
tsval
-------------------------------
2020-10-12 12:01:18.162329+02
2020-10-12 12:01:18.162329+02
(2 rows)
But note that any bad data in the table (i.e. values which cannot be parsed as timestamps) will cause the ALTER TABLE operation to fail, so you should consider verifying the values before converting the data. Something like SELECT "timestamp"::TIMESTAMPTZ FROM public.log would fail with an error like invalid input syntax for type timestamp with time zone: "somebadvalue".
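Not in the original answer, but one way to find such bad values up front is a plpgsql DO block that attempts the cast row by row and reports the failures:

```sql
-- Report rows whose text value will not cast to timestamptz.
DO $$
DECLARE
  rec record;
BEGIN
  FOR rec IN SELECT id, "timestamp" FROM public.log LOOP
    BEGIN
      PERFORM rec."timestamp"::timestamptz;
    EXCEPTION WHEN others THEN
      RAISE NOTICE 'unparseable row id=%: %', rec.id, rec."timestamp";
    END;
  END LOOP;
END $$;
```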
Also bear in mind that this kind of ALTER TABLE requires a table rewrite, which may take some time to complete (depending on how large the table is), and which requires an ACCESS EXCLUSIVE lock, rendering the table inaccessible for the duration of the operation.
If you want to avoid a long-running ACCESS EXCLUSIVE lock, you could probably do something like this (not tested):
add a new TIMESTAMPTZ column (adding a column doesn't rewrite the table and is fairly cheap, provided you don't use a volatile default value)
create a trigger to copy any values inserted into the original column
copy the existing values (using a bunch of batched updates like UPDATE public.foo SET newlog = log::TIMESTAMPTZ)
(in a single transaction) drop the trigger and the existing column, and rename the new column to the old one
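A sketch of those four steps against the log table from the question; the column, function, and trigger names are made up, and like the steps above this is untested:

```sql
-- 1. New column; cheap, no table rewrite.
ALTER TABLE public.log ADD COLUMN ts_new timestamptz;

-- 2. Trigger keeps the new column in sync with incoming rows.
CREATE FUNCTION log_copy_ts() RETURNS trigger AS $$
BEGIN
  NEW.ts_new := NEW."timestamp"::timestamptz;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_copy_ts_trg
  BEFORE INSERT OR UPDATE ON public.log
  FOR EACH ROW EXECUTE PROCEDURE log_copy_ts();

-- 3. Backfill existing rows in batches, e.g. by primary key range:
UPDATE public.log SET ts_new = "timestamp"::timestamptz
 WHERE ts_new IS NULL AND id BETWEEN 1 AND 10000;
-- ...repeat for further id ranges...

-- 4. Swap the columns in one transaction.
BEGIN;
DROP TRIGGER log_copy_ts_trg ON public.log;
DROP FUNCTION log_copy_ts();
ALTER TABLE public.log DROP COLUMN "timestamp";
ALTER TABLE public.log RENAME COLUMN ts_new TO "timestamp";
COMMIT;
```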

PostgreSql INSERT is trying to use existing primary keys

I'm experiencing a peculiar problem with a Postgres table. When I try to perform a simple INSERT, it returns an error - duplicate key value violates unique constraint.
For starters, here's the schema for the table:
CREATE TABLE app.guardians
(
guardian_id serial NOT NULL,
first_name character varying NOT NULL,
middle_name character varying,
last_name character varying NOT NULL,
id_number character varying NOT NULL,
telephone character varying,
email character varying,
creation_date timestamp without time zone NOT NULL DEFAULT now(),
created_by integer,
active boolean NOT NULL DEFAULT true,
occupation character varying,
address character varying,
marital_status character varying,
modified_date timestamp without time zone,
modified_by integer,
CONSTRAINT "PK_guardian_id" PRIMARY KEY (guardian_id ),
CONSTRAINT "U_id_number" UNIQUE (id_number )
)
WITH (
OIDS=FALSE
);
ALTER TABLE app.guardians
OWNER TO postgres;
The table has 400 rows. Now suppose I try to perform this simple INSERT:
INSERT INTO app.guardians(first_name, last_name, id_number) VALUES('This', 'Fails', '123456');
I get the error:
ERROR: duplicate key value violates unique constraint "PK_guardian_id"
DETAIL: Key (guardian_id)=(2) already exists.
If I try running the same query again, the detail on the error message will be:
DETAIL: Key (guardian_id)=(3) already exists.
And
DETAIL: Key (guardian_id)=(4) already exists.
Incrementally until it gets to a non-existing guardian_id.
What could have gone wrong with this particular table, and how can it be rectified? I reckon it might have to do with the fact that the table had earlier been dropped using cascade and the data re-entered afresh, but I'm not sure about this theory.
The reason for this error is that the sequence backing guardian_id is out of sync with the table: its next value is lower than the current maximum key. That happens when you insert values into an auto-increment column manually.
So you have to reset the sequence. ALTER SEQUENCE only accepts a constant for its start value, so to derive it from the table use setval on the column's sequence instead:
select setval(pg_get_serial_sequence('app.guardians', 'guardian_id'),
(select max(guardian_id) from app.guardians));
Note:
To avoid blocking of concurrent transactions that obtain numbers from the same sequence, ALTER SEQUENCE's effects on the sequence generation parameters are never rolled back; those changes take effect immediately and are not reversible. However, the OWNED BY, OWNER TO, RENAME TO, and SET SCHEMA clauses cause ordinary catalog updates that can be rolled back.
ALTER SEQUENCE will not immediately affect nextval results in backends, other than the current one, that have preallocated (cached) sequence values. They will use up all cached values prior to noticing the changed sequence generation parameters. The current backend will be affected immediately.
Documentation:
https://www.postgresql.org/docs/9.6/static/sql-altersequence.html

not able to convert integer data type to bigserial in postgres

For converting an integer data type to bigserial in Postgres I ran the commands below, but they didn't change the data type, only the modifiers:
CREATE SEQUENCE id;
ALTER TABLE user_event_logs ALTER COLUMN id SET NOT NULL;
ALTER TABLE user_event_logs ALTER COLUMN id SET DEFAULT nextval('id');
ALTER SEQUENCE id OWNED BY user_event_logs.id;
After running this, the \d output still shows something like:
Column | Type    | Modifiers
id     | integer | not null
I want to change the type to bigserial.
bigserial is a shortcut for bigint + sequence + default value, so if you want user_event_logs.id to be bigint, instead of int, use:
ALTER TABLE user_event_logs ALTER COLUMN id type bigint;
https://www.postgresql.org/docs/current/static/datatype-numeric.html#DATATYPE-SERIAL
The data types smallserial, serial and bigserial are not true types,
but merely a notational convenience for creating unique identifier
columns (similar to the AUTO_INCREMENT property supported by some
other databases).
also:
The type names serial and serial4 are equivalent: both create integer
columns. The type names bigserial and serial8 work the same way,
except that they create a bigint column.
so if you want bigserial, just alter the column type to bigint
I am not sure, can you try this?
CREATE SEQUENCE id_seq;
ALTER TABLE user_event_logs ALTER COLUMN id TYPE BIGINT;
ALTER TABLE user_event_logs ALTER COLUMN id SET NOT NULL;
ALTER TABLE user_event_logs ALTER COLUMN id SET DEFAULT nextval('id_seq'::regclass);
ALTER SEQUENCE id_seq OWNED BY user_event_logs.id;

PostgreSQL bigserial & nextval

I've got a PgSQL 9.4.3 server setup and previously I was only using the public schema and for example I created a table like this:
CREATE TABLE ma_accessed_by_members_tracking (
reference bigserial NOT NULL,
ma_reference bigint NOT NULL,
membership_reference bigint NOT NULL,
date_accessed timestamp without time zone,
points_awarded bigint NOT NULL
);
Using the Windows Program PgAdmin III I can see it created the proper information and sequence.
However I've recently added another schema called "test" to the same database and created the exact same table, just like before.
However this time I see:
CREATE TABLE test.ma_accessed_by_members_tracking
(
reference bigint NOT NULL DEFAULT nextval('ma_accessed_by_members_tracking_reference_seq'::regclass),
ma_reference bigint NOT NULL,
membership_reference bigint NOT NULL,
date_accessed timestamp without time zone,
points_awarded bigint NOT NULL
);
My question / curiosity is: why, in the public schema, does the reference column show bigserial, but in the test schema it shows bigint with a nextval default?
Both work as expected. I just do not understand why the difference in schemas would show different table creation code. I realize that bigint and bigserial store the same range of integers.
Merely A Notational Convenience
According to the documentation on Serial Types, smallserial, serial, and bigserial are not true data types. Rather, they are a notation to create at once both sequence and column with default value pointing to that sequence.
I created a test table in the public schema. The psql command \d shows a bigint column type. Maybe it's PgAdmin behavior?
Update
I checked the PgAdmin source code. In the function pgColumn::GetDefinition() it scans the pg_depend table for an auto dependency, and when one is found it replaces bigint with bigserial to simulate the original table creation code.
When you create a serial column in the standard way:
CREATE TABLE new_table (
new_id serial);
Postgres creates a sequence with commands:
CREATE SEQUENCE new_table_new_id_seq ...
ALTER SEQUENCE new_table_new_id_seq OWNED BY new_table.new_id;
From documentation: The OWNED BY option causes the sequence to be associated with a specific table column, such that if that column (or its whole table) is dropped, the sequence will be automatically dropped as well.
The standard name of a sequence is built from the table name, the column name, and the suffix _seq.
If a serial column was created in such a way, PgAdmin shows its type as serial.
If a sequence has non-standard name or is not associated with a column, PgAdmin shows nextval() as default value.
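The same distinction can be checked from SQL: pg_get_serial_sequence returns the owned sequence, or NULL when there is no OWNED BY link. A quick sketch against the table from the question:

```sql
-- Returns the owned sequence for a serial-style column:
SELECT pg_get_serial_sequence('test.ma_accessed_by_members_tracking',
                              'reference');
-- NULL here would mean the default's sequence is not OWNED BY the
-- column, which is when PgAdmin falls back to showing the raw
-- nextval() default instead of bigserial.
```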

How to change the auto numbering id field to serial type in PostgreSQL

I have a database which was migrated from MSSQL to PostgreSQL (9.2).
This database has 100+ tables. These tables have an auto-numbering field (the PRIMARY KEY field); given below is an example of a table:
CREATE TABLE company
(
companyid integer NOT NULL DEFAULT nextval('seq_company_id'::regclass),
company character varying(100),
add1 character varying(100),
add2 character varying(100),
add3 character varying(100),
phoneoff character varying(30),
phoneres character varying(30),
CONSTRAINT gcompany_pkey PRIMARY KEY (companyid)
)
sample data
INSERT INTO company (company, add1, add2, add3, phoneoff, phoneres) VALUES
('company1','add1','add2','add3','00055544','7788848');
INSERT INTO company (company, add1, add2, add3, phoneoff, phoneres) VALUES
('company2','add9','add5','add2','00088844','7458844');
INSERT INTO company (company, add1, add2, add3, phoneoff, phoneres) VALUES
('company5','add5','add8','add7','00099944','2218844');
and below is the sequence for this table
CREATE SEQUENCE seq_company_id
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 1
CACHE 1;
ALTER SEQUENCE seq_company_id
OWNER TO postgres;
While reading the PostgreSQL documentation I read about serial types, so I wish to change all the existing auto-numbering fields to serial.
How do I do it?
I have tried:
alter table company alter column companyid type serial
ERROR: type "serial" does not exist
********** Error **********
There is indeed no data type serial. It is just a shorthand notation for a default value populated from a sequence (see the manual for details), which is essentially what you have now.
The only difference between your setup and a column defined as serial is the link between the sequence and the column, which you can define manually as well:
alter sequence seq_company_id owned by company.companyid;
With that link in place you can no longer distinguish your column from one initially defined as serial. What this change does is ensure the sequence is automatically dropped if the table (or the column) that uses it is dropped.