PostgreSQL: find all the allowed values for a domain

Suppose I created the following domain:
CREATE DOMAIN MY_STATUS AS VARCHAR NOT NULL DEFAULT 'STATUS0' CHECK (VALUE IN ('STATUS0', 'STATUS1', 'STATUS2', 'STATUS3'));
As expected, in a column whose type is MY_STATUS, I can put only the values:
'STATUS0'
'STATUS1'
'STATUS2'
'STATUS3'
Now, let's suppose that I want to validate this column before sending an INSERT or UPDATE to my DB. I need to know which values are allowed so that, if I have status = STATUS4, I get an error before the insert is sent to the DB and can handle it. Since the domain may change in the future, I need to select the allowed values from the DB rather than hardcode all possible values as constants.
In short: how do I write a query that selects all the possible values of the domain?
In my example, I would like to have a query that will return:
'STATUS0', 'STATUS1', 'STATUS2', 'STATUS3'

I would recommend that you use a foreign key reference rather than a type or check constraint. Although you can devise a complex query to parse the constraint, this is so much easier with a foreign key:
create table valid_domain_statuses (
    status varchar(32) primary key
);
insert into valid_domain_statuses (status)
select 'STATUS0' union all
select 'STATUS1' union all
select 'STATUS2' union all
select 'STATUS3';
. . .
alter table t add constraint fk_my_status
foreign key (status) references valid_domain_statuses(status);
This has other advantages:
You can readily see the valid values by looking at the table.
You can add additional information about the statuses.
It is easy to add new status values.
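That said, if you do want to read the domain's check clause from the catalogs, something like this works (a sketch; it returns the raw constraint text, which you would still have to parse to extract the individual values):
SELECT t.typname AS domain_name,
       pg_get_constraintdef(c.oid) AS check_clause
FROM pg_constraint c
JOIN pg_type t ON c.contypid = t.oid
WHERE t.typname = 'my_status';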

Related

How can a relational database with foreign key constraints ingest data that may be in the wrong order?

The database is ingesting data from a stream, and all the rows needed to satisfy a foreign key constraint may be late or never arrive.
This can likely be accomplished by using another datastore, one without foreign key constraints, and then, when all the needed data is available, loading it into the database that has the FK constraints. However, this adds complexity and I'd like to avoid it.
We're working on a solution that creates "placeholder" rows to point the foreign key to. When the real data comes in, the placeholder is replaced with real values. Again, this adds complexity, but it's the best solution we've found so far.
How do people typically solve this problem?
Edit: Some sample data which might help explain the problem:
Let's say we have these tables:
-- "order" is a reserved word in Postgres, so it has to be quoted
CREATE TABLE "order" (
    id INTEGER NOT NULL,
    order_number INTEGER,
    PRIMARY KEY (id),
    UNIQUE (order_number)
);
CREATE TABLE line_item (
    id INTEGER NOT NULL,
    order_number INTEGER REFERENCES "order" (order_number),
    PRIMARY KEY (id)
);
If I insert an order first, no problem! But let's say I try:
INSERT INTO line_item (order_number) VALUES (123);
before order 123 has been inserted. This will fail the FK constraint, of course. But this might be the order in which I get the data, since it's read from a stream that collects it from multiple sources.
Also, to address @philpxy's question: I didn't really find much on this. One thing that was mentioned was deferred constraints, a mechanism that waits to check the FK constraints until the end of a transaction. I don't think that is possible in my case, however, since these INSERT statements will be run at random times, whenever the data is received.
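For reference, a deferred constraint looks like this: a sketch against the tables above, with the inline REFERENCES replaced by a deferrable version. The check only moves to COMMIT time, so both rows still have to arrive inside a single transaction, which is why it does not fit the streaming case:
CREATE TABLE line_item (
    id INTEGER NOT NULL,
    order_number INTEGER,
    PRIMARY KEY (id),
    FOREIGN KEY (order_number) REFERENCES "order" (order_number)
        DEFERRABLE INITIALLY DEFERRED
);
BEGIN;
INSERT INTO line_item (id, order_number) VALUES (1, 123); -- child arrives first
INSERT INTO "order" (id, order_number) VALUES (10, 123);  -- parent arrives later
COMMIT; -- the foreign key is checked here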
You have a business workflow problem: line items of individual orders are coming in before the orders themselves. One workaround, perhaps not ideal, would be a BEFORE INSERT trigger which checks, for every incoming insert into the line_item table, whether that order already exists in the order table. If not, it first inserts the order record before attempting the insert on line_item.
CREATE OR REPLACE FUNCTION public.fn_insert_order() RETURNS trigger AS $$
BEGIN
    INSERT INTO "order" (order_number)
    SELECT NEW.order_number
    WHERE NOT EXISTS (SELECT 1 FROM "order" WHERE order_number = NEW.order_number);
    RETURN NEW;
END
$$ LANGUAGE plpgsql;
-- trigger
CREATE TRIGGER trigger_insert_order
    BEFORE INSERT ON line_item FOR EACH ROW
    EXECUTE PROCEDURE fn_insert_order();
Note: I am assuming that the id column of the order table is in fact auto-increment, in which case Postgres automatically assigns a value to it when inserting as above. Most likely this is what you want, as having two id columns that both need to be assigned manually does not make much sense.
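For illustration, that assumption corresponds to an order table declared roughly like this (a sketch, not the asker's actual DDL):
CREATE TABLE "order" (
    id SERIAL PRIMARY KEY,       -- auto-assigned from the backing sequence
    order_number INTEGER UNIQUE
);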
You could accomplish that with a BEFORE INSERT trigger on line_item.
In that trigger, you query order to check whether a matching row exists, and if not, you insert a dummy row.
That will allow the INSERT to succeed, at the cost of some performance.
To insert rows into order, use
INSERT INTO "order" ...
ON CONFLICT (order_number) DO UPDATE
    SET id = EXCLUDED.id;
Updating a primary key is problematic and may lead to conflicts. One way around that is to use negative ids for artificially generated orders (assuming that the real ids are positive). If you have any references to that primary key, you'd have to define the constraint with ON UPDATE CASCADE.
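To illustrate that last point: any other table referencing the order primary key would need ON UPDATE CASCADE so the id swap propagates. The shipment table here is purely hypothetical:
CREATE TABLE shipment (
    id SERIAL PRIMARY KEY,
    order_id INTEGER REFERENCES "order" (id) ON UPDATE CASCADE
);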

Generate a column value automatically from other columns' values and use it as PRIMARY KEY

I have a table with columns named "source" and "id". This table is populated from open data DBs.
"id" can't be UNIQUE, since my data comes from other DBs with their own id systems; there is a real risk of getting the same id for really different data.
I want to create another column which combine source and id into a single value.
"openDataA" + 123456789 -> "openDataA123456789"
"openDataB" + 123456789 -> "openDataB123456789"
I have seen examples that use || and functions to concatenate values. This is good, but I want to make this third column my PRIMARY KEY, to avoid duplicates and create a truly unique id that I can query without much computation and use as a foreign key in other tables.
I thought composite types might be what I'm looking for, but instead of setting the value manually each time, I want it derived automatically from "source" and "id".
I'm fairly new to PostgreSQL, so any help is welcome.
Thank you.
You could just have a composite key in your table:
CREATE TABLE mytable (
source VARCHAR(10),
id VARCHAR(10),
PRIMARY KEY (source, id)
);
If you really want a joined column, you could create a view to display it:
CREATE VIEW myview AS
SELECT *, source || id AS primary_key
FROM mytable;
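If you also need fast lookups on the combined value, a unique expression index is another option that avoids storing a third column. A sketch, using a separator because plain concatenation can collide ('ab' || 'c' equals 'a' || 'bc'):
CREATE UNIQUE INDEX mytable_source_id_idx ON mytable ((source || ':' || id));
-- a query of this form can use the index:
SELECT * FROM mytable WHERE source || ':' || id = 'openDataA:123456789';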

Getting error for auto increment fields when inserting records without specifying columns

We're in the process of converting from SQL Server to Postgres. I have a scenario that I'm trying to accommodate: it involves inserting records from one table into another WITHOUT listing out all of the columns. I realize this is not recommended practice, but let's set that aside for now.
drop table if exists pk_test_table;
create table public.pk_test_table
(
recordid SERIAL PRIMARY KEY NOT NULL,
name text
);
--example 1: works and will insert a record with an id of 1
insert into pk_test_table values(default,'puppies');
--example 2: fails
insert into pk_test_table
select first_name from person_test;
Error I receive in the second example:
column "recordid" is of type integer but expression is of type
character varying Hint: You will need to rewrite or cast the
expression.
The DEFAULT keyword tells the database to grab the next value from the sequence.
Is there any way to use this keyword in the second example? Or some way to tell the database to skip auto-incremented columns and just let them be populated as normal?
I would prefer to not use a subquery to grab the next "id".
This functionality works in SQL Server and hence the question.
Thanks in advance for your help!
If you can't list column names, you should instead use the DEFAULT keyword, as you've done in the simple insert example. This won't work with an insert into ... select ....
For that, you need to invoke nextval. A subquery is not required, just:
insert into pk_test_table
select nextval('pk_test_table_recordid_seq'), first_name from person_test;
You do need to know the sequence name. You could get that from information_schema based on the table name and inferring its primary key, using a function that takes just the table name as an argument. It'd be ugly, but it'd work. I don't think there's any way around needing to know the table name.
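A middle ground is pg_get_serial_sequence, which looks the sequence name up from the table and column names (a sketch, assuming the table definition above):
insert into pk_test_table
select nextval(pg_get_serial_sequence('pk_test_table', 'recordid')), first_name
from person_test;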
You're inserting a value into the first column, but you need it to go into the second position.
Therefore you can use the INSERT INTO table(field) VALUES(value) syntax.
Since you need to fetch the values from another table, replace VALUES with a subquery:
insert into pk_test_table(name)
select first_name from person_test;
I hope it helps
I do it this way via a separate function, though I think I'm really getting around the issue at the table level by having DEFAULT settings on a per-field basis.
create sequence pk_test_table_id_seq; -- the DEFAULT below needs this sequence to exist
create table public.pk_test_table
(
    recordid integer NOT NULL DEFAULT nextval('pk_test_table_id_seq'),
    name text,
    field3 integer NOT NULL DEFAULT 64,
    null_field_if_not_set integer,
    CONSTRAINT pk_test_table_pkey PRIMARY KEY (recordid)
);
With function:
CREATE OR REPLACE FUNCTION func_pk_test_table() RETURNS void AS
$BODY$
INSERT INTO pk_test_table (name)
SELECT first_name FROM person_test;
$BODY$
LANGUAGE sql VOLATILE;
Then just execute the function via SELECT func_pk_test_table();
Notice it hasn't had to specify all the fields, as long as the constraints allow it.

PostgreSQL ON INSERT CASCADE

I've got two tables - one is Product and one is ProductSearchResult.
Whenever someone tries to insert a SearchResult for a product that is not listed in the Product table, the foreign key constraint is violated, hence I get an error.
I would like to know how I could get my database to automatically create the missing Product in the Product table (just the ProductID; all other attributes can be left blank).
Is there such a thing as CASCADE ON INSERT? If there is, I was not able to get it working.
Rules are executed after the insert, so because we get an error beforehand they are useless with DO ALSO. If you use DO INSTEAD and add the INSERT command at the end, you end up with endless recursion.
I reckon a trigger is the way to go, but all my attempts to write one have failed.
Any recommendations?
The Table Structure:
CREATE TABLE Product (
ID char(10) PRIMARY KEY,
Title varchar(150),
Manufacturer varchar(80),
Category smallint,
FOREIGN KEY(Category) REFERENCES Category(ID) ON DELETE CASCADE);
CREATE TABLE ProductSearchResult (
SearchTermID smallint NOT NULL,
ProductID char(10) NOT NULL,
DateFirstListed date NOT NULL DEFAULT current_date,
DateLastListed date NOT NULL DEFAULT current_date,
PRIMARY KEY (SearchTermID,ProductID),
FOREIGN KEY (SearchTermID) REFERENCES SearchTerm(ID) ON DELETE CASCADE,
FOREIGN KEY (ProductID) REFERENCES Product ON DELETE CASCADE);
Yes, triggers are the way to go. But before you can start to use triggers in plpgsql, you have to enable the language: as user postgres, run the command createlang with the proper parameters. (On modern PostgreSQL versions, plpgsql is installed by default.)
Once you've done that, you have to
write a function in plpgsql
create a trigger to invoke that function
See example 39-3 for a basic example.
Note that a function body in Postgres is a string, with a special quoting mechanism: 2 dollar signs with an optional word in between them, as the quotes. (The word allows you to quote other similar quotes.)
Also note that you can reuse a trigger procedure for multiple tables, as long as they have the columns your procedure uses.
So the function has to
check whether the value of NEW.ProductID exists in the Product table, with a SELECT statement (you ought to be able to use SELECT count(*) ... INTO someint, or SELECT EXISTS(...) INTO somebool)
if not, insert a new row in that table
If you still get stuck, come back here.
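For reference, a minimal sketch of the trigger those steps describe, assuming the tables above (fn_insert_missing_product is just an illustrative name):
CREATE OR REPLACE FUNCTION fn_insert_missing_product() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM Product WHERE ID = NEW.ProductID) THEN
        INSERT INTO Product (ID) VALUES (NEW.ProductID); -- other attributes stay NULL
    END IF;
    RETURN NEW;
END
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_insert_missing_product
    BEFORE INSERT ON ProductSearchResult FOR EACH ROW
    EXECUTE PROCEDURE fn_insert_missing_product();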
In any case (rules or triggers), the insert needs to create a new key (and new values for the attributes) in the Product table. In most cases, this implies that a surrogate primary key (serial/sequence) should be used in the Product table, and that the "real world" product id ("product number") should default to NULL and be demoted to a candidate key.
BTW: a rule can be used; rules are just tricky to implement correctly for N:1 relations (they need the same kind of EXISTS logic as in Bart's answer above).
Maybe cascading on INSERT is not such a good idea after all. What do you want to happen if someone inserts a ProductSearchResult record for a non-existing product? [IMO an FK is always a domain; you cannot extend a domain just by referring to a non-existent value of it; that would make the FK constraint meaningless.]

postgresql duplicate key violates unique constraint

I have a question. I know this has been posted many times, but I didn't find an answer to my problem. I have a table with an "id" column that should simply be a unique number. The column's type is serial, and the next value after each insert comes from a sequence, so everything seems to be all right, yet it still sometimes shows this error, and I don't know why. The documentation says the sequence is foolproof and always works. If I add a UNIQUE constraint to that column, will it help? I have worked with Postgres many times before, but this error is showing up for me for the first time. I did everything as normal and never had this problem. Can you help me find an answer that can be used in the future for all tables that will be created? Let's say we have something simple like this:
CREATE TABLE comments
(
id serial NOT NULL,
some_column text NOT NULL,
CONSTRAINT id_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE comments OWNER TO postgres;
If I add:
ALTER TABLE comments ADD CONSTRAINT id_id_key UNIQUE(id)
will it be enough, or is there something else that should be done?
This article explains that your sequence might be out of sync and that you have to manually bring it back in sync.
An excerpt from the article in case the URL changes:
If you get this message when trying to insert data into a PostgreSQL
database:
ERROR: duplicate key violates unique constraint
That likely means that the primary key sequence in the table you're
working with has somehow become out of sync, likely because of a mass
import process (or something along those lines). Call it a "bug by
design", but it seems that you have to manually reset the a primary
key index after restoring from a dump file. At any rate, to see if
your values are out of sync, run these two commands:
SELECT MAX(the_primary_key) FROM the_table;
SELECT nextval('the_primary_key_sequence');
If the first value is higher than the second value, your sequence is
out of sync. Back up your PG database (just in case), then run this command:
SELECT setval('the_primary_key_sequence', (SELECT MAX(the_primary_key) FROM the_table)+1);
That will set the sequence to the next available value that's higher
than any existing primary key in the sequence.
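Applied to the comments table from the question (assuming the default sequence name comments_id_seq), that would be:
SELECT MAX(id) FROM comments;
SELECT nextval('comments_id_seq');
-- if the first value is higher than the second:
SELECT setval('comments_id_seq', (SELECT MAX(id) FROM comments)+1);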
Intro
I also encountered this problem, and the solution proposed by @adamo was basically the right one. However, I had to invest a lot of time in the details, which is why I am now writing a new answer to save that time for others.
Case
My case was as follows: there was a table that was filled with data by an app. Now a new entry had to be inserted manually via SQL. After that, the sequence was out of sync and no more records could be inserted via the app.
Solution
As mentioned in the answer from @adamo, the sequence must be synchronized manually. For this purpose the name of the sequence is needed. In Postgres, the name of the sequence can be determined with the function PG_GET_SERIAL_SEQUENCE. Most examples use lower-case table names; in my case the tables were created by ORM middleware (like Hibernate or Entity Framework Core), and their names all started with a capital letter.
In an e-mail from 2004 (link) I got the right hint.
(Let's assume for all examples, that Foo is the table's name and Foo_id the related column.)
Command to get the sequence name:
SELECT PG_GET_SERIAL_SEQUENCE('"Foo"', 'Foo_id');
So, the table name must be in double quotes, surrounded by single quotes.
1. Validate, that the sequence is out-of-sync
SELECT CURRVAL(PG_GET_SERIAL_SEQUENCE('"Foo"', 'Foo_id')) AS "Current Value", MAX("Foo_id") AS "Max Value" FROM "Foo";
When the Current Value is less than Max Value, your sequence is out-of-sync.
2. Correction
SELECT SETVAL((SELECT PG_GET_SERIAL_SEQUENCE('"Foo"', 'Foo_id')), (SELECT (MAX("Foo_id") + 1) FROM "Foo"), FALSE);
Replace table_name with the actual name of your table.
1. Get the current last id for the table and note it for the next step:
SELECT MAX(id) FROM table_name;
2. Get the next value of the id sequence according to PostgreSQL, and make sure it is higher than the max id from step 1:
SELECT nextval('"table_name_id_seq"');
3. If it's not higher, update the sequence:
SELECT setval('"table_name_id_seq"', (SELECT MAX(id) FROM table_name)+1);
The primary key is already protecting you from inserting duplicate values, as you're experiencing when you get that error. Adding another unique constraint isn't necessary to do that.
The "duplicate key" error is telling you that the work was not done because it would produce a duplicate key, not that it discovered a duplicate key already commited to the table.
For future searchs, use ON CONFLICT DO NOTHING.
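For example, a sketch against the comments table from the question:
INSERT INTO comments (id, some_column)
VALUES (42, 'some text')
ON CONFLICT (id) DO NOTHING; -- silently skips the row instead of erroring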
Reference - https://www.calazan.com/how-to-reset-the-primary-key-sequence-in-postgresql-with-django/
I had the same problem; try this:
python manage.py sqlsequencereset app_label
E.g.:
python manage.py sqlsequencereset auth
Note that the command only prints the SQL; you can pipe it into the database with python manage.py sqlsequencereset auth | python manage.py dbshell.
You need to run this with your production settings (if you have them), and you need Postgres installed on the server.
From http://www.postgresql.org/docs/current/interactive/datatype.html
Note: Prior to PostgreSQL 7.3, serial implied UNIQUE. This is no longer automatic. If you wish a serial column to be in a unique constraint or a primary key, it must now be specified, same as with any other data type.
In my case the create table script is:
CREATE TABLE public."Survey_symptom_binds"
(
id integer NOT NULL DEFAULT nextval('"Survey_symptom_binds_id_seq"'::regclass),
survey_id integer,
"order" smallint,
symptom_id integer,
CONSTRAINT "Survey_symptom_binds_pkey" PRIMARY KEY (id)
)
So:
SELECT nextval('"Survey_symptom_binds_id_seq"'::regclass),
       MAX(id)
FROM public."Survey_symptom_binds";
The nextval result was less than MAX(id)!
To fix the problem:
SELECT setval('"Survey_symptom_binds_id_seq"', (SELECT MAX(id) FROM public."Survey_symptom_binds")+1);
Good luck, everyone!
I had the same problem. It was because of the type of my relations. I had a table property related to both states and cities. At first I had a OneToOne relation from property to state, and the same for city, and I got the same "duplicate key violates unique constraint" error. That meant I could only have one property related to one state and one city, which doesn't make sense, because a city can have multiple properties. The problem was the relation: it should be ManyToOne, many properties to one city.
If the tables were created by ORM middleware (like Hibernate or Entity Framework Core), the table name may start with a capital letter, in which case it has to be double-quoted:
SELECT setval('"Table_name_Id_seq"', (SELECT MAX("Id") FROM "Table_name") + 1)
WHERE
NOT EXISTS (
SELECT *
FROM (SELECT CURRVAL(PG_GET_SERIAL_SEQUENCE('"Table_name"', 'Id')) AS seq, MAX("Id") AS max_id
FROM "Table_name") AS seq_table
WHERE seq > max_id
)
Try this one-liner in the CLI; it's just a suggestion to enhance adamo's code (thanks a lot, adamo):
SELECT setval('tableName_columnName_seq', (SELECT MAX(columnName) FROM tableName));
For a programmatic solution in Django: based on Paolo Melchiorre's answer, I wrote this function to be called before any .save():
from django.db import connection

def setSqlCursor(db_table):
    sql = """SELECT pg_catalog.setval(pg_get_serial_sequence('""" + db_table + """', 'id'), MAX(id)) FROM """ + db_table + """;"""
    with connection.cursor() as cursor:
        cursor.execute(sql)
I had a similar problem, but I solved it by removing all the foreign keys in my PostgreSQL database.