I'd like to have a column constraint based on a combination of 2 columns. I can't find a way to use a foreign key here, because it would have to be a conditional FK. I hope this basic SQL shows the problem:
CREATE TABLE performer_type (
id serial primary key,
type varchar
);
INSERT INTO performer_type ( id, type ) VALUES (1, 'singer'), ( 2, 'band');
CREATE TABLE singer (
id serial primary key,
name varchar
);
INSERT INTO singer ( id, name ) VALUES (1, 'Robert');
CREATE TABLE band (
id serial primary key,
name varchar
);
INSERT INTO band ( id, name ) VALUES (1, 'Animates'), ( 2, 'Zed Leppelin');
CREATE TABLE gig (
id serial primary key,
performer_type_id int default null, /* FK, no problem */
performer_id int default null /* want FK based on previous FK, no good solution so far */
);
INSERT INTO gig ( performer_type_id, performer_id ) VALUES ( 1,1 ), (2,1), (2,2), (1,2), (2,3);
Now, the last INSERT works, but I'd like it to fail for the last 2 value pairs, because there is no singer with ID 2 and no band with ID 3. How can I set such a constraint?
I already asked a similar question in a MySQL context, and the only solution was to use a trigger. The problem with a trigger was: you can't have a dynamic list of types and tables. I'd like to add types (and related tables) on the fly.
I also found a very promising pattern, but it is upside down for my case, and I have not figured out how to turn it around so it works for me.
What I am looking for seems such a useful pattern that I think there must be some common way to do it. Is there?
Edit.
It seems I chose bad items for my examples, so let me make it clear: the different performer tables (singer and band) have NO relation between them. The gig table just has to list tasks for different performers, without setting any relations between them.
Another example would be items in stock: I may have an item_type table which defines hundreds of item types with related tables (for example, orange and house), and there should be a stock table which lists all appearances of items.
The PostgreSQL version I use is 9.6.
Based on Laurenz Albe's answer I formed a solution for the example above. The main difference: there is a parent table performer, whose PK is the FK/PK of the specific performer tables and is also referenced from the gig table.
CREATE TABLE performer_type (
id serial primary key,
type varchar
);
INSERT INTO performer_type ( id, type ) VALUES (1, 'singer' ), ( 2, 'band' );
CREATE TABLE performer (
id serial primary key,
performer_type_id int REFERENCES performer_type(id)
);
CREATE TABLE singer (
id int primary key REFERENCES performer(id),
name varchar
);
INSERT INTO performer ( performer_type_id ) VALUES (1); -- get PK 1 for next statement
INSERT INTO singer ( id, name ) VALUES (1, 'Robert');
CREATE TABLE band (
id int primary key REFERENCES performer(id),
name varchar
);
INSERT INTO performer ( performer_type_id ) VALUES (2); -- get PK 2 for next statement
INSERT INTO band ( id, name ) VALUES (2, 'Animates');
INSERT INTO performer ( performer_type_id ) VALUES (2); -- get PK 3 for next statement
INSERT INTO band ( id, name ) VALUES (3, 'Zed Leppelin');
CREATE TABLE gig (
id serial primary key,
performer_id int REFERENCES performer(id)
);
INSERT INTO gig ( performer_id ) VALUES (1), (2), (3), (4);
And the last INSERT fails, as expected:
ERROR: insert or update on table "gig" violates foreign key constraint "gig_performer_id_fkey"
DETAIL: Key (performer_id)=(4) is not present in table "performer".
But
For me there is an annoying problem: I have no good way to tell which ID is for a singer and which is for a band etc. (in the original example I had performer_type_id in the gig table for that), because any performer_id may belong to any performer type. So I'd like each performer type to have its own ID range, so I create a dummy table with its own sequence for every type:
CREATE TABLE band_id (
id int primary key,
dummy boolean default null
);
CREATE SEQUENCE band_id_seq START 1;
ALTER TABLE band_id ALTER COLUMN id SET DEFAULT nextval('band_id_seq');
CREATE TABLE singer_id (
id int primary key,
dummy boolean default null
);
CREATE SEQUENCE singer_id_seq START 2000000;
ALTER TABLE singer_id ALTER COLUMN id SET DEFAULT nextval('singer_id_seq');
Now, to insert a new row into a specific performer table, I first have to get the next ID for it:
INSERT INTO band_id (dummy) VALUES (NULL);
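For completeness, the rest of the manual (application-level) flow would be something along these lines; the returned value 42 and the band name are hypothetical, in practice you would use whatever RETURNING hands back:
INSERT INTO band_id ( dummy ) VALUES (NULL) RETURNING id;        -- suppose this returns 42
INSERT INTO performer ( id, performer_type_id ) VALUES (42, 2);  -- register the reserved ID in the parent table
INSERT INTO band ( id, name ) VALUES (42, 'New Band');           -- use the reserved ID as the band PK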
Trying to figure out whether this process can be solved at the DB level, or whether something has to be done at the app level. It would be nice if inserting into the band table could:
before trigger: insert into band_id to generate the type-specific ID
before trigger: insert this new ID into the performer table
include this new ID in the INSERT into band
The first 2 points are easy, but the last point is not clear to me yet; a sketch of such a trigger follows below.
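Here is a minimal sketch of how all three steps could be combined in one BEFORE INSERT trigger on band (the function name and the hard-coded performer_type_id = 2 for 'band' are my assumptions, not part of the original schema):
CREATE OR REPLACE FUNCTION band_before_insert() RETURNS trigger AS $$
BEGIN
    -- 1. reserve an ID from the band-specific range
    INSERT INTO band_id ( dummy ) VALUES (NULL) RETURNING id INTO NEW.id;
    -- 2. register the reserved ID in the parent table (type 2 = 'band')
    INSERT INTO performer ( id, performer_type_id ) VALUES (NEW.id, 2);
    -- 3. the modified NEW row, carrying the reserved ID, is what gets inserted into band
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER band_before_insert
    BEFORE INSERT ON band
    FOR EACH ROW EXECUTE PROCEDURE band_before_insert();
With something like this in place, INSERT INTO band ( name ) VALUES ('New Band'); would reserve the ID and create the performer row automatically.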
Related
I want to create the following tables (simplified to just the keys for this example):
CREATE TABLE a (
TestVer VARCHAR(50) PRIMARY KEY,
TestID INT NOT NULL
);
CREATE TABLE b (
RunID SERIAL PRIMARY KEY,
TestID INT NOT NULL
);
Where TestID is not unique, but I want table b's TestID to only contain values from table a's TestID.
I'm fairly certain I can't make it a foreign key, as the target of a foreign key has to be either a key or unique, and found that supported by this post.
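For example, trying to declare the constraint directly would be rejected, because a.TestID is neither a primary key nor covered by a unique constraint (the constraint name below is just illustrative):
ALTER TABLE b
    ADD CONSTRAINT b_testid_fkey FOREIGN KEY (TestID) REFERENCES a (TestID);
-- rejected: the referenced column is not unique in a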
It appears possible with triggers, according to this post, where mine on insert would look something like:
CREATE TRIGGER id_constraint FOR b
BEFORE INSERT
POSITION 0
AS BEGIN
IF (NOT EXISTS(
SELECT TestID
FROM a
WHERE TestID = NEW.TestID)) THEN
EXCEPTION my_exception 'There is no Test with id=' ||
NEW.TestID;
END
But I would rather not use a trigger. What are other ways to do this if any?
A trigger is the only way to continuously maintain such a constraint; however, you can delete all unwanted rows as part of a query that uses table b:
with clean_b as (
delete from b
where not exists (select from a where a.TestID = b.TestID)
)
select *
from b
where ...
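If you do end up going the trigger route, a PostgreSQL equivalent of the trigger sketched in the question might look roughly like this (function and trigger names are placeholders):
CREATE OR REPLACE FUNCTION check_testid() RETURNS trigger AS $$
BEGIN
    -- reject rows whose TestID has no counterpart in table a
    IF NOT EXISTS (SELECT 1 FROM a WHERE a.TestID = NEW.TestID) THEN
        RAISE EXCEPTION 'There is no Test with id=%', NEW.TestID;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER id_constraint
    BEFORE INSERT OR UPDATE ON b
    FOR EACH ROW EXECUTE PROCEDURE check_testid();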
When inserting rows with explicit primary key values, it seems this does not affect the primary key sequence, and then, when trying to insert without a PK, it fails:
create table testtable(
id serial primary key,
data integer not null
);
Insert with PK (for example on data migration):
insert into testtable ( id, data ) values ( 1,2 ), ( 2,2 ), ( 3,2 ), ( 4,2 );
INSERT 0 4
Inserting new data, without PK:
insert into testtable ( data ) values ( 4 ), ( 5 ), ( 6 ), ( 7 );
ERROR: duplicate key value violates unique constraint "testtable_pkey"
DETAIL: Key (id)=(1) already exists.
Why is the sequence not set to the max value after the first INSERT? Should I adjust sequences manually after inserts with explicit PKs? Is there a way to keep the sequence automatically on the right track?
The reason for this behavior is that the sequence is accessed in the DEFAULT value of the column, and the default value is not used when the column is inserted explicitly.
The only way to achieve what you want that I can imagine is to have a trigger that modifies the sequence after an insert, but I think that would be a slow and horrible solution.
The best way to proceed would be to adjust the sequence once after you are done with the migration.
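For the example table above, that one-time adjustment could look like this (pg_get_serial_sequence looks up the sequence that backs the serial column):
SELECT setval(pg_get_serial_sequence('testtable', 'id'),
              (SELECT max(id) FROM testtable));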
I want to create a temp table using SELECT INTO syntax, like:
select top 0 * into #AffectedRecord from MyTable
MyTable has a primary key. When I insert records using MERGE INTO syntax, the primary key becomes a problem. How can I drop the PK constraint from the temp table?
The "SELECT TOP (0) INTO.." trick is clever but my recommendation is to script out the table yourself for reasons just like this. SELECT INTO when you're actually bringing in data, on the other hand, is often faster than creating the table and doing the insert. Especially on 2014+ systems.
The existence of a primary key has nothing to do with your problem. Key constraints and indexes don't get created when using SELECT INTO from another table; the data types and nullability do. Consider the following code and note my comments:
USE tempdb -- a good place for testing on non-prod servers.
GO
IF OBJECT_ID('dbo.t1') IS NOT NULL DROP TABLE dbo.t1;
IF OBJECT_ID('dbo.t2') IS NOT NULL DROP TABLE dbo.t2;
GO
CREATE TABLE dbo.t1
(
id int identity primary key clustered,
col1 varchar(10) NOT NULL,
col2 int NULL
);
GO
INSERT dbo.t1(col1) VALUES ('a'),('b');
SELECT TOP (0)
id, -- this creates the column, including the identity, but NOT the primary key
CAST(id AS int) AS id2, -- this creates the column, but it will be nullable. No identity
ISNULL(CAST(id AS int),0) AS id3, -- this creates the column and makes it NOT NULL. No identity.
col1,
col2
INTO dbo.t2
FROM t1;
Here's the (cleaned up for brevity) DDL for the new table I created:
-- New table
CREATE TABLE dbo.t2
(
id int IDENTITY(1,1) NOT NULL,
id2 int NULL,
id3 int NOT NULL,
col1 varchar(10) NOT NULL,
col2 int NULL
);
Notice that the primary key is gone. When I brought in id as-is it kept the identity. Casting the id column as an int (even though it already is an int) is how I got rid of the identity property. Wrapping the column in ISNULL is how to make it NOT NULL.
By default, identity insert is set to off here, so this query will fail:
INSERT dbo.t2 (id, id3, col1) VALUES (1, 1, 'x');
Msg 544, Level 16, State 1, Line 39
Cannot insert explicit value for identity column in table 't2' when IDENTITY_INSERT is set to OFF.
Setting identity insert on will fix the problem:
SET IDENTITY_INSERT dbo.t2 ON;
INSERT dbo.t2 (id, id3, col1) VALUES (1, 1, 'x');
But now you MUST provide a value for that column. Note the error here:
INSERT dbo.t2 (id3, col1) VALUES (1, 'x');
Msg 545, Level 16, State 1, Line 51
Explicit value must be specified for identity column in table 't2' either when IDENTITY_INSERT is set to ON
Hopefully this helps.
On a side note: this is a good way to play around with and understand how SELECT INTO works. I used a permanent table because it's easier to find than a temp table.
Well, I have two tables:
CREATE TABLE Temp(
TEMP_ID int IDENTITY(1,1) NOT NULL, ... )
CREATE TABLE TEMP1(
TEMP1_ID int IDENTITY(1,1) NOT NULL,
TEMP_ID int, ... )
They are linked by the TEMP_ID foreign key.
In a stored procedure I need to create tons of
Temp and Temp1 rows and update them, so I created a table variable (#TEMP) that I work with, and finally I make one big INSERT into Temp. My question is: how can I fill #Temp with the correct TEMP_IDs safely, when multiple sessions may be inserting?
You can use SCOPE_IDENTITY() to find the last inserted row, and you can use the OUTPUT clause to find all newly inserted (or updated) rows:
create table #t1
(
id int primary key identity,
val int
)
Insert into #t1 (val)
output inserted.id, inserted.val
values (10), (20), (30)
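To keep those generated keys around for later statements (for example, to fill your #Temp), OUTPUT ... INTO a table variable works; the @new variable below is just illustrative:
declare @new table (id int, val int);

insert into #t1 (val)
output inserted.id, inserted.val into @new (id, val)
values (40), (50);

select * from @new;  -- the identity values generated by this insert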
I'm using PostgreSQL 9.3.5. I have a many-to-many relationship between entities "Foo" and "Bar" that I've modeled as something like:
CREATE TABLE Foo
(
id SERIAL PRIMARY KEY NOT NULL,
.... various columns for foo ....
);
CREATE TABLE Bar
(
id SERIAL PRIMARY KEY NOT NULL,
field1 varchar(50) UNIQUE NOT NULL,
.... various columns for bar ....
);
CREATE TABLE FooBar
(
fooID int NOT NULL,
barID int NOT NULL,
PRIMARY KEY (fooID, barID),
FOREIGN KEY (fooID) REFERENCES Foo(id),
FOREIGN KEY (barID) REFERENCES Bar(id)
);
Now what I want to do is insert a record into Foo, insert a corresponding record into Bar, and then insert the matching FooBar record containing the ids of the foo & bar entries.
The catch: I don't know, when I go to insert the Bar records, whether they already exist, so currently my insert for Bar looks something like:
INSERT INTO Bar(field1, .... other fields for Bar....)
SELECT 'value1', .... other values for the insert....
WHERE NOT EXISTS (
SELECT 1 FROM Bar WHERE field1 = 'value1')
Which works fine, but my question: how do I get the id of the newly inserted (or existing) Bar record so that I can insert it into the FooBar table?
This seems to work, although it is far from elegant and is probably very inefficient:
WITH new AS (
INSERT INTO bar(field1)
SELECT ('aaa') WHERE NOT EXISTS (
SELECT 1 FROM bar WHERE field1='aaa'
) RETURNING id
),
existing AS (
SELECT id FROM bar WHERE field1='aaa'
)
SELECT id FROM existing UNION SELECT id FROM new
I imagine this would be inefficient due to repeated searches in bar for the matching value. A more efficient solution might be to write a stored procedure.
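Such a function could look roughly like the following sketch (the name get_or_create_bar is made up; note that under heavy concurrency the INSERT can still race against another session, so you may want a retry loop around it):
CREATE OR REPLACE FUNCTION get_or_create_bar(p_field1 varchar) RETURNS int AS $$
DECLARE
    v_id int;
BEGIN
    -- reuse an existing row if there is one
    SELECT id INTO v_id FROM Bar WHERE field1 = p_field1;
    IF v_id IS NULL THEN
        -- otherwise insert and capture the generated id
        INSERT INTO Bar (field1) VALUES (p_field1) RETURNING id INTO v_id;
    END IF;
    RETURN v_id;
END;
$$ LANGUAGE plpgsql;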
Try This
INSERT INTO Bar(field1, field2,etc...) values(value1, value2,etc...) RETURNING id;
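If you want to chain the result straight into FooBar, the same RETURNING output can be consumed by a data-modifying CTE; a sketch, where fooID 1 and 'value1' are placeholders:
WITH new_bar AS (
    INSERT INTO Bar (field1) VALUES ('value1') RETURNING id
)
INSERT INTO FooBar (fooID, barID)
SELECT 1, id FROM new_bar;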