I'm trying to create a database that tracks electrical cables. Each cable contains 1 or more cores that are connected to terminals at each end. The number of cores in each cable is defined in a table.
cable_id | number_of_cores
---------+----------------
       1 |               2
The core table is as follows:
cable_no | from_id | core_mark | to_id
---------+---------+-----------+------
    1001 |       1 | 1 Black   |     2
    1001 |       2 | 1 White   |     4
I want to create a check that will prevent another 1001 cable core from being inserted.
Is this possible in postgresql?
Ideally, if I tried to insert another core for cable 1001 (with another unique core mark), the error would be something like "all cores used on cable 1001".
Thanks,
I think what you need is something like a check constraint. (https://www.postgresql.org/docs/current/ddl-constraints.html)
Follow these steps:
1. Create the tables properly
create table cable (cable_id int primary key, number_of_cores int);
create table core (
    core_id int primary key,
    cable_id int references cable (cable_id),
    from_id int,
    core_mark varchar(50),
    to_id int
);
2. Create the function that will verify the inserts
create or replace function test_max_core_number(in_cable_id int)
returns boolean
language plpgsql
as $function$
declare
    res boolean := false;
begin
    if exists (
        select *
        from cable
        where cable_id = in_cable_id
          and number_of_cores > (select count(*) from core where cable_id = in_cable_id)
    )
    then
        res := true;
    end if;
    return res;
end;
$function$;
3. Add the constraint to your table
alter table core
add constraint cstr_check check (test_max_core_number(cable_id));
4. Now it is time for some testing :)
insert into cable (cable_id, number_of_cores) values (1, 2), (2, 3);
insert into core (core_id, cable_id, from_id, core_mark, to_id)
values
(1, 1, 1, '1 Black', 2)
,(2, 1, 2, '1 White', 4);
Everything should go fine so far.
5. And now the error we wanted!
insert into core (core_id, cable_id, from_id, core_mark, to_id)
values
(3, 1, 3, '1 Green', 2);
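If you want the exact error message from the question, one option (a sketch of the same function, raising the error itself instead of returning false) is:
create or replace function test_max_core_number(in_cable_id int)
returns boolean
language plpgsql
as $function$
begin
    if not exists (
        select 1
        from cable
        where cable_id = in_cable_id
          and number_of_cores > (select count(*) from core where cable_id = in_cable_id)
    )
    then
        raise exception 'all cores used on cable %', in_cable_id;
    end if;
    return true;
end;
$function$;
With that in place, the failing insert above reports "all cores used on cable 1" instead of a generic check-constraint violation.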
Hope this helps!
I think @Jaisus gave a good answer.
I would only add a cross-check on the cable table to prevent bad values from being set in number_of_cores:
create or replace function test_cable_number_of_cores(in_cable_id int, in_number_of_cores int)
returns boolean
language plpgsql
as $function$
declare
    res boolean := false;
begin
    res := (in_number_of_cores > 0
        and (select count(cable_id) from core where cable_id = in_cable_id) <= in_number_of_cores);
    return res;
end;
$function$;
alter table cable add check(test_cable_number_of_cores(cable_id, number_of_cores));
-- ok
insert into cable(cable_id, number_of_cores) values (3, 2);
update cable set number_of_cores=3 where cable_id=3;
-- error
update cable set number_of_cores=1 where cable_id=1;
I have got a composite primary key in a table in PostgreSQL (I am using pgAdmin4)
Let's call the two primary key columns productno and version.
version represents the version of productno.
So when I create a new dataset, it needs to be checked whether a dataset with this productno already exists.
If productno doesn't exist yet, then version should be (version) 1
If productno exists once, then version should be 2
If productno exists twice, then version should be 3
... and so on
So that we get something like:
productno | version
----------+--------
        1 |       1
        1 |       2
        1 |       3
        2 |       1
        2 |       2
I found a quite similar problem: auto increment on composite primary key
But I can't use that solution because the PostgreSQL syntax is a bit different. I tried a lot with functions and triggers but couldn't figure out the right way to do it.
You can keep the version numbers in a separate table (one for each "base PK" value). That is way more efficient than doing a max() + 1 on every insert and has the additional benefit that it's safe for concurrent transactions.
So first we need a table that keeps track of the version numbers:
create table version_counter
(
product_no integer primary key,
version_nr integer not null
);
Then we create a function that increments the version for a given product_no and returns that new version number:
create function next_version(p_product_no int)
returns integer
as
$$
insert into version_counter (product_no, version_nr)
values (p_product_no, 1)
on conflict (product_no)
do update
set version_nr = version_counter.version_nr + 1
returning version_nr;
$$
language sql
volatile;
The trick here is the insert ... on conflict, which increments an existing value or inserts a new row if the passed product_no does not yet exist.
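You can exercise the counter on its own before wiring up the trigger (a quick check, assuming version_counter starts out empty):
select next_version(42); -- returns 1: a new counter row is created for product_no 42
select next_version(42); -- returns 2: the existing row is incremented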
For the product table:
create table product
(
product_no integer not null,
version_nr integer not null,
created_at timestamp default clock_timestamp(),
primary key (product_no, version_nr)
);
then create a trigger:
create function increment_version()
returns trigger
as
$$
begin
new.version_nr := next_version(new.product_no);
return new;
end;
$$
language plpgsql;
create trigger base_table_insert_trigger
before insert on product
for each row
execute procedure increment_version();
This is safe for concurrent transactions because the row in version_counter will be locked for that product_no until the transaction inserting the row into the product table is committed, which commits the change to the version_counter table as well (and frees the lock on that row).
If two concurrent transactions insert the same value for product_no, one of them will wait until the other finishes.
If two concurrent transactions insert different values for product_no, they can work without having to wait for the other.
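You can observe the locking yourself with two separate psql sessions (a sketch; the session labels are just comments):
-- session 1
begin;
insert into product (product_no) values (1); -- locks the version_counter row for product_no 1

-- session 2 (separate connection)
begin;
insert into product (product_no) values (2); -- different product_no: proceeds immediately
insert into product (product_no) values (1); -- same product_no: blocks here...

-- session 1
commit; -- ...until this commit; session 2 then gets the next version_nr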
If we then insert these rows:
insert into product (product_no) values (1);
insert into product (product_no) values (2);
insert into product (product_no) values (3);
insert into product (product_no) values (1);
insert into product (product_no) values (3);
insert into product (product_no) values (2);
The product table looks like this:
select *
from product
order by product_no, version_nr;
product_no | version_nr | created_at
-----------+------------+------------------------
1 | 1 | 2019-08-23 10:50:57.880
1 | 2 | 2019-08-23 10:50:57.947
2 | 1 | 2019-08-23 10:50:57.899
2 | 2 | 2019-08-23 10:50:57.989
3 | 1 | 2019-08-23 10:50:57.926
3 | 2 | 2019-08-23 10:50:57.966
Online example: https://rextester.com/CULK95702
You can do it like this:
-- Check the current highest version for this pk ("mytable" is a placeholder, since "table" is a reserved word)
SELECT MAX(versionpk) INTO temp_version FROM mytable a WHERE a.pk = v_pk1;
-- If the pk exists, insert the next version
IF temp_version IS NOT NULL THEN
    INSERT INTO mytable (pk, versionpk) VALUES (v_pk1, temp_version + 1);
END IF;
So, I got it to work now.
If you want a column to update depending on another column in PostgreSQL, have a look at this:
This is the function I use:
CREATE FUNCTION public.testfunction()
    RETURNS trigger
    LANGUAGE plpgsql
    COST 100
    VOLATILE NOT LEAKPROOF
AS $BODY$
DECLARE
    v_productno INTEGER := NEW.productno;
BEGIN
    IF NOT EXISTS (SELECT *
                   FROM testtable
                   WHERE productno = v_productno)
    THEN
        NEW.version := 1;
    ELSE
        NEW.version := (SELECT MAX(testtable.version) + 1
                        FROM testtable
                        WHERE testtable.productno = v_productno);
    END IF;
    RETURN NEW;
END;
$BODY$;
And this is the trigger that runs the function:
CREATE TRIGGER testtrigger
BEFORE INSERT
ON public.testtable
FOR EACH ROW
EXECUTE PROCEDURE public.testfunction();
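A quick test of the trigger (a sketch, assuming testtable has just the productno and version columns of the composite key):
INSERT INTO testtable (productno) VALUES (1); -- version is set to 1
INSERT INTO testtable (productno) VALUES (1); -- version is set to 2
INSERT INTO testtable (productno) VALUES (2); -- version is set to 1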
Thank you @ChechoCZ, you definitely helped me get going in the right direction.
Given two tables, A and B:
A          B
-----      -----
id         id
high       high
low        low
bId
I want to find rows in table A where bId is null, create an entry in B based off the data in A, and update the row in A to reference the newly created row. I can create the rows but I'm having trouble updating table A with the reference to the new row:
begin transaction;
with rows as (
insert into B (high, low)
select high, low
from A a
where a.bId is null
returning id as bId, a.id as aId
)
update A
set bId=(select bId from rows where id=rows.aId)
where id=rows.aId;
--commit;
rollback;
However, this fails with a cryptic error: ERROR: missing FROM-clause entry for table a.
Using a Postgres query, how can I achieve this?
Either
update "A"
set "bId" = (select "bId" from rows where rows."aId" = "A".id);
without the where clause, or
update "A"
set "bId" = rows."bId"
from rows
where "A".id = rows."aId";
I don't know if your tables really have those names; as mentioned in the comments, try to avoid uppercase table and field names, and try to avoid reserved keywords.
I found a way to get it to work but I feel like it's not the most efficient.
begin transaction;
do $body$
declare
    newId int4;
    tempB record;
begin
    create temp table TempAB (
        High float8,
        Low float8,
        AId int4
    );

    insert into TempAB (High, Low, AId)
    select high, low, id
    from A
    where bId is null;

    for tempB in (select * from TempAB)
    loop
        insert into B (high, low)
        values (tempB.high, tempB.low)
        returning id into newId;

        update A
        set bId = newId
        where id = tempB.AId;
    end loop;
end $body$;
rollback;
--commit;
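A set-based alternative avoids the loop and the temp table (a sketch; it assumes B.id is populated from a sequence, called b_id_seq here). Reserve the new ids up front in a CTE so the final update can match each A row to its new B row:
begin transaction;
with src as (
    -- pick the A rows that need a B row and reserve an id for each new B row
    select A.id as aId, A.high, A.low, nextval('b_id_seq') as newBId
    from A
    where A.bId is null
), ins as (
    -- create the B rows using the reserved ids
    insert into B (id, high, low)
    select newBId, high, low
    from src
)
-- point each A row at its freshly created B row
update A
set bId = src.newBId
from src
where A.id = src.aId;
rollback;
--commit;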
I have a table that records which users have joined which event (as in, a real-life event).
I have set up a server query that lets a user join an event.
It goes like this:
INSERT INTO participations
VALUES(:usr,:event_id)
I want that statement to also return the number of people who have joined the same event as the user. How do I proceed, ideally in one SQL statement?
Thanks
You can use a common table expression like this to execute it as one query.
with insert_tbl_statement as (
    insert into tbl values (4, 1) returning event_id
)
select count(*) + 1 as event_count
from tbl
where event_id = (select event_id from insert_tbl_statement);
see demo http://rextester.com/BUF16406
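Adapted to the participations table from the question (a sketch; the column names usr and event_id are assumed):
with ins as (
    insert into participations (usr, event_id)
    values (:usr, :event_id)
    returning event_id
)
select count(*) + 1 as event_count
from participations
where event_id = (select event_id from ins);
The + 1 accounts for the row inserted in the CTE, which is not yet visible to the outer select.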
You can use a function. I've set up an example below, but keep in mind you must add 1 to the final count because the row inserted inside the function is not yet visible to the count.
create table tbl(id int, event_id int);
insert into tbl values (1, 2),(2, 2),(3, 3);
create function new_tbl(id int, event_id int)
returns bigint as $$
insert into tbl values ($1, $2);
select count(*) + 1 from tbl where event_id = $2;
$$ language sql;
select new_tbl(4, 2);
 new_tbl
---------
       4
db<>fiddle here
In SQL Server and Oracle, joining to (or selecting from) stored procedures that return a resultset is not supported.
Is it possible in PostgreSQL?
If so, a follow-up question: is it possible, perhaps via foreign data wrappers, to join to a stored procedure within an MS SQL Server database?
PostgreSQL can join to a set-returning function. Some tables and rows to get us started:
create schema test;
create table test.test_a (
id integer primary key
);
insert into test.test_a values
(1), (2);
create table test.test_b (
id integer references test.test_a,
n integer not null,
primary key (id, n)
);
insert into test.test_b values
(1, 1), (1, 2), (1, 3), (2, 1), (2, 2);
A simple function.
create or replace function test.return_test_b ()
returns table (
id integer,
n integer
) as
$$
select * from test.test_b
$$ language sql;
Now you can join directly.
select A.id, B.n from test.test_a A
inner join test.return_test_b() B on A.id = B.id;
id | n
---+---
 1 | 1
 1 | 2
 1 | 3
 2 | 1
 2 | 2
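If the function takes a parameter that comes from the joined table, a lateral join does the trick. A sketch, using a hypothetical parameterized variant of the function above:
create or replace function test.return_test_b_for (p_id integer)
returns table (
  id integer,
  n integer
) as
$$
  select * from test.test_b where test_b.id = p_id
$$ language sql;

-- the function receives A.id for each outer row
select A.id, B.n from test.test_a A
cross join lateral test.return_test_b_for(A.id) B;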
Hello, what is the easiest way to duplicate a DB record in the same table?
My problem is that the table where I am doing this has many columns, 100+, and I don't like how the solution looks. Here is what I do (this is inside a plpgsql function):
...
1. Duplicate the record:
INSERT INTO history
SELECT NEXTVAL('history_id_seq'), col_1, col_2, ... , col_100
FROM history
WHERE history_id = 1234
ORDER BY datetime DESC
LIMIT 1
RETURNING history_id INTO new_history_id;
2. Update some columns:
UPDATE history
SET
col_5 = 'test_5',
col_23 = 'test_23',
datetime = CURRENT_TIMESTAMP
WHERE history_id = new_history_id;
Here are the problems I am attempting to solve:
1. Listing all these 100+ columns looks lame.
2. When a new column is added, the function has to be updated too.
3. On separate DB instances the column order might differ, which would cause the function to fail.
I am not sure if I can list the columns once more (solving issue 3), as in insert into <table> (<columns_list>) values (<query>), but then the query looks even uglier.
I would like to achieve something like a plain insert into ... select * from ..., but this seems impossible: the unique primary key constraint will raise a duplication error.
Any suggestions?
Thanks in advance for your time.
This isn't pretty or particularly optimized, but there are a couple of ways to go about this. Ideally you might want to do this all in an UPDATE trigger, though you could implement a duplication function something like this:
-- create source table
CREATE TABLE history (history_id serial not null primary key, col_2 int, col_3 int, col_4 int, datetime timestamptz default now());
-- add some data
INSERT INTO history (col_2, col_3, col_4)
SELECT g, g * 10, g * 100 FROM generate_series(1, 100) AS g;
-- function to duplicate record
CREATE OR REPLACE FUNCTION fn_history_duplicate(p_history_id integer) RETURNS SETOF history AS
$BODY$
DECLARE
cols text;
insert_statement text;
BEGIN
-- build list of columns
SELECT array_to_string(array_agg(column_name::name), ',') INTO cols
FROM information_schema.columns
WHERE (table_schema, table_name) = ('public', 'history')
AND column_name <> 'history_id';
-- build insert statement
insert_statement := 'INSERT INTO history (' || cols || ') SELECT ' || cols || ' FROM history WHERE history_id = $1 RETURNING *';
-- execute statement
RETURN QUERY EXECUTE insert_statement USING p_history_id;
RETURN;
END;
$BODY$
LANGUAGE 'plpgsql';
-- test
SELECT * FROM fn_history_duplicate(1);
history_id | col_2 | col_3 | col_4 | datetime
------------+-------+-------+-------+-------------------------------
101 | 1 | 10 | 100 | 2013-04-15 14:56:11.131507+00
(1 row)
As I noted in my original comment, you might also take a look at the colnames extension as an alternative to querying the information schema.
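If you also need the question's step 2, call the function from plpgsql and update the returned row (a sketch; col_2 = 999 stands in for whatever columns you want to overwrite, and new_history_id is a declared integer variable):
SELECT history_id INTO new_history_id FROM fn_history_duplicate(1);

UPDATE history
SET col_2 = 999, datetime = CURRENT_TIMESTAMP
WHERE history_id = new_history_id;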
You don't need the update at all; you can supply the constant values directly in the SELECT statement:
INSERT INTO history
SELECT NEXTVAL('history_id_seq'),
col_1,
col_2,
col_3,
col_4,
'test_5',
...
'test_23',
...,
col_100
FROM history
WHERE history_id = 1234
ORDER BY datetime DESC
LIMIT 1
RETURNING history_id INTO new_history_id;