PostgreSQL: remove values from a foreign key that has a cyclic reference and is also referenced in a primary table - postgresql

There are 2 tables:
The first one is the father table:
create table win_folder_principal(
    id_folder_principal serial primary key not null,
    folder_name varchar(300) not null
);
and the second is the table that holds the cyclic reference:
create table win_folder_dependency(
    id_folder_dependency serial primary key not null,
    id_folder_father int not null,
    id_folder_son int not null,
    foreign key(id_folder_father) references win_folder_principal(id_folder_principal),
    foreign key(id_folder_son) references win_folder_principal(id_folder_principal)
);
However, I found a very interesting situation: if I want to remove a value from the father table that has a child, and that child has more children, is there any way to remove the values from the last to the first, and also have those values removed from the father table?
**WIN_FOLDER_PRINCIPAL**
| Id | Folder_Name |
|----|-------------|
| 23 | new2        |
| 24 | new3        |
| 13 | new0        |
| 22 | new1        |
| 12 | nFol        |
And these are the values stored in win_folder_dependency:
**WIN_FOLDER_DEPENDENCY**
| Id_Father | Id_Son |
|-----------|--------|
| 12        | 13     |
| 13        | 22     |
| 22        | 23     |
| 23        | 24     |
And this is the query I use to inspect the values in the dependency and principal tables:
SELECT m2.id_folder_principal AS "Principal",
       m.folder_name AS "Dependency",
       m2.id_folder_principal AS id_principal,
       m.id_folder_principal AS id_dependency
FROM win_folder_dependency md
JOIN win_folder_principal m ON m.id_folder_principal = md.id_folder_son
JOIN win_folder_principal m2 ON m2.id_folder_principal = md.id_folder_father
If I want to remove the folder with id_principal 13, I need to remove the other relations that exist in the win_folder_dependency table, but also remove the value from win_folder_principal.
Is there any way to achieve that cyclic delete?

This anonymous code block accumulates all the principals rooted at ID 13, searching down the dependency tree, into an array variable named l_principals. It then deletes all the dependency records where either the father or the son (or both) is contained in l_principals, and finally deletes all the principal records identified in l_principals:
DO $$
DECLARE
  l_principals int[];
BEGIN
  with recursive t1(root, child, principals) as (
    select id_folder_father
         , id_folder_son
         , array[id_folder_father, id_folder_son]
    from win_folder_dependency
    where id_folder_father = 13
    union all
    select root
         , id_folder_son
         , principals || id_folder_son
    from win_folder_dependency
    join t1
      on id_folder_father = child
     and not id_folder_son = any(principals) -- Avoid cycles
  )
  select max(principals) into l_principals from t1 group by root;

  delete from win_folder_dependency
  where id_folder_father = any(l_principals)
     or id_folder_son = any(l_principals);

  delete from win_folder_principal
  where id_folder_principal = any(l_principals);
end$$;
With your provided sample data, the end result will be only one record remaining in win_folder_principal and no records in win_folder_dependency.
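For instance, a quick check after running the block against the sample data:
select * from win_folder_principal;   -- only (12, 'nFol') should remain
select * from win_folder_dependency;  -- should return no rows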

If you want to delete a record from win_folder_principal, you must first remove the references to it in win_folder_dependency, like so:
delete from win_folder_dependency where 13 in (id_folder_father, id_folder_son);
before you delete the record from win_folder_principal like so:
delete from win_folder_principal where id_folder_principal = 13;
Alternatively, if you build your second table like this:
create table win_folder_dependency(
    id_folder_dependency serial primary key not null,
    id_folder_father int not null,
    id_folder_son int not null,
    foreign key(id_folder_father) references win_folder_principal(id_folder_principal) on delete cascade,
    foreign key(id_folder_son) references win_folder_principal(id_folder_principal) on delete cascade
);
Note the ON DELETE CASCADE clauses: with those in place you can simply delete from the principal table, and the referencing rows in the dependency table will be deleted as well.
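For example, with the sample data above, a single statement is then enough; the dependency rows (12, 13) and (13, 22) that reference folder 13 are removed automatically:
delete from win_folder_principal where id_folder_principal = 13;
Keep in mind that the cascade only removes the dependency rows that reference the deleted folder; the descendant folders (22, 23, 24) stay in win_folder_principal, so a whole-subtree delete still needs something like the recursive block from the first answer.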

Related

How can I ensure that a join table references two tables with composite FKs, one of the two columns being common to both tables?

I have 3 tables: employee, event, and, since the relationship is N-N, the third table employee_event.
The trick is, they can only be related N-N within the same group.
employee
+---------+--------------+
| id | group |
+---------+--------------+
| 1 | A |
| 2 | B |
+---------+--------------+
event
+---------+--------------+
| id | group |
+---------+--------------+
| 43 | A |
| 44 | B |
+---------+--------------+
employee_event
+-------------+--------------+
| employee_id | event_id     |
+-------------+--------------+
| 1           | 43           |
| 2           | 44           |
+-------------+--------------+
So the combination employee_id=1, event_id=44 should not be possible, because an employee from group A cannot attend an event from group B. How can I secure my DB against this?
My first idea is to add a column employee_event.group so that I can make my two composite FKs with employee_id + group and event_id + group, referencing the employee and event tables respectively. But is there a way to avoid adding a column to the join table for the sole purpose of the FKs?
Thanks!
You may create a function and use it in a check constraint on table employee_event (note that group is a reserved word, so the column has to be quoted):
create or replace function groups_match (employee_id integer, event_id integer)
returns boolean language sql as
$$
select
  (select "group" from employee where id = employee_id) =
  (select "group" from event where id = event_id);
$$;
and then add a check constraint on table employee_event:
ALTER TABLE employee_event
  ADD CONSTRAINT groups_match_check
  CHECK (groups_match(employee_id, event_id));
Still, bear in mind that rows in employee_event that used to be valid may become invalid, yet remain intact, if certain changes occur in tables employee and event.
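For completeness, here is a rough sketch of the composite-FK alternative mentioned in the question, assuming you are willing to add a group column to the join table. All the DDL below is assumed, since the question only shows data, and "group" is quoted because it is a reserved word:
-- assumed: a text "group" column and no pre-existing rows in employee_event
ALTER TABLE employee ADD CONSTRAINT employee_id_group_key UNIQUE (id, "group");
ALTER TABLE event    ADD CONSTRAINT event_id_group_key    UNIQUE (id, "group");

ALTER TABLE employee_event ADD COLUMN "group" text NOT NULL;

ALTER TABLE employee_event
  ADD CONSTRAINT employee_event_employee_fkey
      FOREIGN KEY (employee_id, "group") REFERENCES employee (id, "group"),
  ADD CONSTRAINT employee_event_event_fkey
      FOREIGN KEY (event_id, "group") REFERENCES event (id, "group");
The extra column makes the group part of both foreign keys, so mismatched combinations are rejected by the FKs themselves, at the cost of carrying group in the join table.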

remove duplicate records in postgres where all records are duplicate

My postgres table model has exact duplicate records; I need to write a query to delete them.
id | model | model_id | dependent_on_model
-----+-------+----------+--------------------
1 | Card | 72 | Metric
1 | Card | 72 | Metric
2 | Card | 79 | Metric
2 | Card | 79 | Metric
3 | Card | 83 | Metric
3 | Card | 83 | Metric
5 | Card | 86 | Metric
Using a CTE is not helping, as I am getting the error
relation "cte" does not exist.
Please suggest a query which deletes the duplicate rows so that I have just 4 distinct records at the end.
My suggestion is to duplicate the table into a TEMPORARY TABLE WITH OIDS. This way you have some other id to distinguish two otherwise identical rows.
Idea:
1. Duplicate the data, with an extra OID per row, into a temporary table.
2. Remove the duplicates in the temporary table.
3. Delete all rows from the actual table.
4. Copy the data back into the actual table from the temporary table.
5. Drop the TEMPORARY TABLE.
You'll have to perform some destructive actions on your actual table, so make sure the TEMPORARY TABLE holds what you want to keep before deleting anything from the actual table.
This is how you would create the TEMPORARY TABLE:
CREATE TEMPORARY TABLE dups_with_oids
( id integer
, model text
, model_id integer
, dependent_on_model text
) WITH OIDS;
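The temporary table then needs to be populated from the original table (step 1 above). Assuming the original table is simply named model, which the question does not state explicitly:
INSERT INTO dups_with_oids (id, model, model_id, dependent_on_model)
SELECT id, model, model_id, dependent_on_model
FROM model;   -- each copied row gets its own OID, which is what distinguishes the duplicates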
Here is the DELETE query:
WITH temp AS
(
SELECT d.id AS keep
, d.oid AS keep_oid
, d2.id AS del
, d2.oid AS del_oid
FROM dups_with_oids d
JOIN dups_with_oids d2 ON (d.id = d2.id AND d.oid < d2.oid)
)
DELETE FROM dups_with_oids d
WHERE d.oid IN (SELECT temp.del_oid FROM temp);
SQLFiddle to prove the theory.
I should add that if id were a PRIMARY KEY or UNIQUE these duplicates wouldn't have been possible.
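Finally, here is a sketch of the remaining steps 3-5 of the plan above, again assuming the original table is named model:
BEGIN;
DELETE FROM model;                            -- step 3: remove all rows from the actual table
INSERT INTO model (id, model, model_id, dependent_on_model)
SELECT id, model, model_id, dependent_on_model
FROM dups_with_oids;                          -- step 4: copy the de-duplicated rows back
DROP TABLE dups_with_oids;                    -- step 5: drop the temporary table
COMMIT;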

SELECT statement for all 3 tables in a many-to-many relationship

I am having trouble writing a SELECT query that includes all 3 tables in a many-to-many relationship. I have the following tables:
Table "public.companies"
Column | Type | Modifiers | Storage | Stats target | Description
----------------+------------------------+--------------------------------------------------------+----------+--------------+-------------
id | integer | not null default nextval('companies_id_seq'::regclass) | plain | |
name | character varying(48) | not null | extended | |
description | character varying(512) | | extended | |
tagline | character varying(64) | | extended | |
featured_image | integer | | plain | |
Indexes:
"companies_pkey" PRIMARY KEY, btree (id)
Referenced by:
TABLE "company_category_associations" CONSTRAINT "company_category_associations_company_id_foreign" FOREIGN KEY (company_id) REFERENCES companies(id) ON DELETE CASCADE
Table "public.company_category_associations"
Column | Type | Modifiers
-------------+---------+----------------------------------------------------------------------------
id | integer | not null default nextval('company_category_associations_id_seq'::regclass)
company_id | integer | not null
category_id | integer | not null
Indexes:
"company_category_associations_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"company_category_associations_category_id_foreign" FOREIGN KEY (category_id) REFERENCES company_categories(id) ON DELETE RESTRICT
"company_category_associations_company_id_foreign" FOREIGN KEY (company_id) REFERENCES companies(id) ON DELETE CASCADE
Table "public.company_categories"
Column | Type | Modifiers
-------------+-----------------------+-----------------------------------------------------------------
id | integer | not null default nextval('company_categories_id_seq'::regclass)
name | character varying(32) | not null
description | character varying(96) |
Indexes:
"company_categories_pkey" PRIMARY KEY, btree (id)
Referenced by:
TABLE "company_category_associations" CONSTRAINT "company_category_associations_category_id_foreign" FOREIGN KEY (category_id) REFERENCES company_categories(id) ON DELETE RESTRICT
My companies table will have around 100k rows and a company can have up to 10 categories associated. Of course I won't be selecting more than 200 companies at a time.
I managed to get the results with the following query:
select
c.id as companyid,
c.name as companyname,
cat.id as categoryid,
cat.name as categoryname
from company_categories cat
left join company_category_associations catassoc on catassoc.category_id = cat.id
left join companies c on catassoc.company_id = c.id where c.id is not null;
This question comes from the fact that I need to present data in JSON format and I would like it to look like this:
{
  "companies": [
    {
      "name": "...",
      "description": "...",
      "categories": [
        {
          "id": 12,
          "name": "Technology"
        },
        {
          "id": 14,
          "name": "Computers"
        }
      ]
    },
    /* ... */
  ]
}
And basically I want to retrieve as much of that data as possible in as few queries as possible.
How can I write that SELECT query to fit my needs?
Is there a problem with the database structure as it is in my diagram?
Thank you!
P.S. I am using PostgreSQL 9.6.6
You can get JSON out of PostgreSQL directly.
First define the composite types (JSON objects) that you want to output. This is to get names for the fields; otherwise they will be named f1, f2, ...
create type cat as (id integer, name varchar);
create type comp as (id integer, name varchar, categories cat[]);
Then select an array of comp values, each with a nested array of cat values, as JSON:
select to_json(array(
  select (
    c.id,
    c.name,
    array(
      select (cc.id, cc.name)::cat
      from company_categories cc
      join company_category_associations cca on (cca.category_id = cc.id and cca.company_id = c.id)
    ))::comp
  from companies c
)) as companies
dbfiddle
You can nest two JSON aggregations to get there.
The first level creates a JSON value for each company with the categories stored as an array:
select to_jsonb(c) || jsonb_build_object('categories', jsonb_agg(cc)) comp_json
from companies c
join company_category_associations cca on cca.company_id = c.id
join company_categories cc on cc.id = cca.category_id
group by c.id;
This returns one row per company. Now we need to aggregate those rows into a single JSON value:
select jsonb_build_object('companies', jsonb_agg(comp_json))
from (
  select to_jsonb(c) || jsonb_build_object('categories', jsonb_agg(cc)) comp_json
  from companies c
  join company_category_associations cca on cca.company_id = c.id
  join company_categories cc on cc.id = cca.category_id
  group by c.id
) t;
Online example: http://rextester.com/TRCR26633
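If some companies have no categories at all, the inner joins above will drop them. A sketch of the same query with LEFT JOINs and an empty array as the fallback (assuming that is the desired behaviour):
select jsonb_build_object('companies', jsonb_agg(comp_json))
from (
  select to_jsonb(c)
         || jsonb_build_object('categories',
              coalesce(jsonb_agg(cc) filter (where cc.id is not null), '[]'::jsonb)) comp_json
  from companies c
  left join company_category_associations cca on cca.company_id = c.id
  left join company_categories cc on cc.id = cca.category_id
  group by c.id
) t;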

Postgres - updates with join gives wrong results

I'm having a hard time understanding what I'm doing wrong.
The result of this query shows the same value for every row instead of each row being updated with the right result.
My DATA
I'm trying to update a table of stats over a set of businesses.
create table business_stats (
  id SERIAL,
  pk integer not null,
  b_total integer,
  PRIMARY KEY(pk)
);
the details of each business are stored here
create table business_details (
  id SERIAL,
  category CHARACTER VARYING,
  feature_a CHARACTER VARYING,
  feature_b CHARACTER VARYING,
  feature_c CHARACTER VARYING
);
and here is a table that associates the pk with the category
create table datasets (
  id SERIAL,
  pk integer not null,
  category CHARACTER VARYING,
  PRIMARY KEY(pk)
);
WHAT I DID (wrong)
UPDATE business_stats
SET b_total = agg.total
FROM business_stats b,
( SELECT d.pk, count(bd.id) total
FROM business_details AS bd
INNER JOIN datasets AS d
ON bd.category = d.category
GROUP BY d.pk
) agg
WHERE b.pk = agg.pk;
The result of this query is
| id | pk | b_total |
+----+----+-----------+
| 1 | 14 | 273611 |
| 2 | 15 | 273611 |
| 3 | 16 | 273611 |
| 4 | 17 | 273611 |
but if I run just the SELECT, the results for each pk are completely different:
| pk | agg.total |
+----+-------------+
| 14 | 273611 |
| 15 | 407802 |
| 16 | 179996 |
| 17 | 815580 |
THE QUESTION
why is this happening?
why is the WHERE clause not working?
Before writing this question I've used as reference these posts: a, b, c
Do the following (I always recommend against joins in UPDATEs):
UPDATE business_stats bs
SET b_total =
  ( SELECT count(bd.id) AS total
    FROM business_details AS bd
    INNER JOIN datasets AS d
            ON bd.category = d.category
    WHERE d.pk = bs.pk
  )
/* optional */
WHERE EXISTS (SELECT *
              FROM business_details AS bd
              INNER JOIN datasets AS d
                      ON bd.category = d.category
              WHERE d.pk = bs.pk);
The issue is your FROM clause. The repeated reference to business_stats means you aren't restricting the join like you expect to. You're joining agg against the second unrelated mention of business_stats rather than the row you want to update.
Something like this is what you are after (warning: not tested):
UPDATE business_stats AS b
SET b_total = agg.total
FROM
(...) agg
WHERE b.pk = agg.pk;
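Spelled out with the aggregating subquery from the question, that becomes (again, untested):
UPDATE business_stats AS b
SET b_total = agg.total
FROM ( SELECT d.pk, count(bd.id) AS total
       FROM business_details AS bd
       INNER JOIN datasets AS d
               ON bd.category = d.category
       GROUP BY d.pk
     ) agg
WHERE b.pk = agg.pk;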

Any way to create referential integrity based on data values?

The following is a simplified illustration.
TABLE : EMPLOYEE (TENANT_ID is a FK)
ID | NAME | TENANT_ID
1 | John | 1
TABLE DEPARTMENT
ID | NAME | TENANT_ID
1 | Physics | 1
2 | Math | 2
TABLE : EMPLOYEE_DEPARTMENTS (Join between employee and department)
ID | EMPLOYEE_ID | DEPARTMENT_ID
1 | 1 | 1
Is there a way to make an insert into EMPLOYEE_DEPARTMENTS fail if the EMPLOYEE value is for TENANT 1 and the DEPARTMENT_ID is from TENANT 2? For example, the following row, where employee_id=1 belongs to tenant=1 and department_id=2 belongs to tenant=2, should be rejected:
ID | EMPLOYEE_ID | DEPARTMENT_ID
2 | 1 | 2
Is there a way to prevent such an insertion at either the app or the DB level? PS: there is no room for using triggers, and I don't want to use triggers.
Without triggers, the only way to do this is to copy the tenant id so it appears in every table, and to use a composite primary key or unique constraint together with composite foreign keys.
e.g. if you had a UNIQUE constraint on EMPLOYEE(TENANT_ID, ID) and on DEPARTMENT(TENANT_ID, ID) you could add a FOREIGN KEY (TENANT_ID, EMPLOYEE_ID) REFERENCES EMPLOYEE (TENANT_ID, ID) and FOREIGN KEY (TENANT_ID, DEPARTMENT_ID) REFERENCES DEPARTMENT (TENANT_ID, ID).
This requires that the join table incorporate the TENANT_ID.
I suggest defining the PRIMARY KEY of EMPLOYEE_DEPARTMENTS as (TENANT_ID, DEPARTMENT_ID, EMPLOYEE_ID) and getting rid of the useless surrogate key ID on the EMPLOYEE_DEPARTMENTS table, unless your toolkit/framework/ORM can't cope without it.
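A sketch of the resulting DDL, with column types assumed from the illustration above (the FK from TENANT_ID to a tenant table is omitted):
-- column types and constraint placement are assumptions; the question only shows sample data
CREATE TABLE employee (
    id        integer NOT NULL PRIMARY KEY,
    name      text    NOT NULL,
    tenant_id integer NOT NULL,
    UNIQUE (tenant_id, id)
);

CREATE TABLE department (
    id        integer NOT NULL PRIMARY KEY,
    name      text    NOT NULL,
    tenant_id integer NOT NULL,
    UNIQUE (tenant_id, id)
);

CREATE TABLE employee_departments (
    tenant_id     integer NOT NULL,
    employee_id   integer NOT NULL,
    department_id integer NOT NULL,
    PRIMARY KEY (tenant_id, department_id, employee_id),
    FOREIGN KEY (tenant_id, employee_id)   REFERENCES employee   (tenant_id, id),
    FOREIGN KEY (tenant_id, department_id) REFERENCES department (tenant_id, id)
);
Any insert into employee_departments whose tenant_id does not match both the employee's and the department's tenant now fails the foreign-key checks.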