Cascading delete on table that references itself - tsql

I have a table (say 'MyTable') that can reference itself, i.e. it has a ParentId column that can point to another record in the same table (so it can store a tree of related nodes).
The problem is that when I try to delete all records that are children of a specific parent, I get the following exception (using EF 6):
The DELETE statement conflicted with the SAME TABLE REFERENCE constraint "FK_dbo.MyTable_dbo.MyTable_ParentId". The conflict occurred in database "foo", table "dbo.MyTable", column 'ParentId'.
The statement has been terminated.
(I'm executing a SQL command like this: context.Database.ExecuteSqlCommand("DELETE FROM [MyTable] WHERE ParentId = {0}", parentId);)
I tried to fix it by adding a Children property and using the fluent API to set up a cascading delete, like this:
modelBuilder.Entity<MyTable>()
    .HasMany(t => t.Children)
    .WithOptional(t => t.Parent)
    .WillCascadeOnDelete(true);
But that gives the following error:
Introducing FOREIGN KEY constraint 'FK_dbo.MyTable_dbo.MyTable_ParentId' on table 'MyTable' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.
Also, when I manually drop the FK and recreate it with ON DELETE CASCADE, I get the same error.
I'm a bit lost now on how to fix this, so any ideas are welcome :)

USE tempdb;
GO
IF OBJECT_ID('tempdb..#Employees') IS NOT NULL
    DROP TABLE #Employees;
GO
CREATE TABLE #Employees
(
    empid   INT         PRIMARY KEY,
    mgrid   INT         NULL REFERENCES #Employees,
    empname VARCHAR(25) NOT NULL
);
CREATE UNIQUE INDEX idx_unc_mgrid_empid ON #Employees(mgrid, empid);
INSERT INTO #Employees(empid, mgrid, empname) VALUES
(1, NULL, 'David'),
(2, 1, 'Eitan'),
(3, 1, 'Ina'),
(4, 2, 'Seraph'),
(5, 2, 'Jiru'),
(6, 2, 'Steve'),
(7, 3, 'Aaron'),
(8, 5, 'Lilach'),
(9, 7, 'Rita'),
(10, 5, 'Sean'),
(11, 7, 'Gabriel'),
(12, 9, 'Emilia'),
(13, 9, 'Michael'),
(14, 9, 'Didi');
GO
-- Employees 12, 13 and 14 report to manager 9 and have no reports of their own,
-- so this DELETE does not conflict with the self-referencing FK:
DELETE FROM #Employees WHERE mgrid = 9;
SELECT * FROM #Employees;
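The plain DELETE above only succeeds because the rows with mgrid = 9 have no reports of their own. When the rows to delete can themselves have children (for example everything under manager 2, where employee 5 has reports 8 and 10), SQL Server refuses ON DELETE CASCADE on a self-referencing FK, so the subtree has to be removed bottom-up. Here is a hedged sketch of one way to do that, assuming the #Employees table above:
-- Collect the subtree under manager 2 with a recursive CTE, remembering each
-- row's depth, then delete the deepest level first so the self-referencing FK
-- is never violated part-way through.
DECLARE @Subtree TABLE (empid INT PRIMARY KEY, lvl INT NOT NULL);

WITH Tree AS
(
    SELECT empid, 1 AS lvl
    FROM #Employees
    WHERE mgrid = 2                    -- direct reports of employee 2
    UNION ALL
    SELECT e.empid, t.lvl + 1
    FROM #Employees AS e
    JOIN Tree AS t ON e.mgrid = t.empid
)
INSERT INTO @Subtree (empid, lvl)
SELECT empid, lvl FROM Tree;

DECLARE @lvl INT = (SELECT MAX(lvl) FROM @Subtree);
WHILE @lvl >= 1
BEGIN
    DELETE e
    FROM #Employees AS e
    JOIN @Subtree AS s ON s.empid = e.empid
    WHERE s.lvl = @lvl;                -- leaves first, parents last
    SET @lvl -= 1;
END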

Related

How to update a table without creating new rows

Imagine I have a set of key-value data from a primary key to values:
id | foo
---+-----
 1 | abc
 2 | def
 3 | ghj
... that need to be updated in a table.
I want to update all of these in one query. Naturally, an upsert comes to mind, and it works quite well:
INSERT INTO my_table (id, foo) VALUES (1, 'abc'), (2, 'def'), (3, 'ghj')
ON CONFLICT (id) DO UPDATE SET foo = excluded.foo;
This works fine, but what if I don't actually want to insert the row with id=3 when it doesn't already exist in the table my_table?
I don't see why you would need an INSERT at all if you just want to UPDATE the rows?
update my_table
set    foo = v.foo
from (
    values (1, 'abc'), (2, 'def'), (3, 'ghj')
) as v(id, foo)
where  v.id = my_table.id;
One thing I have already tried (and it works) is to use a source query that receives all of the source data as a JSON list and then inner-joins it to the existing table, throwing away all the records that don't have an entry in my_table:
[
    {"id": 1, "foo": "abc"},
    {"id": 2, "foo": "def"},
    {"id": 3, "foo": "ghj"}
]
which is passed as the only parameter to this query:
WITH source AS (
    SELECT my_table.id, x.foo
    FROM   jsonb_to_recordset($1::jsonb) AS x(id int, foo text)
    JOIN   my_table ON x.id = my_table.id
)
INSERT INTO my_table (id, foo)
SELECT * FROM source
ON CONFLICT (id) DO UPDATE SET foo = excluded.foo;
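For what it's worth, the UPDATE ... FROM approach from the answer above can also be fed straight from the JSON parameter, which drops the INSERT/ON CONFLICT round-trip entirely; a minimal sketch, assuming the same $1::jsonb parameter:
-- Rows whose id does not exist in my_table simply find no match and are ignored.
UPDATE my_table
SET    foo = x.foo
FROM   jsonb_to_recordset($1::jsonb) AS x(id int, foo text)
WHERE  x.id = my_table.id;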

Create `has_($value)` column for table values

I would like to perform an operation similar to a dynamic pivot table. I use PostgreSQL as the database. The table t has a value column containing the values 10, 20 and 30. I want to create n columns (in this case 3) holding a boolean flag has_($value) that is 1 if the value is present in the respective group, and 0 if not. I tried to understand tablefunc and crosstab, without success.
CREATE TABLE IF NOT EXISTS t (
    id    INTEGER NOT NULL,
    value INT     NOT NULL
);
INSERT INTO t (id, value) VALUES
    (1, 10),
    (1, 20),
    (2, 10),
    (3, 30),
    (3, 20);
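A hedged sketch of one way to get the has_* columns with plain conditional aggregation (no tablefunc/crosstab needed), assuming the fixed set of values 10, 20 and 30 from the question:
-- One row per id; each MAX(CASE ...) flags whether the value occurs in that group.
SELECT id,
       MAX(CASE WHEN value = 10 THEN 1 ELSE 0 END) AS has_10,
       MAX(CASE WHEN value = 20 THEN 1 ELSE 0 END) AS has_20,
       MAX(CASE WHEN value = 30 THEN 1 ELSE 0 END) AS has_30
FROM   t
GROUP  BY id
ORDER  BY id;
A truly dynamic column list (one has_* column per distinct value, not known in advance) still needs dynamic SQL or crosstab, because a query cannot change its own output columns at run time.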

How can I remove rows that are 100% duplicates in a PostgreSQL table without a primary key? [duplicate]

I have a PostgreSQL table with a very large number of columns. The table does not have a primary key and now contains several rows that are 100% duplicates of another row.
How can I remove those duplicates without deleting the original along with them?
I found this answer on a related question, but I'd have to spell out each and every column name, which is error-prone. How can I avoid having to know anything about the table structure?
Example:
Given
create table duplicated (
    id          int,
    name        text,
    description text
);
insert into duplicated
values (1, 'A', null),
(2, 'B', null),
(2, 'B', null),
(3, 'C', null),
(3, 'C', null),
(3, 'C', 'not a DUPE!');
after deletion, the following rows should remain:
(1, 'A', null)
(2, 'B', null)
(3, 'C', null)
(3, 'C', 'not a DUPE!')
As proposed in this answer, use the system column ctid to distinguish the physical copies of otherwise identical rows. To avoid having to spell out a non-existent 'key' for the rows, simply use the row constructor row(table), which returns a row value containing the entire row as returned by select * from table:
DELETE FROM duplicated
USING (
    SELECT MIN(ctid) AS ctid, row(duplicated) AS row
    FROM   duplicated
    GROUP  BY row(duplicated)
    HAVING COUNT(*) > 1
) uniqued
WHERE row(duplicated) = uniqued.row
AND   duplicated.ctid <> uniqued.ctid;
You can try it in this DbFiddle.

Batch INSERT on multiple queries is throwing foreign key violation

I am following this to do a batch INSERT with two queries. The first query inserts into <tableone> and the second inserts into <tabletwo>. The second table has a foreign key constraint that references <tableone>. The following code is how I am handling the batch insert:
batchQuery.push(
    insertTableOne,
    insertTableTwo
);
const query = pgp.helpers.concat(batchQuery);
db.none(query)
insertTableOne looks like
INSERT INTO tableone (id, att2, att3) VALUES
(1, 'a', 'b'), (2, 'c', 'd'), (3, 'e', 'f'), ...
insertTableTwo looks like
INSERT INTO tabletwo (id, tableone_id) VALUES
(10, 1), (20, 2), (30, 3), ...
with a constraint on <tabletwo>:
CONSTRAINT fk_tabletwo_tableone_id
    FOREIGN KEY (tableone_id)
    REFERENCES tableone (id)
Upon db.none(query) I am getting a violates foreign key constraint "fk_tabletwo_tableone_id" error.
Does the above query not execute in sequence? First insert into table one, then insert into table two?
Is this an issue with how the query is being committed? I have also tried using a transaction, as shown by the example in the linked page above.
Any thoughts?
The documentation for the spex.batch() method (which is used by the pgp.helpers.concat() method from your linked example) says this about the values argument:
"Array of mixed values (it can be empty), to be resolved asynchronously, in no particular order."
See http://vitaly-t.github.io/spex/global.html#batch
You probably need to look at another method rather than using batch().
I'd suggest chaining the dependent query with a .then() after the first insert has completed, i.e. something like db.none(insertTableOne).then(() => db.none(insertTableTwo)).

Bulk insert and update in one query sqlite

Is there any way to insert and update bulk data in the same query? I have seen many links but haven't found a solution. I found some code, but it's not working:
INSERT INTO `demo1` (`id`,`uname`,`address`)
VALUES (1, 2, 3),
VALUES (6, 5, 4),
VALUES (7, 8, 9)
ON DUPLICATE KEY UPDATE `id` = VALUES(32), `uname` = VALUES (b),`address` = VALUES(c)
Can anyone help me?
SQLite has the REPLACE statement (which is an alias for INSERT OR REPLACE), but this just deletes the old row if a duplicate key is found.
If you want to keep data from the old row(s), you must use two statements for each row:
db.execute("UPDATE demo1 SET ... WHERE id = 1")
if db.affected_rows == 0:
db.execute("INSERT ...")