Create `has_($value)` column for table values - PostgreSQL

I would like to perform an operation similar to a dynamic pivot table. I am using PostgreSQL as the database. The table t has a column with values 10, 20, and 30. I wish to create n columns (in this case 3) holding a boolean flag has_($value) equal to 1 if the value exists in the respective group, or 0 if not. I tried to understand tablefunc and crosstab without success.
CREATE TABLE IF NOT EXISTS t (
    id INTEGER NOT NULL,
    value INT NOT NULL
);

INSERT INTO t (id, value) VALUES
    (1, 10),
    (1, 20),
    (2, 10),
    (3, 30),
    (3, 20);
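When the set of values is known up front, the usual approach is conditional aggregation, which needs neither tablefunc nor crosstab. A minimal sketch (the has_* column names are illustrative):

SELECT id,
       MAX(CASE WHEN value = 10 THEN 1 ELSE 0 END) AS has_10,
       MAX(CASE WHEN value = 20 THEN 1 ELSE 0 END) AS has_20,
       MAX(CASE WHEN value = 30 THEN 1 ELSE 0 END) AS has_30
FROM t
GROUP BY id
ORDER BY id;

For the sample data this returns (1, 1, 1, 0), (2, 1, 0, 0) and (3, 0, 1, 1). A truly dynamic column list still requires crosstab or dynamically generated SQL, because the result columns of a query must be fixed at parse time.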


How to show chain elements in order

My goal is to create a query that returns item ids according to their position in a chain.
Each element has left and right foreign keys and an index.
Elements can be added to the chain with both append- and prepend-style operations, so the id column alone does not help to reconstruct the current chain order.
This is the db structure:
create table public.chain_data
(
    id integer not null
        constraint chain_data_pkey
            primary key,
    unique_identifiers_id integer not null
        constraint fk_388447e52a0b191e
            references public.unique_identifiers
            on delete cascade,
    chain_data_name varchar(255) not null,
    carriage boolean default false,
    left_id integer
        constraint fk_388447e5e26cce02
            references public.chain_data,
    right_id integer
        constraint fk_388447e554976835
            references public.chain_data
);

alter table public.chain_data
    owner to "universal-counter";

create index idx_388447e52a0b191e
    on public.chain_data (unique_identifiers_id);

create unique index left_right_uniq_idx
    on public.chain_data (right_id, left_id);

create unique index carriage_uniq_index
    on public.chain_data (unique_identifiers_id, carriage)
    where (carriage <> false);
And example data: the chain began with id = 10, and new items (rows) were then prepended at the start of the chain. Each element has left and right dependencies. So the inserts:
INSERT INTO public.chain_data (id, unique_identifiers_id, chain_data_name, carriage, left_id, right_id)
VALUES
(10, 8, 'dddd_2', true, 22, null),
(22, 8, 'shuba', false, 23, 10),
(24, 8, 'viktor', false, null, 23),
(23, 8, 'ivan', false, 24, 22);
Given this, the query should return the ids like this:
24, 23, 22, 10
because the element with id = 24 is at the start of the chain; following the left and right dependencies then yields 23, 22 and 10, and id = 10 is the last element in the chain.
demo:db<>fiddle
You can use a recursive CTE for that:
WITH RECURSIVE chain AS (
    SELECT id, right_id -- 1
    FROM chain_data
    WHERE left_id IS NULL
    UNION
    SELECT cd.id, cd.right_id -- 2
    FROM chain_data cd
    JOIN chain c ON c.right_id = cd.id
)
SELECT
    string_agg(id::text, ', ') -- 3
FROM
    chain;
1. The initial part of the recursion: the record whose left_id is NULL (the head of the chain).
2. The recursion part: join the table to the previous step, using the previous right_id as the current id.
3. Afterwards you can aggregate all fetched records with the string_agg() aggregate to return your string list.
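One caveat: without an ORDER BY inside the aggregate, string_agg() does not formally guarantee the order of its inputs. A sketch that makes the order explicit by carrying a position counter through the recursion:

WITH RECURSIVE chain AS (
    SELECT id, right_id, 1 AS pos           -- head of the chain is position 1
    FROM chain_data
    WHERE left_id IS NULL
    UNION ALL
    SELECT cd.id, cd.right_id, c.pos + 1    -- each step moves one position further
    FROM chain_data cd
    JOIN chain c ON c.right_id = cd.id
)
SELECT string_agg(id::text, ', ' ORDER BY pos)
FROM chain;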

How can I remove rows that are 100% duplicates in a PostgreSQL table without a primary key? [duplicate]

I have a PostgreSQL table with a very large number of columns. The table does not have a primary key and now contains several rows that are 100% duplicates of another row.
How can I remove those duplicates without deleting the original along with them?
I found this answer on a related question, but I'd have to spell out each and every column name, which is error-prone. How can I avoid having to know anything about the table structure?
Example:
Given
create table duplicated (
id int,
name text,
description text
);
insert into duplicated
values (1, 'A', null),
(2, 'B', null),
(2, 'B', null),
(3, 'C', null),
(3, 'C', null),
(3, 'C', 'not a DUPE!');
after deletion, the following rows should remain:
(1, 'A', null)
(2, 'B', null)
(3, 'C', null)
(3, 'C', 'not a DUPE!')
As proposed in this answer, use the system column ctid to distinguish the physical copies of otherwise identical rows.
To avoid having to spell out a non-existent 'key' for the rows, simply use the row constructor row(table), which returns a row value containing the entire row as returned by select * from table:
DELETE FROM duplicated
USING (
    SELECT MIN(ctid) AS ctid, row(duplicated) AS row
    FROM duplicated
    GROUP BY row(duplicated)
    HAVING COUNT(*) > 1               -- only groups that actually contain duplicates
) uniqued
WHERE row(duplicated) = uniqued.row   -- same row content as a duplicated group...
AND duplicated.ctid <> uniqued.ctid;  -- ...but spare the first physical copy
You can try it in this DbFiddle.
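As a quick sanity check after the delete, the same whole-row grouping can be reused to confirm that no fully identical rows remain (purely illustrative):

SELECT row(duplicated) AS row_value, COUNT(*)
FROM duplicated
GROUP BY row(duplicated)
HAVING COUNT(*) > 1;  -- should return zero rows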

Cascading delete on table that references itself

I have a table (say 'MyTable') that can reference itself, i.e. it has a ParentId that can point to another record in the same table (so you can store a tree of related nodes).
The problem is that when I try to delete all records that are children of a specific parent, I get the following exception (using EF 6):
The DELETE statement conflicted with the SAME TABLE REFERENCE constraint "FK_dbo.MyTable_dbo.MyTable_ParentId". The conflict occurred in database "foo", table "dbo.MyTable", column 'ParentId'.
The statement has been terminated.
(I'm executing a SQL command like this: context.Database.ExecuteSqlCommand("DELETE FROM [MyTable] WHERE ParentId = {0}", parentId);)
I tried to fix it by adding a Children property and use fluent api to set cascading delete like this:
modelBuilder.Entity<MyTable>()
    .HasMany(t => t.Children)
    .WithOptional(t => t.Parent)
    .WillCascadeOnDelete(true);
But that gives the following error:
Introducing FOREIGN KEY constraint 'FK_dbo.MyTable_dbo.MyTable_ParentId' on table 'MyTable' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.
Also, when I manually remove the FK and recreate it with ON DELETE CASCADE, I get the same error.
I'm a bit lost now on how to fix this.. so any ideas are welcome :)
The following demo illustrates that deleting all children of a given parent works as long as none of the deleted rows have children of their own (here, employees 12, 13 and 14 are leaf nodes):
USE tempdb;
GO
IF OBJECT_ID('tempdb..#Employees') IS NOT NULL
DROP TABLE #Employees;
GO
CREATE TABLE #Employees
(
    empid INT PRIMARY KEY,
    mgrid INT NULL REFERENCES #Employees,
    empname VARCHAR(25) NOT NULL
);
CREATE UNIQUE INDEX idx_unc_mgrid_empid ON #Employees(mgrid, empid);
INSERT INTO #Employees(empid, mgrid, empname) VALUES
(1, NULL, 'David'),
(2, 1, 'Eitan'),
(3, 1, 'Ina'),
(4, 2, 'Seraph'),
(5, 2, 'Jiru'),
(6, 2, 'Steve'),
(7, 3, 'Aaron'),
(8, 5, 'Lilach'),
(9, 7, 'Rita'),
(10, 5, 'Sean'),
(11, 7, 'Gabriel'),
(12, 9, 'Emilia'),
(13, 9, 'Michael'),
(14, 9, 'Didi');
GO
DELETE FROM #Employees WHERE mgrid = 9;
SELECT * FROM #Employees;
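To delete an entire subtree despite the self-referencing FK, a common approach is to collect the root and all of its descendants with a recursive CTE and remove them in one DELETE statement; SQL Server validates the constraint per statement rather than per row, so parents and their children can go together. A sketch against the question's table, assuming an Id primary key next to the ParentId column (@rootId is a placeholder):

DECLARE @rootId INT = 9;  -- hypothetical root of the subtree to remove

WITH Subtree AS (
    SELECT Id FROM dbo.MyTable WHERE Id = @rootId
    UNION ALL
    SELECT t.Id
    FROM dbo.MyTable t
    JOIN Subtree s ON t.ParentId = s.Id
)
DELETE FROM dbo.MyTable
WHERE Id IN (SELECT Id FROM Subtree);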

Bulk insert and update in one query - SQLite

Is there any way to insert and update bulk data in the same query? I have seen many links but haven't found a solution. I found this code, but it's not working:
INSERT INTO `demo1` (`id`,`uname`,`address`)
VALUES (1, 2, 3),
VALUES (6, 5, 4),
VALUES (7, 8, 9)
ON DUPLICATE KEY UPDATE `id` = VALUES(32), `uname` = VALUES (b),`address` = VALUES(c)
Can anyone help me?
SQLite has the REPLACE statement (which is an alias for INSERT OR REPLACE), but this just deletes the old row if a duplicate key is found.
If you want to keep data from the old row(s), you must use two statements for each row:
db.execute("UPDATE demo1 SET ... WHERE id = 1")
if db.affected_rows == 0:
    db.execute("INSERT ...")

Constraint on sum from rows

I've got a table in PostgreSQL 9.4:
user_votes (
    user_id int,
    portfolio_id int,
    car_id int,
    vote int
)
Is it possible to put a constraint on the table so that a user has a maximum of 99 points to vote with in each portfolio?
This means that a user can have multiple rows with the same user_id and portfolio_id, but different car_id and vote. The sum of the votes should never exceed 99, but it can be spread among different cars.
So doing:
INSERT INTO user_votes (user_id, portfolio_id, car_id, vote) VALUES
(1, 1, 1, 20),
(1, 1, 7, 40),
(1, 1, 9, 25)
would all be allowed, but trying to add another row that pushes the total over 99 votes should fail:
INSERT INTO user_votes (user_id, portfolio_id, car_id, vote) VALUES
(1, 1, 21, 40)
Unfortunately no; if you try to create such a constraint you will see this error message:
ERROR: aggregate functions are not allowed in check constraints
But the wonderful thing about PostgreSQL is that there is always more than one way to skin a cat. You can use a BEFORE trigger to check that the data you are trying to insert fulfills the requirements.
Row-level triggers fired BEFORE can return null to signal the trigger
manager to skip the rest of the operation for this row (i.e.,
subsequent triggers are not fired, and the INSERT/UPDATE/DELETE does
not occur for this row). If a nonnull value is returned then the
operation proceeds with that row value.
Inside your trigger you would sum the votes the user has already cast in that portfolio:
SELECT COALESCE(SUM(vote), 0) INTO vote_total FROM user_votes WHERE user_id = NEW.user_id AND portfolio_id = NEW.portfolio_id;
Now if vote_total plus NEW.vote exceeds 99 you return NULL and the row will not be inserted.
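For completeness, a minimal sketch of the whole trigger; the function and trigger names are illustrative, and the syntax targets PostgreSQL 9.4 as in the question:

CREATE OR REPLACE FUNCTION check_vote_budget() RETURNS trigger AS $$
DECLARE
    vote_total integer;
BEGIN
    -- Sum the votes this user has already cast in this portfolio
    SELECT COALESCE(SUM(vote), 0) INTO vote_total
    FROM user_votes
    WHERE user_id = NEW.user_id
      AND portfolio_id = NEW.portfolio_id;

    IF vote_total + NEW.vote > 99 THEN
        RETURN NULL;  -- skip this row: the 99-point budget would be exceeded
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER enforce_vote_budget
BEFORE INSERT ON user_votes
FOR EACH ROW EXECUTE PROCEDURE check_vote_budget();

Be aware that this check is not safe against two concurrent transactions voting for the same user and portfolio at once; if that matters, the trigger would additionally need locking (for example an advisory lock) or serializable isolation.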