Create policy based on table content - PostgreSQL

As a plain PostgreSQL user trying to learn database management basics, I do not see how to create policies or privileges based on table content.
In this example, I create 3 tables plus 1 table for role attributes.
I would like to link a role "user_01" to the table "role_attributes" so as to create a policy limiting access (i.e. read) to the rows matching "accessible_sector", and have this limitation also apply to queries with joins, for example.
Thank you in advance.
create database buildings;
\c buildings postgres
create extension postgis;
create table sector (
    code varchar(3) not null primary key,
    name varchar(50) not null,
    geom geometry
);
create table site (
    site_id serial primary key,  -- serial already implies an integer type
    name varchar(50) unique not null,
    code_zone varchar(3) references sector(code),
    security_level int not null  -- trailing comma removed: it broke the CREATE
);
create table site_content (
    name varchar(50) references site(name),
    code_product varchar(5) unique not null,
    dangerosity boolean default false
);
create table role_attributes (
    role_id int not null,  -- no "role" table exists in this script to reference
    accessible_sector varchar(3) references sector(code),
    accessible_level int   -- an FK must target a unique column; site.security_level is not unique
);
insert into sector values ('NO', 'Sector NO', 'polygon((0 10, 5 10, 5 5, 0 5, 0 10))');
insert into sector values ('NO1', 'Sector NO area 1', 'polygon((0 10, 5 10, 5 8, 0 8, 0 10))');
insert into sector values ('NO2', 'Sector NO area 2', 'polygon((0 8, 5 8, 5 5, 0 5, 0 8))');
insert into site values (default, 'Site 1', 'NO1', 1);
insert into site values (default, 'Site 2', 'NO2', 1);
insert into site values (default, 'Site 3', 'NO2', 2);
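A minimal sketch of one way to do this with row-level security; it assumes a hypothetical role_name column in role_attributes matching current_user (the script above keys by role_id instead - adapt the lookup accordingly):
alter table site enable row level security;
-- policy: a role may read only the sites whose sector is listed
-- for it in role_attributes (role_name is an assumed column)
create policy site_sector_read on site
    for select
    using (
        code_zone in (
            select accessible_sector
            from role_attributes
            where role_name = current_user
        )
    );
grant select on site to user_01;
grant select on role_attributes to user_01;  -- the policy's subquery reads it as the querying user
Because the policy filters every read of site, the restriction also applies when site appears in a join.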

Related

delete child node when root node is deleted - PostgreSQL

I'm having a problem deleting child nodes. What I'm trying to do is that whenever a root node is deleted, its child nodes are deleted as well. For example:
If I delete item 1, it should also delete item 2 automatically.
id: 1, name: item 1, parent_id: 0
id: 2, name: item 2, parent_id: 1
The id is the PK and parent_id is an FK.
I also created a sequence so that whenever a new item is created, the id increments by 1.
SQL commands:
DROP SCHEMA IF EXISTS note CASCADE;
CREATE SCHEMA note;
SET search_path TO note;
CREATE TABLE note
(
    id integer primary key,
    name varchar(50),
    parent_id integer references note.note (id) NOT NULL,
    created_at timestamp without time zone DEFAULT now() NOT NULL
);
CREATE SEQUENCE note_id_seq
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE CACHE 1;
-- note: with parent_id NOT NULL and the FK in place, the parent row must
-- already exist; parent_id 0 here has no matching id, so this insert fails
-- unless a row with id 0 is created first (see the answer's test below)
INSERT INTO note.note(id, name, parent_id)
VALUES
    (1, 'item 1', 0),
    (2, 'item 2', 1);
SELECT SETVAL('note.note_id_seq', (SELECT MAX(id) FROM note.note));
Any suggestions would help a lot. Thanks!
You need to add on delete cascade to your foreign key:
CREATE TABLE note
(
id integer primary key,
name varchar(50),
parent_id integer references note (id) on delete cascade NOT NULL,
created_at timestamp without time zone DEFAULT now() NOT NULL
);
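If the table already exists, the foreign key can be swapped in place instead of recreating the table. A sketch, assuming the default constraint name Postgres generates (check \d note for the real one):
ALTER TABLE note DROP CONSTRAINT note_parent_id_fkey;
ALTER TABLE note ADD CONSTRAINT note_parent_id_fkey
    FOREIGN KEY (parent_id) REFERENCES note (id) ON DELETE CASCADE;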
test it:
test=# insert into note(id,parent_id) values(0,0),(1,0),(2,1);
INSERT 0 3
test=# select (id,parent_id) from note;
row
-------
(0,0)
(1,0)
(2,1)
(3 rows)
test=# delete from note where id = 1;
DELETE 1
test=# select (id,parent_id) from note;
row
-------
(0,0)
(1 row)
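Worth noting: cascades chain through multiple levels, since each cascaded delete fires the next level's cascade in turn. A quick sketch against the same table:
-- starting from an empty table with the cascading FK in place
insert into note(id, parent_id) values (0,0), (1,0), (2,1), (3,2);
delete from note where id = 1;   -- removes 1, cascades to 2, then to 3
select id, parent_id from note;  -- only the root row (0,0) remains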

How to use output variable in insert query

I am inserting records from our CRM into our ERP. The Phone table uses an identity column from the People table. I need to insert the People record, capture the value of PersonId, which is an identity column, and then use that PersonId as the key to insert records into the Phone table. I get the error:
Msg 137, Level 16, State 1, Line 16
Must declare the scalar variable "@IdentityID".
Msg 137, Level 16, State 1, Line 17
Must declare the scalar variable "@IdentityID".
--IdentityTable
CREATE TABLE [dbo].[People](
[People_ID] [int] NOT NULL,
[text] [nvarchar](50) NULL,
[PersonId] [int] IDENTITY(1,1) NOT NULL
) ON [PRIMARY]
--Phone
CREATE TABLE [dbo].[Phone](
[PersonId] [int] NOT NULL,
[text] [nvarchar](50) NULL,
[Number] [nchar](10) NULL
) ON [PRIMARY]
declare @IdentityID table (PersonId int);
INSERT INTO [Bridge].[dbo].[People]
([People_ID]
,[text])
output Inserted.PersonId into @IdentityID
VALUES
(3,'row1'),
(4,'row2');
INSERT INTO [Bridge].[dbo].[Phone]
(PersonId
,[text]
,[Number])
VALUES
(@IdentityID,'row1'),
(@IdentityID,'row2');
Print 'IdentityID' + @IdentityID
OUTPUT ... INTO writes to a table variable, which you can then select from, as below:
declare @IdentityID table (PersonId int);
INSERT INTO [dbo].[People]
([People_ID]
,[text])
output Inserted.PersonId into @IdentityID(PersonId)
VALUES
(3,'row1'),
(4,'row2');
INSERT INTO [dbo].[Phone]
(PersonId
,[text]
,[Number])
select PersonId,'row1', 1 from @IdentityID
union all select PersonId,'row2', 2 from @IdentityID
select * from @IdentityID
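One caveat to add: with a multi-row insert, nothing correlates the rows in the table variable back to the source rows, so if that mapping matters, capture a distinguishing inserted column alongside the identity. A sketch under that assumption:
-- capture the natural key next to the generated identity
declare @IdentityID table (People_ID int, PersonId int);
INSERT INTO [dbo].[People] ([People_ID], [text])
output Inserted.People_ID, Inserted.PersonId into @IdentityID(People_ID, PersonId)
VALUES
(3,'row1'),
(4,'row2');
-- each PersonId can now be joined back to its People_ID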

Postgres unique date column

I wanted to create a table with a unique date column. Is it possible?
I have the following table:
CREATE TABLE senders (
id bigint NOT NULL,
number bigint NOT NULL,
inserted_at date NOT NULL
);
I want to be sure I will have unique values per date. For example, I can insert (1, 1, '2017-01-01'), (1, 1, '2017-01-02'), but I can't then add (1, 1, '2017-01-01') one more time. I tried UNIQUE constraints when creating the table, and unique indexes, but I always get an SQL unique-violation exception.
CREATE UNIQUE INDEX senders_unique ON senders (id, number, inserted_at);
CREATE TABLE senders (
id bigint NOT NULL,
number bigint NOT NULL,
inserted_at date NOT NULL,
UNIQUE(id, number, inserted_at)
);
Is it possible? Thanks for all answers.
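A sketch of one reading of this (my assumption: the unique violation is the constraint doing its job, and the goal is to skip duplicates rather than raise an error). On PostgreSQL 9.5+ the composite constraint can be paired with ON CONFLICT DO NOTHING:
CREATE TABLE senders (
    id bigint NOT NULL,
    number bigint NOT NULL,
    inserted_at date NOT NULL,
    UNIQUE (id, number, inserted_at)
);
INSERT INTO senders VALUES (1, 1, '2017-01-01');  -- inserted
INSERT INTO senders VALUES (1, 1, '2017-01-02');  -- inserted
INSERT INTO senders VALUES (1, 1, '2017-01-01')
ON CONFLICT (id, number, inserted_at) DO NOTHING; -- silently skipped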

Insert into table, return id and then insert into another table with stored id

I have the following tables:
Please note that the DDL below comes from models generated by Django and was grabbed out of PostgreSQL after they were created, so modifying the tables is not an option.
CREATE TABLE "parentTeacherCon_grade"
(
id INTEGER PRIMARY KEY NOT NULL,
"currentGrade" VARCHAR(2) NOT NULL
);
CREATE TABLE "parentTeacherCon_parent"
(
id INTEGER PRIMARY KEY NOT NULL,
name VARCHAR(50) NOT NULL,
grade_id INTEGER NOT NULL
);
CREATE TABLE "parentTeacherCon_teacher"
(
id INTEGER PRIMARY KEY NOT NULL,
name VARCHAR(50) NOT NULL
);
CREATE TABLE "parentTeacherCon_teacher_grade"
(
id INTEGER PRIMARY KEY NOT NULL,
teacher_id INTEGER NOT NULL,
grade_id INTEGER NOT NULL
);
ALTER TABLE "parentTeacherCon_parent" ADD FOREIGN KEY (grade_id) REFERENCES "parentTeacherCon_grade" (id);
CREATE INDEX "parentTeacherCon_parent_5c853be8" ON "parentTeacherCon_parent" (grade_id);
CREATE INDEX "parentTeacherCon_teacher_5c853be8" ON "parentTeacherCon_teacher" (grade_id);
ALTER TABLE "parentTeacherCon_teacher_grade" ADD FOREIGN KEY (teacher_id) REFERENCES "parentTeacherCon_teacher" (id);
ALTER TABLE "parentTeacherCon_teacher_grade" ADD FOREIGN KEY (grade_id) REFERENCES "parentTeacherCon_grade" (id);
CREATE UNIQUE INDEX "parentTeacherCon_teacher_grade_teacher_id_20e07c38_uniq" ON "parentTeacherCon_teacher_grade" (teacher_id, grade_id);
CREATE INDEX "parentTeacherCon_teacher_grade_d9614d40" ON "parentTeacherCon_teacher_grade" (teacher_id);
CREATE INDEX "parentTeacherCon_teacher_grade_5c853be8" ON "parentTeacherCon_teacher_grade" (grade_id);
My question is: how do I write an insert statement (or statements) without having to keep track of the IDs? More specifically, I have a teacher table where a teacher can relate to more than one grade, and I am attempting to write my insert statements to start populating my DB such that I only declare a teacher's name and the grades they relate to.
For example, if I have a teacher that belongs to only one grade, the insert statement looks like this:
INSERT INTO "parentTeacherCon_teacher" (name, grade_id) VALUES ('foo bar', 1 );
where grades K-12 are enumerated 0-12.
But I need to do something like this (I realize it does not work):
INSERT INTO "parentTeacherCon_teacher" (name, grade_id) VALUES ('foo bar', (0,1,3) );
to indicate that this teacher relates to grades K, 1, and 3, leaving me with this table for parentTeacherCon_teacher_grade:
+----+------------+----------+
| id | teacher_id | grade_id |
+----+------------+----------+
| 1 | 3 | 0 |
| 2 | 3 | 1 |
| 3 | 3 | 3 |
+----+------------+----------+
This is how I can currently (successfully) insert into the Teacher Table.
INSERT INTO public."parentTeacherCon_teacher" (id, name) VALUES (3, 'Foo Bar');
Then into the teacher_grade table:
INSERT INTO public.parentTeacherCon_teacher_grade (id, teacher_id, grade_id) VALUES (1, 3, 0);
INSERT INTO public.parentTeacherCon_teacher_grade (id, teacher_id, grade_id) VALUES (2, 3, 1);
INSERT INTO public.parentTeacherCon_teacher_grade (id, teacher_id, grade_id) VALUES (3, 3, 3);
A bit more information: a diagram of the database was attached here (image not reproduced).
Other things I have tried:
WITH i1 AS (INSERT INTO "parentTeacherCon_teacher" (name) VALUES ('foo bar')
RETURNING id) INSERT INTO "parentTeacherCon_teacher_grade"
SELECT
i1.id
, v.val
FROM i1, (VALUES (1), (2), (3)) v(val);
Then I get this error (the INSERT lists no target columns, so i1.id and v.val land in id and teacher_id, leaving grade_id null):
[2016-08-10 16:07:46] [23502] ERROR: null value in column "grade_id" violates not-null constraint
Detail: Failing row contains (6, 1, null).
If you want to insert all three rows in one statement, you can use (targeting the join table, since parentTeacherCon_teacher has no grade_id column):
-- id omitted: assumed to be filled by a sequence, as in the Django-created DB
INSERT INTO "parentTeacherCon_teacher_grade" (teacher_id, grade_id)
SELECT 3, g.grade_id
FROM (SELECT 0 AS grade_id UNION ALL SELECT 1 UNION ALL SELECT 3) g;
Or, if you prefer:
INSERT INTO "parentTeacherCon_teacher_grade" (teacher_id, grade_id)
SELECT 3, g.grade_id
FROM (VALUES (0), (1), (3)) g(grade_id);
EDIT:
In Postgres, you can use data-modifying statements in a CTE:
WITH i AS (
      INSERT INTO public."parentTeacherCon_teacher" (id, name)
      VALUES (3, 'Foo Bar')
      RETURNING *
)
INSERT INTO "parentTeacherCon_teacher_grade" (teacher_id, grade_id)
SELECT i.id, g.grade_id
FROM (VALUES (0), (1), (3)) g(grade_id) CROSS JOIN
     i;

Use one query or several queries to PostgreSQL DB

I have the following DB
CREATE TABLE IF NOT EXISTS users (
user_uid INTEGER PRIMARY KEY,
user_name CHAR(64) NOT NULL,
token_id INTEGER
);
CREATE TABLE IF NOT EXISTS unique_thing (
id SERIAL PRIMARY KEY,
unique_thing_id INTEGER NOT NULL,
option_id INTEGER NOT NULL
);
CREATE TABLE IF NOT EXISTS example (
id SERIAL PRIMARY KEY,
variable INTEGER NOT NULL,
variable_2 INTEGER NOT NULL,
char_var CHAR(64) NOT NULL,
char_var2 CHAR(512),
char_var3 CHAR(256),
file_path CHAR(256) NOT NULL
);
CREATE TABLE IF NOT EXISTS different_option_of_things (
id SERIAL PRIMARY KEY,
name CHAR(64)
);
CREATE TABLE IF NOT EXISTS commits (
id SERIAL PRIMARY KEY,
user_id INTEGER NOT NULL,
unique_thing_id INTEGER NOT NULL,
value REAL NOT NULL,
var CHAR(512) NOT NULL,
example_id INTEGER NOT NULL,
boolean_var boolean NOT NULL
);
The tables unique_thing, different_option_of_things and example will be static (data will be added rarely, and manually).
The table commits will be rather large, and insert-only (I will delete very rarely).
The users table is for user identification. It will not be as large as unique_thing, but will have quite a few users.
The data of the tables will be as follows:
INSERT INTO users VALUES(1, 'pacefist', 2);
INSERT INTO users VALUES(3, 'motherfucker', 4);
INSERT INTO users VALUES(4, 'cheater', 5);
INSERT INTO different_option_of_things VALUES(1, 'blablab');
INSERT INTO different_option_of_things VALUES(2, 'smth different');
INSERT INTO different_option_of_things VALUES(3, 'unique_thing');
INSERT INTO different_option_of_things VALUES(4 ,'unique_thing2');
INSERT INTO unique_thing VALUES(DEFAULT, 1, 1);
INSERT INTO unique_thing VALUES(DEFAULT, 1, 3);
INSERT INTO unique_thing VALUES(DEFAULT, 2, 3);
INSERT INTO unique_thing VALUES(DEFAULT, 2, 2);
INSERT INTO example VALUES(1, 20, 20, 'fsdfsdf', 'fgdfgdfg', 'url', '/home/user/file.txt');
INSERT INTO example VALUES(2, 24, 40, 'sfadfadf', 'dfgdfg', 'url', '/home/user/file2.txt');
INSERT INTO commits VALUES(DEFAULT, 1, 1, 55.43, '1234567', 1, TRUE);
INSERT INTO commits VALUES(DEFAULT, 2, 1, 97.85, '1234573', 2, TRUE);
INSERT INTO commits VALUES(DEFAULT, 3, 1, 0.001, '98766543', 1, TRUE);
INSERT INTO commits VALUES(DEFAULT, 4, 2, 100500.00, 'xxxxxxxx', 1, TRUE);
So, the data will be inserted the following way:
1) I have input data for different_option_of_things, e.g. [blablab, unique_thing], a REAL value (like 8.9999) and an example name like `fsdfsdf`.
2) It is necessary to find this record in the table unique_thing:
   a) if we've found 2 or more values, or haven't found anything, the result is false -> the search is over;
   b) if we've found exactly 1 result, then
3) we search for that value (the record from unique_thing) in the commits table:
   a) if it has been found:
      a.1) search for the given example name:
         a.1.1) if found -> get the first 25 values and check whether the current value is bigger:
            a.1.1.1) if yes, we make a commit;
            a.1.1.2) if no, do nothing (do not duplicate the value);
         a.1.2) if not found -> no results;
   a.2) if not found -> no results.
The second function will be almost the same, but without insertion: we will just do a selection (only to get data) and will search across all existing values in the table example (not only one).
The question: is it better to create 3 functions instead of one big query?
SELECT count(1) AS counter FROM different_option_of_things
WHERE name IN (SELECT * FROM unnest(:input_names));  -- :input_names = the input name array
SELECT * FROM example WHERE char_var = 'fsdfsdf';
SELECT *
FROM commits
JOIN unique_thing ON commits.id = unique_thing.unique_thing_id
WHERE value > 8.9999
LIMIT 25;  -- PostgreSQL has no SELECT TOP; LIMIT goes at the end
if 0 results -> do a commit
Or is it better to write one enormous query? I am using PostgreSQL, Tornado and momoko.
I would prefer two stored procedures, one to get and one to insert data.
Pros:
- all the required data is in the db, so it seems like a job for the db;
- each query executed from the app needs to:
  - get/wait for an available connection in the pool (depending on your app),
  - run the query,
  - fetch the data,
  - release the connection,
  with IOLoop work between all of these operations - although momoko is non-blocking, it is not free;
- the db can be an API, not only a sack of data.
Cons:
- logic in the db means you depend on it - changing the db engine (for example to Cassandra) will be harder;
- logic in the db often means there are no tests; of course you can and should test it (e.g. with pgTap);
- for simple tasks it seems like overkill.
It is a matter of db and app load, performance and time constraints - in other words, run tests and choose the solution that meets your expectations/requirements.
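To make the stored-procedure option concrete, here is a minimal sketch of the insert-side function, with hypothetical names and a simplified version of the matching logic from the question (exactly-one unique_thing match, then a 25-row window check):
-- Hypothetical sketch; the function name, parameter names and the exact
-- matching rules are assumptions, not the asker's final logic.
CREATE OR REPLACE FUNCTION try_commit(
    p_unique_thing_id integer,
    p_user_id integer,
    p_example_id integer,
    p_value real,
    p_var varchar
) RETURNS boolean AS $$
DECLARE
    v_count integer;
BEGIN
    -- step 2: the unique_thing must match exactly one row
    SELECT count(*) INTO v_count
    FROM unique_thing
    WHERE unique_thing_id = p_unique_thing_id;
    IF v_count <> 1 THEN
        RETURN false;
    END IF;

    -- step a.1.1: commit only if the value beats the 25 most recent ones
    IF EXISTS (
        SELECT 1
        FROM (SELECT value
              FROM commits
              WHERE unique_thing_id = p_unique_thing_id
              ORDER BY id DESC
              LIMIT 25) recent
        WHERE recent.value >= p_value
    ) THEN
        RETURN false;  -- not bigger, or a duplicate: do nothing
    END IF;

    INSERT INTO commits (user_id, unique_thing_id, value, var, example_id, boolean_var)
    VALUES (p_user_id, p_unique_thing_id, p_value, p_var, p_example_id, TRUE);
    RETURN true;
END;
$$ LANGUAGE plpgsql;
From momoko this becomes a single round trip: SELECT try_commit(1, 1, 1, 55.43, 'xxxxxxxx');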