I'm trying to construct an object from my Postgres backend for use in my application. The tables in question look something like this:
We have some Things that essentially act as rows for a matrix where the columns are Field_Columns. Field_Values are filled cells.
Create Table Platform_User (
serial id PRIMARY KEY
)
Create Table Things (
serial id PRIMARY KEY,
INTEGER user_id REFERENCES Platform_User(id)
)
Create Table Field_Columns (
serial id PRIMARY KEY,
TEXT name,
)
Create Table Field_Values (
INTEGER field_column_id REFERENCES Field_Columns(id),
INTEGER thing_id REFERENCES Things(id)
TEXT content,
PRIMARY_KEY(field_column_id, thing_id)
)
This would be simple if I were trying to load just the Field_Values for a single Thing as JSON, which would look like this:
SELECT JSONB_OBJECT(
ARRAY(
SELECT name
FROM Field_Columns
ORDER BY Field_Columns.id
),
ARRAY(
SELECT Field_Values.content
FROM Field_Columns
LEFT JOIN Field_Values ON Field_Values.field_column_id = Field_Columns.id
AND Field_Values.thing_id = Things.id
ORDER BY Field_Columns.id)
)
FROM Things
WHERE Things.id = $1
However, I'd like to construct the JSON object so that it looks like this when returned. I want an object of all the Field_Columns:Field_Values objects for the Things that a user owns:
{
  "14": {
    "first field": "asdf",
    "other field": ""
  },
  "25": {
    "first field": "qwer",
    "other field": "dfgdsfg"
  },
  "43": {
    "first field": "",
    "other field": ""
  }
}
My attempt at constructing this query looks like the following, but I'm running into the problem that the JSONB_OBJECT function doesn't want to build an object whose field values are themselves objects:
SELECT (
JSONB_OBJECT(
ARRAY(SELECT Things.id::TEXT
FROM Things
WHERE Things.user_id = $2
ORDER BY Things.id
),
ARRAY(SELECT JSONB_OBJECT(
ARRAY(
SELECT name
FROM Field_Columns
ORDER BY Field_Columns.id),
ARRAY(
SELECT Field_Values.content
FROM Field_Columns
LEFT JOIN Field_Values ON Field_Values.field_column_Id = Field_Columns.id
AND Field_Values.thing_id = Things.id
ORDER BY Field_Columns.id)
)
FROM Things
WHERE Things.user_id = $2
ORDER BY Things.id
)
)
) AS thing_fields
The specific error I get is: function jsonb_object(text[], jsonb[]) does not exist. Is there a way to do this that doesn't involve copious text conversions and nonsense like that? Or will I just need to abandon sorting my data in the query and do it in my code instead?
Your DDL scripts are syntactically incorrect, so I created these for you:
create table platform_users (
id int8 PRIMARY KEY
);
create table things (
id int8 PRIMARY KEY,
user_id int8 REFERENCES platform_users(id)
);
create table field_columns (
id int8 PRIMARY KEY,
name text
);
create table field_values (
field_column_id int8 REFERENCES field_columns(id),
thing_id int8 REFERENCES things(id),
content text,
PRIMARY KEY(field_column_id, thing_id)
);
I also created some scripts to populate the db:
insert into platform_users(id) values (1);
insert into platform_users(id) values (2);
insert into platform_users(id) values (3);
insert into platform_users(id) values (4);
insert into platform_users(id) values (5);
insert into things(id, user_id) values(1, 1);
insert into things(id, user_id) values(2, 1);
insert into things(id, user_id) values(3, 2);
insert into things(id, user_id) values(4, 2);
insert into field_columns(id, name) values(1, 'col1');
insert into field_columns(id, name) values(2, 'col2');
insert into field_values(field_column_id, thing_id, content) values(1, 1, 'thing1 val1');
insert into field_values(field_column_id, thing_id, content) values(2, 1, 'thing1 val2');
insert into field_values(field_column_id, thing_id, content) values(1, 2, 'thing2 val1');
insert into field_values(field_column_id, thing_id, content) values(2, 2, 'thing2 val2');
Please include such scripts next time you ask for help, and make sure that they are correct; this will reduce the work needed to answer your question.
You can get your jsonb value by aggregating the key/value pairs with jsonb_object_agg:
select
t.id,
jsonb_object_agg(fc.name, fv.content)
from
things t inner join
field_values fv on fv.thing_id = t.id inner join
field_columns fc on fv.field_column_id = fc.id
group by 1
The results look like this:
thing_id;jsonb_value
1;"{"col1": "thing1 val1", "col2": "thing1 val2"}"
2;"{"col1": "thing2 val1", "col2": "thing2 val2"}"
Suppose the following structure:
CREATE SCHEMA IF NOT EXISTS my_schema;
CREATE TABLE IF NOT EXISTS my_schema.user (
id SERIAL PRIMARY KEY,
tag_id BIGINT NOT NULL
);
CREATE TABLE IF NOT EXISTS my_schema.conversation (
id SERIAL PRIMARY KEY,
user_ids BIGINT[] NOT NULL
);
INSERT INTO my_schema.user VALUES
(1, 55555),
(2, 77777);
INSERT INTO my_schema.conversation VALUES
(1, '{1,2}');
I can pull out the my_schema.conversation records if I know the my_schema.user.id values:
SELECT *
FROM my_schema.conversation
WHERE user_ids #> '{1}'
The above works, but I need to use my_schema.user.tag_id instead of my_schema.user.id:
How can I do this?
You would have to join the two tables on the array values:
SELECT *
FROM my_schema.user u
JOIN my_schema.conversation c
ON u.id = ANY(c.user_ids)
WHERE u.tag_id = 55555;
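If you only need the conversation rows themselves, a possible variant (a sketch, not part of the original answer) is to collect the matching user ids with a subquery and test them with the array-overlap operator &&. The cast matches the bigint[] type of user_ids, since user.id is a plain serial:
SELECT *
FROM my_schema.conversation
WHERE user_ids && (
  SELECT array_agg(id::bigint)
  FROM my_schema.user
  WHERE tag_id = 55555
);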
I have a table with records which can reference another row in the same table so there is a parent-child relationship between rows in the same table.
What I am trying to achieve is to create the same data for another user so that they can see and manage their own version of this structure through the web ui where these rows are displayed as a tree.
The problem is that when I bulk insert this data changing only the user_id, I lose the relationships between rows, because the parent_id values will be invalid for the new records; they need to be updated to the newly generated ids as well.
Here is what I tried (it did not work):
Iterate over main_table
copy-paste the static values after each
do another insert on a temp table for holding old and new ids
update old parent_ids with new ids after loop ends
My attempt at doing such a thing (the last step is not included here):
create or replace function test_x()
returns void as
$BODY$
declare
r RECORD;
userId int8;
rowPK int8;
begin
userId := (select 1)
create table if not exists id_map (old_id int8, new_id int8);
create table if not exists temp_table as select * from main_table;
for r in select * from temp_table
loop
rowPK := insert into main_table(id, user_id, code, description, parent_id)
values(nextval('hibernate_sequence'), userId, r.code, r.description, r.parent_id) returning id;
insert into id_map (old_id, new_id) values (r.id, rowPK);
end loop;
end
$BODY$
language plpgsql;
My PostgreSQL version is 9.6.14.
DDL below for testing.
create table main_table(
id bigserial not null,
user_id int8 not null,
code varchar(3) not null,
description varchar(100) not null,
parent_id int8 null,
constraint mycompkey unique (user_id, code, parent_id),
constraint mypk primary key (id),
constraint myfk foreign key (parent_id) references main_table(id)
);
insert into main_table (id, user_id, code, description, parent_id)
values(0, 0, '01', 'Root row', null);
insert into main_table (id, user_id, code, description, parent_id)
values(1, 0, '001', 'Child row 1', 0);
insert into main_table (id, user_id, code, description, parent_id)
values(2, 0, '002', 'Child row 2', 0);
insert into main_table (id, user_id, code, description, parent_id)
values(3, 0, '002', 'Grand child row 1', 2);
How to write a procedure to accomplish this?
Thanks in advance.
It appears your task is copying all data for a given user to another user while maintaining the hierarchical relationship within the new rows. The following accomplishes that.
It begins by creating a new copy of the existing rows with the new user_id, still carrying the old parent_id. That old reference is used in the next (update) step.
The CTE starts from the new rows that have a parent_id and joins to the old parent row; from the old parent row it joins to the corresponding new parent row, matching on code and description (which assumes that pair identifies a parent uniquely within a user's rows). At that point we have the new row's id along with its new parent's id, and the update just applies those values. For the update the CTE need only select those two columns, but I've left the intermediate columns in so you can trace through if you wish.
create or replace function copy_user_data_to_user(
source_user_id bigint
, target_user_id bigint
)
returns void
language plpgsql
as $$
begin
insert into main_table ( user_id,code, description, parent_id )
select target_user_id, code, description, parent_id
from main_table
where user_id = source_user_id ;
with n_list as
(select mt.id, mt.code, mt.description, mt.parent_id
, mtp.id p_id,mtp.code p_code,mtp.description p_des
, mtc.id c_id, mtc.code c_code, mtc.description c_description
from main_table mt
join main_table mtp on mtp.id = mt.parent_id
join main_table mtc on ( mtc.user_id = target_user_id
and mtc.code = mtp.code
and mtc.description = mtp.description
)
where mt.parent_id is not null
and mt.user_id = target_user_id
)
update main_table mt
set parent_id = n_list.c_id
from n_list
where mt.id = n_list.id;
return;
end ;
$$;
-- test
select * from copy_user_data_to_user(0,1);
select * from main_table;
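One caveat worth checking before running the test: the sample inserts supply explicit ids without advancing the bigserial sequence, so the function's insert (which relies on the id default) can fail with duplicate-key errors on mypk. Bumping the sequence first avoids that; main_table_id_seq is assumed here to be the default name bigserial created:
select setval('main_table_id_seq', (select max(id) from main_table));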
CREATE TABLE <table name you want to create> AS
SELECT * FROM myset;
The new table's column names will match those of myset. You can also list specific column names in place of *, but those columns must exist in myset, otherwise you will get errors.
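For example, a hypothetical two-column copy (assuming myset has id and name columns):
CREATE TABLE myset_copy AS
SELECT id, name
FROM myset;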
I have the following three tables:
Please note that the DDL below came from models generated by Django and was then grabbed out of PostgreSQL after the tables were created, so modifying the tables is not an option.
CREATE TABLE "parentTeacherCon_grade"
(
id INTEGER PRIMARY KEY NOT NULL,
"currentGrade" VARCHAR(2) NOT NULL
);
CREATE TABLE "parentTeacherCon_parent"
(
id INTEGER PRIMARY KEY NOT NULL,
name VARCHAR(50) NOT NULL,
grade_id INTEGER NOT NULL
);
CREATE TABLE "parentTeacherCon_teacher"
(
id INTEGER PRIMARY KEY NOT NULL,
name VARCHAR(50) NOT NULL
);
CREATE TABLE "parentTeacherCon_teacher_grade"
(
id INTEGER PRIMARY KEY NOT NULL,
teacher_id INTEGER NOT NULL,
grade_id INTEGER NOT NULL
);
ALTER TABLE "parentTeacherCon_parent" ADD FOREIGN KEY (grade_id) REFERENCES "parentTeacherCon_grade" (id);
CREATE INDEX "parentTeacherCon_parent_5c853be8" ON "parentTeacherCon_parent" (grade_id);
CREATE INDEX "parentTeacherCon_teacher_5c853be8" ON "parentTeacherCon_teacher" (grade_id);
ALTER TABLE "parentTeacherCon_teacher_grade" ADD FOREIGN KEY (teacher_id) REFERENCES "parentTeacherCon_teacher" (id);
ALTER TABLE "parentTeacherCon_teacher_grade" ADD FOREIGN KEY (grade_id) REFERENCES "parentTeacherCon_grade" (id);
CREATE UNIQUE INDEX "parentTeacherCon_teacher_grade_teacher_id_20e07c38_uniq" ON "parentTeacherCon_teacher_grade" (teacher_id, grade_id);
CREATE INDEX "parentTeacherCon_teacher_grade_d9614d40" ON "parentTeacherCon_teacher_grade" (teacher_id);
CREATE INDEX "parentTeacherCon_teacher_grade_5c853be8" ON "parentTeacherCon_teacher_grade" (grade_id);
My question is: how do I write an insert statement (or statements) where I do not have to keep track of the IDs? More specifically, I have a teacher table where teachers can relate to more than one grade, and I am attempting to write my insert statements so that I only declare a teacher's name and the grades they relate to.
For example, if I have a teacher that belong to only one grade then the insert statement looks like this.
INSERT INTO "parentTeacherCon_teacher" (name, grade_id) VALUES ('foo bar', 1 );
where grades K through 12 are enumerated 0 through 12.
But I need to do something like this (I realize it does not work):
INSERT INTO "parentTeacherCon_teacher" (name, grade_id) VALUES ('foo bar', (0,1,3) );
to indicate that this teacher relates to grades K, 1, and 3, leaving me with this table for parentTeacherCon_teacher_grade:
+----+------------+----------+
| id | teacher_id | grade_id |
+----+------------+----------+
| 1 | 3 | 0 |
| 2 | 3 | 1 |
| 3 | 3 | 3 |
+----+------------+----------+
This is how I can currently (successfully) insert into the Teacher Table.
INSERT INTO public."parentTeacherCon_teacher" (id, name) VALUES (3, 'Foo Bar');
Then into the teacher_grade table:
INSERT INTO public.parentTeacherCon_teacher_grade (id, teacher_id, grade_id) VALUES (1, 3, 0);
INSERT INTO public.parentTeacherCon_teacher_grade (id, teacher_id, grade_id) VALUES (2, 3, 1);
INSERT INTO public.parentTeacherCon_teacher_grade (id, teacher_id, grade_id) VALUES (3, 3, 3);
A bit more information.
Here is a diagram of the database
Other things I have tried:
WITH i1 AS (INSERT INTO "parentTeacherCon_teacher" (name) VALUES ('foo bar')
RETURNING id) INSERT INTO "parentTeacherCon_teacher_grade"
SELECT
i1.id
, v.val
FROM i1, (VALUES (1), (2), (3)) v(val);
Then I get this error.
[2016-08-10 16:07:46] [23502] ERROR: null value in column "grade_id" violates not-null constraint
Detail: Failing row contains (6, 1, null).
If you want to insert all three grade links in one statement, you can use the following (note these rows go into parentTeacherCon_teacher_grade, since the teacher table has no grade_id column; 3 is the teacher's id):
INSERT INTO "parentTeacherCon_teacher_grade" (teacher_id, grade_id)
SELECT 3, g.grade_id
FROM (SELECT 0 as grade_id UNION ALL SELECT 1 UNION ALL SELECT 3) g;
Or, if you prefer:
INSERT INTO "parentTeacherCon_teacher_grade" (teacher_id, grade_id)
SELECT 3, g.grade_id
FROM (VALUES (0), (1), (3)) g(grade_id);
EDIT:
In Postgres, you can have data modification statements as a CTE:
WITH i as (
INSERT INTO public."parentTeacherCon_teacher" (id, name)
VALUES (3, 'Foo Bar')
RETURNING *
)
INSERT INTO "parentTeacherCon_teacher" (name, teacher_id, grade_id)
SELECT 'foo bar', i.id, g.grade_id
FROM (VALUES (0), (2), (3)) g(grade_id) CROSS JOIN
i
I want to select from an enumeration that is not in the database.
E.g. SELECT id FROM my_table returns values like 1, 2, 3
I want to display 1 -> 'chocolate', 2 -> 'coconut', 3 -> 'pizza', etc. SELECT CASE works, but it is too complicated and hard to keep an overview of with many values. I am thinking of something like
SELECT id, array['chocolate','coconut','pizza'][id] FROM my_table
But I couldn't get it to work with arrays. Is there an easy solution? To be clear, I want a simple query, not a plpgsql script or something like that.
with food (fid, name) as (
values
(1, 'chocolate'),
(2, 'coconut'),
(3, 'pizza')
)
select t.id, f.name
from my_table t
join food f on f.fid = t.id;
or without a CTE (but using the same idea):
select t.id, f.name
from my_table t
join (
values
(1, 'chocolate'),
(2, 'coconut'),
(3, 'pizza')
) f (fid, name) on f.fid = t.id;
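Since the question specifically tried arrays: the same lookup can also be derived from an array literal with unnest. A sketch, assuming Postgres 9.4+ for WITH ORDINALITY, where the ordinality column plays the role of fid:
select t.id, f.name
from my_table t
join unnest(array['chocolate','coconut','pizza']) with ordinality f (name, fid)
  on f.fid = t.id;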
This is the correct syntax:
SELECT id, (array['chocolate','coconut','pizza'])[id] FROM my_table
But you should really create a reference table with those values.
What about creating another table that enumerates all the cases, and doing a join?
CREATE TABLE table_case
(
case_id bigserial NOT NULL,
case_name character varying,
CONSTRAINT table_case_pkey PRIMARY KEY (case_id)
)
WITH (
OIDS=FALSE
);
and when you select from your table:
SELECT id, case_name FROM my_table
inner join table_case on table_case.case_id = my_table.id;
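For the join to reproduce the mapping from the question, the lookup table can be seeded so the generated case_id values line up with 1, 2, 3 (a bigserial starts at 1):
insert into table_case (case_name)
values ('chocolate'), ('coconut'), ('pizza');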
I need to copy content from one table to itself and related tables... Let me schematize the problem. Let's say I have two tables:
Order
OrderID : int
CustomerID : int
OrderName : nvarchar(32)
OrderItem
OrderItemID : int
OrderID : int
Quantity : int
Both PKs are auto-incrementing (identity) columns.
Let's say I want to duplicate the content of one customer to another. How do I do that efficiently?
The problem is the PKs. I would need to map the OrderID values from the original set of data to the copy in order to create proper references in OrderItem. If I just select-insert, I won't be able to create that map.
Suggestions?
For duplicating one parent and many children with identities as the keys, I think the OUTPUT clause can make things pretty clean:
-- Make a duplicate of parent 1, including children
-- Setup some test data
create table Parents (
ID int not null primary key identity
, Col1 varchar(10) not null
, Col2 varchar(10) not null
)
insert into Parents (Col1, Col2) select 'A', 'B'
insert into Parents (Col1, Col2) select 'C', 'D'
insert into Parents (Col1, Col2) select 'E', 'F'
create table Children (
ID int not null primary key identity
, ParentID int not null references Parents (ID)
, Col1 varchar(10) not null
, Col2 varchar(10) not null
)
insert into Children (ParentID, Col1, Col2) select 1, 'g', 'h'
insert into Children (ParentID, Col1, Col2) select 1, 'i', 'j'
insert into Children (ParentID, Col1, Col2) select 2, 'k', 'l'
insert into Children (ParentID, Col1, Col2) select 3, 'm', 'n'
-- Get one parent to copy
declare @oldID int = 1
-- Create a place to store new ParentID
declare @newID table (
ID int not null primary key
)
-- Create new parent
insert into Parents (Col1, Col2)
output inserted.ID into @newID -- Capturing the new ParentID
select Col1, Col2
from Parents
where ID = @oldID -- Only one parent
-- Create new children using the new ParentID
insert into Children (ParentID, Col1, Col2)
select n.ID, c.Col1, c.Col2
from Children c
cross join @newID n
where c.ParentID = @oldID -- Only one parent
-- Show some output
select * from Parents
select * from Children
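The OUTPUT clause works here because only one parent is copied; for many parents at once, INSERT ... OUTPUT cannot capture the old ID next to the new one, but MERGE can, since its OUTPUT clause may reference source columns. A sketch of that variant (not part of the original answer; assumes SQL Server 2008 or higher):
-- Map old parent IDs to new ones while copying several parents at once
declare @map table (OldID int not null, NewID int not null)
merge into Parents as t
using (select ID, Col1, Col2 from Parents where ID in (1, 2)) as s
on 1 = 0 -- never matches, so every source row is inserted
when not matched then
  insert (Col1, Col2) values (s.Col1, s.Col2)
output s.ID, inserted.ID into @map (OldID, NewID);
-- Recreate the children against the new parent IDs
insert into Children (ParentID, Col1, Col2)
select m.NewID, c.Col1, c.Col2
from Children c
join @map m on m.OldID = c.ParentID;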
Do you have to have the primary keys from table A as primary keys in Table B? If not, you can do a SELECT statement with an INSERT INTO. Primary keys are usually ints that start from an ever-increasing seed (identity). Working around this and inserting the same key data programmatically has the disadvantage that someone may think it is a distinct key set on this table rather than a 'relationship' or foreign key value.
You can select primary keys for inserts into other tables, just not into themselves... UNLESS you set IDENTITY_INSERT ON. Do not do this unless you know what it does, as you can create more problems than it's worth if you don't understand the ramifications.
I would just do the ole:
insert into TableB
select *
from TableA
where (criteria)
Simple example (this assumes SQL Server 2008 or higher). My bad, I did not see that you had not specified the SQL dialect; not sure if this will run on Oracle or MySQL.
declare @Order Table ( OrderID int identity primary key, person varchar(8));
insert into @Order values ('Brett'),('John'),('Peter');
declare @OrderItem Table (orderItemID int identity primary key, OrderID int, OrderInfo varchar(16));
insert into @OrderItem
select
OrderID -- I can insert a primary key just fine
, person + 'Stuff'
from @Order
select *
from @Order
Select *
from @OrderItem
Add an extra helper column to Order called OldOrderID
Copy all the Orders from @OldCustomerID to @NewCustomerID
Copy all of the OrderItems, using the OldOrderID column to help make the relation
Remove the extra helper column from Order
ALTER TABLE [Order] ADD OldOrderID INT NULL
INSERT INTO [Order] (CustomerID, OrderName, OldOrderID)
SELECT @NewCustomerID, OrderName, OrderID
FROM [Order]
WHERE CustomerID = @OldCustomerID
INSERT INTO OrderItem (OrderID, Quantity)
SELECT o.OrderID, i.Quantity
FROM [Order] o INNER JOIN OrderItem i ON o.OldOrderID = i.OrderID
WHERE o.CustomerID = @NewCustomerID
UPDATE [Order] SET OldOrderID = null WHERE OldOrderID IS NOT NULL
ALTER TABLE [Order] DROP COLUMN OldOrderID
If the OrderName is unique per customer, you could simply do:
INSERT INTO [Order] ([CustomerID], [OrderName])
SELECT
2 AS [CustomerID],
[OrderName]
FROM [Order]
WHERE [CustomerID] = 1
INSERT INTO [OrderItem] ([OrderID], [Quantity])
SELECT
[o2].[OrderID],
[oi1].[Quantity]
FROM [OrderItem] [oi1]
INNER JOIN [Order] [o1] ON [oi1].[OrderID] = [o1].[OrderID]
INNER JOIN [Order] [o2] ON [o1].[OrderName] = [o2].[OrderName]
WHERE [o1].[CustomerID] = 1 AND [o2].[CustomerID] = 2
Otherwise, you will have to use a temporary table or alter the existing Order table like @LastCoder suggested.