Bulk INSERT for records that don't exist, returning ID - pg-promise

I'm using pg-promise to process an array of data that I want to insert into table A. I'm having a hard time figuring out how to dynamically create a query that only INSERTs the values not already present in the table, while returning the IDs of all of them, whether they already existed or were newly created.
This is what I'm doing for a single element:
WITH s AS (
    SELECT *
    FROM fruits
    WHERE fruit_name = 'Apple'
), i AS (
    INSERT INTO fruits (fruit_name)
    SELECT 'Apple'
    WHERE NOT EXISTS (SELECT 1 FROM s)
    RETURNING fruit_id
)
SELECT fruit_id FROM i
UNION ALL
SELECT fruit_id FROM s;
But say that I have an array of fruits = ['Banana', 'Apple']. How would I use the helpers to generate a query that returns the existing ID for Apple and the newly created one for Banana?
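For reference, the generalized SQL for a list of fruits could look roughly like the sketch below; the VALUES list is the piece that a helper such as pgp.helpers.values could build from the array. This is only a sketch, not a tested answer: it ignores concurrent inserts and assumes fruit_name identifies a fruit uniquely.
WITH vals (fruit_name) AS (
    VALUES ('Banana'), ('Apple')   -- generated from the array
), ins AS (
    INSERT INTO fruits (fruit_name)
    SELECT v.fruit_name
    FROM vals v
    WHERE NOT EXISTS (SELECT 1 FROM fruits f WHERE f.fruit_name = v.fruit_name)
    RETURNING fruit_id, fruit_name
)
-- newly inserted rows ...
SELECT fruit_id, fruit_name FROM ins
UNION ALL
-- ... plus the ones that already existed (the outer query sees the pre-insert snapshot)
SELECT f.fruit_id, f.fruit_name
FROM fruits f
JOIN vals v ON v.fruit_name = f.fruit_name;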

Related

PostgreSQL array of data composite update element using where condition

I have a composite type:
CREATE TYPE mydata_t AS
(
user_id integer,
value character(4)
);
I also have a table that uses this composite type as an array of mydata_t.
CREATE TABLE tbl
(
id serial NOT NULL,
data_list mydata_t[],
PRIMARY KEY (id)
);
Here I want to update the mydata_t element in data_list whose user_id is 100000.
But I don't know which array element has user_id equal to 100000, so I first have to find that element; that's my problem, I don't know how to write the query. In short, I want to update the value of the array element whose user_id is 100000 (and where the id of tbl is, for example, 1). What would my query be?
Something like this (I know it's wrong !!!)
UPDATE "tbl" SET "data_list"[i]."value"='YYYY'
WHERE "id"=1 AND EXISTS (SELECT ROW_NUMBER() OVER() AS i
FROM unnest("data_list") "d" WHERE "d"."user_id"=10000 LIMIT 1)
For example, this is my tbl data:
Row1 => id = 1, data = ARRAY[ROW(5,'YYYY'),ROW(6,'YYYY')]
Row2 => id = 2, data = ARRAY[ROW(10,'YYYY'),ROW(11,'YYYY')]
Now I want to update tbl where id is 2, setting the value of the tbl.data element whose user_id is 11 to 'XXXX'.
In fact, the final result of Row2 will be this:
Row2 => id = 2, data = ARRAY[ROW(10,'YYYY'),ROW(11,'XXXX')]
If you know the element's current value, you can use the array_replace() function to make the change:
UPDATE tbl
SET data_list = array_replace(data_list, (11, 'YYYY')::mydata_t, (11, 'XXXX')::mydata_t)
WHERE id = 2
If you do not know the element's current value, the situation becomes more complex:
UPDATE tbl SET data_list = data_arr
FROM (
-- UPDATE doesn't allow aggregate functions so aggregate here
SELECT array_agg(new_data) AS data_arr
FROM (
-- For the id value, get the data_list values that are NOT modified
SELECT (user_id, value)::mydata_t AS new_data
FROM tbl, unnest(data_list)
WHERE id = 2 AND user_id != 11
UNION
-- Add the values to update
VALUES ((11, 'XXXX')::mydata_t)
) x
) y
WHERE id = 2
You should keep in mind, though, that there is an awful lot of work going on in the background that cannot be optimised. The array of mydata_t values has to be examined from start to finish and no index can be used for this. Furthermore, an update actually inserts a new row in the underlying file on disk, and if your array has more than a few entries this involves substantial work. It gets even more problematic when your arrays are larger than the page size of your PostgreSQL server, typically 8kB. This all happens behind the scenes, so it will work, but at a performance penalty. Even though array_replace sounds like the change is made in place (and in memory it indeed is), the UPDATE command writes a completely new tuple to disk. So if you have 4,000 array elements, at least 40kB of data has to be read (8 bytes for the mydata_t type on a typical system x 4,000 = 32kB in a TOAST file, plus the main page of the table, 8kB) and then written back to disk after the update. A real performance killer.
As @klin pointed out, this design may be more trouble than it is worth. Should you make data_list a table (as I would do), the update query becomes:
UPDATE data_list SET value = 'XXXX'
WHERE id = 2 AND user_id = 11
This will have MUCH better performance, especially if you add the appropriate indexes. You could then still create a view to publish the data in an aggregated form with a custom type if your business logic so requires.
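For illustration, the normalized layout suggested above could look roughly like this (a sketch; the table, constraint and view names are assumptions):
CREATE TABLE data_list (
    id      integer NOT NULL REFERENCES tbl (id),
    user_id integer NOT NULL,
    value   character(4),
    PRIMARY KEY (id, user_id)   -- also serves as the index the UPDATE above needs
);

-- Optional: a view that reproduces the old aggregated shape for consumers that expect it
CREATE VIEW tbl_data_view AS
SELECT id, array_agg((user_id, value)::mydata_t) AS data_list
FROM data_list
GROUP BY id;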

Renaming all existing attributes in an array of Json objects

I work with a Postgres DB.
For example, there is a table:
CREATE TABLE public.json_objects(id serial primary key, objects text);
It stores arrays of JSON like these:
INSERT INTO public.json_objects (objects) VALUES ('[{"name":"Ivan"}]'), ('[{"name":"Petr"}, {"surname":"Petrov"}]'), ('[{"form":"OOO"}, {"city":"Kizema"}]');
How can I replace the attribute "name" with "first name" or "surname" with "second name" everywhere?
I am using UPDATE with a SELECT subquery.
The replacement works, but if the attribute does not exist in a JSON object, it is added to the JSON with a null value (and this should not happen):
WITH updated_table AS (
    SELECT id, jsonb_agg(new_field_json) AS new_fields_json
    FROM (
        SELECT id,
               jsonb_array_elements(json_objects.objects::jsonb) - 'name'
                   || jsonb_build_object('first name', jsonb_array_elements(json_objects.objects::jsonb) -> 'name') AS new_field_json
        FROM public.json_objects
    ) r
    GROUP BY id
)
UPDATE public.json_objects
SET objects = updated_table.new_fields_json
FROM updated_table
WHERE json_objects.id = updated_table.id
This seems to be a one-off operation, so you can just use the regexp_replace function to replace the keys:
update public.json_objects set objects = regexp_replace(objects, '"name"', '"first name"', 'g');
update public.json_objects set objects = regexp_replace(objects, '"surname"', '"second name"', 'g');
Demo in dbfiddle
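If you would rather stay in jsonb, so that only elements that actually have the key are rewritten and nothing is added to the others, something along these lines should also work. This is a sketch against the table above, not a tested answer:
UPDATE public.json_objects j
SET objects = (
    SELECT jsonb_agg(
               CASE
                   WHEN elem ? 'name'
                       THEN (elem - 'name') || jsonb_build_object('first name', elem -> 'name')
                   WHEN elem ? 'surname'
                       THEN (elem - 'surname') || jsonb_build_object('second name', elem -> 'surname')
                   ELSE elem   -- elements without either key are left untouched
               END)
    FROM jsonb_array_elements(j.objects::jsonb) AS elem
)::text;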

How to construct dynamic SQL where condition against JSON column

I have a SQL table that stores data in JSON format. I am using the sample data below to explain the issue. Each document type has its own JSON structure.
DocumentID DocumentTypeID Status JsonData
----------------------------------------------------------------------------
1 2 Active {"FirstName":"Foo","LastName":"Bar","States":"[OK]"}
2 2 Active {"FirstName":"James","LastName":"Smith","States":"[TX,NY]"}
3 3 Active {"Make":"Ford","Model":"Focus","Year":"[2020]"}
4 3 Active {"Make":"Tesla","Model":"X","Year":"[2012,2015,2019]"}
Then I have another JSON that needs to be used in the WHERE condition:
@Condition = '{"FirstName": "James",LastName:"Smith","States":[TX]}'
I will also have DocumentTypeID as a parameter.
So in plain SQL, if I hard-code the property names, the query would look something like this:
SELECT * FROM Documents d
WHERE
    d.DocumentTypeID = @DocumentTypeID AND
    JSON_VALUE(d.JsonData,'$.FirstName') = JSON_VALUE(@Condition,'$.FirstName') AND
    JSON_VALUE(d.JsonData,'$.LastName') = JSON_VALUE(@Condition,'$.LastName') AND
    JSON_QUERY(d.JsonData,'$.States') = JSON_QUERY(@Condition,'$.States')
    -- This line is wrong. I have to check if one array is a subset of another array
Given
The property names in the JsonData column and in @Condition will match exactly for a given DocumentTypeID.
I already have another SQL table that stores each DocumentType and its properties. If it helps, I can store the JSON path for each property, which could be used in the above query to dynamically construct the WHERE condition:
DocumentTypeID PropertyName JsonPath DataType
---------------------------------------------------------------------------------
2 FirstName $.FirstName String
2 LastName $.LastName String
2 States $.States Array
3 Make $.Make String
3 Model $.Model String
3 Year $.Year Array
ISSUE
For each document type the @Condition will have a different JSON structure. How do I construct a dynamic WHERE condition? Is this even possible in SQL?
I am using C#/.NET, so I was thinking of constructing the SQL query in C# and just executing it. But before I go that route I want to check whether it's possible to do this in T-SQL.
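For what it's worth, the metadata-driven idea above could be sketched in T-SQL along these lines. The table name DocumentTypeProperties is hypothetical, STRING_AGG needs SQL Server 2017+, and only the scalar (String) properties are covered; array properties would still need OPENJSON as in the answer below:
DECLARE @DocumentTypeID int = 2,
        @Condition nvarchar(max) = N'{"FirstName":"James","LastName":"Smith"}',
        @where nvarchar(max),
        @sql nvarchar(max);

-- Build one JSON_VALUE comparison per scalar property registered for this document type
SELECT @where = STRING_AGG(
           'JSON_VALUE(d.JsonData, ''' + JsonPath + ''') = JSON_VALUE(@Condition, ''' + JsonPath + ''')',
           ' AND ')
FROM DocumentTypeProperties            -- hypothetical name for the property table shown above
WHERE DocumentTypeID = @DocumentTypeID
  AND DataType = 'String';

SET @sql = N'SELECT * FROM Documents d WHERE d.DocumentTypeID = @DocumentTypeID AND ' + @where;

EXEC sp_executesql @sql,
     N'@DocumentTypeID int, @Condition nvarchar(max)',
     @DocumentTypeID, @Condition;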
Unfortunately, JSON support was only added to SQL Server in the 2016 version, and it still has room for improvement. Working with JSON data that contains arrays is quite cumbersome, involving one OPENJSON to get the data and another OPENJSON to get the array data.
An SQL-based solution to this is possible, but, as I wrote, cumbersome.
First, create and populate a sample table (please save us this step in your future questions):
DECLARE @Documents AS TABLE (
[DocumentID] int,
[DocumentTypeID] int,
[Status] varchar(6),
[JsonData] varchar(100)
);
INSERT INTO @Documents ([DocumentID], [DocumentTypeID], [Status], [JsonData]) VALUES
(1, 2, 'Active', '{"FirstName":"Foo","LastName":"Bar","States":["OK"]}'),
(2, 2, 'Active', '{"FirstName":"James","LastName":"Smith","States":["TX","NY"]}'),
(2, 2, 'Active', '{"FirstName":"James","LastName":"Smith","States":["OK", "NY"]}'),
(2, 2, 'Active', '{"FirstName":"James","LastName":"Smith","States":["OH", "OK"]}'),
(3, 3, 'Active', '{"Make":"Ford","Model":"Focus","Year":[2020]}'),
(4, 3, 'Active', '{"Make":"Tesla","Model":"X","Year":[2012,2015,2019]}');
Note that I've added a couple of rows to the sample data to verify the condition works properly.
Also, as a side note, some of the JSON data in the question was improperly formatted; I've had to fix that.
Then, declare the search parameters (note: I still think sending a JSON string as a search condition is potentially risky):
DECLARE @DocumentTypeID int = 2,
        @Condition varchar(100) = '{"FirstName": "James","LastName":"Smith","States":["TX", "OH"]}';
(Note: I've added another state - again to make sure the condition works as it should.)
Then I've used a common table expression with OPENJSON and CROSS APPLY to convert the JSON condition to tabular data, and joined that CTE to the table:
WITH CTE AS
(
    SELECT FirstName, LastName, [State]
    FROM OPENJSON(@Condition)
    WITH (
        FirstName varchar(10) '$.FirstName',
        LastName varchar(10) '$.LastName',
        States nvarchar(max) '$.States' AS JSON
    )
    CROSS APPLY OPENJSON(States)
    WITH (
        [State] varchar(2) '$'
    )
)
SELECT [DocumentID], [DocumentTypeID], [Status], [JsonData]
FROM @Documents
CROSS APPLY OPENJSON([JsonData])
WITH (
    -- Since we already have to use OPENJSON, no point of also using JSON_VALUE...
    FirstName varchar(10) '$.FirstName',
    LastName varchar(10) '$.LastName',
    States nvarchar(max) '$.States' AS JSON
) AS JD
CROSS APPLY OPENJSON(States)
WITH (
    [State] varchar(2) '$'
) AS JDS
JOIN CTE
    ON JD.FirstName = CTE.FirstName
    AND JD.LastName = CTE.LastName
    AND JDS.[State] = CTE.[State]
WHERE DocumentTypeID = @DocumentTypeID
Results:
DocumentID DocumentTypeID Status JsonData
2 2 Active {"FirstName":"James","LastName":"Smith","States":["TX","NY"]}
2 2 Active {"FirstName":"James","LastName":"Smith","States":["OH", "OK"]}
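Note that the join above returns a document as soon as it shares at least one state with @Condition. If the requirement is a true subset check (every state in @Condition must be present in the document), one way is to add GROUP BY/HAVING on top of the same construction, as in the sketch below; with the sample condition ["TX", "OH"] it returns no rows, because no sample document contains both states:
WITH CTE AS
(
    SELECT FirstName, LastName, [State]
    FROM OPENJSON(@Condition)
    WITH (
        FirstName varchar(10) '$.FirstName',
        LastName varchar(10) '$.LastName',
        States nvarchar(max) '$.States' AS JSON
    )
    CROSS APPLY OPENJSON(States)
    WITH ([State] varchar(2) '$')
)
SELECT d.[DocumentID], d.[DocumentTypeID], d.[Status], d.[JsonData]
FROM @Documents AS d
CROSS APPLY OPENJSON(d.[JsonData])
WITH (
    FirstName varchar(10) '$.FirstName',
    LastName varchar(10) '$.LastName',
    States nvarchar(max) '$.States' AS JSON
) AS JD
CROSS APPLY OPENJSON(JD.States)
WITH ([State] varchar(2) '$') AS JDS
JOIN CTE
    ON JD.FirstName = CTE.FirstName
    AND JD.LastName = CTE.LastName
    AND JDS.[State] = CTE.[State]
WHERE d.DocumentTypeID = @DocumentTypeID
GROUP BY d.[DocumentID], d.[DocumentTypeID], d.[Status], d.[JsonData]
HAVING COUNT(DISTINCT JDS.[State]) = (SELECT COUNT(*) FROM CTE);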

Array Insertion Postgres

I have a table item with attributes no (integer) and price (integer), and another table cart with attributes no (integer) and items (an array of item).
I have some records in item.
When I tried:
INSERT INTO myschema.cart VALUES(1,'{SELECT item from myschema.item}')
I'm getting the error "malformed record literal".
I expected this to insert all items from myschema.item into the cart record.
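For context, the structure described in the question presumably looks roughly like this (a sketch based only on the wording above):
CREATE TABLE myschema.item (
    no    integer,
    price integer
);

CREATE TABLE myschema.cart (
    no    integer,
    items myschema.item[]   -- array of the item table's row type
);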
It's hard to give you an exact statement without the table structures and such, but you can select into an array:
INSERT INTO myschema.cart (id, item_ids)
SELECT 1, array(SELECT id from myschema.item)
This will select the ids from the item table into an array.
You can test it out by writing:
select array(SELECT id from myschema.item)
You can't write a subquery inside a string like that.
What you need to do is aggregate the items into an array with array_agg:
INSERT INTO myschema.cart
VALUES (1, (SELECT array_agg(item) FROM myschema.item));
Or
INSERT INTO myschema.cart
SELECT 1, array_agg(item) FROM myschema.item;

In DB2, perform an update based on insert for large number of rows

In DB2, I need to do an insert, then, using results/data from that insert, update a related table. I need to do it on a million plus records and would prefer not to lock the entire database. So, 1) how do I 'couple' the insert and update statements? 2) how can I ensure the integrity of the transaction (without locking the whole she-bang)?
Some pseudo-code should help clarify:
STEP 1
insert into table1 (neededId, id) select DYNAMICVALUE, id from tableX where needed value is null
STEP 2
update table2 set neededId = (GET THE DYNAMIC VALUE JUST INSERTED) where id = (THE ID JUST INSERTED)
Note: in table1, the ID column is not unique, so I can't just filter on that to find the new DYNAMICVALUE.
This should make it clearer. (FTR, this works, but I don't like it, because I'd have to lock the tables to maintain integrity. It would be great if I could run these statements together and allow the update to refer to the newAddressNumber value.)
/**** RUNNING TOP INSERT FIRST ****/
--insert a new address for each order that does not have an address id
insert into addresses
(customerId, addressNumber, address)
select
cust.Id,
--get next available addressNumber
ifNull((select max(addy2.addressNumber) from addresses addy2 where addy2.customerId = cust.id),0) + 1 as newAddressNumber,
cust.address
from customers cust
where exists (
--find all customers with at least 1 order where addressNumber is null
select 1 from orders ord
where 1=1
and ord.customerId = cust.id
and ord.addressNumber is null
)
/*****RUNNING THIS UPDATE SECOND*****/
update orders ord1
set addressNumber = (
select max(addressNumber) from addresses addy3
where addy3.customerId = ord1.customerId
)
where 1=1
and ord1.addressNumber is null
The IDENTITY_VAL_LOCAL function is a non-deterministic function that returns the most recently assigned value for an identity column, where the assignment occurred as a result of a single INSERT statement using a VALUES clause.
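A minimal illustration of how that function is used, assuming neededId in table1 is defined as an identity column (note that IDENTITY_VAL_LOCAL only reflects the most recent single-row INSERT with a VALUES clause in the current session, so by itself it does not cover the million-row case):
-- hypothetical single-row example
INSERT INTO table1 (id) VALUES (42);

UPDATE table2
   SET neededId = IDENTITY_VAL_LOCAL()
 WHERE id = 42;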