Insert values while using the default values specified in Django class - postgresql

I have a Django app that creates this table:
from django.db import models

class MyTable(models.Model):
    id = models.AutoField(primary_key=True)
    field1 = models.DateTimeField(blank=True, null=True)
    field2 = models.BooleanField(default=False)
    field3 = models.IntegerField(default=0)
    .....
    fieldN = models.IntegerField(default=0)
I'm in the dev environment, and I would like to manually insert some rows into this table with SQL, using this syntax:
INSERT INTO my_table (id, field1, field2, field3, ..., fieldN) VALUES ('1234', 'something', 'something', 'something', ..., 'something');
N is a large number, so I would like to use the default values declared in the Django class instead of writing out all N fields by hand. Is there a way to do this in PostgreSQL?
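For reference, PostgreSQL can only fall back to defaults that are declared in the database schema itself; Django's default=... is applied in Python and does not normally create a DDL DEFAULT. A minimal sketch of the plain-SQL syntax, assuming such database-level defaults did exist:

-- Hypothetical sketch: only works for columns whose DEFAULT is declared in the
-- database schema, which Django's default=... does not do on its own.
-- Omitting columns lets PostgreSQL fill them from their defaults:
INSERT INTO my_table (id, field1) VALUES (1234, '2022-01-01 00:00:00');

-- Or name the columns and use the DEFAULT keyword explicitly for the ones to skip:
INSERT INTO my_table (id, field1, field2, field3)
VALUES (1234, '2022-01-01 00:00:00', DEFAULT, DEFAULT);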

Related

Update table with a newly added column containing data from the same table's old column, but modified (flattened) jsonb

So I've come across an issue where I have to migrate data from one column to a "clone" of itself with a different jsonb schema -> I need to parse the JSON from
["keynamed": [...{"type": "type_info", "value": "value_in_here"}]]
into a plain object with key:value pairs, dictionary-like: {"type_info": "value_in_here", ...}
So far I've tried subqueries with JSON functions plus a switch case to map "type" to "type_info" and then jsonb_build_object(), but this takes data from the whole table and I need it to run per row in the update. Is there anything simpler than doing N subqueries? The closest I've come up with is:
select
    jsonb_object_agg(t.k, t.v)::jsonb as _json
from
    (
        select
            jsonb_build_object(type_, _value) as _json
        from
            (
                select
                    _value,
                    CASE _type
                        ...
                    END type_
                from
                    (
                        select
                            (datasets ->> 'type') as _type,
                            datasets -> 'value' as _value
                        from
                            (
                                select
                                    jsonb_array_elements(values -> 'keynamed') as datasets
                                from
                                    table
                            ) s
                    ) s
            ) s
    ) s,
    jsonb_each(_json) as t(k, v);
But I have no idea how to make it row-specific and apply it in a simple update like:
UPDATE table
SET table.new_field = (subquery with parsed dict in json)
Any ideas/tips on how to solve this with plain PostgreSQL, without any external support?
The expected output of the table would be:
 id | old_value                                                           | new_value
----+---------------------------------------------------------------------+-------------------------------------
  1 | ["keynamed": [...{"type": "type_info", "value": "value_in_here"}]]  | {"type_info": "value_in_here", ...}
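For reference, the core flattening step can be tried on a single literal value. A minimal sketch, assuming the stored shape is really an object with a "keynamed" array (the sample above is not valid JSON as written) and leaving out the CASE mapping of "type" to "type_info":

select jsonb_object_agg(elem ->> 'type', elem -> 'value') as flattened
from jsonb_array_elements(
         '{"keynamed": [{"type": "type_info", "value": "value_in_here"},
                        {"type": "other_type", "value": 42}]}'::jsonb -> 'keynamed'
     ) as elem;
-- => {"type_info": "value_in_here", "other_type": 42}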
According to the Postgres documentation, you can UPDATE using a SELECT from another table with the join pattern (see the UPDATE documentation).
Sample:
UPDATE accounts SET contact_first_name = first_name,
                    contact_last_name = last_name
FROM salesmen WHERE salesmen.id = accounts.sales_id;
If I understand correctly, the query below may help you, but I can't test it because I don't have sample data, so it may contain syntax errors.
update table t
set new_value = tmp._json
from (
    select
        id,
        jsonb_object_agg(t.k, t.v)::jsonb as _json
    from
        (
            select
                id,
                jsonb_build_object(type_, _value) as _json
            from
                (
                    select
                        id,
                        _value,
                        CASE _type
                            ...
                        END type_
                    from
                        (
                            select
                                id,
                                (datasets ->> 'type') as _type,
                                datasets -> 'value' as _value
                            from
                                (
                                    select
                                        id,
                                        jsonb_array_elements(values -> 'keynamed') as datasets
                                    from
                                        table
                                ) s
                        ) s
                ) s
        ) s,
        jsonb_each(_json) as t(k, v)
    group by id
) tmp
where tmp.id = t.id;
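The key change is that id is carried through every level of nesting, so jsonb_object_agg() can be grouped per row; the derived table tmp then contains exactly one aggregated object per id for the UPDATE ... FROM join.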

Is it possible to bulk update specific values in postgresql efficiently?

I have created a pipeline which is required to update a high number of rows in postgres where each row should be updated differently.
After looking it up, I found that this could be done using Postgres's UPDATE .. FROM .. syntax (https://www.postgresql.org/docs/current/sql-update.html), and I came up with the following query, which works perfectly fine:
update grades
set course_id = data_table.course_id,
    student_id = data_table.student_id,
    grade = data_table.grade
from (
    select unnest(array[1,2]) as id, unnest(array['Math', 'Math']) as course_id, unnest(array[1000, 1001]) as student_id, unnest(array[95, 100]) as grade
) as data_table
where grades.id = data_table.id;
There's also another way to do it with WITH syntax like this:
update grades
set course_id = data_table.course_id,
    student_id = data_table.student_id,
    grade = data_table.grade
from (
    WITH vals (id, course_id, student_id, grade) as (VALUES (1, 'Math', 1000, 95), (2, 'Math', 1001, 100)) SELECT * from vals
) as data_table
where grades.id = data_table.id;
My problem is that in some rows I want to update a field and in others I don't. When I don't want to update, I just want to keep the value that is currently in the table. In that case, I would want to do something like:
update grades g
set course_id = data_table.course_id,
    student_id = data_table.student_id,
    grade = data_table.grade
from (
    select unnest(array[1,2]) as id, unnest(array[g.course_id, 'Math2']) as course_id, unnest(array[1000, 1001]) as student_id, unnest(array[95, g.grade]) as grade
) as data_table
where grades.id = data_table.id;
However, this is not possible, and I get back the error: HINT: There is an entry for table "g", but it cannot be referenced from this part of the query.
Also postgresql documentation specifies about it in the From description:
Note that the target table must not appear in the from_list,
unless you intend a self-join (in which case it must appear with an alias in the from_list).
Does anyone know if there's a way to perform such a bulk update?
I've tried to use JOINs in the inner query, but with no luck.
Choose a value that cannot be a valid value, e.g. '-1' for the course name and -1 for a grade, use that for your generated values, and then use a CASE in the update to decide whether to keep the current value or not:
update grades g
set course_id  = case when data_table.course_id = '-1' then g.course_id else data_table.course_id end,
    student_id = data_table.student_id,
    grade      = case when data_table.grade = -1 then g.grade else data_table.grade end
from (
    select
        unnest(array[1,2]) as id,
        unnest(array['-1', 'Math2']) as course_id, -- use '-1' instead of g.course_id
        unnest(array[1000, 1001]) as student_id,
        unnest(array[95, -1]) as grade             -- use -1 instead of g.grade
) as data_table
where g.id = data_table.id;
Pick whatever values you like for the impossible value.
If nulls were not allowed in these columns it would have been more straightforward and less code: use null for the impossible value and coalesce() for the update value.
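A sketch of that variant, assuming course_id and grade were NOT NULL columns so that null can safely mean "keep the current value":

update grades g
set course_id = coalesce(data_table.course_id, g.course_id),
    grade     = coalesce(data_table.grade, g.grade)
from (
    select
        unnest(array[1,2]) as id,
        unnest(array[null, 'Math2']) as course_id, -- null = keep the current course_id
        unnest(array[95, null]) as grade           -- null = keep the current grade
) as data_table
where g.id = data_table.id;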

How to construct dynamic SQL where condition against JSON column

I have a SQL table that stores data in JSON format. I am using the sample data below to illustrate the issue. Each document type has its own JSON structure.
DocumentID  DocumentTypeID  Status  JsonData
----------------------------------------------------------------------------
1           2               Active  {"FirstName":"Foo","LastName":"Bar","States":"[OK]"}
2           2               Active  {"FirstName":"James","LastName":"Smith","States":"[TX,NY]"}
3           3               Active  {"Make":"Ford","Model":"Focus","Year":"[2020]"}
4           3               Active  {"Make":"Tesla","Model":"X","Year":"[2012,2015,2019]"}
Then I have another JSON that needs to be used in the WHERE condition:
@Condition = '{"FirstName": "James",LastName:"Smith","States":[TX]}'
I will also have DocumentTypeID as a parameter.
So in normal SQL, if I hard-code the property names, the query would look something like:
SELECT * FROM Documents d
WHERE
    d.DocumentTypeID = @DocumentTypeID AND
    JSON_VALUE(d.JsonData,'$.FirstName') = JSON_VALUE(@Condition,'$.FirstName') AND
    JSON_VALUE(d.JsonData,'$.LastName') = JSON_VALUE(@Condition,'$.LastName') AND
    JSON_QUERY(d.JsonData,'$.States') = JSON_QUERY(@Condition,'$.States') -- This line is wrong. I have
                                                                          -- to check if one array is
                                                                          -- a subset of another array
Given
The property names in the JsonData column and in Condition will exactly match for a given DocumentTypeID.
I already have another SQL table that stores each DocumentType and its properties. If it helps, I can store the JSON path for each property so it can be used in the above query to dynamically construct the WHERE condition:
DocumentTypeID  PropertyName  JsonPath      DataType
---------------------------------------------------------------------------------
2               FirstName     $.FirstName   String
2               LastName      $.LastName    String
2               States        $.States      Array
3               Make          $.Make        String
3               Model         $.Model       String
3               Year          $.Year        Array
ISSUE
For each document type the @Condition will have a different JSON structure. How do I construct a dynamic WHERE condition? Is this even possible in SQL?
I am using C#/.NET, so I was thinking of constructing the SQL query in C# and just executing it. But before I go that route I want to check if it's possible to do this in T-SQL.
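For illustration, a dynamic-SQL version of that idea might look roughly like this. This is an untested sketch: DocumentTypeProperties is a made-up name for the metadata table above, STRING_AGG requires SQL Server 2017+, and arrays are compared as plain text here (real subset checks need OPENJSON, as in the answer below).

DECLARE @DocumentTypeID int = 2,
        @Condition nvarchar(max) = N'{"FirstName":"James","LastName":"Smith","States":["TX"]}',
        @Where nvarchar(max),
        @Sql nvarchar(max);

-- Build one comparison per stored property, using its JSON path.
SELECT @Where = STRING_AGG(
           CASE DataType
               WHEN 'String'
                   THEN 'JSON_VALUE(d.JsonData, ''' + JsonPath + ''') = JSON_VALUE(@c, ''' + JsonPath + ''')'
               ELSE 'JSON_QUERY(d.JsonData, ''' + JsonPath + ''') = JSON_QUERY(@c, ''' + JsonPath + ''')'
           END, ' AND ')
FROM DocumentTypeProperties
WHERE DocumentTypeID = @DocumentTypeID;

SET @Sql = N'SELECT * FROM Documents d WHERE d.DocumentTypeID = @dt AND ' + @Where;
EXEC sp_executesql @Sql, N'@dt int, @c nvarchar(max)', @dt = @DocumentTypeID, @c = @Condition;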
Unfortunately, JSON support was only added to SQL Server in the 2016 version, and it still has room for improvement. Working with JSON data that contains arrays is quite cumbersome, involving OPENJSON to get the data, and another OPENJSON to get the array data.
A SQL-based solution to this is possible, but, as I wrote, cumbersome.
First, create and populate sample table (Please save us this step in your future questions):
DECLARE @Documents AS TABLE (
    [DocumentID] int,
    [DocumentTypeID] int,
    [Status] varchar(6),
    [JsonData] varchar(100)
);
INSERT INTO @Documents ([DocumentID], [DocumentTypeID], [Status], [JsonData]) VALUES
(1, 2, 'Active', '{"FirstName":"Foo","LastName":"Bar","States":["OK"]}'),
(2, 2, 'Active', '{"FirstName":"James","LastName":"Smith","States":["TX","NY"]}'),
(2, 2, 'Active', '{"FirstName":"James","LastName":"Smith","States":["OK", "NY"]}'),
(2, 2, 'Active', '{"FirstName":"James","LastName":"Smith","States":["OH", "OK"]}'),
(3, 3, 'Active', '{"Make":"Ford","Model":"Focus","Year":[2020]}'),
(4, 3, 'Active', '{"Make":"Tesla","Model":"X","Year":[2012,2015,2019]}');
Note I've added a couple of rows to the sample data, to verify the condition is working properly.
Also, as a side note, some of the JSON data in the question was improperly formatted; I've had to fix that.
Then, declare the search parameters (Note: I still think sending a JSON string as a search condition is potentially risky):
DECLARE @DocumentTypeID int = 2,
        @Condition varchar(100) = '{"FirstName": "James","LastName":"Smith","States":["TX", "OH"]}';
(Note: I've added another state - again to make sure the condition works as it should.)
Then, I've used a common table expression with OPENJSON and CROSS APPLY to convert the JSON condition to tabular data, and joined that CTE to the table:
WITH CTE AS
(
    SELECT FirstName, LastName, [State]
    FROM OPENJSON(@Condition)
    WITH (
        FirstName varchar(10) '$.FirstName',
        LastName varchar(10) '$.LastName',
        States nvarchar(max) '$.States' AS JSON
    )
    CROSS APPLY OPENJSON(States)
    WITH (
        [State] varchar(2) '$'
    )
)
SELECT [DocumentID], [DocumentTypeID], [Status], [JsonData]
FROM @Documents
CROSS APPLY OPENJSON([JsonData])
WITH (
    -- Since we already have to use OPENJSON, no point of also using JSON_VALUE...
    FirstName varchar(10) '$.FirstName',
    LastName varchar(10) '$.LastName',
    States nvarchar(max) '$.States' AS JSON
) As JD
CROSS APPLY OPENJSON(States)
WITH (
    [State] varchar(2) '$'
) As JDS
JOIN CTE
    ON JD.FirstName = CTE.FirstName
    AND JD.LastName = CTE.LastName
    AND JDS.[State] = CTE.[State]
WHERE DocumentTypeID = @DocumentTypeID
Results:
DocumentID  DocumentTypeID  Status  JsonData
2           2               Active  {"FirstName":"James","LastName":"Smith","States":["TX","NY"]}
2           2               Active  {"FirstName":"James","LastName":"Smith","States":["OH", "OK"]}

In PostgreSQL how can an array column be filtered and searched for unique values only?

I'm trying to compare values in a history table populated by an update trigger, to see if certain columns in the old and new values of the JSON fields are equal, and if they are all equal, to build a CASE WHEN query. Here's what I'm trying to do:
SQL
create table history
(
    id serial not null,
    ts timestamp default now(),
    table_schema text,
    table_name text,
    operation text,
    updated_by text default CURRENT_USER,
    new json,
    old json
);
WITH t AS (
    select id,
           old->>'field1' = new->>'field1' as isMatchField1,
           old->>'field2' = new->>'field2' as isMatchField2,
           old->>'field3' = new->>'field3' as isMatchField3,
           old->>'field4' = new->>'field4' as isMatchField4
    from history
)
select id, array[isMatchField1, isMatchField2, isMatchField3, isMatchField4] from t
OUTPUT
1, {true, true, false, null}
How do I filter out all nulls from the array and write a CASE WHEN query to see if only true values exist? Basically I want to do something like:
select id,
       case when array field is only true and null then 'no changes made'
            else 'changes made'
       end as updated
from t
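One way this could be expressed, as an untested sketch against the CTE t above: strip the NULLs with array_remove() and check that everything left is true with ALL:

select id,
       case when true = all(array_remove(
                array[isMatchField1, isMatchField2, isMatchField3, isMatchField4],
                null))
            then 'no changes made'
            else 'changes made'
       end as updated
from t;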

Insert multiple rows where not exists PostgreSQL

I'd like to generate a single SQL query to mass-insert a series of rows that don't already exist in a table. My current setup makes a new query for each record insertion, similar to the solution detailed in WHERE NOT EXISTS in PostgreSQL gives syntax error, but I'd like to move this to a single query to optimize performance, since my current setup could generate several hundred queries at a time. Right now I'm trying something like the example below:
INSERT INTO users (first_name, last_name, uid)
SELECT ( 'John', 'Doe', '3sldkjfksjd'), ( 'Jane', 'Doe', 'adslkejkdsjfds')
WHERE NOT EXISTS (
SELECT * FROM users WHERE uid IN ('3sldkjfksjd', 'adslkejkdsjfds')
)
Postgres returns the following error:
PG::Error: ERROR: INSERT has more target columns than expressions
The problem is that PostgreSQL doesn't seem to want to take a series of values when using SELECT. Conversely, I can make the insertions using VALUES, but I can't then prevent duplicates from being generated using WHERE NOT EXISTS.
http://www.techonthenet.com/postgresql/insert.php suggests in the section EXAMPLE - USING SUB-SELECT that multiple records should be insertable from another referenced table using SELECT, so I'm wondering why I can't seem to pass in a series of values to insert. The values I'm passing are coming from an external API, so I need to generate the values to insert by hand.
Your select is not doing what you think it does.
The most compact version in PostgreSQL would be something like this:
with data(first_name, last_name, uid) as (
    values
        ('John', 'Doe', '3sldkjfksjd'),
        ('Jane', 'Doe', 'adslkejkdsjfds')
)
insert into users (first_name, last_name, uid)
select d.first_name, d.last_name, d.uid
from data d
where not exists (select 1
                  from users u2
                  where u2.uid = d.uid);
Which is pretty much equivalent to:
insert into users (first_name, last_name, uid)
select d.first_name, d.last_name, d.uid
from (
    select 'John' as first_name, 'Doe' as last_name, '3sldkjfksjd' as uid
    union all
    select 'Jane', 'Doe', 'adslkejkdsjfds'
) as d
where not exists (select 1
                  from users u2
                  where u2.uid = d.uid);
a_horse_with_no_name's answer actually has a syntax error, missing a final closing right parens, but other than that is the correct way to do this.
Update:
For anyone coming to this with a situation like mine, if you have columns that need to be type cast (for instance timestamps or uuids or jsonb in PG 9.5), you must declare that in the values you pass to the query:
-- insert multiple if not exists
-- where another_column_name is of type uuid, with strings cast as uuids
-- where created_at and updated_at are of type timestamp, with strings cast as timestamps
WITH data (id, some_column_name, another_column_name, created_at, updated_at) AS (
    VALUES
        (<id value>, <some_column_name_value>, 'a5fa7660-8273-4ffd-b832-d94f081a4661'::uuid, '2016-06-13T12:15:27.552-07:00'::timestamp, '2016-06-13T12:15:27.879-07:00'::timestamp),
        (<id value>, <some_column_name_value>, 'b9b17117-1e90-45c5-8f62-d03412d407dd'::uuid, '2016-06-13T12:08:17.683-07:00'::timestamp, '2016-06-13T12:08:17.801-07:00'::timestamp)
)
INSERT INTO table_name (id, some_column_name, another_column_name, created_at, updated_at)
SELECT d.id, d.some_column_name, d.another_column_name, d.created_at, d.updated_at
FROM data d
WHERE NOT EXISTS (SELECT 1 FROM table_name t WHERE t.id = d.id);
a_horse_with_no_name's answer saved me today on a project, but I had to make these tweaks to make it perfect.