Sum items in a TSQL JSON column

Let's say I have a table with a "data" NVARCHAR(MAX) column that contains this JSON:
[
  {
    "room": "kitchen",
    "items": [
      {
        "name": "table",
        "price": 100
      }
    ]
  },
  {
    "room": "bedroom",
    "items": [
      {
        "name": "bed",
        "price": 250
      },
      {
        "name": "lamp",
        "price": 50
      }
    ]
  },
  {
    "room": "bathroom",
    "items": [
      {
        "name": "toilet",
        "price": 101
      },
      {
        "name": "shower",
        "items": [
          {
            "name": "shower curtain",
            "price": 10
          },
          {
            "name": "shower head",
            "price": 40
          }
        ]
      }
    ]
  }
]
Using T-SQL, can I somehow SUM all prices in the JSON? Please note that "price" appears at different levels of the JSON.
And furthermore, can I make a computed column that will SUM all the prices in the JSON column?

In JSON you would have to walk all the nodes and check whether another sublevel containing "price" exists; treating the value as plain text is easier. This example works on a single cell.
The idea above of storing the result in another column of the table is a good one.
You can implement a trigger after every INSERT / UPDATE to do the calculation instead of a computed column.
declare @str varchar(4000) = '[ { "room": "kitchen", "items": [ { "name": "table", "price": 100 } ] }, { "room": "bedroom", "items": [ { "name": "bed", "price": 250 }, { "name": "lamp", "price": 50 } ] }, { "room": "bathroom", "items": [ { "name": "toilet", "price": 101 }, { "name": "shower", "items": [ { "name": "shower curtain", "price": 10 }, { "name": "shower head", "price": 40 } ] } ] } ]'
      , @sub varchar(15);
drop table if exists #prices;
create table #prices (price int);

-- loop over every remaining occurrence of "price":
while patindex('%"price": %', @str) > 0
begin
    -- take 15 characters starting at the match ("price": plus the digits)
    select @sub = substring(@str, patindex('%"price": %', @str), 15);
    -- strip every non-digit character, leaving only the number
    while patindex('%[^0-9]%', @sub) > 0
        set @sub = stuff(@sub, patindex('%[^0-9]%', @sub), 1, '');
    insert into #prices select try_cast(@sub as int);
    -- remove the processed occurrence so the next iteration finds the next one
    set @str = stuff(@str, patindex('%"price": %', @str), 15, '');
end;
select sum(price) from #prices;
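
On SQL Server 2016 or later there is also a set-based alternative: flatten the JSON with OPENJSON inside a recursive CTE and sum every "price" key at any depth. A minimal sketch, assuming a hypothetical table named rooms holding the "data" column from the question:

declare @json nvarchar(max) = (select top (1) data from rooms); -- one cell, as above; table name is made up

with nodes as (
    -- anchor: the top-level array elements
    select [key], [value], [type]
    from openjson(@json)
    union all
    -- recursive step: descend into every nested array (type 4) or object (type 5)
    select j.[key], j.[value], j.[type]
    from nodes
    cross apply openjson(nodes.[value]) j
    where nodes.[type] in (4, 5)
)
select sum(try_cast([value] as int)) as total_price
from nodes
where [key] = 'price';

Note that a computed column definition cannot contain a query, so to persist a per-row total you would wrap logic like this in a scalar function, or use the trigger idea above.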

Related

How to select filtered postgresql jsonb field with performance prioritization?

A table:
CREATE TABLE events_holder(
  id serial primary key,
  version int not null,
  data jsonb not null
);
The data field can be very large (up to 100 MB) and looks like this:
{
  "id": 5,
  "name": "name5",
  "events": [
    {
      "id": 255,
      "name": "festival",
      "start_date": "2022-04-15",
      "end_date": "2023-04-15",
      "values": [
        {
          "id": 654,
          "type": "text",
          "name": "importance",
          "value": "high"
        },
        {
          "id": 655,
          "type": "boolean",
          "name": "epic",
          "value": "true"
        }
      ]
    },
    {
      "id": 256,
      "name": "discovery",
      "start_date": "2022-02-20",
      "end_date": "2022-02-22",
      "values": [
        {
          "id": 711,
          "type": "text",
          "name": "importance",
          "value": "low"
        },
        {
          "id": 712,
          "type": "boolean",
          "name": "specificAttribute",
          "value": "false"
        }
      ]
    }
  ]
}
I want to select the data field by version, but filtered with an extra condition: only events whose end_date > '2022-03-15'. The output must look like this:
{
  "id": 5,
  "name": "name5",
  "events": [
    {
      "id": 255,
      "name": "festival",
      "start_date": "2022-04-15",
      "end_date": "2023-04-15",
      "values": [
        {
          "id": 654,
          "type": "text",
          "name": "importance",
          "value": "high"
        },
        {
          "id": 655,
          "type": "boolean",
          "name": "epic",
          "value": "true"
        }
      ]
    }
  ]
}
How can I do this with maximum performance? How should I index the data field?
My primary solution:
with cte as (
  select eh.id, eh.version, jsonb_agg(events) as filteredEvents
  from events_holder eh
  cross join jsonb_array_elements(eh.data #> '{events}') as events
  where version = 1
    and (events ->> 'end_date')::timestamp >= '2022-03-15'::timestamp
  group by id, version
)
select jsonb_set(data, '{events}', cte.filteredEvents)
from events_holder, cte
where events_holder.id = cte.id;
But I don't think this is a good approach.
You can do this using a JSON path expression:
select eh.id, eh.version,
       jsonb_path_query_array(data,
         '$.events[*] ? (@.end_date.datetime() >= "2022-03-15".datetime())')
from events_holder eh
where eh.version = 1
  and eh.data @? '$.events[*] ? (@.end_date.datetime() >= "2022-03-15".datetime())'
Given your example JSON, this returns:
[
  {
    "id": 255,
    "name": "festival",
    "values": [
      {
        "id": 654,
        "name": "importance",
        "type": "text",
        "value": "high"
      },
      {
        "id": 655,
        "name": "epic",
        "type": "boolean",
        "value": "true"
      }
    ],
    "end_date": "2023-04-15",
    "start_date": "2022-04-15"
  }
]
Depending on your data distribution, a GIN index on data or an index on version could help.
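If the jsonpath predicate is the main access path, a GIN index with the jsonb_path_ops operator class supports the @? operator used above; a sketch (index names are left to PostgreSQL's defaults):

create index on events_holder using gin (data jsonb_path_ops); -- serves the @? and @@ jsonpath operators
create index on events_holder (version);                       -- plain b-tree index for the version filter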
If you need to re-construct the whole JSON content but with just a filtered events array, you can do something like this:
select (data - 'events') ||
       jsonb_build_object('events',
         jsonb_path_query_array(data,
           '$.events[*] ? (@.end_date.datetime() >= "2022-03-15".datetime())'))
from events_holder eh
...
(data - 'events') removes the events key from the JSON. Then the result of the JSON path query is appended back to that (partial) object.
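
An equivalent sketch that keeps the jsonb_set shape of the original attempt (same path filter as above):

select jsonb_set(data, '{events}',
         jsonb_path_query_array(data,
           '$.events[*] ? (@.end_date.datetime() >= "2022-03-15".datetime())'))
from events_holder eh
where eh.version = 1;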

How to insert an object into the players array in MongoDB?

I have the document below and I need to insert an object into the players array. How can I do this with MongoDB?
{
  "data": {
    "createTournament": {
      "_id": "6130d9a565aa744f173a824a",
      "title": "Jogo de truco",
      "description": "",
      "status": "PENDING",
      "size": 8,
      "prizePool": 20,
      "currency": "USD",
      "type": "Battle",
      "entryFee": 1,
      "startDate": "2021-09-01",
      "endDate": "2021-09-01",
      "rounds": [
        {
          "round": 1,
          "totalMatches": 4,
          "matches": [
            {
              "match": 1,
              "players": []
            }
          ]
        }
      ]
    }
  }
}
This will push "3" into the players array of the match with match: 1 inside the round with round: 1:
db.collection('example').updateOne(
  {},
  {
    $push: { "data.createTournament.rounds.$[outer].matches.$[inner].players": "3" }
  },
  {
    arrayFilters: [
      { "outer.round": 1 }, // change this to choose which round's array is pushed into
      { "inner.match": 1 }  // change this to choose which match's array is pushed into
    ]
  }
)
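
If you need to push an actual object rather than the string "3", the same arrayFilters apply; the player fields below are made up for illustration:

db.collection('example').updateOne(
  {},
  {
    $push: {
      "data.createTournament.rounds.$[outer].matches.$[inner].players":
        { playerId: 3, name: "Alice" } // hypothetical player object
    }
  },
  { arrayFilters: [{ "outer.round": 1 }, { "inner.match": 1 }] }
)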

PostgreSQL & jsonb: WHERE <a specific nested field in my json> IS NOT NULL

Table public.challenge, column lines JSONB
My initial JSON in lines:
[
  {
    "line": 1,
    "blocs": [
      {
        "size": 100,
        "name": "abc"
      },
      {
        "size": 100,
        "name": "def"
      },
      {
        "size": 100,
        "name": "ghi"
      }
    ]
  },
  {
    "line": 2,
    "blocs": [
      {
        "size": 100,
        "name": "xyz"
      }
    ]
  }
]
Desired result (add a new "wrapper" object to every bloc):
[
  {
    "line": 1,
    "blocs": [
      {
        "size": 100,
        "name": "abc",
        "wrapper": {
          "nestedName": "abc",
          "type": "regular"
        }
      },
      {
        "size": 100,
        "name": "def",
        "wrapper": {
          "nestedName": "def",
          "type": "regular"
        }
      },
      {
        "size": 100,
        "name": "ghi",
        "wrapper": {
          "nestedName": "ghi",
          "type": "regular"
        }
      }
    ]
  },
  {
    "line": 2,
    "blocs": [
      {
        "size": 100,
        "name": "xyz",
        "wrapper": {
          "nestedName": "xyz",
          "type": "regular"
        }
      }
    ]
  }
]
I have the following query (from here):
WITH cte AS (
  SELECT id_lines,
         jsonb_agg(
           jsonb_set(val1, '{blocs}',
             (
               SELECT jsonb_agg(arr2 ||
                        json_build_object(
                          'wrapper', json_build_object('nestedName', arr2->'name', 'type', 'regular')
                        )::jsonb)
               FROM jsonb_array_elements(arr1.val1->'blocs') arr2
               WHERE arr2->'name' IS NOT NULL
             )
           ))
  FROM public.challenge, jsonb_array_elements(lines) arr1(val1)
  GROUP BY 1
)
UPDATE public.challenge SET lines = (cte.jsonb_agg) FROM cte
WHERE public.challenge.id_lines = cte.id_lines;
The condition WHERE arr2->'name' IS NOT NULL does not filter out blocs where name is null, and I am struggling to find out why. Thanks!
You have to distinguish between SQL NULL and JSON null.
The IS NOT NULL predicate tests for SQL NULL, which would mean that the attribute is not present in the JSON.
To test for JSON null, use
WHERE arr2->'name' <> 'null'::jsonb
The type cast to jsonb is not necessary and would be performed implicitly.
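
A quick standalone illustration of the difference (no table needed):

SELECT '{"name": null}'::jsonb -> 'name' IS NULL;           -- false: the key exists, its value is JSON null
SELECT '{"name": null}'::jsonb -> 'other' IS NULL;          -- true: the key is absent, so -> yields SQL NULL
SELECT '{"name": null}'::jsonb -> 'name' <> 'null'::jsonb;  -- false: the value is JSON null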

PostgreSQL jsonb field to view

I have this kind of jsonb data in a column named "FORM" in my table "process", and I want to create a view over part of it: from the array named "field" inside each "row" object, I just want the name and value.
Here is the jsonb:
{
  "column": [
    {
      "row": {
        "id": "ebc7afddad474aee8f82930b6dc328fe",
        "name": "Details",
        "field": [
          {
            "name": {
              "id": "50a5613e97e04cb5b8d32afa8a9975d1",
              "label": "name"
            },
            "value": {
              "stringValue": "yhfghg"
            }
          }
        ]
      }
    },
    {
      "row": {
        "id": "5b7471413cbc44c1a39895020bf2ec58",
        "name": "leave details",
        "field": [
          {
            "name": {
              "id": "bb127e8284c84692aa217539c4312394",
              "label": "date"
            },
            "value": {
              "dateValue": 1549065600
            }
          },
          {
            "name": {
              "id": "33b2c5d1a968481d9d5e386db487de52",
              "label": "days",
              "options": {
                "allowedValues": [
                  {
                    "item": "1"
                  },
                  {
                    "item": "2"
                  },
                  {
                    "item": "3"
                  },
                  {
                    "item": "4"
                  },
                  {
                    "item": "5"
                  }
                ]
              },
              "defaultValue": {
                "radioButtonValue": "1"
              }
            },
            "value": {
              "radioButtonValue": "3"
            }
          }
        ]
      }
    }
  ]
}
And I want this kind of jsonb in the view; the data comes from the "field" subarray inside each object named "row":
[
  {
    "name": {
      "id": "50a5613e97e04cb5b8d32afa8a9975d1"
    },
    "value": {
      "stringValue": "yhfghg"
    }
  },
  {
    "name": {
      "id": "bb127e8284c84692aa217539c4312394"
    },
    "value": {
      "dateValue": 1549065600
    }
  },
  {
    "name": {
      "id": "33b2c5d1a968481d9d5e386db487de52"
    },
    "value": {
      "radioButtonValue": "3"
    }
  }
]
How can I do this?
I used jsonb_array_elements twice to expand the two arrays, then json_build_object to build the result structure and jsonb_agg to combine the generated rows into a single JSONB array.
I included a row number in the results so I could later apply GROUP BY, so that results from several "process" rows are not accidentally combined by the jsonb_agg.
with cols as (
  select jsonb_array_elements("FORM"->'column') as r,
         row_number() over () as n
  from "process"
), cols2 as (
  select jsonb_array_elements(r->'row'->'field') as v, n
  from cols
)
select jsonb_agg(jsonb_build_object(
         'name',  jsonb_build_object('id', v->'name'->'id'),
         'value', v->'value'))
from cols2
group by n;
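
Since the goal is a view, the query can be wrapped directly; a minimal sketch (the view name process_fields is an assumption):

create view process_fields as
with cols as (
  select jsonb_array_elements("FORM"->'column') as r,
         row_number() over () as n
  from "process"
), cols2 as (
  select jsonb_array_elements(r->'row'->'field') as v, n
  from cols
)
select n, jsonb_agg(jsonb_build_object(
            'name',  jsonb_build_object('id', v->'name'->'id'),
            'value', v->'value')) as fields
from cols2
group by n;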

MongoDB find where result + value > 100

I have the following db structure:
[
  {
    "_id": 1,
    "family": "First Family",
    "kids": [
      {
        "name": "David",
        "age": 10
      },
      {
        "name": "Moses",
        "age": 15
      }
    ]
  },
  {
    "_id": 1,
    "family": "Second Family",
    "kids": [
      {
        "name": "Sara",
        "age": 17
      },
      {
        "name": "Miriam",
        "age": 45
      }
    ]
  }
]
I want to select all families that have a kid whose age + 10 is greater than 30.
What would be the best way to achieve this?
Please find the query below. Since age + 10 > 30 is equivalent to age > 20, a plain comparison on kids.age is enough:
db.collection.find({ "kids.age":{$gt:20}})
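
If you prefer the query to mirror the arithmetic literally (age + 10 > 30), an aggregation expression works too; a sketch using $expr (MongoDB 3.6+):

db.collection.find({
  $expr: {
    $anyElementTrue: [{
      $map: {
        input: "$kids",
        as: "kid",
        in: { $gt: [{ $add: ["$$kid.age", 10] }, 30] } // literal age + 10 > 30, per kid
      }
    }]
  }
})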