Filtering elements inside an array - postgresql

I have this JSON and would like to filter by WHERE installmentType = 'STANDARD'
In the SELECT clause I would like to return the region attribute.
{
  "processedResult": {
    "TYPE": "ACKNOWLEDGEMENT",
    "orderPayment": {
      "paymentDetails": [
        {
          "installmentPayment": {
            "installmentType": "STANDARD"
          }
        }
      ]
    },
    "region": "US"
  }
}
Desired Output:

 region |      type
--------+-----------------
 US     | ACKNOWLEDGEMENT
Here's what I've tried so far, but it just gives me the paymentDetails block:
SELECT arr.item_object
FROM aosqe_ema_tools.ocs_response t,
jsonb_array_elements(t.ocsjsonb -> 'processedResult' -> 'orderPayment' -> 'paymentDetails')
with ordinality arr(item_object, position)
Postgres Version : PostgreSQL 11.13

Use item_object in the WHERE clause:
select
ocsjsonb -> 'processedResult' ->> 'region' as region,
ocsjsonb -> 'processedResult' ->> 'TYPE' as type
from ocs_response
cross join jsonb_array_elements(
ocsjsonb -> 'processedResult' -> 'orderPayment' -> 'paymentDetails')
as arr(item_object)
where item_object -> 'installmentPayment' ->> 'installmentType' = 'STANDARD'
Test it in db<>fiddle.
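If more than one paymentDetails entry can match, the cross join returns one row per matching element. A variant using EXISTS (a sketch assuming the same ocs_response table and ocsjsonb column as above) returns each order row only once:
-- sketch: assumes the same ocs_response(ocsjsonb) table as above
select
  ocsjsonb -> 'processedResult' ->> 'region' as region,
  ocsjsonb -> 'processedResult' ->> 'TYPE'   as type
from ocs_response t
where exists (
  select 1
  from jsonb_array_elements(
         t.ocsjsonb -> 'processedResult' -> 'orderPayment' -> 'paymentDetails') as arr(item_object)
  where item_object -> 'installmentPayment' ->> 'installmentType' = 'STANDARD'
);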

Related

Insert new item in JSONB column based on value of other field - postgres

I have the following jsonb structure with many entries in it
[
  {
    "name":"test",
    "features":[
      {
        "name":"feature1",
        "granted":false
      },
      {
        "name":"feature2",
        "granted":true
      }
    ]
  }...
]
I'd like to add a new entry in the features array when the parent name element has value "test" and feature1 granted is "false".
The idea is to write a flyway script to migrate my data.
I've been battling with jsonb_insert but I can't figure out the path portion of it since I can have potentially many elements in there and I can't just add a given subscript.
End result should be:
[
  {
    "name":"test",
    "features":[
      {
        "name":"feature1",
        "granted":false
      },
      {
        "name":"feature2",
        "granted":true
      },
      {
        "name":"newFeature",
        "granted":false
      }
    ]
  }
]
EDIT1
So far I've attempted:
UPDATE my_table SET modules =
jsonb_insert(my_column, '{features, [0]}', '{"name": "newFeature", "granted": false}')
WHERE my_column ->> 'name' = 'test' AND my_column #> '{"features": [{"name":"feature1", "granted": false}]}';
The statement executes but no updates are actually done.
EDIT2
I modified the query just to test the PATH out to
UPDATE my_table SET modules =
jsonb_insert(my_column, '{0, features, 0}', '{"name": "newFeature", "granted": false}')
WHERE my_column ->> 'name' = 'test' AND my_column #> '{"features": [{"name":"feature1", "granted": false}]}';
However, this always updates only the first entry in the array, and the object I need to update is not guaranteed to be in that position.
This should be enough information to complete the query:
Let's create the mock data
create table a (id serial primary key , b jsonb);
insert into a (b)
values ('[
{
"name": "test",
"features": [
{
"name": "feature1",
"granted": false
},
{
"name": "feature2",
"granted": true
}
]
},
{
"name": "another-name",
"features": [
{
"name": "feature1",
"granted": false
},
{
"name": "feature2",
"granted": true
}
]
}
]');
Now explode the array using jsonb_array_elements with ordinality to get both the index and the element:
select first_level.id, position, feature_position, feature
from (select a.id, arr.*
from a,
jsonb_array_elements(a.b) with ordinality arr (elem, position)
where elem ->> 'name' = 'test') first_level,
jsonb_array_elements(first_level.elem -> 'features') with ordinality features (feature, feature_position);
The result of this query is:

 id | position | feature_position |                 feature
----+----------+------------------+-----------------------------------------
  1 |        1 |                1 | {"name": "feature1", "granted": false}
  1 |        1 |                2 | {"name": "feature2", "granted": true}
That gives you the sub-elements you need to match on, as well as all the indexes you need for your query.
Now, back to your final edit: you already had the shape of the query you wanted:
UPDATE my_table SET modules =
jsonb_insert(my_column, '{0, features, 0}', '{"name": "newFeature", "granted": false}')
WHERE my_column ->> 'name' = 'test' AND my_column #> '{"features": [{"name":"feature1", "granted": false}]}';
In the WHERE clause you use the id, because those are the rows you are interested in, and the indexes come from the query above (note that WITH ORDINALITY positions are 1-based while jsonb path indexes are 0-based, so the outer index needs a - 1). So:
UPDATE my_table SET modules =
  jsonb_insert(my_column,
               ('{' || (exploded_info.position - 1) || ', features, ' || exploded_info.feature_position || '}')::text[],
               '{"name": "newFeature", "granted": false}')
FROM (/* previous query */) AS exploded_info
WHERE exploded_info.id = my_table.id AND (exploded_info.feature ->> 'granted')::boolean = false;
As you can see, this easily gets very nasty.
I'd recommend either using a more relational approach, that is, keeping features in their own table instead of inside a JSON document, with a foreign key linking it back to your table...
Or, if you really need the JSON, for example because the domain is complex, defined at the application level and very flexible, then I would recommend doing the updates in application code.
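That said, putting the pieces together against the mock table a above, the following sketch (an illustrative example, not the original answer's exact statement) appends newFeature to the end of the features array for every top-level element named "test" whose feature1 is not granted. A path ending in -1 with insert_after => true appends after the last array element.
-- Sketch against the mock table a(id, b) created above; assumes each row
-- has at most one matching top-level element (UPDATE ... FROM applies a
-- single update per target row).
UPDATE a
SET b = jsonb_insert(
          a.b,
          array[(exploded.position - 1)::text, 'features', '-1'],  -- ordinality is 1-based, jsonb paths are 0-based
          '{"name": "newFeature", "granted": false}',
          true)                                                    -- insert_after => append after the last feature
FROM (
  SELECT a.id, arr.position
  FROM a,
       jsonb_array_elements(a.b) WITH ORDINALITY arr(elem, position)
  WHERE elem ->> 'name' = 'test'
    AND elem -> 'features' @> '[{"name": "feature1", "granted": false}]'
) AS exploded
WHERE exploded.id = a.id;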

How can I access a key value in jsonb postgresql?

{
  "data": {
    "val": "{\"cell_number\": \"123\"}"
  }
}
I want to get the value in data -> val -> cell_number i.e '123'. Is there a way to do it in postgresql?
If that is not a typo and you put a stringified json object under the val key, then this will untangle it for you:
with invar as (
select '{
"data": {
"val": "{\"cell_number\": \"123\"}"
}
}'::jsonb as jsonb_col
)
select ((jsonb_col->'data'->>'val')::jsonb)->>'cell_number' from invar;
?column?
----------
123
(1 row)
The first step gets you down to the val key. That result has to be returned as text (hence the ->>) and then cast to jsonb so that cell_number can be dereferenced.
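Applied to a table rather than a CTE, the same expression might look like this (a sketch assuming a hypothetical table my_table with a jsonb column doc shaped like the example above):
-- hypothetical table/column names: my_table(doc jsonb)
select ((doc -> 'data' ->> 'val')::jsonb) ->> 'cell_number' as cell_number
from my_table;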

Postgres find in jsonb nested array

I have a case where my data is in nested arrays of jsonb; to find a value I have to call JSONB_ARRAY_ELEMENTS multiple times, which is costly and takes a lot of nested code.
The JSON has continents, which contain countries, which in turn contain cities.
I need to access a city value.
Is there a way to make this query simpler and faster?
I was trying to solve it using JSON_EXTRACT_PATH, but to get into an array I need the indexes.
WITH mydata AS (
SELECT '
{
"continents":[
{
"name":"America",
"area":43316000,
"countries":[
{
"country_name":"Canada",
"capital":"Toronto",
"cities":[
{
"city_name":"Ontario",
"population":2393933
},
{
"city_name":"Quebec",
"population":12332
}
]
},
{
"country_name":"Brazil",
"capital":"Brasilia",
"cities":[
{
"city_name":"Sao Paolo",
"population":34534534
},
{
"city_name":"Rio",
"population":445345
}
]
}
]
},
{
"name":"Europa",
"area":10530751,
"countries":[
{
"country_name":"Switzerland",
"capital":"Zurich",
"cities":[
{
"city_name":"Ginebra",
"population":4564565
},
{
"city_name":"Basilea",
"population":4564533
}
]
},
{
"country_name":"Norway",
"capital":"Oslo",
"cities":[
{
"city_name":"Oslo",
"population":3243534
},
{
"city_name":"Steinkjer",
"population":4565465
}
]
}
]
}
]
}
'::JSONB AS data_column
)
SELECT cit.city->>'city_name' AS city,
(cit.city->>'population')::INTEGER AS population
FROM (SELECT JSONB_ARRAY_ELEMENTS(coun.country->'cities') AS city
FROM (SELECT JSONB_ARRAY_ELEMENTS(cont.continent->'countries') AS country
FROM (SELECT JSONB_ARRAY_ELEMENTS(data_column->'continents') AS continent
FROM mydata
) AS cont
WHERE cont.continent #> '{"name":"Europa"}'
) AS coun
WHERE coun.country #> '{"country_name" : "Norway"}'
) AS cit
WHERE cit.city #> '{"city_name": "Oslo"}'
See my nested queries? It looks ugly. I can get the answer using JSONB_EXTRACT_PATH(data_column->'continents', '1', 'countries', '1', 'cities', '0', 'population'), but I had to hardcode the array indexes.
Hope you can help me out.
Thanks.
You don't need any nesting; you can use lateral queries:
SELECT
city->>'city_name' AS city,
(city->>'population')::INTEGER AS population
FROM
mydata,
JSONB_ARRAY_ELEMENTS(data_column->'continents') AS continent,
JSONB_ARRAY_ELEMENTS(continent->'countries') AS country,
JSONB_ARRAY_ELEMENTS(country->'cities') AS city
WHERE continent ->> 'name' = 'Europa'
AND country ->> 'country_name' = 'Norway'
AND city ->> 'city_name' = 'Oslo';
(online demo)
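The commas in that FROM list are implicit lateral cross joins; spelled out explicitly, the same query over the same mydata CTE reads:
SELECT
  city->>'city_name' AS city,
  (city->>'population')::INTEGER AS population
FROM mydata
CROSS JOIN LATERAL JSONB_ARRAY_ELEMENTS(data_column->'continents') AS continent
CROSS JOIN LATERAL JSONB_ARRAY_ELEMENTS(continent->'countries') AS country
CROSS JOIN LATERAL JSONB_ARRAY_ELEMENTS(country->'cities') AS city
WHERE continent ->> 'name' = 'Europa'
  AND country ->> 'country_name' = 'Norway'
  AND city ->> 'city_name' = 'Oslo';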
However, since you mentioned paths and having to specify indices in there, this is actually the perfect use case for Postgres 12 JSON paths:
SELECT jsonb_path_query(data_column, '$.continents[*]?(@.name == "Europa").countries[*]?(@.country_name=="Norway").cities[*]?(@.city_name=="Oslo")') FROM mydata
(online demo)
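Since jsonb_path_query is a set-returning function, the matched city object can also be expanded into ordinary columns (a sketch over the same mydata CTE, Postgres 12 or later):
SELECT city ->> 'city_name' AS city,
       (city ->> 'population')::INTEGER AS population
FROM mydata,
     jsonb_path_query(
       data_column,
       '$.continents[*] ? (@.name == "Europa")
          .countries[*] ? (@.country_name == "Norway")
          .cities[*]    ? (@.city_name == "Oslo")') AS city;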

file upload part map to model scala play2

File upload code:
request.body.file("image").map { ing =>
val target = new java.io.File(s"./uploads/${ing.filename}")
ing.ref.moveTo(target, true)
}
How do I connect ing.filename to the AboutImages "image" field so I can update the database?
This is the model:
object AboutImages {
val images = {
get[Long]("about_us_images.id") ~
get[String]("about_us_images.image") ~
get[Option[Date]]("about_us_images.created_at") ~
get[Option[Date]]("about_us_images.updated_at") ~
get[Option[Int]]("about_us_images.position") ~
get[String]("about_us_images.name") map {
case id~image~created_at~updated_at~position~name => AboutImages (id, image, created_at, updated_at, position, name)
}
}
This is the form:
val details: Form[AboutImages] = Form(
mapping(
"id" -> longNumber,
"image" -> text,
"created_at" -> optional(date),
"updated_at" -> optional(date),
"position" -> optional(number),
"name" -> nonEmptyText
)(AboutImages.apply)
(AboutImages.unapply)
)
Not entirely sure I understand your question — are you having problems accessing the form components other than the file upload?
If so, take a look at the post Play file upload form with additional fields.

How to pass json array to postgresql function

I have a JSON object that contains an array of JSON objects, i.e.:
{
ID:'1',
Name:'Pooja',
School:[{Name:'ABC',Address:'Nagpur'},{Name:'CDF'},{Name:'GHI',Year:{From:'2015',To:'2016'}}]
}
I want to insert this into three different tables: User, School and Year.
Can anyone help?
Assuming I understood you correctly, you want to split the provided JSON object and write its subobjects to some tables.
PostgreSQL has several JSON operators that might help.
First of all, you should cast JSON's textual representation to type json. This allows you to use JSON operators and functions, such as -> (Get JSON object field):
select
'{
"ID":1,
"Name":"Pooja",
"School":[
{
"Name":"ABC",
"Address":"Nagpur"
},
{
"Name":"CDF"
},
{
"Name":"GHI",
"Year":{
"From":"2015",
"To":"2016"
}
}
]
}'::json -> 'Name';
?column?
----------
Pooja
(1 row)
Or, for example, #> (Get JSON object at specified path):
select 'YOUR_JSON'::json #> '{"School", 2, "Year"}';
?column?
----------------------------
{ +
"From":"2015",+
"To":"2016" +
}
(1 row)
All you have to do now is insert the result of the operator application into the table of your choice (user is a reserved word in PostgreSQL, so the identifier has to be quoted):
insert into "user" select 'YOUR_JSON'::json -> 'Name';
If you simply want to extract the School array, you could still use -> operator:
select 'YOUR_JSON'::json -> 'School';
?column?
-----------------------------
[ +
{ +
"Name":"ABC", +
"Address":"Nagpur"+
}, +
{ +
"Name":"CDF" +
}, +
{ +
"Name":"GHI", +
"Year":{ +
"From":"2015", +
"To":"2016" +
} +
} +
]
(1 row)
Read the documentation for more.
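If you specifically need to turn the School array into rows for a School table, json_array_elements gives you one row per element to feed an INSERT ... SELECT. A sketch, assuming a hypothetical school table with name and address columns:
-- hypothetical target table: school(name text, address text)
insert into school (name, address)
select elem ->> 'Name', elem ->> 'Address'
from json_array_elements('YOUR_JSON'::json -> 'School') as elem;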