FileMaker - Getting field values from related table

I have the following setup in FileMaker Pro 10:
Table1 with: id_table1, related_names
Table2 with: id_table2, name, include
and a join table with: id_table1, id_table2
Now I want to either make related_names a calculated field or write a script that sets related_names to a comma-separated list of all names that are connected through the join table and have Table2.include = True.
So for example a data set could look like:
Table1
id_table1, related_names
1, "foo,bar"
2, "foo"
3, ""
join table
id_table1, id_table2
1,1
1,2
1,3
2,1
Table2
id_table2, name, include
1, foo, True
2, bar, True
3, baz, False
After searching the internet for a few hours, the closest I came was a calculated field
with List(join-table::id_table2), which gives me a list of all the id_table2 values. But now I would need to find the corresponding records in Table2 and check the include field.
I hope the problem is clear. Any help is highly appreciated.

One solution would be to create a calculated field in Table2 that only evaluates to Name if Include is True:
Case( Include, Name )
and then get its contents with List() and replace carriage returns with commas (assuming the calculated field is named Name to Include):
Substitute( List( Table2::Name to Include ), "¶", ", " )

Related

PGSQL - How to efficiently flatten key/value table [duplicate]

Does any one know how to create crosstab queries in PostgreSQL?
For example I have the following table:
Section Status Count
A Active 1
A Inactive 2
B Active 4
B Inactive 5
I would like the query to return the following crosstab:
Section Active Inactive
A 1 2
B 4 5
Is this possible?
Install the additional module tablefunc once per database, which provides the function crosstab(). Since Postgres 9.1 you can use CREATE EXTENSION for that:
CREATE EXTENSION IF NOT EXISTS tablefunc;
Improved test case
CREATE TABLE tbl (
section text
, status text
, ct integer -- "count" is a reserved word in standard SQL
);
INSERT INTO tbl VALUES
('A', 'Active', 1), ('A', 'Inactive', 2)
, ('B', 'Active', 4), ('B', 'Inactive', 5)
, ('C', 'Inactive', 7); -- ('C', 'Active') is missing
Simple form - not fit for missing attributes
crosstab(text) with 1 input parameter:
SELECT *
FROM crosstab(
'SELECT section, status, ct
FROM tbl
ORDER BY 1,2' -- needs to be "ORDER BY 1,2" here
) AS ct ("Section" text, "Active" int, "Inactive" int);
Returns:
Section | Active | Inactive
---------+--------+----------
A | 1 | 2
B | 4 | 5
C | 7 | -- !!
No need for casting and renaming.
Note the incorrect result for C: the value 7 is filled in for the first column. Sometimes, this behavior is desirable, but not for this use case.
The simple form is also limited to exactly three columns in the provided input query: row_name, category, value. There is no room for extra columns like in the 2-parameter alternative below.
Safe form
crosstab(text, text) with 2 input parameters:
SELECT *
FROM crosstab(
'SELECT section, status, ct
FROM tbl
ORDER BY 1,2' -- could also just be "ORDER BY 1" here
, $$VALUES ('Active'::text), ('Inactive')$$
) AS ct ("Section" text, "Active" int, "Inactive" int);
Returns:
Section | Active | Inactive
---------+--------+----------
A | 1 | 2
B | 4 | 5
C | | 7 -- !!
Note the correct result for C.
The second parameter can be any query that returns one row per attribute matching the order of the column definition at the end. Often you will want to query distinct attributes from the underlying table like this:
'SELECT DISTINCT attribute FROM tbl ORDER BY 1'
That's in the manual.
Since you have to spell out all columns in a column definition list anyway (except for pre-defined crosstabN() variants), it is typically more efficient to provide a short list in a VALUES expression, as demonstrated:
$$VALUES ('Active'::text), ('Inactive')$$)
Or (not in the manual):
$$SELECT unnest('{Active,Inactive}'::text[])$$ -- short syntax for long lists
I used dollar quoting to make quoting easier.
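For illustration, a complete call that derives the categories dynamically could look like this (a sketch against the tbl test case above; the category order still has to match the column definition list):
SELECT *
FROM crosstab(
       $$SELECT section, status, ct FROM tbl ORDER BY 1$$
     , $$SELECT DISTINCT status FROM tbl ORDER BY 1$$  -- yields 'Active', 'Inactive'
     ) AS ct ("Section" text, "Active" int, "Inactive" int);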
You can even output columns with different data types with crosstab(text, text) - as long as the text representation of the value column is valid input for the target type. This way you might have attributes of different kind and output text, date, numeric etc. for respective attributes. There is a code example at the end of the chapter crosstab(text, text) in the manual.
db<>fiddle here
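As a rough sketch of that idea (the table eav, its columns, and the sample values are invented for illustration, not taken from the question):
CREATE TEMP TABLE eav (item int, attr text, val text);
INSERT INTO eav VALUES
  (1, 'price', '42.5'), (1, 'bought', '2024-01-15')
, (2, 'price', '7.0') , (2, 'bought', '2024-03-02');

SELECT *
FROM crosstab(
       $$SELECT item, attr, val FROM eav ORDER BY 1$$
     , $$VALUES ('price'::text), ('bought')$$
     ) AS ct (item int, price numeric, bought date);  -- text values cast to numeric / date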
Effect of excess input rows
Excess input rows - duplicate rows for the same ("row_name", "category") combination, (section, status) in the above example - are handled differently.
The 1-parameter form fills in available value columns from left to right. Excess values are discarded.
Earlier input rows win.
The 2-parameter form assigns each input value to its dedicated column, overwriting any previous assignment.
Later input rows win.
Typically, you don't have duplicates to begin with. But if you do, carefully adjust the sort order to your requirements - and document what's happening.
Or get fast arbitrary results if you don't care. Just be aware of the effect.
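To see the effect concretely (a sketch reusing the tbl test case above; the duplicate row is invented for illustration):
INSERT INTO tbl VALUES ('A', 'Active', 100);  -- duplicate ("row_name", "category") combination

SELECT *
FROM crosstab(
       $$SELECT section, status, ct FROM tbl ORDER BY 1, 2, 3$$  -- ct = 100 sorts last for ('A', 'Active')
     , $$VALUES ('Active'::text), ('Inactive')$$
     ) AS ct ("Section" text, "Active" int, "Inactive" int);
-- With the 2-parameter form the later row wins: "Active" shows 100 for section A.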
Advanced examples
Pivot on Multiple Columns using Tablefunc - also demonstrating mentioned "extra columns"
Dynamic alternative to pivot with CASE and GROUP BY
\crosstabview in psql
Postgres 9.6 added this meta-command to its default interactive terminal psql. You can run the query you would use as first crosstab() parameter and feed it to \crosstabview (immediately or in the next step). Like:
db=> SELECT section, status, ct FROM tbl \crosstabview
Similar result as above, but it's a representation feature on the client side exclusively. Input rows are treated slightly differently, hence ORDER BY is not required. Details for \crosstabview in the manual. There are more code examples at the bottom of that page.
Related answer on dba.SE by Daniel Vérité (the author of the psql feature):
How do I generate a pivoted CROSS JOIN where the resulting table definition is unknown?
SELECT section
     , SUM(CASE status WHEN 'Active'   THEN count ELSE 0 END) AS active    -- pivot each status value into a separate column explicitly
     , SUM(CASE status WHEN 'Inactive' THEN count ELSE 0 END) AS inactive
FROM   t
GROUP  BY section;
You can use the crosstab() function of the additional module tablefunc - which you have to install once per database. Since PostgreSQL 9.1 you can use CREATE EXTENSION for that:
CREATE EXTENSION tablefunc;
In your case, I believe it would look something like this:
CREATE TABLE t (Section CHAR(1), Status VARCHAR(10), Count integer);
INSERT INTO t VALUES ('A', 'Active', 1);
INSERT INTO t VALUES ('A', 'Inactive', 2);
INSERT INTO t VALUES ('B', 'Active', 4);
INSERT INTO t VALUES ('B', 'Inactive', 5);
SELECT row_name AS Section,
category_1::integer AS Active,
category_2::integer AS Inactive
FROM crosstab('select section::text, status, count::text from t',2)
AS ct (row_name text, category_1 text, category_2 text);
DB Fiddle here:
Everything works: https://dbfiddle.uk/iKCW9Uhh
Without CREATE EXTENSION tablefunc; you get this error: https://dbfiddle.uk/j8W1CMvI
ERROR: function crosstab(unknown, integer) does not exist
LINE 4: FROM crosstab('select section::text, status, count::text fro...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
Solution with JSON aggregation:
CREATE TEMP TABLE t (
section text
, status text
, ct integer -- don't use "count" as column name.
);
INSERT INTO t VALUES
('A', 'Active', 1), ('A', 'Inactive', 2)
, ('B', 'Active', 4), ('B', 'Inactive', 5)
, ('C', 'Inactive', 7);
SELECT section,
(obj ->> 'Active')::int AS active,
(obj ->> 'Inactive')::int AS inactive
FROM (SELECT section, json_object_agg(status,ct) AS obj
FROM t
GROUP BY section
) x;
Sorry this isn't complete because I can't test it here, but it may point you in the right direction. I'm translating from something I use that makes a similar query:
select mt.section, mt1.count as Active, mt2.count as Inactive
from mytable mt
left join (select section, count from mytable where status='Active')mt1
on mt.section = mt1.section
left join (select section, count from mytable where status='Inactive')mt2
on mt.section = mt2.section
group by mt.section,
mt1.count,
mt2.count
order by mt.section asc;
The code I'm working from is:
select m.typeID, m1.highBid, m2.lowAsk, m1.highBid - m2.lowAsk as diff, 100*(m1.highBid - m2.lowAsk)/m2.lowAsk as diffPercent
from mktTrades m
left join (select typeID,MAX(price) as highBid from mktTrades where bid=1 group by typeID)m1
on m.typeID = m1.typeID
left join (select typeID,MIN(price) as lowAsk from mktTrades where bid=0 group by typeID)m2
on m1.typeID = m2.typeID
group by m.typeID,
m1.highBid,
m2.lowAsk
order by diffPercent desc;
which will return a typeID, the highest price bid and the lowest price asked and the difference between the two (a positive difference would mean something could be bought for less than it can be sold).
There's a different dynamic method that I've devised, one that employs a dynamic record type (a temp table, built via an anonymous procedure) and JSON. This may be useful for an end user who can't install the tablefunc/crosstab extension, but can still create temp tables or run anonymous procedures.
The example assumes all the crosstab columns are of the same type (INTEGER), but the number of columns is data-driven. That said, JSON aggregate functions do allow for mixed data types, so there's potential for innovation via the use of embedded composite (mixed) types.
The real meat of it can be reduced to one step if you want to statically define the record type inside the JSON recordset function (via nested SELECTs that emit a composite type).
dbfiddle.uk
https://dbfiddle.uk/N1EzugHk
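Roughly, the idea goes like this (a minimal sketch of the approach, not the linked fiddle verbatim; it assumes the tbl test case from the first answer and integer pivot columns):
DO $$
DECLARE
    col_defs  text;
    col_exprs text;
BEGIN
    -- Discover the pivot columns from the data.
    SELECT string_agg(format('%I int', status), ', ' ORDER BY status)
         , string_agg(format('(obj ->> %L)::int', status), ', ' ORDER BY status)
    INTO   col_defs, col_exprs
    FROM  (SELECT DISTINCT status FROM tbl) s;

    -- Build a temp table with one column per status ...
    EXECUTE format('CREATE TEMP TABLE xtab (section text, %s)', col_defs);

    -- ... and fill it from a JSON aggregate per section.
    EXECUTE format(
        'INSERT INTO xtab
         SELECT section, %s
         FROM  (SELECT section, json_object_agg(status, ct) AS obj
                FROM   tbl GROUP BY section) sub', col_exprs);
END
$$;

SELECT * FROM xtab ORDER BY section;  -- the temp table survives for the rest of the session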
The crosstab function is available in the tablefunc extension. You'll have to create this extension once per database.
CREATE EXTENSION tablefunc;
You can use the code below to create a pivot table using crosstab:
create table test_crosstab (
  section text,
  status  text,
  count   numeric
);

insert into test_crosstab values
  ('A', 'Active',   1)
, ('A', 'Inactive', 2)
, ('B', 'Active',   4)
, ('B', 'Inactive', 5);

select *
from crosstab(
  'select section
        , status
        , count
   from   test_crosstab
   order  by 1, 2'   -- crosstab(text) expects input ordered by row name
) as ctab ("Section" text, "Active" numeric, "Inactive" numeric);

SQL query to filter where all array items in JSONB array meet condition

I made a similar post before, but deleted it as it had contextual errors.
One of the tables in my database includes a JSONB column which includes an array of JSON objects. It's not dissimilar to this example of a session table which I've mocked up below.
id | user_id | snapshot                                                                                                                      | inserted_at
---+---------+-------------------------------------------------------------------------------------------------------------------------------+----------------------------
 1 |      37 | {cart: [{product_id: 1, price_in_cents: 3000, name: "product A"}, {product_id: 2, price_in_cents: 2500, name: "product B"}]} | 2022-01-01 20:00:00.000000
 2 |      24 | {cart: [{product_id: 1, price_in_cents: 3000, name: "product A"}, {product_id: 3, price_in_cents: 5500, name: "product C"}]} | 2022-01-02 20:00:00.000000
 3 |      88 | {cart: [{product_id: 4, price_in_cents: 1500, name: "product D"}, {product_id: 2, price_in_cents: 2500, name: "product B"}]} | 2022-01-03 20:00:00.000000
The query I've worked with to retrieve records from this table is as follows.
SELECT sessions.*
FROM sessions
INNER JOIN LATERAL (
SELECT *
FROM jsonb_to_recordset(sessions.snapshot->'cart')
AS product(
"product_id" integer,
"name" varchar,
"price_in_cents" integer
)
) AS cart ON true;
I've been trying to update the query above to retrieve only the records in the sessions table for which ALL of the products in the cart have a price_in_cents value of greater than 2000.
To this point, I've not had any success on forming this query but I'd be grateful if anyone here can point me in the right direction.
You can use a JSON path expression:
select *
from sessions
...
where not sessions.snapshot @@ '$.cart[*].price_in_cents <= 2000'
There is no JSON path expression that would check that all array elements are greater than 2000. So this returns those rows where no element is 2000 or less - because that can be expressed with a JSON path expression.
Here is one possible solution based on the idea of your original query.
Each element of the cart JSON array is joined to its sessions parent row. You're left to add the WHERE clause conditions now that the wanted JSON array elements are exposed; a sketch of one such condition follows the explanation below.
SELECT *
FROM (
SELECT
sess.id,
sess.user_id,
sess.inserted_at,
cart_items.cart_name,
cart_items.cart_product_id,
cart_items.cart_price_in_cents
FROM sessions sess,
LATERAL (SELECT (snapshot -> 'cart') snapshot_cart FROM sessions WHERE id = sess.id) snap_arr,
LATERAL (SELECT
(value::jsonb ->> 'name')::text cart_name,
(value::jsonb -> 'product_id')::int cart_product_id,
(value::jsonb -> 'price_in_cents')::int cart_price_in_cents
FROM JSONB_ARRAY_ELEMENTS(snap_arr.snapshot_cart)) cart_items
) session_snapshot_cart_product;
Explanation:
From the sessions table, the cart array is extracted and joined to each sessions row
The necessary items of the cart JSON array are then unnested by the second join using the JSONB_ARRAY_ELEMENTS(jsonb) function
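For the concrete requirement from the question (every cart item priced above 2000 cents), one way to finish this is an aggregate filter over the unnested elements - a sketch, not the only option:
SELECT sess.id
FROM sessions sess
CROSS JOIN LATERAL jsonb_array_elements(sess.snapshot -> 'cart') AS item(value)
GROUP BY sess.id
HAVING bool_and((item.value ->> 'price_in_cents')::int > 2000);
-- Returns ids 1 and 2 for the sample data; session 3 has a 1500-cent item.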
The following worked well for me and allowed me the flexibility to use comparison operators other than just == or <=.
In one of the scenarios I needed to construct, I also had to make the WHERE in the subquery compare against an array of values using the IN comparison operator, which was not viable with some of the other solutions I looked at.
Leaving this here in case others run into the same issue as I did, or if others find better solutions or want to propose suggestions to build upon this one.
SELECT *
FROM sessions s
WHERE NOT EXISTS (
  SELECT 1
  FROM sessions
  INNER JOIN LATERAL (
    SELECT *
    FROM jsonb_to_recordset(sessions.snapshot->'cart')
      AS product(
        "product_id" integer,
        "name" varchar,
        "price_in_cents" integer
      )
  ) AS cart ON true
  WHERE sessions.id = s.id        -- correlate the subquery with the outer row
    AND cart.name ILIKE 'Product%'
);

JSONB Data Type Modification in Postgresql

I have a question about modifying a jsonb value in Postgres.
Basic setup:
array=> ["1", "2", "3"]
and now I have a PostgreSQL database with an id column and a jsonb column named, let's just say, cards.
id | cards
---+-----------------
 1 | {"1": 3, "4": 2}
That's the data in the table named test.
Question:
How do I convert the cards of id->1 FROM {"1": 3, "4": 2} TO {"1": 4, "4":2, "2": 1, "3": 1}
How I expect the changes to occur:
From the array, increment by 1 all elements present inside the array that exist in the cards jsonb as a key, thus changing {"1": 3} to {"1": 4}, and insert the keys that don't yet exist in the cards jsonb with a value of 1, thus changing {"1":4, "4":2} to {"1":4, "4":2, "2":1, "3":1} -
purely through Postgres.
Partial Solution
I asked a senior for support regarding my question and I was told this:
Roughly (names may differ): object keys to explode cards, array_elements to explode the array, left join them, do the calculation, re-aggregate the object. There may be a more direct way to do this but the above brute-force approach will work.
So I tried to follow it using the two functions json_each_text() and json_array_elements_text(), but ended up stuck halfway, as I was unable to understand what they meant by left joining the two:
SELECT jsonb_each_text(tester_cards) AS each_text, jsonb_array_elements_text('[["1", 1], ["2", 1], ["3", 1]]') AS array_elements FROM tester WHERE id=1;
TLDR;
I need an UPDATE statement that checks whether each key from an array exists in the jsonb data, and either increments its value by 1 or inserts the key with a value of 1.
Now it might look like I'm asking to be spoonfed, but I really haven't managed to find any way to solve it, so any assistance would be highly appreciated 🙇
The key insight is that with jsonb_each and jsonb_object_agg you can round-trip a JSON object in a subquery:
SELECT id, (
SELECT jsonb_object_agg(key, value)
FROM jsonb_each(cards)
) AS result
FROM test;
(online demo)
Now you can JOIN these key-value pairs against the jsonb_array_elements of your array input. Your colleague was close, but not quite right: it requires a full outer join, not just a left (or right) join to get all the desired object keys for your output, unless one of your inputs is a subset of the other.
SELECT id, (
SELECT jsonb_object_agg(COALESCE(obj_key, arr_value), …)
FROM jsonb_array_elements_text('["1", "2", "3"]') AS arr(arr_value)
FULL OUTER JOIN jsonb_each(cards) AS obj(obj_key, obj_value) ON obj_key = arr_value
) AS result
FROM test;
(online demo)
Now what's left is only the actual calculation and the conversion to an UPDATE statement:
UPDATE test
SET cards = (
SELECT jsonb_object_agg(
COALESCE(key, arr_value),
COALESCE(obj_value::int, 0) + (arr_value IS NOT NULL)::int
)
FROM jsonb_array_elements_text('["1", "2", "3"]') AS arr(arr_value)
FULL OUTER JOIN jsonb_each_text(cards) AS obj(key, obj_value) ON key = arr_value
);
(online demo)
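Applied to the sample row from the question, a quick check should show the expected object (a sketch, assuming the table test(id, cards) as described):
SELECT id, cards FROM test WHERE id = 1;
-- expected after the update: {"1": 4, "2": 1, "3": 1, "4": 2}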

Select greatest number from a json list of variable length with PostgreSQL

I have a column (let's call it jsn) in my database with a JSON object (actually stored as plain text, for reasons). This JSON object looks like this:
{"a":
{"b":[{"num":123, ...},
{"num":456, ...},
...,
{"num":789, ...}],
...
},
...
}
I'm interested in the biggest "num" inside that list of objects "b" inside the object "a".
If the list is of known length I can do it like this:
SELECT
GREATEST((jsn::json->'a'->'b'->0->>'num')::int,
         (jsn::json->'a'->'b'->1->>'num')::int,
         ... ,
         (jsn::json->'a'->'b'->N->>'num')::int)
FROM table
Note that I'm new to PostgreSQL (and database querying in general!) so that may be a rubbish way to do it. In any case it works. What I can't figure out is how to make this work when the list, 'b', is of arbitrary and unknown length.
In case it is relevant, I am using PostgreSQL 10 hosted on AWS RDS and running queries using pgAdmin 4.
You need to unnest the array; then you can apply max() on the result:
select max((n.x ->> 'num')::int)
from the_table t
cross join jsonb_array_elements(t.jsn::jsonb -> 'a' -> 'b') as n(x);
You probably want to add a GROUP BY, so that you can tell from which row the max value came. Assuming your table has a column id that is unique:
select id, max((n.x ->> 'num')::int)
from the_table t
cross join jsonb_array_elements(t.jsn::jsonb -> 'a' -> 'b') as n(x)
group by id;
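As a quick, self-contained check (a sketch; the names the_table, id, and jsn follow the answer's assumptions rather than the asker's real schema):
CREATE TEMP TABLE the_table (id int, jsn text);
INSERT INTO the_table VALUES
  (1, '{"a": {"b": [{"num": 123}, {"num": 456}, {"num": 789}]}}');

SELECT id, max((n.x ->> 'num')::int) AS max_num
FROM the_table t
CROSS JOIN jsonb_array_elements(t.jsn::jsonb -> 'a' -> 'b') AS n(x)
GROUP BY id;  -- returns 1 | 789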