I am building a chat room engine and I have a graph like so:
User---Started--->Room----Discuss--->Item<---Own---User
I would like to get all the Items and the roomId associated with each Item for a particular User.
select
*, in('Discuss').in('Started')[name ='julie'] as roomId
from item
So this goes through the right 'Room' and finds the one started by 'Julie', but it returns Julie's id. How do I get the id of that room with this query?
It's like I need to step 'one back' and get its @rid...
I am new to graphs so any pointers would be appreciated.
This answer is based upon option (1) from my comment to the original question, as you indicated here that "I would like to get all the Items, not only the ones that have a room started by Julie".
SELECT *, $rooms_by_user FROM items
LET $rooms_by_user = (
    SELECT FROM (SELECT expand(in('Discuss')) FROM $parent.$current)
    WHERE in('Started').Name contains 'Julie'
)
So the original query is getting all of the items, and is adding a custom field to the output (the LET clause and $rooms_by_user field).
The inner query of $rooms_by_user expands all of the 'Discuss' edges of each 'Item' (this runs per item), thus returning all of the 'Room' vertexes associated with the 'Item'. The outer part of the query then filters the 'Room' vertexes for only those started by a 'User' with name 'Julie'.
I had to use contains in the outer query as in('Started').Name returns a list, e.g. ["Julie"].
It would probably be better to filter the user by rid; then you only have to filter on the out property of the 'Started' edge, i.e. the database only has to 'jump' once from vertex to edge, not from vertex to edge to vertex. (This is only true if you aren't using lightweight edges, which is the default setting in the current version.)
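A sketch of that rid-based variant, assuming Julie's User vertex has rid #12:0 (a placeholder value) and that you are not using lightweight edges:
SELECT *, $rooms_by_user FROM items
LET $rooms_by_user = (
    SELECT FROM (SELECT expand(in('Discuss')) FROM $parent.$current)
    WHERE in_Started.out CONTAINS #12:0
)
Here in_Started.out reads the out property of the 'Started' edge records directly, so the filter stops at the edge instead of stepping onto the 'User' vertex.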
Assuming your graph has vertex and edge classes and is structured similarly to the one described in your question, you can use a combination of select and the graph traverse query to return what you need. For example:
select from (traverse both('discuss'), both('started') from #14:0)
where (@class = 'room')
or (@class = 'user' and name = 'Julie')
This will return the following result (JSON format):
{
"result": [
{
"#type": "d",
"#rid": "#13:0",
"#version": 3,
"#class": "room",
"name": "Baking",
"in_started": [
"#15:0"
],
"out_discuss": [
"#16:0"
],
"#fieldTypes": "in_started=g,out_discuss=g"
},
{
"#type": "d",
"#rid": "#12:0",
"#version": 2,
"#class": "user",
"name": "Julie",
"out_started": [
"#15:0"
],
"#fieldTypes": "out_started=g"
}
],
"notification": "Query executed in 0.027 sec. Returned 2 record(s)"
}
Update:
If you only wanted to return the @rid of the room you can wrap the above query with another select:
select @rid from
(select from (traverse both('discuss'), both('started') from #14:0)
where (@class = 'room')
or (@class = 'user' and name = 'Julie'))
where @class = 'room'
I have jsonb documents in a Postgres table like this:
{
"Id": "267f9e75-efb8-4331-8220-932b023b3a34",
"Name": "Some File",
"Tags": [
{
"Key": "supplier",
"Value": "70074"
},
{
"Key": "customer",
"Value": "1008726"
}
]
}
My working query to find documents where Tags.Key is supplier is this:
SELECT *
FROM docs
WHERE EXISTS(
    SELECT TRUE
    FROM jsonb_array_elements(data -> 'Tags') x
    WHERE x ->> 'Key' IN ('supplier')
)
I wanted to find a shorter way and tried this:
select * from docs where data->'Tags' @> '[{ "Key":"supplier"}]';
But then I get this error for the syntax of @>:
<operator>, AT, EXCEPT, FETCH, FOR, GROUP, HAVING, INTERSECT, ISNULL, LIMIT, NOTNULL, OFFSET, OPERATOR, ORDER, TIME, UNION, WINDOW, WITH or '[' expected, got '@'
My questions are: is there a shorter working query and what's wrong with my second query?
It turned out to be an IDE issue: https://youtrack.jetbrains.com/issue/RIDER-83829/Postgres-json-query-error-but-works-in-DataGrip
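For reference, the shorter containment form is valid jsonb syntax and runs as-is when executed directly against Postgres; only the IDE's parser objected:
SELECT * FROM docs WHERE data -> 'Tags' @> '[{"Key": "supplier"}]';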
Using PostgreSQL 13.4, I have a query like this, which is used for a GraphQL endpoint:
export const getPlans = async (filter: {
offset: number;
limit: number;
search: string;
}): Promise<SearchResult<Plan>> => {
const query = sql`
SELECT
COUNT(p.*) OVER() AS total_count,
p.json, TO_CHAR(MAX(pp.published_at) AT TIME ZONE 'JST', 'YYYY-MM-DD HH24:MI') as last_published_at
FROM
plans_json p
LEFT JOIN
published_plans pp ON p.plan_id = pp.plan_id
WHERE
1 = 1
`;
if (filter.search)
query.append(sql`
AND
(
p.plan_id::text ILIKE ${`${filter.search}%`}
OR
p.json->>'name' ILIKE ${`%${filter.search}%`}
OR
p.json->'activities'->'venue'->>'name' ILIKE ${`%${filter.search}%`}
)
`);
// The above OR line or this alternative didn't work
// p @> '{"activities":[{"venue":{"name":${`%${filter.search}`}}}]}'
// ...
}
The data I'm accessing looks like this:
{
"data": {
"plans": {
"records": [
{
"id": "345sdfsdf",
"name": "test1",
"activities": [{...},{...}]
},
{
"id": "abc123",
"name": "test2",
"activities": [
{
"name": "test2",
"id": "123abc",
"venue": {
"name": *"I WANT THIS VALUE"* <------------------------
}
}
]
}
]
}
}
}
Since the search parameter provided to this query varies, I can only make changes in the WHERE block, in order to avoid affecting the other two working searches.
I tried 2 approaches (see above query), but neither worked.
Using TypeORM would be an alternative.
EDIT: For example, could I make that statement work somehow? I want to compare VALUE with the search string that is provided as argument:
p.json ->> '{"activities":[{"venue":{"name": VALUE}}]}' ILIKE ${`%${filter.search}`}
First, you should use the jsonb type instead of the json type in Postgres, for many reasons; see the manual:
... In general, most applications should prefer to store JSON data as
jsonb, unless there are quite specialized needs, such as legacy
assumptions about ordering of object keys...
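If your column is currently json, converting it is a one-liner. A sketch, assuming the plans_json.json column used in the queries below:
ALTER TABLE plans_json ALTER COLUMN json TYPE jsonb USING json::jsonb;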
Then you can use the following query to get the whole json data based on the search_parameter provided to the query via the user interface, as long as search_parameter is a regular expression (see the manual):
SELECT query
FROM plans_json p
CROSS JOIN LATERAL jsonb_path_query(p.json::jsonb, FORMAT('$ ? (@.data.plans.records[*].activities[*].venue.name like_regex "%s")', search_parameter)::jsonpath) AS query
If you need to retrieve only part of the json data, move the corresponding part of the jsonpath from the filter section ('? (@...)') into the '$' root section. For instance, if you just want to retrieve the jsonb object {"name": "test2", ...}, then the query is:
SELECT query
FROM plans_json p
CROSS JOIN LATERAL jsonb_path_query(p.json::jsonb, FORMAT('$.data.plans.records[*].activities[*] ? (@.venue.name like_regex "%s")', search_parameter)::jsonpath) AS query
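As a worked example, with a hypothetical search_parameter of 'test2', the FORMAT call interpolates the pattern into the path before the jsonpath cast:
-- resulting jsonpath (illustrative):
-- $.data.plans.records[*].activities[*] ? (@.venue.name like_regex "test2")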
I'm using Azure Data Factory to retrieve data and copy it into a database... the Source looks like this:
{
"GroupIds": [
"4ee1a-0856-4618-4c3c77302b",
"21259-0ce1-4a30-2a499965d9",
"b2209-4dda-4e2f-029384e4ad",
"63ac6-fcbc-8f7e-36fdc5e4f9",
"821c9-aa73-4a94-3fc0bd2338"
],
"Id": "w5a19-a493-bfd4-0a0c8djc05",
"Name": "Test Item",
"Description": "test item description",
"Notes": null,
"ExternalId": null,
"ExpiryDate": null,
"ActiveStatus": 0,
"TagIds": [
"784083-4c77-b8fb-0135046c",
"86de96-44c1-a497-0a308607",
"7565aa-437f-af36-8f9306c9",
"d5d841-1762-8c14-d8420da2",
"bac054-2b6e-a19b-ef5b0b0c"
],
"ResourceIds": []
}
In my ADF pipeline, I am trying to get the count of GroupIds and store that in a database column (along with the associated Id from the JSON above).
Is there some kind of syntax I can use to tell ADF that I just want the count of GroupIds or is this going to require some kind of recursive loop activity?
You can use the length function in Azure Data Factory (ADF) to check the length of JSON arrays:
length(json(variables('varSource')).GroupIds)
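As a usage sketch: pipeline variables in ADF are typed String, Boolean or Array, so to store the count via a Set Variable activity you would wrap it in string() (varSource is an assumed variable name):
@string(length(json(variables('varSource')).GroupIds))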
If you are loading the data into a SQL database then you could use OPENJSON; a simple example:
DECLARE @json NVARCHAR(MAX) = '{
"GroupIds": [
"4ee1a-0856-4618-4c3c77302b",
"21259-0ce1-4a30-2a499965d9",
"b2209-4dda-4e2f-029384e4ad",
"63ac6-fcbc-8f7e-36fdc5e4f9",
"821c9-aa73-4a94-3fc0bd2338"
],
"Id": "w5a19-a493-bfd4-0a0c8djc05",
"Name": "Test Item",
"Description": "test item description",
"Notes": null,
"ExternalId": null,
"ExpiryDate": null,
"ActiveStatus": 0,
"TagIds": [
"784083-4c77-b8fb-0135046c",
"86de96-44c1-a497-0a308607",
"7565aa-437f-af36-8f9306c9",
"d5d841-1762-8c14-d8420da2",
"bac054-2b6e-a19b-ef5b0b0c"
],
"ResourceIds": []
}'
SELECT *
FROM OPENJSON( @json, '$.GroupIds' );

SELECT COUNT(*) countOfGroupIds
FROM OPENJSON( @json, '$.GroupIds' );
My results:
If your data is stored in a table, the code is similar (see the sketch below). Make sense?
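For instance, a sketch of the table variant, using a hypothetical table dbo.Items with an NVARCHAR column sourceJson holding documents like the one above:
SELECT JSON_VALUE(i.sourceJson, '$.Id') AS Id,
       COUNT(*) AS countOfGroupIds
FROM dbo.Items i
CROSS APPLY OPENJSON(i.sourceJson, '$.GroupIds') g
GROUP BY JSON_VALUE(i.sourceJson, '$.Id');
-- note: CROSS APPLY omits items whose GroupIds array is empty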
Another funky way to approach it, if you really need the count in-line, is to convert the JSON to XML using the built-in functions and then run some XPath on it. It's not as complicated as it sounds and would allow you to get the result inside the pipeline.
The Data Factory xml function converts JSON to XML, but that JSON must have a single root property. We can fix up the JSON with concat and a single line of code. In this example I'm using a Set Variable activity, where varSource is your original JSON:
@concat('{"root":', variables('varSource'), '}')
Next, we can just apply the XPath with another simple expression:
@string(xpath(xml(json(variables('varIntermed1'))), 'count(/root/GroupIds)'))
My results:
Easy, huh? It's a shame there isn't more built-in support for JSONPath, unless I'm missing something, although you can use limited JSONPath in the Copy activity.
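If you would rather skip the intermediate variable, the two expressions above compose into a single one (a sketch using the same built-in functions):
@string(xpath(xml(json(concat('{"root":', variables('varSource'), '}'))), 'count(/root/GroupIds)'))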
You can use Data flow activity in the Azure data factory pipeline to get the count.
Step1:
Connect the Source to the JSON dataset, and in Source options under JSON settings, select Single document.
In the source preview, you can see there are 5 GroupIDs per ID.
Step2:
Use the flatten transformation to denormalize the values into rows for GroupIDs.
Select GroupIDs array in Unroll by and Unroll root.
Step3:
Use the Aggregate transformation to get the count of GroupIDs grouped by ID.
Under Group by: Select a column from the drop-down for your aggregation.
Under Aggregate: build the expression to get the count of the GroupIDs column (see the sketch after Step 4).
Aggregate Data preview:
Step4: Connect the output to a Sink transformation to load the final output to the database.
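As a sketch of the Step 3 configuration, assuming the flattened column is named GroupIds and the output column groupIdCount (hypothetical names):
Group by:   Id
Aggregates: groupIdCount = count(GroupIds)
count() is a standard aggregate in the data flow expression language.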
I have a table with a field called 'keywords'. It is a JSONB field with an array of keyword metadata, including the keyword's name.
What I would like is to query the counts of all these keywords by name, i.e. aggregate on keyword name and count(id). All the examples of GROUP BY queries I can find just result in the grouping occurring on the full list (i.e. only giving me counts where two records have the same set of keywords).
So is it possible to somehow expand the list of keywords in a way that lets me get these counts?
If not, I am still at the planning stage and could refactor my schema to better handle this.
"keywords": [
{
"addedAt": "2017-04-07T21:11:00+0000",
"addedBy": {
"email": "foo#bar.com"
},
"keyword": {
"name": "Animal"
}
},
{
"addedAt": "2017-04-07T20:54:00+0000",
"addedBy": {
"email": "foo#bar.comm"
},
"keyword": {
"name": "Mammal"
}
}
]
step-by-step demo: db<>fiddle
SELECT
elems -> 'keyword' ->> 'name' AS keyword, -- 2
COUNT(*) AS count
FROM
mytable t,
jsonb_array_elements(myjson -> 'keywords') AS elems -- 1
GROUP BY 1 -- 3
Expand the array records into one row per element
Get the keyword's names
Group these text values.
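With the single sample document above, the query would return (illustrative output):
 keyword | count
---------+-------
 Animal  |     1
 Mammal  |     1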
I have a table schema as follows:
DummyTable
-------------
someData JSONB
All my values will be a JSON object. For example, when you do a select * from DummyTable, it would look like:
someData(JSONB)
------------------
{"values":["P1","P2","P3"],"key":"ProductOne"}
{"values":["P3"],"key":"ProductTwo"}
I want a query which will give me a result set as follows:
[
{
"values": ["P1","P2","P3"],
"key": "ProductOne"
},
{
"values": ["P4"],
"key": "ProductTwo"
}
]
I'm using Postgres version 9.4.2. I looked at its documentation page, but could not find a query which would give the above result.
In my API I can build the JSON by iterating over the rows, but I would prefer the query to do it. I tried json_build_array and row_to_json on the result of select * from table_name, but no luck.
Any help would be appreciated.
Here is the link I looked at to write a query for JSONB.
You can use json_agg or jsonb_agg:
create table dummytable(somedata jsonb not null);
insert into dummytable(somedata) values
('{"values":["P1","P2","P3"],"key":"ProductOne"}'),
('{"values":["P3"],"key":"ProductTwo"}');
select jsonb_pretty(jsonb_agg(somedata)) from dummytable;
Result:
[
{
"key": "ProductOne",
"values": [
"P1",
"P2",
"P3"
]
},
{
"key": "ProductTwo",
"values": [
"P3"
]
}
]
Note that retrieving the data row by row and building the array on the client side can be more efficient: the server can start sending data as soon as it retrieves the first matching row from storage. If it has to build the JSON array first, it must retrieve all the rows and merge them before it can start sending any data.