I wanted to create a materialized view from a label table and then create indexes on it. However, when I run the query to create this view, Postgres raises an error.
Here is the query I use to return all the vertices carrying the "Book" label:
demo=# SELECT * FROM cypher ('demo', $$
demo$# MATCH (v:Book)
demo$# RETURN v
demo$# $$) as (vertex agtype);
vertex
---------------------------------------------------------------------------------------------------------------------------------------
{"id": 1125899906842625, "label": "Book", "properties": {"title": "The Hobbit"}}::vertex
{"id": 1125899906842626, "label": "Book", "properties": {"title": "SPQR: A History of Ancient Rome", "author": "Mary Beard"}}::vertex
(2 rows)
Here is how I'm creating the materialized view:
demo=# CREATE MATERIALIZED VIEW book_view AS SELECT * FROM cypher ('demo', $$
MATCH (v:Book)
RETURN v.author, v.title
$$) as (author agtype, title agtype);
ERROR: unhandled cypher(cstring) function call
DETAIL: demo
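CREATE MATERIALIZED VIEW is a utility command, and some AGE versions reportedly fail to rewrite the cypher() call inside it, which produces exactly this error. One hedged workaround, sketched below and worth verifying against your AGE version, is to materialize the result by hand into a plain table (INSERT .. SELECT goes through the normal DML path that AGE handles), which also lets you create indexes:

```sql
-- Workaround sketch (not a true materialized view): materialize the
-- cypher result into an ordinary table and index that.
CREATE TABLE book_view (author agtype, title agtype);

INSERT INTO book_view
SELECT * FROM cypher('demo', $$
    MATCH (v:Book)
    RETURN v.author, v.title
$$) AS (author agtype, title agtype);

-- Assumes agtype has a usable default operator class in your version;
-- otherwise index an extracted text value instead.
CREATE INDEX ON book_view (title);

-- "Refresh" by truncating and re-running the INSERT.
```

The obvious trade-off is that you must refresh the table yourself whenever the graph changes.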
I have created a graph called 'cyc_graph'. Now I'm testing whether I can insert some vertices into this graph using the agtype_build_map function, but this function requires the graphID as a parameter. So how can I get the graphID of an already-created graph from the PostgreSQL terminal?
I tried something like this
SELECT 'cyc_graph.vtxs'::regclass::oid;
But this gives the Oid of the vtxs table (vtxs is the label name for the vertices). I understand that cyc_graph is a schema name, so I don't know how I can get the graphID/Oid of a schema name.
Inside the terminal, after loading the AGE extension and setting the search_path, run:
SELECT oid, name
FROM ag_graph;
It will output something like this:
oid | name
--------+-------------------
72884 | graph1
353258 | graph2
353348 | graph3
(3 rows)
The oid column holds the Oid of each graph.
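If you already know the graph's name, you can filter the ag_graph catalog directly instead of scanning the whole list:

```sql
-- Look up a single graph's Oid by name in AGE's catalog.
SELECT oid
FROM ag_catalog.ag_graph
WHERE name = 'cyc_graph';
```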
But maybe you want to do it from the source code?
Call the function search_graph_name_cache(char* graph_name);
(located here)
It will return a pointer to a struct defined as graph_cache_data, which has the Oid of the graph.
What is GraphID?
Simple entities are assigned a unique graphid. A graphid is a unique composition of the entity’s label id and a unique sequence assigned to each label. Note that there will be overlap in ids when comparing entities from different graphs.
Reference: https://age.apache.org/age-manual/master/intro/types.html
test=# LOAD 'age';
LOAD
test=# SET search_path = ag_catalog, "$user", public;
SET
test=# SELECT * FROM cypher('graph', $$
MATCH (v)
RETURN v
$$) as (v agtype);
v
------------------------------------------------------------------------------------------------------------
{"id": 844424930131969, "label": "Person", "properties": {"name": "John"}}::vertex
{"id": 844424930131970, "label": "Person", "properties": {"name": "Jeff"}}::vertex
{"id": 844424930131971, "label": "Person", "properties": {"name": "Joan"}}::vertex
{"id": 844424930131972, "label": "Person", "properties": {"name": "Bill"}}::vertex
{"id": 844424930131973, "label": "Person", "properties": {"name": "Andres", "title": "Developer"}}::vertex
(5 rows)
Here, the id field is actually the graphid.
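Per the layout quoted above (a composition of the label id and a per-label sequence), a graphid can be taken apart with bit arithmetic. A sketch, assuming the low 48 bits hold the sequence and the high bits hold the label id; verify this split against your AGE version before relying on it:

```sql
-- Decompose a graphid into label id and per-label sequence
-- (assumed 16/48-bit split; check your AGE version's source).
SELECT 844424930131969 >> 48                     AS label_id,
       844424930131969 & ((1::bigint << 48) - 1) AS entry_seq;
-- For the first "Person" vertex above: label_id = 3, entry_seq = 1
```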
Consider the following:
create table query(id integer, query_definition jsonb);
create table query_item(path text[], id integer);
insert into query (id, query_definition)
values
(100, '{"columns":[{"type":"integer","field":"id"},{"type":"str","field":"firstname"},{"type":"str","field":"lastname"}]}'::jsonb),
(101, '{"columns":[{"type":"integer","field":"id"},{"type":"str","field":"firstname"}]}'::jsonb);
insert into query_item(path, id) values
('{columns,0,type}'::text[], 100),
('{columns,1,type}'::text[], 100),
('{columns,2,type}'::text[], 100),
('{columns,0,type}'::text[], 101),
('{columns,1,type}'::text[], 101);
I have a query table which has a jsonb column named query_definition.
The jsonb value looks like the following:
{
"columns": [
{
"type": "integer",
"field": "id"
},
{
"type": "str",
"field": "firstname"
},
{
"type": "str",
"field": "lastname"
}
]
}
In order to replace all "type": "..." with "type": "string", I've built the query_item table which contains the following data:
path |id |
----------------+---+
{columns,0,type}|100|
{columns,1,type}|100|
{columns,2,type}|100|
{columns,0,type}|101|
{columns,1,type}|101|
path matches each path from the json root to the "type" entry, id is the corresponding query's id.
I made up the following sql statement to do what I want:
update query q
set query_definition = jsonb_set(q.query_definition, query_item.path, ('"string"')::jsonb, false)
from query_item
where q.id = query_item.id
But it only partially works: for each id, only the first matching path is applied and the rest are skipped (only the 1st and 4th rows of the query_item table take effect).
I know I could write a FOR loop, but that requires a PL/pgSQL context, which I'd rather avoid.
Is there a way to do it with a single update statement?
I've read in this topic that it's possible with strings, but I couldn't figure out how to adapt that mechanism to jsonb.
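One possibility, sketched below, is to rebuild the whole columns array in a single UPDATE instead of applying jsonb_set once per path. Note that this ignores the query_item table entirely and assumes every element's "type" should become "string":

```sql
-- Rewrite every "type" inside columns in one pass by rebuilding the array.
UPDATE query q
SET query_definition = jsonb_set(
        q.query_definition,
        '{columns}',
        (SELECT jsonb_agg(col || '{"type": "string"}'::jsonb ORDER BY ord)
         FROM jsonb_array_elements(q.query_definition -> 'columns')
              WITH ORDINALITY AS a(col, ord))
    );
```

The `||` operator overwrites the "type" key in each element, and WITH ORDINALITY keeps the array elements in their original order. If only the paths listed in query_item should change, you would instead need to fold jsonb_set over those paths (e.g. with a custom aggregate).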
I will try and present the current setup as an abstract view, the focus being on the logical approach to batch insert.
CREATE TABLE factv (id, ..other columns..);
CREATE TABLE facto (id, ..other columns..);
CREATE TABLE dims (id serial, dimensions jsonb);
The 2 fact tables share the same dimensions, but they've got different columns.
There's an event stream which sends messages to a table, and there's a function which is executed for each row; the logic is similar to:
CREATE OR REPLACE FUNCTION insert_event() RETURNS TRIGGER AS $$
BEGIN
IF NEW.event_json ->> 'type' = 'someEvent' THEN
WITH factv_insert AS (
INSERT INTO factv VALUES (id,..other columns..)
fn_createDimId,
NEW.event_json->>..,
...
RETURNING id
)
INSERT INTO facto VALUES (id,..other columns..)
(select id from factv_insert),
NEW.event_json->>..
ELSE DoSomethingElse...
END IF;
RETURN NEW;
END;
$$ LANGUAGE PLPGSQL;
The fn_createDimId function called here just looks up the dims table: if the dimensions are not found, they are inserted; if they already exist, it simply returns their id, which is then used as the dimension id for the fact insert.
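For reference, a hypothetical sketch of what such a get-or-create function could look like (fn_createDimId is the poster's own function, so this is only one way to write the described behaviour, and it is not safe under concurrent inserts):

```sql
-- Hypothetical get-or-create: return the id for a dimensions value,
-- inserting it first if it does not exist yet. Not concurrency-safe;
-- a unique index plus ON CONFLICT would be needed for that.
CREATE OR REPLACE FUNCTION fn_createDimId(p_dims jsonb) RETURNS integer
LANGUAGE sql AS $$
    WITH ins AS (
        INSERT INTO dims (dimensions)
        SELECT p_dims
        WHERE NOT EXISTS (SELECT 1 FROM dims WHERE dimensions = p_dims)
        RETURNING id
    )
    SELECT id FROM ins
    UNION ALL
    SELECT id FROM dims WHERE dimensions = p_dims
    LIMIT 1;
$$;
```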
I now have some new events coming through, and extracting the information I need breaks the insert into..values pattern with ERROR: more than one row returned by a subquery used as an expression.
The event structure is similar, but not limited to
{
"type": "someEvent",
"instruction": {
"contains": {
"id": "containerid",
"map": {
"50561:null:null": {
"productid": "50561",
"quantity": 3
},
"50562:null:null": {
"productid": "50562",
"quantity": 8
},
"50559:null:null": {
"productid": "50559",
"quantity": 5
}
}
},
"target": {
"50561": "Random",
"50562": "Random",
"50559": "Mix"
}
}
}
What is causing the problems here is the information around target and the respective quantities for those ids. From the event shown above, I need to aggregate and insert into the fact table:
 target | qty
--------+-----
 Random |  11
 Mix    |   5
If I were to query for this information, I would run the following:
WITH meta_data as (
SELECT
json_object_keys(event_json -> 'instruction' ->'target') as prodid
,event_json -> 'instruction' ->'target'->>json_object_keys(event_json -> 'instruction' ->'target') as target
,event_json -> 'instruction' ->'contains'->'map'->json_object_keys(event_json -> 'instruction' ->'contains'->'map')->>'quantity' as qty
FROM event_table )
select
target,
sum(qty::int)
from meta_data
group by target;
I am looking for a solution which performs the same logical operations but avoids the error on multiple returned rows, ideally iterating for each event which returns more than one row.
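One way to avoid the multi-row subquery, sketched below, is to expand both objects with lateral set-returning calls and join them on the product id, then aggregate. This assumes event_json is jsonb (add a ::jsonb cast if the column is json, as the json_object_keys call above suggests):

```sql
-- Pair each product id in "target" with its entry in "map",
-- then aggregate quantities per target value.
SELECT t.target,
       SUM((m.entry ->> 'quantity')::int) AS qty
FROM event_table e
CROSS JOIN LATERAL
     jsonb_each_text(e.event_json -> 'instruction' -> 'target') AS t(productid, target)
CROSS JOIN LATERAL
     jsonb_each(e.event_json -> 'instruction' -> 'contains' -> 'map') AS m(k, entry)
WHERE m.entry ->> 'productid' = t.productid
GROUP BY t.target;
```

Because the set-returning functions are expanded in the FROM clause rather than in the select list, each event can contribute any number of rows without tripping the single-row subquery restriction, and the same shape can feed an INSERT .. SELECT inside the trigger.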
I'm struggling to find the right syntax for updating an array in a jsonb column in postgres 9.6.6
Given a column "comments", with this example:
[
{
"Comment": "A",
"LastModified": "1527579949"
},
{
"Comment": "B",
"LastModified": "1528579949"
},
{
"Comment": "C",
"LastModified": "1529579949"
}
]
Suppose I want to append Z to each comment (giving AZ, BZ, CZ).
I know I need to use something like jsonb_set(comments, '{"Comment"}',
Any hints on finishing this off?
Thanks.
Try:
UPDATE elbat
SET comments = array_to_json(ARRAY(SELECT jsonb_set(x.original_comment,
'{Comment}',
concat('"',
x.original_comment->>'Comment',
'Z"')::jsonb)
FROM (SELECT jsonb_array_elements(elbat.comments) original_comment) x))::jsonb;
It uses jsonb_array_elements() to get the array elements as a set, applies the changes to them with jsonb_set(), and turns the result back into a JSON array with array_to_json() and a cast to jsonb.
But that's an awful lot of work, and maybe there is a more elegant solution that I didn't find. Since your JSON seems to have a fixed schema anyway, I'd recommend a redesign the relational way: a simple table for the comments plus a linking table for the objects each comment is on. The change would have been very easy in such a model.
Find a query returning the expected result:
select jsonb_agg(value || jsonb_build_object('Comment', value->>'Comment' || 'Z'))
from my_table
cross join jsonb_array_elements(comments);
jsonb_agg
-----------------------------------------------------------------------------------------------------------------------------------------------------
[{"Comment": "AZ", "LastModified": "1527579949"}, {"Comment": "BZ", "LastModified": "1528579949"}, {"Comment": "CZ", "LastModified": "1529579949"}]
(1 row)
Create a simple SQL function based on the above query:
create or replace function update_comments(jsonb)
returns jsonb language sql as $$
select jsonb_agg(value || jsonb_build_object('Comment', value->>'Comment' || 'Z'))
from jsonb_array_elements($1)
$$;
Use the function:
update my_table
set comments = update_comments(comments);
DbFiddle.
I have the following table:
CREATE TABLE trip
(
id SERIAL PRIMARY KEY ,
gps_data_json jsonb NOT NULL
);
The JSON in gps_data_json contains an array of trip objects with the following fields (sample data below):
mode
timestamp
latitude
longitude
I'm trying to get all rows that contain a certain "mode".
SELECT * FROM trip
where gps_data_json ->> 'mode' = 'WALK';
I'm pretty sure I'm using the ->> operator wrong, but I'm unsure how to tell the query that the JSONB field is an array of objects.
Sample data:
INSERT INTO trip (gps_data_json) VALUES
('[
{
"latitude": 47.063480377197266,
"timestamp": 1503056880725,
"mode": "TRAIN",
"longitude": 15.450349807739258
},
{
"latitude": 47.06362533569336,
"timestamp": 1503056882725,
"mode": "WALK",
"longitude": 15.450264930725098
}
]');
INSERT INTO trip (gps_data_json) VALUES
('[
{
"latitude": 47.063480377197266,
"timestamp": 1503056880725,
"mode": "BUS",
"longitude": 15.450349807739258
},
{
"latitude": 47.06362533569336,
"timestamp": 1503056882725,
"mode": "WALK",
"longitude": 15.450264930725098
}
]');
The problem arises because the ->> operator cannot traverse an array:
First unnest your JSON array using the jsonb_array_elements function;
then use the operator for filtering.
The following query does the trick:
WITH
A AS (
SELECT
Id
,jsonb_array_elements(gps_data_json) AS point
FROM trip
)
SELECT *
FROM A
WHERE (point->>'mode') = 'WALK';
Unnesting the array works fine if you only want the objects containing the queried values.
The following checks for containment and returns the full JSONB:
SELECT * FROM trip
WHERE gps_data_json #> '[{"mode": "WALK"}]';
See also Postgresql query array of objects in JSONB field
select * from
(select id, jsonb_array_elements(gps_data_json) point from trip where id = 16) t
where point #> '{"mode": "WALK"}';
In my table, the filter id = 16 ensures that the selected row holds a JSONB array only, since the other rows hold plain JSONB objects. You must filter out the non-array data first, otherwise you get: ERROR: cannot extract elements from an object