How to extract data from postgresql jsonb object in multiple levels - postgresql

I have a table that contains JSON data with a 'signatures' array occurring at multiple levels:
{
"packet": "data",
"signatures": [
{ "packet": "data" },
{ "packet": "data" }
],
"userIds": [
{
"packet": "data",
"signatures": [
{ "packet": "data" }
]
}
]
}
I know how to get all the signatures that occur on one level, i.e. all the signatures directly on the "packet" or all the signatures on all "userIds":
select string_agg(signature->'packet'->>'data', ',')
from keys,
jsonb_array_elements(doc->'userIds') userID,
jsonb_array_elements(userID->'signatures') signature
But is there a way to extract all the signatures without knowing on which level they are?
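One approach, assuming PostgreSQL 12 or later: the SQL/JSON path language has a recursive wildcard accessor `.**` that descends every level, so something like `select jsonb_path_query(doc, 'lax $.**.signatures[*]') from keys;` should collect signatures wherever they appear (verify against your server version). The recursion such a query performs can be illustrated outside the database with a short Python sketch:

```python
def collect_signatures(node):
    """Recursively gather the elements of every 'signatures' array, at any
    depth; the Python analogue of a recursive-descent jsonpath match."""
    found = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "signatures" and isinstance(value, list):
                found.extend(value)
            else:
                found.extend(collect_signatures(value))
    elif isinstance(node, list):
        for item in node:
            found.extend(collect_signatures(item))
    return found

# A document shaped like the one in the question.
doc = {
    "packet": "data",
    "signatures": [{"packet": "a"}, {"packet": "b"}],
    "userIds": [{"packet": "data", "signatures": [{"packet": "c"}]}],
}
```

Note that, like a lax `.**` path, this sketch would also pick up a 'signatures' array nested inside another signature object, should one exist.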

Related

How to add new property in an object nested in 2 arrays (JSONB postgresql)

I am looking to you for help in adding a property to a json object nested in 2 arrays.
Table Example :
CREATE TABLE events (
seq_id BIGINT PRIMARY KEY,
data JSONB NOT NULL,
evt_type TEXT NOT NULL
);
An example of my JSONB data column in my table:
{
"Id":"1",
"Calendar":{
"Entries":[
{
"Id": 1,
"SubEntries":[
{
"ExtId":{
"Id":"10",
"System": "SampleSys"
},
"Country":"FR",
"Details":[
{
"ItemId":"1",
"Quantity":10,
},
{
"ItemId":"2",
"Quantity":3,
}
],
"RequestId":"222",
"TypeId":1,
}
],
"OrderResult":null
}
],
"OtherThingsArray":[
]
}
}
So I need to add new properties into a SubEntries object based on the Id value of the ExtId object (the WHERE clause).
How can I do that, please?
Thanks a lot
You can use jsonb_set() for this. It takes the target document, a path as a text[] (array of text values), the new value, and a create-if-missing flag:
SELECT jsonb_set(
input_jsonb,
-- the starting jsonb document
path_array,
-- '{i,j,k[, ...]}'::text[]; each element is either a (string) key or a (zero-based integer) index, progressing one level at a time, with the last element denoting the new key (or index) to populate
new_jsonb_value,
-- if adding a key-value pair, you can use something like to_jsonb('new_value_string'::text) here to force things to format correctly
create_if_not_exists_boolean
-- if adding new keys/indexes, give this as true so they'll be appended; otherwise you'll be limited to overwriting existing keys
)
Example
json
{
"array1": [
{
"id": 1,
"detail": "test"
}
]
}
SQL
SELECT
jsonb_set('{"array1": [{"id": 1, "detail": "test"}]}'::jsonb,
'{array1,0,upd}'::TEXT[],
to_jsonb('new'::text),
true
)
Output
{
"array1": [
{
"id": 1,
"upd": "new",
"detail": "test"
}
]
}
Note that you can only add 1 nominal level of depth at a time (i.e. either a new key or a new index entry), but you can circumvent this by providing the full depth in the assignment value, or by using jsonb_set() iteratively:
select
jsonb_set(
jsonb_set('{"array1": [{"id": 1, "detail": "test"}]}'::jsonb, '{array1,0,upd}'::TEXT[], '[{"new": "val"}]'::jsonb, true),
'{array1,0,upd,0,check}'::TEXT[],
'"test_val"',
true)
would be required to produce
{
"array1": [
{
"id": 1,
"upd": [
{
"new": "val",
"check": "test_val"
}
],
"detail": "test"
}
]
}
If you need other, more complex logic to evaluate which values need to be added to which objects, you can try:
dynamically creating a set of jsonb_set() statements for execution
using the outputs from queries of jsonb_each() and jsonb_array_elements() to evaluate the row logic down at the SubEntries level, and then using jsonb_object_agg() and jsonb_agg() as appropriate to build the document back up to the root level from the resultant object-arrays and key-value collections
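As a sketch of the second option (the decompose-and-rebuild route), the select-and-merge logic can be illustrated in Python. This is not Postgres code, and "OrderStatus" below is a hypothetical property name; in SQL the equivalent would explode the arrays with jsonb_array_elements(), update matching rows, and re-aggregate with jsonb_agg():

```python
def add_prop_to_subentries(doc, ext_id, new_props):
    """Merge new_props into every Calendar.Entries[].SubEntries[] object
    whose ExtId.Id equals ext_id; all other objects are left untouched."""
    for entry in doc.get("Calendar", {}).get("Entries", []):
        for sub in entry.get("SubEntries", []):
            if sub.get("ExtId", {}).get("Id") == ext_id:
                sub.update(new_props)
    return doc

# A trimmed-down version of the question's document.
sample = {
    "Id": "1",
    "Calendar": {
        "Entries": [
            {"Id": 1,
             "SubEntries": [
                 {"ExtId": {"Id": "10", "System": "SampleSys"},
                  "Country": "FR"}
             ]}
        ]
    },
}
```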

Sort records based on nested array element in MongoDB

I have a model something like this
[{
"_id":"",
"document_name":"Document 1",
"document_users" : [
{
"name":"Jhon",
"email":"jhon#gmail.com",
"added_at":"2020-12-11 10:30:01"
},
{
"name":"Michel",
"email":"michel#gmail.com",
"added_at":"2020-12-11 10:40:00"
}
]
},
{
"_id":"",
"document_name":"Document 2",
"document_users" : [
{
"name":"Jhon",
"email":"jhon#gmail.com",
"added_at":"2020-12-12 10:30:01"
},
{
"name":"Henry",
"email":"henry#gmail.com",
"added_at":"2020-12-14 10:40:00"
}
]
}]
I need to sort these main document records based on a particular user's "added_at" in "document_users". Here the email id is unique within the "document_users" field.
In a nutshell: sort the main documents based on a nested array element.
It would be great to get any help on how to write the MongoDB query for this one.
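One way to think about the required query: extract the matching user's added_at for each document, then sort on it. In MongoDB this would typically be an aggregation that computes that value into a temporary field ($filter plus $arrayElemAt inside $addFields) followed by $sort; verify the exact pipeline against your server. The sorting logic itself, illustrated in Python:

```python
def sort_by_user_added_at(docs, email):
    """Sort documents by the added_at of the document_users entry whose
    email matches; documents lacking that user sort last. ISO-style
    timestamp strings compare correctly as plain strings."""
    def sort_key(doc):
        for user in doc.get("document_users", []):
            if user.get("email") == email:
                return (0, user["added_at"])
        return (1, "")  # no matching user: push to the end
    return sorted(docs, key=sort_key)
```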

Search values using Index in mongodb

I am new to MongoDB and wish to implement search on a field in a mongo collection.
I have the following structure for my test collection:
{
'key': <unique key>,
'val_arr': [
['laptop', 'macbook pro', '16gb', 'i9', 'spacegrey'],
['cellphone', 'iPhone', '4gb', 't2', 'rose gold'],
['laptop', 'macbook air', '8gb', 'i5', 'black'],
['router', 'huawei', '10x10', 'white'],
['laptop', 'macbook', '8gb', 'i5', 'silve']
]
}
And I wish to find them based on index number and value, i.e.
Find the entry where the first element in any of the val_arr rows is "laptop" and the third element's value is "8gb".
I tried looking at compound indexes in mongodb, but they have a limit of 32 keys to be indexed. Any help in this direction is appreciated.
There is a limit on indexes here, but it really should not matter. In your case you actually say 'key': <unique key>. So if that really is "unique" then it's the only thing in the collection that needs to be indexed, as long as you actually include that "key" as part of every query you make, since it is what selects the document.
Indexes on arrays "within" a document really don't matter that much unless you actually intend to search directly for those elements within a document. That might be the case, but this actually has no bearing on matching your values by numbered index positions:
db.collection.find(
{
"val_arr": {
"$elemMatch": { "0": "laptop", "2": "8gb" }
}
},
{ "val_arr.$": 1 }
)
Which would return:
{
"val_arr" : [
[
"laptop",
"macbook air",
"8gb",
"i5",
"black"
]
]
}
The $elemMatch allows you to express "multiple conditions" on the same array element. This is needed over standard dot notation forms because otherwise the condition is simply looking for "any" array member which matches the value at the index. For instance:
db.collection.find({ "val_arr.0": "laptop", "val_arr.2": "4gb" })
Actually matches the given document even though that "combination" does not exist on a single "row", but both values are actually present in the array as a whole. But just in different members. Using those same values with $elemMatch makes sure the pair is matched on the same element.
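The distinction can be sketched outside the database; this is plain Python, not MongoDB, modelling the two matching semantics over the question's data:

```python
def elem_match(arr, conds):
    """$elemMatch-style: every condition must hold on the SAME element."""
    return any(all(len(e) > i and e[i] == v for i, v in conds.items())
               for e in arr)

def dot_match(arr, conds):
    """Dot-notation-style: each condition may match a DIFFERENT element."""
    return all(any(len(e) > i and e[i] == v for e in arr)
               for i, v in conds.items())

val_arr = [
    ['laptop', 'macbook pro', '16gb', 'i9', 'spacegrey'],
    ['cellphone', 'iPhone', '4gb', 't2', 'rose gold'],
    ['laptop', 'macbook air', '8gb', 'i5', 'black'],
]
```

No single row pairs "laptop" with "4gb", so the $elemMatch-style check rejects it, while the dot-notation-style check accepts it because each value exists somewhere in the array.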
Note the { "val_arr.$": 1 } in the above example, which is the projection for the "single" matched element. That is optional, but this is just to talk about identifying the matches.
Using .find() this is as much as you can do and is a limitation of the positional operator in that it can only identify one matching element. The way to do this for "multiple matches" is to use aggregate() with $filter:
db.collection.aggregate([
{ "$match": {
"val_arr": {
"$elemMatch": { "0": "laptop", "2": "8gb" }
}
}},
{ "$addFields": {
"val_arr": {
"$filter": {
"input": "$val_arr",
"cond": {
"$and": [
{ "$eq": [ { "$arrayElemAt": [ "$$this", 0 ] }, "laptop" ] },
{ "$eq": [ { "$arrayElemAt": [ "$$this", 2 ] }, "8gb" ] }
]
}
}
}
}}
])
Which returns:
{
"key" : "k",
"val_arr" : [
[
"laptop",
"macbook air",
"8gb",
"i5",
"black"
],
[
"laptop",
"macbook",
"8gb",
"i5",
"silve"
]
]
}
The initial query conditions which actually select the matching document go into the $match stage and are exactly the same as the query conditions shown earlier. The $filter is applied to keep just the elements which actually match its conditions. Those conditions make similar use of $arrayElemAt inside the logical expression to how the index values "0" and "2" are applied in the query conditions themselves.
Using any aggregation expression incurs an additional cost over the standard query engine capabilities. So it is always best to consider if you really need it before you dive in and use the statement. Regular query expressions are always better as long as they do the job.
Changing Structure
Of course, whilst it's possible to match on index positions of an array, none of this helps in being able to create an "index" that can be used to speed up queries.
The best course here is to actually use meaningful property names instead of plain arrays:
{
'key': "k",
'val_arr': [
{
'type': 'laptop',
'name': 'macbook pro',
'memory': '16gb',
'processor': 'i9',
'color': 'spacegrey'
},
{
'type': 'cellphone',
'name': 'iPhone',
'memory': '4gb',
'processor': 't2',
'color': 'rose gold'
},
{
'type': 'laptop',
'name': 'macbook air',
'memory': '8gb',
'processor': 'i5',
'color': 'black'
},
{
'type':'router',
'name': 'huawei',
'size': '10x10',
'color': 'white'
},
{
'type': 'laptop',
'name': 'macbook',
'memory': '8gb',
'processor': 'i5',
'color': 'silve'
}
]
}
This does allow you "within reason" to include the paths to property names within the array as part of a compound index. For example:
db.collection.createIndex({ "val_arr.type": 1, "val_arr.memory": 1 })
And then actually issuing queries looks far more descriptive in the code than cryptic values of 0 and 2:
db.collection.aggregate([
{ "$match": {
"val_arr": {
"$elemMatch": { "type": "laptop", "memory": "8gb" }
}
}},
{ "$addFields": {
"val_arr": {
"$filter": {
"input": "$val_arr",
"cond": {
"$and": [
{ "$eq": [ "$$this.type", "laptop" ] },
{ "$eq": [ "$$this.memory", "8gb" ] }
]
}
}
}
}}
])
Expected results, and more meaningful:
{
"key" : "k",
"val_arr" : [
{
"type" : "laptop",
"name" : "macbook air",
"memory" : "8gb",
"processor" : "i5",
"color" : "black"
},
{
"type" : "laptop",
"name" : "macbook",
"memory" : "8gb",
"processor" : "i5",
"color" : "silve"
}
]
}
The common reason most people arrive at a structure like you have in the question is typically because they think they are saving space. This is simply not true, and with most modern optimizations to the storage engines MongoDB uses it's basically irrelevant compared to any small gains that might have been anticipated.
Therefore, for the sake of "clarity" and also in order to actually support indexing on the data within your "arrays" you really should be changing the structure and use named properties here instead.
And again, if your entire usage pattern of this data is not using the key property of the document in queries, then it probably would be better to store those entries as separate documents to begin with instead of being in an array at all. That also makes getting results more efficient.
So to break that all down your options here really are:
You actually always include key as part of your query, so indexes anywhere else but on that property do not matter.
You change to using named properties for the values on the array members allowing you to index on those properties without hitting "Multikey limitations"
You decide you never access this data using the key anyway, so you just write all the array data as separate documents in the collection with proper named properties.
Going with one of those that actually suits your needs best is essentially the solution allowing you to efficiently deal with the sort of data you have.
N.B. Nothing to do with the topic at hand really (except maybe a note on storage size), but it would generally be recommended that things with an inherent numeric value, such as the "8gb" memory figures, actually be expressed as numeric rather than "strings".
The simple reasoning is that whilst you can query for "8gb" as an equality, this does not help you with ranges such as "between 4 and 12 gigabytes".
Therefore it usually makes much more sense to use numeric values like 8 or even 8000. Note that numeric values typically take less space to store than strings. Given that omitting property names may have been an attempt to reduce storage (and does nothing), this is an area where storage size actually can be reduced.

How can I filter multiple key fields in mongoDB?

I want to filter my set of documents from mongo. I have keys in my documents that share a common suffix of _flag; the rest of the key name can be anything. These keys hold Boolean values.
I just need to filter only for false values.
I can filter for false values by providing the full name of the key itself, but I need to filter all documents that have any key with the _flag suffix.
I am attaching my document below :
{
"id":1,
"rollno":"212",
"StudentName":"Ammi",
"details":{
"examNo":2,
"form":[
[
{
"F_id":"443",
"text":"First Name",
"type":"text",
"required":true,
"School":{
"key":"First_Name",
"label":"First Name"
},
"First_Name_flag":false
}
],
[
{
"F_id":"444",
"text":"Second Name",
"type":"text",
"required":true,
"School":{
"key":"Second_Name",
"label":"Second Name"
},
"Second_Name_flag":false
}
]
]
},
"Admitted":true
}
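A plain find() cannot match on a key-name pattern; in MongoDB this kind of dynamic key match usually means reshaping keys into values with $objectToArray in an aggregation pipeline (or restructuring the schema so the flag name becomes a field value); verify the exact operators against your server. The predicate being asked for, illustrated in Python:

```python
def has_false_flag(node):
    """True if any key ending in '_flag', at any depth, has the value False."""
    if isinstance(node, dict):
        for key, value in node.items():
            if key.endswith("_flag") and value is False:
                return True
            if has_false_flag(value):
                return True
        return False
    if isinstance(node, list):
        return any(has_false_flag(item) for item in node)
    return False
```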

Querying nested mongo document [duplicate]

This question already has answers here:
MongoDB - how to query for a nested item inside a collection?
(3 answers)
Closed 6 years ago.
I have a few JSON documents stored in MongoDB that are basically error reports, i.e. each document contains a nested data structure describing mismatches between datasets. They look like the following:
{
"nodename": "BLAH"
"errors": {
"PC": {
"value": {
"PC93196": [
"post",
"casper"
],
"PC03196": [
"netdb"
]
}
}
}
}
So in the above case, a process/tool determines the PC value: post and casper reported PC93196, while netdb reported a different value, PC03196.
How do I query for all documents that have errors where the process/tool casper is involved?
The document schema doesn't really lend itself to being queried very easily in the way you're trying to. This is because the value object is composed of named objects, rather than being an Array of objects. I.e. the value object has two properties: PC93196 and PC03196.
If your document was structured slightly differently it would be much easier to query in the way you want. For example a document structure like this:
{
"nodename": "BLAH",
"errors": {
"PC": [
{
"name": "PC93196",
"type": [
"post",
"casper"
]
},
{
"name": "PC03196",
"type": [
"netdb"
]
}
]
}
}
Would allow you to write a query like this:
db.errors.aggregate([{$unwind:"$errors.PC"},{$match:{"errors.PC.type":"casper"}}])
which would provide a result:
{ "_id" : ObjectId("5763c480dad14fd061657f91"), "nodename" : "BLAH", "errors" : { "PC" : { "name" : "PC93196", "type" : [ "post", "casper" ] } } }