Update nested property in JSONB field in PostgreSQL

I have a data structure like:
id bigint
articles jsonb
Example:
id: 1,
articles: {
    "1": {
        "title": "foo"
    },
    "2": {
        "title": "bar"
    }
}
I want to rename the title field (for example, to articleTitle). Is there an easy way to do that?
Edit: I can do that with a string replace, but can I do it operating on jsonb, e.g. using jsonb_set()?
UPDATE person
SET articles = replace(articles::TEXT,'"title":','"articleTitle":')::jsonb
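For reference, one jsonb-native sketch that rebuilds each entry instead of round-tripping through text; it assumes the person table from the question, that every entry carries a title key, and that articles is a non-empty object:
UPDATE person
SET articles = (
    SELECT jsonb_object_agg(
               key,
               (value - 'title') || jsonb_build_object('articleTitle', value -> 'title')
           )
    FROM jsonb_each(articles)    -- unnest the top-level keys of the row being updated
)
WHERE articles IS NOT NULL;
Here jsonb_each() splits articles into (key, value) pairs, the title key is dropped from each value and re-added as articleTitle, and jsonb_object_agg() reassembles the object.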

Related

Using SQLAlchemy to query the top x items in a Postgres JSONB field

I have a model with a JSONB field (Postgres).
from sqlalchemy.dialects.postgresql import JSONB

class Foo(Base):
    __tablename__ = 'foo'
    data = Column(JSONB, nullable=False)
    ...
where the data field looks like this:
[
    {
        "name": "a",
        "value": 0.0143
    },
    {
        "name": "b",
        "value": 0.0039
    },
    {
        "name": "c",
        "value": 0.1537
    },
    ...
]
and, given a search_name, I want to return the rows where name is among the top x names ranked by value.
I know I can access the fields with something like:
res = session.query(Foo).filter(Foo.data['name'] == search_name)
but how do I order the JSON array and extract the top names from it?
A SQLAlchemy solution would be preferred, but a plain SQL one I can use as raw is also fine.
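A plain-SQL sketch of one approach, assuming value is always numeric, taking x = 5, and inlining the search_name as 'a':
SELECT f.*
FROM foo f
WHERE 'a' IN (                                   -- search_name
    SELECT elem ->> 'name'
    FROM jsonb_array_elements(f.data) AS elem    -- unnest this row's array
    ORDER BY (elem ->> 'value')::numeric DESC
    LIMIT 5                                      -- x
);
The correlated subquery unnests each row's array with jsonb_array_elements(), ranks the elements by value, and keeps the row only if the searched name appears in its top 5. The same statement could be passed to SQLAlchemy as a raw query via text().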

How to add a new property to an object nested in 2 arrays (JSONB PostgreSQL)

I am looking for help adding a property to a JSON object nested in two arrays.
Example table:
CREATE TABLE events (
    seq_id BIGINT PRIMARY KEY,
    data JSONB NOT NULL,
    evt_type TEXT NOT NULL
);
An example of the JSONB data column in my table:
{
    "Id": "1",
    "Calendar": {
        "Entries": [
            {
                "Id": 1,
                "SubEntries": [
                    {
                        "ExtId": {
                            "Id": "10",
                            "System": "SampleSys"
                        },
                        "Country": "FR",
                        "Details": [
                            {
                                "ItemId": "1",
                                "Quantity": 10
                            },
                            {
                                "ItemId": "2",
                                "Quantity": 3
                            }
                        ],
                        "RequestId": "222",
                        "TypeId": 1
                    }
                ],
                "OrderResult": null
            }
        ],
        "OtherThingsArray": []
    }
}
So I need to add new properties to a SubEntries object based on the Id value of its ExtId object (the WHERE clause). How can I do that, please?
Thanks a lot
You can use jsonb_set() for this, which takes the path assignment as a text[] (array of text values):
SELECT jsonb_set(
    input_jsonb,
    -- the starting jsonb document
    path_array,
    -- '{i,j,k[, ...]}'::text[]: the series {i, j, k} progresses at each level with either the (string) key or the (integer) index (starting at zero) denoting the key (or index) to populate
    new_jsonb_value,
    -- if adding a key-value pair, you can use something like to_jsonb('new_value_string'::text) here to force things to format correctly
    create_if_not_exists_boolean
    -- if adding new keys/indexes, give this as true so they'll be appended; otherwise you'll be limited to overwriting existing keys
)
Example
JSON
{
    "array1": [
        {
            "id": 1,
            "detail": "test"
        }
    ]
}
SQL
SELECT jsonb_set(
    '{"array1": [{"id": 1, "detail": "test"}]}'::jsonb,
    '{array1,0,upd}'::TEXT[],
    to_jsonb('new'::text),
    true
)
Output
{
    "array1": [
        {
            "id": 1,
            "upd": "new",
            "detail": "test"
        }
    ]
}
Note that you can only add one nominal level of depth at a time (i.e. either a new key or a new index entry), but you can circumvent this by providing the full depth in the assignment value, or by using jsonb_set() iteratively:
SELECT
    jsonb_set(
        jsonb_set('{"array1": [{"id": 1, "detail": "test"}]}'::jsonb, '{array1,0,upd}'::TEXT[], '[{"new": "val"}]'::jsonb, true),
        '{array1,0,upd,0,check}'::TEXT[],
        '"test_val"',
        true
    )
would be required to produce
{
    "array1": [
        {
            "id": 1,
            "upd": [
                {
                    "new": "val",
                    "check": "test_val"
                }
            ],
            "detail": "test"
        }
    ]
}
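The full-depth alternative supplies the whole nested structure in the assignment value, so a single call produces the same result:
SELECT jsonb_set(
    '{"array1": [{"id": 1, "detail": "test"}]}'::jsonb,
    '{array1,0,upd}'::TEXT[],
    '[{"new": "val", "check": "test_val"}]'::jsonb,    -- nested structure provided in one value
    true
)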
If you need other, more complex logic to evaluate which values need to be added to which objects, you can try:
- dynamically creating a set of jsonb_set() statements for execution
- using the outputs from queries of jsonb_each() and jsonb_array_elements() to evaluate the row logic down at the SubEntries level, and then using jsonb_object_agg() and jsonb_agg() as appropriate to build the document back up to the root level from the resultant object-arrays and key-value collections; a sketch of this follows
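A minimal sketch of that rebuild approach against the question's events table. It assumes non-empty Entries/SubEntries arrays (jsonb_agg() returns NULL over an empty set); the NewProp key and its value are hypothetical placeholders, and '10' stands in for the ExtId.Id match from the question:
UPDATE events e
SET data = jsonb_set(
    e.data,
    '{Calendar,Entries}'::TEXT[],
    (
        SELECT jsonb_agg(
                   jsonb_set(
                       entry,
                       '{SubEntries}'::TEXT[],
                       (
                           SELECT jsonb_agg(
                                      CASE
                                          WHEN sub #>> '{ExtId,Id}' = '10'               -- the WHERE clause from the question
                                          THEN sub || '{"NewProp": "value"}'::jsonb      -- hypothetical new property
                                          ELSE sub
                                      END
                                  )
                           FROM jsonb_array_elements(entry -> 'SubEntries') AS sub
                       )
                   )
               )
        FROM jsonb_array_elements(e.data #> '{Calendar,Entries}') AS entry
    )
)
WHERE e.data #> '{Calendar,Entries}' IS NOT NULL;
Each Entries element is unnested, its SubEntries array is rebuilt with the matching objects merged via ||, and jsonb_agg() reassembles both arrays before jsonb_set() writes the result back into the document.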

How to refer to the currently updating record in a MongoDB query?

Below is my collection:
[{ documentId: 123, id: uniqueValue }]
Expected result:
[{ documentId: 123, id: id1, uniqueKey: uniqueValue }]
How do I refer to the "id" field of the records currently being updated? Also, the id field can be anything; my outer query gives me the field name.
db.supplier.updateMany( { documentId : 123}, { $set: { "uniqueKey": id} } );
So in the above query, "id" comes in like outerObject.mapping.idColumn, which I want to substitute into the query.
The whole point of doing this is to create an index on the field, and the current collection does not have a fixed field name on which I want to fire a query.
Example
There are two collections, collectionOne and collectionTwo.
For each document in collectionOne there are multiple documents in collectionTwo. The docId is used for the lookup.
collectionOne
[{
    docId: 123,
    col1: lookupColumn,
    metaData: "some metaData",
    extra: "extra columns"
}, ... ]
collectionTwo
[{
    docId: 123,
    lookupColumn: "1",
    a: "A",
    b: "B" ....
},
{
    docId: 123,
    lookupColumn: "2",
    a: "A",
    b: "B" ....
},
{
    docId: 123,
    lookupColumn: "3",
    a: "A",
    b: "B" ....
}, .....]
lookupColumn in collectionTwo may have a different name; the mapping of that name is given in collectionOne by the col1 field (which is always the same). In this example the col1 value is lookupColumn, so I want to create a field newKey and copy the value of lookupColumn into it.
So I came up with the query below:
db.collectionOne.find({}).forEach(function(obj) {
    if (obj.columns) {
        var existingColumn = obj.columns.col1;
        db.collectionTwo.updateMany({ docId: obj.docId }, { $set: { "newKey": existingColumn } });
    }
});
The problem is that I am not able to pick an existing field's value using the variable existingColumn. I have tried using $ as well, which inserts the "$"-prefixed string literally as the newKey value.
I have updated the query with one more loop over collectionTwo, but I feel that is unoptimized and unnecessary.
To go from
{documentId: 123, id: uniqueValue }
to
{documentId: 123, id: id1, uniqueKey: uniqueValue }
Use the pipeline style of update, which lets you use aggregation syntax:
db.collection.update({documentId: 123}, [{$set:{uniqueKey:"$id", id:"id1"}}])
EDIT
The latest edit to the question makes this a lot clearer.
You were almost there.
In MongoDB 4.2, the second argument to updateMany accepts either an update document like you were using:
db.collectionTwo.updateMany( { docId: obj.docId}, { $set: { "newKey": existingColumn} } );
Or it can accept an aggregation-like pipeline, though not all stages are available. For this use case, if you make that second argument an array so that it is recognized as a pipeline, you can use the "$variable" syntax. Since you already have the field name in a JavaScript variable, prepend "$" to the field name:
db.collectionTwo.updateMany( { docId: obj.docId}, [{ $set: { "newKey": "$" + existingColumn} }] );

How to do PostgreSQL '||' operator with knex.update

I am working with the following data in the table my_table:
[
    {
        "item": {
            "id": 1,
            "data": {
                "name": "ABC",
                "status": "Active"
            }
        }
    },
    {
        "item": {
            "id": 2,
            "data": {
                "name": "DEF",
                "status": "Active"
            }
        }
    }
]
I would like to update the name property of data, keeping the rest of data intact. A PostgreSQL query for that purpose would look like this:
UPDATE my_table SET data = data || '{"name":"GHI"}' WHERE id = 1;
However, I am struggling to achieve this with knex, as I've tried:
knex('my_table')
    .update({ data: knex.raw('data || ?', [{ name: 'GHI' }]) })
    .where('id', 1);
and many other similar queries, but in vain. If you have any ideas about this, please share them below. Thanks in advance!
Using knex, you indeed have to use raw expressions to update a single field inside a JSONB column.
However, objection.js, an ORM built on top of knex, has some additional support for JSONB operations.
With objection.js your update would look like:
await MyTableModel.query(knex).update({ 'data:name': 'GHI' }).where('id', 1);
Which outputs SQL like this (with bindings ["GHI", 1]):
update "my_table" set "data" = jsonb_set("data", '{name}', ?, true) where "id" = ?
Runkit example https://runkit.com/embed/0dai0bybplxv

How to set unique compound index on json array elements?

I have a table tags with a json column translations. This column looks like this:
translations: [
    { text: "Tag1", language: "en-us" },
    { text: "Tag1_cn", language: "zh-cn" },
    ...
]
Is it possible to set a unique index on text+language across all rows? I would like to prevent inserting a tag with the same text+language into the tags table. So far I have been using two tables, tags and tags_translations, but I wanted to avoid the extra join when querying for tags.
e.g.
CREATE TABLE jsondemo (blah json);
INSERT INTO jsondemo(blah) VALUES ('[
    { "text": "Tag1", "language": "en-us" },
    { "text": "Tag1_cn", "language": "zh-cn" }]');