How to set unique compound index on json array elements? - postgresql

I have a table tags with a json column translations. This column looks like this:
translations: [
{ "text": "Tag1", "language": "en-us" },
{ "text": "Tag1_cn", "language": "zh-cn" },
...
]
Is it possible to set a unique index on text+language across all rows? I would like to prevent inserting a tag with the same text+language into the tags table. So far I have been using two tables, tags and tags_translations, but I wanted to avoid the extra join when querying for tags.
e.g.
CREATE TABLE jsondemo (blah json);
INSERT INTO jsondemo(blah) VALUES ('[
{ "text": "Tag1", "language": "en-us" },
{ "text": "Tag1_cn", "language": "zh-cn" }]');
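A plain unique index cannot enforce this, because each row contributes several (text, language) pairs rather than a single indexable value. One workaround (a minimal sketch, assuming the tags table has an id primary key; the function and trigger names are made up) is a BEFORE trigger that checks the unnested array elements against the other rows:

```sql
-- Sketch only: assumes tags(id PRIMARY KEY, translations json).
CREATE OR REPLACE FUNCTION check_translation_unique() RETURNS trigger AS $$
BEGIN
  IF EXISTS (
    SELECT 1
    FROM tags t,
         jsonb_array_elements(t.translations::jsonb)   AS existing(tr),
         jsonb_array_elements(NEW.translations::jsonb) AS incoming(tr)
    WHERE t.id <> NEW.id                       -- only checks against other rows
      AND existing.tr->>'text'     = incoming.tr->>'text'
      AND existing.tr->>'language' = incoming.tr->>'language'
  ) THEN
    RAISE EXCEPTION 'duplicate text+language translation';
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER translations_unique
  BEFORE INSERT OR UPDATE ON tags
  FOR EACH ROW EXECUTE FUNCTION check_translation_unique();  -- Postgres 11+
```

Note the race window under concurrent inserts: only a real UNIQUE constraint on a separate tags_translations table is airtight, which is why the two-table design is usually kept despite the extra join.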

Related

How to add new property in an object nested in 2 arrays (JSONB postgresql)

I am looking for help with adding a property to a json object nested in 2 arrays.
Example table:
CREATE TABLE events (
seq_id BIGINT PRIMARY KEY,
data JSONB NOT NULL,
evt_type TEXT NOT NULL
);
example of my JSONB data column in my table:
{
"Id":"1",
"Calendar":{
"Entries":[
{
"Id": 1,
"SubEntries":[
{
"ExtId":{
"Id":"10",
"System": "SampleSys"
},
"Country":"FR",
"Details":[
{
"ItemId":"1",
"Quantity":10
},
{
"ItemId":"2",
"Quantity":3
}
],
"RequestId":"222",
"TypeId":1
}
],
"OrderResult":null
}
],
"OtherThingsArray":[
]
}
}
So I need to add new properties into a SubEntries object based on the Id value of the ExtId object (the WHERE clause). How can I do that, please?
Thanks a lot
You can use jsonb_set() for this, which takes the target path as a text[] (array of text values):
SELECT jsonb_set(
    input_jsonb,
    -- the starting jsonb document
    path_array,
    -- '{i,j,k[, ...]}'::text[]: at each level the element is either a
    -- (string) key or a (zero-based integer) index denoting the key or
    -- index to populate
    new_jsonb_value,
    -- if adding a key-value pair, you can use something like
    -- to_jsonb('new_value_string'::text) here to force correct formatting
    create_if_not_exists_boolean
    -- if adding new keys/indexes, pass true so they are appended;
    -- otherwise you are limited to overwriting existing keys
)
Example
json
{
"array1": [
{
"id": 1,
"detail": "test"
}
]
}
SQL
SELECT
jsonb_set('{"array1": [{"id": 1, "detail": "test"}]}'::jsonb,
'{array1,0,upd}'::TEXT[],
to_jsonb('new'::text),
true
)
Output
{
"array1": [
{
"id": 1,
"upd": "new",
"detail": "test"
}
]
}
Note that you can only add one level of depth at a time (i.e. either a new key or a new index entry), but you can work around this by providing the full depth in the assignment value, or by using jsonb_set() iteratively:
select
jsonb_set(
jsonb_set('{"array1": [{"id": 1, "detail": "test"}]}'::jsonb, '{array1,0,upd}'::TEXT[], '[{"new": "val"}]'::jsonb, true),
'{array1,0,upd,0,check}'::TEXT[],
'"test_val"',
true)
would be required to produce
{
"array1": [
{
"id": 1,
"upd": [
{
"new": "val",
"check": "test_val"
}
],
"detail": "test"
}
]
}
If you need other, more complex logic to evaluate which values need to be added to which objects, you can try:
dynamically creating a set of jsonb_set() statements for execution
using the outputs of jsonb_each() and jsonb_array_elements() to evaluate the row logic down at the SubEntries level, and then using jsonb_object_agg() and jsonb_agg() as appropriate to rebuild the document up to the root level from the resulting object arrays and key-value collections
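The second approach can be sketched against the events table above; the new key name "NewProp" and the matched Id value '10' are made up for illustration. Each array is decomposed with jsonb_array_elements() and rebuilt with jsonb_agg():

```sql
-- Sketch: add "NewProp" to every SubEntries object whose ExtId.Id = '10'.
UPDATE events e
SET data = jsonb_set(
  e.data,
  '{Calendar,Entries}',
  (
    SELECT jsonb_agg(
      jsonb_set(
        entry,
        '{SubEntries}',
        (
          SELECT jsonb_agg(
            CASE
              WHEN sub->'ExtId'->>'Id' = '10'
              THEN sub || '{"NewProp": "value"}'::jsonb
              ELSE sub
            END
          )
          -- note: jsonb_agg over an empty array yields NULL;
          -- guard with COALESCE if empty SubEntries can occur
          FROM jsonb_array_elements(entry->'SubEntries') AS sub
        )
      )
    )
    FROM jsonb_array_elements(e.data->'Calendar'->'Entries') AS entry
  )
);
```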

Update nested property in JSONB field in PostgreSQL

I have data structure like:
id bigint
articles jsonb
Example:
id: 1,
articles: {
"1": {
"title": "foo"
},
"2": {
"title": "bar"
}
}
I want to rename the title field (for example, to articleTitle). Is there an easy way to do that?
Edit: I can do it with a string replace, but can I do it by operating on the jsonb, e.g. using jsonb_set()?
UPDATE person
SET articles = replace(articles::TEXT,'"title":','"articleTitle":')::jsonb
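A jsonb-native alternative (a sketch, assuming the same person table and that every article object has a title key): the raw text replace also rewrites any value that happens to contain the string "title":, so rebuilding each nested object with jsonb_each() and jsonb_object_agg() is safer:

```sql
-- Sketch: move "title" to "articleTitle" inside each nested article object.
UPDATE person
SET articles = (
  SELECT jsonb_object_agg(
    key,
    (value - 'title') || jsonb_build_object('articleTitle', value->'title')
  )
  -- note: if articles can be an empty object, jsonb_object_agg over zero
  -- rows returns NULL; guard with COALESCE if needed
  FROM jsonb_each(articles)
);
```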

Postgres jsonb document storage like mongo

I have simple table like this
CREATE TABLE things (
id SERIAL PRIMARY KEY,
data jsonb
);
and I want to store complex nested json like this
{
"id": 1,
"title": "thing",
"things": [
{
"title": "thing 1",
"moreThings": [
{ "title": "more thing 1" }
]
}
]
}
but I found it difficult to insert and manipulate complex nested json.
I want to do the same thing as Mongo's push and populate. Can I achieve the same functionality with Postgres json storage? If so, how? Or is it an anti-pattern, and should I just write normal tables with relations?
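There is no direct equivalent of Mongo's populate, but $push maps reasonably well onto jsonb_set() combined with || (jsonb concatenation). A sketch against the things table above, appending a made-up object to the nested moreThings array of the first element of things:

```sql
-- Sketch of a Mongo-style $push: appending an object to a jsonb array
-- with || concatenates it onto the end.
UPDATE things
SET data = jsonb_set(
  data,
  '{things,0,moreThings}',
  (data->'things'->0->'moreThings') || '{"title": "more thing 2"}'::jsonb
)
WHERE id = 1;
```

For anything beyond occasional appends like this, normal tables with relations are usually the less painful design.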

On mongoimport, can I easily map a field to _id on json import?

I need to import a dataset that looks like the following (abbreviated) example.
[
{
"itemId": 1,
"name": "Item",
"qty": "10"
},
...
]
The tricky part is that repeated imports will keep inserting without raising any exception, even though the itemId field is a valid identifier that could serve as the _id field if it were accepted as such.
Does any option akin to --idField itemId exist?

How to maintain uniqueness based on a particular field in an array without using a unique index

I have documents like this.
[{
"_id" : ObjectId("aaa"),
"host": "host1",
"artData": [
{
"aid": "56004721",
"accessMin": NumberLong(1481862180)
},
{
"aid": "56010082",
"accessMin": NumberLong(1481861880)
},
{
"aid": "55998802",
"accessMin": NumberLong(1481861880)
}
]
},
{
"_id" : ObjectId("bbb"),
"host": "host2",
"artData": [
{
"aid": "55922560",
"accessMin": NumberLong(1481862000)
},
{
"aid": "55922558",
"accessMin": NumberLong(1481861880)
},
{
"aid": "55940094",
"accessMin": NumberLong(1481861760)
}
]
}]
While updating any document, a duplicate "aid" should not be added to the array again.
One option I found is using a unique index on the artData.aid field, but building indexes is not preferred, as I won't need them per the requirement.
Is there any way to solve this?
Option 1: While designing the Schema for that document, use unique: true.
for example:
var newSchema = new Schema({
artData: [
{
aid: { type: String, unique: true },
accessMin: Number
}]
});
module.exports = mongoose.model('newSchema', newSchema );
Option 2: refer to this link on avoiding duplicates
As per this doc, you may use a multikey index as follows:
{ "artData.aid": 1 }
That being said, since you don't want to use a multikey index, another option for insertion is to:
1. Query the document to find artData entries that match the aid
2. Difference the result set with the set you are about to insert
3. Remove the items that match your query
4. Insert the remaining items from step 2
Ideally your query from step 1 won't return a set that is too large, making this a surprisingly fast operation. That said, it really depends on the number of duplicates you expect to insert. If that number is very high, the query from step 1 could return a large set of items, in which case this solution may not be appropriate, but it's all I've got for you =(.
My suggestion is to really re-evaluate the reason for not using multikey indexing.