Couchbase N1QL update query for Array of objects - nosql

I am new to N1QL. I want to find all records in a bucket whose state is "ABC" and replace it with "DEF". Can you please help me create this query and the index for it?
Sample record:
{
  "userTypeNm": "pro",
  "userStateArray": [
    {
      "bindCd": "1591779772457",
      "name": "########",
      "state": "ABC",
      "ts": "1591779772457"
    }
  ],
  "vts": "1591779772457",
  "ets": "1591779772457",
  "daoObj": {
    "authDaObj": {
      "data": "eyJ0cmFuc2FjdGlvbklkIjoiVVNMT0dPTi0xN2U3YWQ5ZC0wN",
      "id": "829892839892"
    }
  }
}

Create a partial array index on state, then run an UPDATE that rewrites the matching array elements. Note that the UPDATE repeats the index's userTypeNm = "pro" predicate; without it the partial index cannot be used:
CREATE INDEX ix1 ON default
(DISTINCT ARRAY v.state FOR v IN userStateArray END) WHERE userTypeNm = "pro";
UPDATE default AS d
SET usa.state = "DEF" FOR usa IN d.userStateArray WHEN usa.state = "ABC" END
WHERE d.userTypeNm = "pro"
AND ANY v IN d.userStateArray SATISFIES v.state = "ABC" END;
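To spot-check before and after the update (a quick sketch, assuming the same default bucket), an UNNEST query lists any documents still holding "ABC":
SELECT META(d).id, usa.state
FROM default AS d
UNNEST d.userStateArray AS usa
WHERE d.userTypeNm = "pro" AND usa.state = "ABC";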
https://docs.couchbase.com/server/current/n1ql/n1ql-language-reference/update.html

Related

Postgres - jsonb create and update new attribute in column

I have a column attributes like this:
{
  "a:value": "123",
  "a:origin": "abc"
}
I want to create a new attribute which should look like this:
"abcKey:value": {
"value": "123ABC",
"version": "1"
}
So, in the end attributes should look like this:
{
  "a:value": "123",
  "a:origin": "abc",
  "abcKey:value": {
    "value": "123ABC",
    "version": "1"
  }
}
How can I do this?
I tried this:
update my_table
set attributes = jsonb_set(attributes, '{abcKey:value,value}', '"123ABC"'),
    attributes = jsonb_set(attributes, '{abcKey:value,version}', '"1"')
where ...;
But this does not work; I think I have to create the new attribute first. How can I create and update this new attribute (ideally in one step)?
Thank you very much!
I wrote two samples for you:
-- if you already have both objects as jsonb
with tb as (
  select
    '{
      "a:value": "123",
      "a:origin": "abc"
    }'::jsonb a1,
    '{"abcKey:value": {
      "value": "123ABC",
      "version": "1"
    }}'::jsonb a2
)
select a1, a2, a1 || a2 from tb; -- you can concatenate jsonb objects with ||
-- if you need to build the json objects from keys and values
with tb as (
  select
    '{
      "a:value": "123",
      "a:origin": "abc"
    }'::jsonb a1
)
select a1 || jsonb_build_object('abcKey:value', jsonb_build_object('value', '123ABC', 'version', '1')) from tb;
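Applied to the question's table (assuming the my_table and attributes names from the question), the concatenation approach creates and fills the new attribute in a single UPDATE:
update my_table
set attributes = attributes || jsonb_build_object(
  'abcKey:value',
  jsonb_build_object('value', '123ABC', 'version', '1')
)
where ...;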

Postgres query array elements inside JSON?

I have a Postgres 11 table called fb_designs that has a json column (items) with data structured like so:
{
  "listings": [
    {
      "id": "KTyneMdrAhAEKyC9Aylf",
      "active": true
    },
    {
      "id": "ZcjK9M4tuwhWWdK8WcfX",
      "active": false
    }
  ]
}
and a tags column in character varying[] format, like {dWLaRWChaThFPH6b3BpA,BrYiPaUiou020hsmRugR}. Both can be of any length.
What I am trying to do is produce some queries that, in layman's terms, say:
show me all results where all items.listings entries have an
active status and tags contains BrYiPaUiou020hsmRugR
I got this far; however, I'm not sure how to add in WHERE uid = 'foo', WHERE tags contains 'foo', 'bar' and WHERE title is like %hoot%:
SELECT id, title, tags, selected_preview_image, items
FROM fb_designs r, json_array_elements(r.items#>'{listings}') obj
WHERE obj->>'active' = 'true'
GROUP BY id
If they are all true, then none of them are false. Sounds like you want to negate the containment operation over false.
select * from fb_designs
where not items::jsonb #> '{"listings":[{"active": false}]}'
  and tags && ARRAY['BrYiPaUiou020hsmRugR']::varchar[];
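The remaining filters from the question are then ordinary predicates appended to the same WHERE clause (a sketch: uid and title are column names assumed from the question, and @> checks that tags contains all of the listed values):
select * from fb_designs
where not items::jsonb #> '{"listings":[{"active": false}]}'
  and tags @> ARRAY['foo', 'bar']::varchar[]
  and uid = 'foo'
  and title ilike '%hoot%';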

Creating an AND query on a list of items in Azure Cosmos

I'm building an application in Azure Cosmos and I'm having trouble creating a query. Using the dataset below, I want to create a query that only finds characterId "Susan" by searching for all characters that have both the traitId "Athletic" and the traitId "Slim".
Here is my JSON data set
[
  {
    "characterId": "Bob",
    "traits": [
      { "traitId": "Athletic" },
      { "traitId": "Overweight" }
    ]
  },
  {
    "characterId": "Susan",
    "traits": [
      { "traitId": "Athletic" },
      { "traitId": "Slim" }
    ]
  },
  {
    "characterId": "Jerry",
    "traits": [
      { "traitId": "Slim" },
      { "traitId": "Strong" }
    ]
  }
]
The closest I've come is this query, but it acts as an OR statement, and what I want is an AND statement.
SELECT * FROM Characters f WHERE f.traits IN ("Athletic", "Slim")
Any help is greatly appreciated.
EDITED: I figured out the answer to this question. If anyone is interested this query gives the results I was looking for:
SELECT * FROM Characters f
WHERE EXISTS (SELECT VALUE t FROM t IN f.traits WHERE t.traitId = 'Athletic')
AND EXISTS (SELECT VALUE t FROM t IN f.traits WHERE t.traitId = 'Slim')
The answer that worked for me is to use EXISTS clauses with subqueries that search the traits list. In my program I can use a StringBuilder to create a SQL statement that concatenates an AND EXISTS clause for each of the traits I want to find:
SELECT * FROM Characters f
WHERE EXISTS (SELECT VALUE t FROM t IN f.traits WHERE t.traitId = 'Athletic')
AND EXISTS (SELECT VALUE t FROM t IN f.traits WHERE t.traitId = 'Slim')
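A minimal sketch of that dynamic construction (JavaScript here, since the exact client language isn't shown; the requiredTraits list is a hypothetical input, and production code should validate or parameterize the values rather than interpolating them):
// Build one AND-ed EXISTS clause per required trait
const requiredTraits = ['Athletic', 'Slim'];
const clauses = requiredTraits.map(trait =>
  `EXISTS (SELECT VALUE t FROM t IN f.traits WHERE t.traitId = '${trait}')`
);
const query = `SELECT * FROM Characters f WHERE ${clauses.join(' AND ')}`;
console.log(query);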

Cross-venue visitor reporting approach in Location Based Service system

I'm looking for an approach to build a cross-venue visitor report for my client. He wants an HTTP API that returns the total count of unique customers who have visited more than one shop in a given day range (the API must respond in 1-2 seconds).
The raw data sample (...millions of records in reality):
--------------------------
DAY | CUSTOMER | VENUE
--------------------------
1 | cust_1 | A
2 | cust_2 | A
3 | cust_1 | B
3 | cust_2 | A
4 | cust_1 | C
5 | cust_3 | C
6 | cust_3 | A
Now, I want to calculate the cross-visitor report. IMO the steps would be as follows (a SQL sketch of the same logic appears after step 2):
Step 1: aggregate raw data from day 1 to 6
--------------------------
CUSTOMER | VENUE VISIT
--------------------------
cust_1 | [A, B, C]
cust_2 | [A]
cust_3 | [A, C]
Step 2: produce the final result
Total unique cross-customers: 2 (cust_1 and cust_3)
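For reference, the report is a single GROUP BY / HAVING in SQL terms (a sketch over a hypothetical visits(day, customer, venue) table); whatever store is chosen has to reproduce this shape quickly:
SELECT COUNT(*) AS cross_customers
FROM (
  SELECT customer
  FROM visits
  WHERE day BETWEEN 1 AND 6
  GROUP BY customer
  HAVING COUNT(DISTINCT venue) > 1
) multi_venue;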
I've tried some solutions:
I first used MongoDB to store the data, then Flask to write an API on top of MongoDB's utilities: aggregation, $addToSet, $group, count... But the API's response time was unacceptable.
Then I switched to Elasticsearch, hoping its aggregation command sets would help, but they do not support a pipeline group command on the output of the first "terms" aggregation.
After that, I read about Redis Sets, Sorted Sets, ... but they couldn't help.
Could you please give me a clue to solve my problem?
Thanks in advance!
You can easily do this with Elasticsearch by leveraging one date_histogram aggregation to bucket by day, two terms aggregations (first bucket by customer and then by venue), and then only selecting the customers who visited more than one venue on any given day using the bucket_selector pipeline aggregation. It looks like this:
POST /sales/_search
{
  "size": 0,
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field": "date",
        "interval": "day"
      },
      "aggs": {
        "customers": {
          "terms": {
            "field": "customer.keyword"
          },
          "aggs": {
            "venues": {
              "terms": {
                "field": "venue.keyword"
              }
            },
            "cross_selector": {
              "bucket_selector": {
                "buckets_path": {
                  "venues_count": "venues._bucket_count"
                },
                "script": {
                  "source": "params.venues_count > 1"
                }
              }
            }
          }
        }
      }
    }
  }
}
In the result set, you'll get customers 1 and 3 as expected.
UPDATE:
Another approach involves using a scripted_metric aggregation in order to implement the logic yourself. It's a bit more complicated and might not perform well depending on the number of documents and hardware you have, but the following algorithm would yield the response 2 exactly as you expect:
POST sales/_search
{
  "size": 0,
  "aggs": {
    "unique": {
      "scripted_metric": {
        "init_script": "params._agg.visits = new HashMap()",
        "map_script": "def cust = doc['customer.keyword'].value; def venue = doc['venue.keyword'].value; def venues = params._agg.visits.get(cust); if (venues == null) { venues = new HashSet(); } venues.add(venue); params._agg.visits.put(cust, venues)",
        "combine_script": "def merged = new HashMap(); for (v in params._agg.visits.entrySet()) { def cust = merged.get(v.key); if (cust == null) { merged.put(v.key, v.value) } else { cust.addAll(v.value); } } return merged",
        "reduce_script": "def merged = new HashMap(); for (agg in params._aggs) { for (v in agg.entrySet()) {def cust = merged.get(v.key); if (cust == null) {merged.put(v.key, v.value)} else {cust.addAll(v.value); }}} def unique = 0; for (m in merged.entrySet()) { if (m.value.size() > 1) unique++;} return unique"
      }
    }
  }
}
Response:
{
  "took": 1413,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 7,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "unique": {
      "value": 2
    }
  }
}

How to update a nested object (i.e., a doc field having object type) in a single document in MongoDB

I have a doc collection which has an object-type field named price (see below). I just want to update/insert that field by adding new key-value pairs to it.
Suppose I have this as the collection (in the db):
[
  {
    _id: 1,
    price: {
      amazon: 102.1,
      apple: 500
    }
  },
  ....
  ....
];
Now I want to write a query which either updates keys in price or inserts them if they do not exist in price.
Let's suppose these as the input data to update/insert with:
var key1 = 'ebay', value1 = 300; // will insert
var key2 = 'amazon', value2 = 100; // will update
assume doc having _id: 1 for now.
Something like the $addToSet operator? (Though $addToSet only works for arrays, and I want to work within an object.)
Expected output:
[
  {
    _id: 1,
    price: {
      amazon: 100, // updated
      apple: 500,
      ebay: 300 // inserted
    }
  },
  ....
  ....
];
How can I do/achieve this?
Thanks.
You could construct the update document dynamically to use the dot notation and the $set operator to do the update correctly. Using your example above, you'd want to run the following update operation:
db.collection.update(
  { "_id": 1 },
  { "$set": { "price.ebay": 300, "price.amazon": 100 } }
)
So, given the data input, you would want to construct an update document like { "price.ebay": 300, "price.amazon": 100 }
With the inputs as you have described
var key1 = 'ebay', value1 = 300; // will insert
var key2 = 'amazon', value2 = 100; // will update
Construct the update object:
var query = { "_id": 1 },
update = {};
update["price."+key1] = value1;
update["price."+key2] = value2;
db.collection.update(query, {"$set": update});
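The same idea extends to any number of keys; a small shell sketch (the prices map is a hypothetical input) that builds the dotted $set document in a loop:
// Hypothetical input: key/value pairs to upsert into the price object
var prices = { ebay: 300, amazon: 100 };
var update = {};
Object.keys(prices).forEach(function (k) {
  update["price." + k] = prices[k]; // dot notation targets the nested field
});
db.collection.update({ "_id": 1 }, { "$set": update });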