Grails-Mongo Check Contains in Domain's List Criteria Query - mongodb

I'm using Grails 3.3.5 with GORM version 6.1.9.
In my application I've created the following domain class:
class Camera {
    String cameraId
    List<String> typesInclude
    static constraints = {
        typesInclude nullable: false
    }
}
Now I've added some records to the camera collection:
db.camera.find().pretty()
{
    "_id" : NumberLong(1),
    "version" : NumberLong(0),
    "typesInclude" : [
        "T1"
    ],
    "cameraId" : "cam1"
}
{
    "_id" : NumberLong(2),
    "version" : NumberLong(0),
    "typesInclude" : [
        "T2"
    ],
    "cameraId" : "cam2"
}
{
    "_id" : NumberLong(3),
    "version" : NumberLong(0),
    "typesInclude" : [
        "T2",
        "T3"
    ],
    "cameraId" : "cam3"
}
Now when I try to get cameras by a type such as T2, I'm unable to get results using the following function:
def getCameraListByType(String type){
    def cameraInstanceList = Camera.createCriteria().list {
        ilike("typesInclude", "%${type}%")
    }
    return cameraInstanceList
}
Any help would be appreciated.

I wouldn't use criteria queries with Mongo, as they barely reflect the document-oriented paradigm.
Use native queries instead, as they are far more powerful:
def getCameraListByType(String type){
    Camera.collection.find( [ typesInclude: [ $regex: /$type/, $options: 'i' ] ] ).collect{ it as Camera }
}
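Since typesInclude stores exact values such as "T2", a plain equality match against the array field is usually enough and avoids the regex entirely; in the Groovy helper the filter map would simply be [ typesInclude: type ]. A minimal mongo-shell sketch of the same idea, assuming an exact, case-sensitive match is acceptable:
db.camera.find({ typesInclude: "T2" })
// matches cam2 and cam3: comparing a scalar to an array field
// selects documents whose array contains that value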

Related

Add field to every document with existing data (move fields data to new field)

I have almost no experience with SQL or NoSQL.
I need to update every document so that my "Log*" fields end up under a new "Log" field.
I found some help in another Stack Overflow post, but I am still wondering how to move the data.
Thank you very much.
Original document
// collection: Services
{
    "_id" : ObjectId("5ccb4f99f4953d4894acbe79"),
    "Name" : "WebAPI",
    "LogPath" : "Product\\APIService\\",
    "LogTypeList" : [
        {
            "Name" : "ApiComCounter",
            "FileName" : "ApiComCounter.log"
        },
        {
            "Name" : "ApiService",
            "FileName" : "ApiService.log"
        }
    ]
}
Final Document
// collection: Services
{
    "_id" : ObjectId("5ccb6fa2ae8f8a5d7037a5dd"),
    "Name" : "InvoicingService",
    "Log" : {
        "LogPath" : "Product\\APIService\\",
        "LogTypeList" : [
            {
                "Name" : "ApiComCounter",
                "FileName" : "ApiComCounter.log"
            },
            {
                "Name" : "ApiService",
                "FileName" : "ApiService.log"
            }
        ]
    }
}
This requires MongoDB 4.2 or higher:
db.<collection>.updateMany({}, [
    {$set: {"Log.LogPath": "$LogPath", "Log.LogTypeList": "$LogTypeList"}},
    {$unset: ["LogPath", "LogTypeList"]}
])
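On versions older than 4.2, where pipeline-style updates are not available, one fallback is a client-side loop in the mongo shell; a rough sketch, assuming the Services collection shown above and a shell recent enough to have updateOne:
db.Services.find({ LogPath: { $exists: true } }).forEach(function (doc) {
    // copy the Log* fields under a new Log subdocument, then remove the originals
    db.Services.updateOne(
        { _id: doc._id },
        {
            $set: { Log: { LogPath: doc.LogPath, LogTypeList: doc.LogTypeList } },
            $unset: { LogPath: "", LogTypeList: "" }
        }
    );
});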

Mongodb update and delete operations in a single query

I have documents in which I would like to replace the hostUser with one of the other members of the document; I also need to delete that member's entry from the members field and add the deleted member's chips to the club chips.
Here is the sample document.
{
    "_id" : "1002",
    "hostUser" : "1111111111",
    "clubChips" : 10000,
    "requests" : {},
    "profile" : {
        "clubname" : "AAAAA",
        "image" : "0"
    },
    "tables" : [
        "SJCA3S0Wm"
    ],
    "isDeleted" : false,
    "members" : {
        "1111111111" : {
            "chips" : 0,
            "id" : "1111111111"
        },
        "2222222222" : {
            "chips" : 0,
            "id" : "2222222222"
        }
    }
}
This is what I have tried.
db.getCollection('test').updateMany({"hostUser":"1111111111"},
{"$set":{"hostUser":"2222222222"},"$unset":{"members.1111111111":""}})
This is how you would handle unset and set in a single call to updateMany. Can you please clarify what you meant by "check if the values exist in the member field"?
db.getCollection('test').updateMany(
    {"hostUser":"1111111111"},
    {
        '$set': {"hostUser":"2222222222"},
        '$unset': {"members.1111111111":""}
    }
)
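To also fold the removed member's chips into clubChips in the same statement, you could use a pipeline-style update (MongoDB 4.2+); a sketch, assuming the outgoing host's chips should simply be added to clubChips:
db.getCollection('test').updateMany(
    { "hostUser": "1111111111" },
    [
        {
            $set: {
                hostUser: "2222222222",
                // $ifNull falls back to 0 if the member entry is missing
                clubChips: { $add: ["$clubChips", { $ifNull: ["$members.1111111111.chips", 0] }] }
            }
        },
        { $unset: "members.1111111111" }
    ]
)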

Mongo query to return distinct count, large documents

I need to be able to get a count of distinct 'transactions'. The problem I'm having is that using .distinct() comes back with an error because the documents are too large.
I'm not familiar with aggregation either.
I need to be able to group by 'agencyID'; as you can see below, there are 2 different agencyIDs.
I need to be able to count transactions where the agencyID is 01721487, etc.
db.myCollection.distinct("bookings.transactions").length
This doesn't work, as I need to be able to group by agencyID, and if there are too many results I get an error saying the result is too large.
{
    "_id" : ObjectId("5624a610a6e6b53b158b4744"),
    "agencyID" : "01721487",
    "paxID" : "-530189664",
    "bookings" : [
        {
            "bookingID" : "24232",
            "transactions" : [
                {
                    "tranID" : "001",
                    "invoices" : [
                        {
                            "invNum" : "1312",
                            "type" : "r",
                            "inv_date" : "20150723",
                            "inv_time" : "0953",
                            "inv_val" : -300
                        }
                    ],
                    "tranType" : "Fee",
                    "tranDate" : "20150723",
                    "tranTime" : "0952",
                    "opCode" : "admin",
                    "udf_1" : "j s"
                }
            ],
            "acctID" : "acct11",
            "agt_id" : "xy"
        }
    ],
    "title" : "",
    "firstname" : "",
    "surname" : "f bar"
}
I've also tried this, but it didn't work for me.
Thanks for posting the data as text - here is something you could play with:
db.kieron.aggregate([
    { $unwind : "$bookings" },
    { $match : {
        "bookings.transactions" : { $exists : true, $not : { $size : 0 } }
    } },
    { $group : {
        _id : "$agencyID",
        count : { $sum : { $size : "$bookings.transactions" } }
    } }
])
As there is a nested array, we need to unwind it first; then we can take the size of the inner array.
Happy reporting!
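If by "distinct transactions" you mean counting unique transaction IDs rather than summing array sizes, one option is to unwind down to the transaction level and collect the IDs with $addToSet; a sketch, assuming tranID uniquely identifies a transaction within an agency:
db.myCollection.aggregate([
    { $unwind : "$bookings" },
    { $unwind : "$bookings.transactions" },
    { $group : {
        _id : "$agencyID",
        tranIDs : { $addToSet : "$bookings.transactions.tranID" }
    } },
    { $project : { distinctTransactionCount : { $size : "$tranIDs" } } }
])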

meteor client find is not working due to $eq

I subscribed to my server's publication as follows:
Template.observedQuestions.onCreated(function(){
    var self = this;
    self.autorun(function(){
        self.subscribe('observedQuestionsFeed');
    });
});
Now I need to fetch my data using a helper function:
Template.observedQuestions.helpers({
    observedQuestionsList : function(){
        questions = Questions.find({
            observedByUsers : {$exists: true, $elemMatch: {$eq: Meteor.userId()}}});
        return questions;
    }
});
but it does not work because $eq is not recognised in minimongo.
How can I solve this?
doc sample:
{
    "_id" : "rP4JP8jkprwwi3ZCp",
    "qUserId" : "NLLW3RBXqnbSGuZ3n",
    "type" : "question",
    "date" : ISODate("2016-02-13T11:23:10.845Z"),
    "subject" : "test",
    "question" : "test",
    "replies" : [
        {
            "rID" : "LphcqKnkTHf25SCwq",
            "rUserID" : "NLLW3RBXqnbSGuZ3n",
            "date" : ISODate("2016-02-13T11:23:10.847Z"),
            "answer" : "reply1."
        },
        {
            "rID" : "HxaohnEgxwNJLtf2z",
            "rUserID" : "NLLW3RBXqnbSGuZ22",
            "date" : ISODate("2016-02-13T11:23:10.848Z"),
            "answer" : "reply2"
        }
    ],
    "observedByUsers" : [ "Bi24LGozvtihxFrNe" ]
}
Judging from your sample Questions document, the field observedByUsers is a simple array which contains user IDs.
As a result, you could simply use the following query:
Questions.find({observedByUsers: Meteor.userId()});
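The helper could then be reduced to something like this (a sketch; returning the cursor directly is all a template helper needs):
Template.observedQuestions.helpers({
    observedQuestionsList : function(){
        // Comparing a scalar to an array field selects documents whose array
        // contains that value, in both MongoDB and minimongo.
        return Questions.find({ observedByUsers : Meteor.userId() });
    }
});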

MongoDB MapReduce producing different results for each document

This is a follow-up to this question, where I tried to solve this problem with the aggregation framework. Unfortunately, I have to wait before I can update this particular MongoDB installation to a version that includes the aggregation framework, so I have had to use MapReduce for this fairly simple pivot operation.
I have input data in the format below, with multiple daily dumps:
"_id" : "daily_dump_2013-05-23",
"authors_who_sold_books" : [
{
"id" : "Charles Dickens",
"original_stock" : 253,
"customers" : [
{
"time_bought" : 1368627290,
"customer_id" : 9715923
}
]
},
{
"id" : "JRR Tolkien",
"original_stock" : 24,
"customers" : [
{
"date_bought" : 1368540890,
"customer_id" : 9872345
},
{
"date_bought" : 1368537290,
"customer_id" : 9163893
}
]
}
]
}
I'm after output in the following format, that aggregates across all instances of each (unique) author across all daily dumps:
{
    "_id" : "Charles Dickens",
    "original_stock" : 253,
    "customers" : [
        {
            "date_bought" : 1368627290,
            "customer_id" : 9715923
        },
        {
            "date_bought" : 1368622358,
            "customer_id" : 9876234
        },
        etc...
    ]
}
I have written this map function...
function map() {
    for (var i in this.authors_who_sold_books) {
        author = this.authors_who_sold_books[i];
        emit(author.id, {customers: author.customers, original_stock: author.original_stock, num_sold: 1});
    }
}
...and this reduce function.
function reduce(key, values) {
    sum = 0;
    for (i in values) {
        sum += values[i].customers.length;
    }
    return {num_sold : sum};
}
However, this gives me the following output:
{
    "_id" : "Charles Dickens",
    "value" : {
        "customers" : [
            {
                "date_bought" : 1368627290,
                "customer_id" : 9715923
            },
            {
                "date_bought" : 1368622358,
                "customer_id" : 9876234
            }
        ],
        "original_stock" : 253,
        "num_sold" : 1
    }
}
{ "_id" : "JRR Tolkien", "value" : { "num_sold" : 3 } }
{
    "_id" : "JK Rowling",
    "value" : {
        "customers" : [
            {
                "date_bought" : 1368627290,
                "customer_id" : 9715923
            },
            {
                "date_bought" : 1368622358,
                "customer_id" : 9876234
            }
        ],
        "original_stock" : 183,
        "num_sold" : 1
    }
}
{ "_id" : "John Grisham", "value" : { "num_sold" : 2 } }
The even indexed documents have the customers and original_stock listed, but an incorrect sum of num_sold.
The odd indexed documents only have the num_sold listed, but it is the correct number.
Could anyone tell me what it is I'm missing, please?
Your problem is due to the fact that the output format of the reduce function must be identical to the format of the values emitted by the map function (see the requirements for the reduce function for an explanation): reduce can be called repeatedly, with its own previous output passed back in as input.
You need to change the code to something like the following to fix the problem:
function map() {
    for (var i in this.authors_who_sold_books) {
        author = this.authors_who_sold_books[i];
        emit(author.id, {customers: author.customers, original_stock: author.original_stock, num_sold: author.customers.length});
    }
}
function reduce(key, values) {
    var result = {customers: [], num_sold: 0, original_stock: (values.length ? values[0].original_stock : 0)};
    for (i in values) {
        result.num_sold += values[i].num_sold;
        result.customers = result.customers.concat(values[i].customers);
    }
    return result;
}
I hope that helps.
Note the change to num_sold: author.customers.length in the map function; I think that's what you want.
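Once you are able to upgrade to a version with the aggregation framework, the same pivot could be expressed as a pipeline; a rough sketch, assuming the daily dumps live in a collection named daily_dumps (a hypothetical name):
db.daily_dumps.aggregate([
    { $unwind : "$authors_who_sold_books" },
    { $unwind : "$authors_who_sold_books.customers" },
    { $group : {
        _id : "$authors_who_sold_books.id",
        original_stock : { $first : "$authors_who_sold_books.original_stock" },
        customers : { $push : "$authors_who_sold_books.customers" },
        num_sold : { $sum : 1 }
    } }
])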