MongoDB alerts for frequent queries

I have this query that inserts a document when a listener is listening to a song:
const nowplayingData = {"type":"S","station": req.params.stationname, "song": data[1], "artist": data[0], "timeplay":npdate};
LNowPlaying.findOneAndUpdate(
  nowplayingData,
  { $addToSet: { history: [uuid] } },
  { upsert: true },
  function(err) {
    if (err) {
      console.log('ERROR when submitting round');
      console.log(err);
    }
  }
);
I have been getting the following emails for the last week, and they are starting to get annoying:
(screenshot: MongoDB alert emails)
These alerts don't show anything wrong with the query or the code.
I also have the following query that checks for the latest userID matching the station name.
I believe this is the query setting off the alerts, because of the number of times we run the same query over and over (it runs every 10 seconds and may have up to 1000 people requesting the info at the same time).
var query = LNowPlaying.findOne({ "station": req.params.stationname, "history": { $in: [y] } }).sort({ "_id": -1 });

query.exec(function (err, docs) {
  /*res.status(200).json({
    data: docs
  });*/
  console.error(docs);
  if (err) {
    console.error("error");
    res.status(200).json(err);
  }
});
I am wondering how I can make this better so that I don't get the alerts. I know I probably have to create an index, which I believe needs to cover the station name and the history array.
I have tried to create a new index using the fields station and history, but got this error:
Index build failed: 6ed6d3f5-bd61-4d70-b8ea-c62d7a10d3ba: Collection AdStitchr.NowPlaying ( 8190d374-8f26-4c31-bcf7-de4d11803385 ) :: caused by :: Field 'history' of text index contains an array in document: { _id: ObjectId('5f102ab25b43e19dabb201f5'), artist: "Cobra Dukes", song: "Leave The Light On (Hook N Sling Remix) [nG]", station: "DRN1", timeplay: new Date(1594898580000), __v: 0, history: [ "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTQ5ODE0MjQsImlhdCI6MTU5NDg5NTAyNCwib2lkIjoicmFkaW9tZWRpYSJ9.ECVxBzAYZcpyueBP_Xlyncn41OgrezrOF8Dn3CdAnOU" ] }
Can you not index an array?
This is how I am trying to create the index:
(screenshot: my index creation)
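For what it's worth, MongoDB can index arrays; an index on an array field automatically becomes a multikey index. The error above suggests the index was being created as a text index, and a compound text index cannot include a multikey (array) field like history. A plain compound index should be enough for the findOne above; here is a minimal sketch, with the collection name taken from the error message and the schema variable name (LNowPlayingSchema) assumed:

// In mongosh, against the AdStitchr database:
db.NowPlaying.createIndex({ station: 1, history: 1 });

// Or equivalently on the Mongoose schema behind LNowPlaying (schema name assumed):
LNowPlayingSchema.index({ station: 1, history: 1 });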

Related

MongoDB updating the wrong subdocument in array

I've recently started using MongoDB using Mongoose (from NodeJS), but now I got stuck updating a subdocument in an array.
Let me show you...
I've set up my Restaurant in MongoDB like so:
_id: ObjectId("5edaaed8d8609c2c47fd6582")
name: "Some name"
tables: Array
0: Object
id: ObjectId("5ee277bab0df345e54614b60")
status: "AVAILABLE"
1: Object
id: ObjectId("5ee277bab0df345e54614b61")
status: "AVAILABLE"
As you can see, a restaurant can have multiple tables, obviously.
Now I would like to update the status of a table for which I know the _id. I also know the _id of the restaurant that has the table.
But I only want to update the status if we have the corresponding tableId and that table has the status 'AVAILABLE'.
My update statement:
const result = await Restaurant.updateOne(
  {
    _id: ObjectId("5edaaed8d8609c2c47fd6582"),
    'tables._id': ObjectId("5ee277bab0df345e54614b61"),
    'tables.status': 'AVAILABLE'
  },
  { $set: { 'tables.$.status': 'CONFIRMED' } }
);
Guess what happens when I run the update-statement above?
It strangely updates the FIRST table (with the wrong table._id)!
However, when I remove the 'tables.status' filter from the query, it does update the right table:
const result = await Restaurant.updateOne(
  {
    _id: ObjectId("5edaaed8d8609c2c47fd6582"),
    'tables._id': ObjectId("5ee277bab0df345e54614b61")
  },
  { $set: { 'tables.$.status': 'CONFIRMED' } }
);
Problem here is that I need the status to be 'AVAILABLE', or else it should not update!
Can anybody point me in the right direction with this?
According to the docs, the positional $ operator acts as a placeholder for the first element that matches the query document, so you are updating only the first array element in the document that matches your query.
You should use the filtered positional operator $[identifier] instead.
Your query would then be something like this:
const result = await Restaurant.updateOne(
  {
    _id: ObjectId("5edaaed8d8609c2c47fd6582"),
    'tables._id': ObjectId("5ee277bab0df345e54614b61"),
    'tables.status': 'AVAILABLE'
  },
  {
    $set: { 'tables.$[table].status': 'CONFIRMED' } // update part
  },
  {
    arrayFilters: [{ 'table._id': ObjectId("5ee277bab0df345e54614b61"), 'table.status': 'AVAILABLE' }] // options part
  }
);
This way, you're updating the table element that has that tableId and that status.
Hope it helps.
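As a quick usage note, you can also inspect the write result to see whether the guard actually matched anything. This is just a sketch, and the exact property name depends on the Mongoose/driver version in use:

// After running the updateOne above: if no table element satisfied the arrayFilters
// (for example it was already CONFIRMED), nothing is written and the modified count is 0.
if (result.modifiedCount === 0) { // older Mongoose versions expose this as result.nModified
  console.log('Table was not AVAILABLE (or not found), so nothing was updated');
}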

Update with upsert, but only update if date field of document in db is less than updated document

I am having a bit of an issue trying to come up with the logic for this. So, what I want to do is:
Bulk update a bunch of posts to my remote MongoDB instance, BUT
if updating, only update if the lastModified field of the document in the remote collection is less than the lastModified field of the same document that I am about to update/insert.
Basically, I want to update my list of documents if they have been modified since the last time I updated them.
I can think of two brute force ways to do it...
First, querying my entire collection, manually removing and replacing the documents that match the criteria, adding the new ones, and then mass inserting everything back into the remote collection after deleting everything in remote.
Second, querying each item and then deciding, if there is one in remote, whether I want to update it or not. This seems like it would be very taxing when dealing with remote collections.
If relevant, I am working in a NodeJS environment, using the mongodb npm package for database operations.
You can use the bulkWrite API to carry out the updates based on the logic you specified as it handles this better.
For example, the following snippet shows how to go about this assuming you already have the data from the web service you need to update the remote collection with:
// assuming: var mongodb = require('mongodb').MongoClient;
mongodb.connect(mongo_url, function(err, db) {
  if (err) console.log(err);
  else {
    var mongo_remote_collection = db.collection("remote_collection_name");
    /* data is from an http call to an external service; ideally
       place this within the service callback */
    mongoUpsert(mongo_remote_collection, data, function() {
      db.close();
    });
  }
});
function mongoUpsert(collection, data_array, cb) {
  var ops = data_array.map(function(data) {
    return {
      "updateOne": {
        "filter": {
          "_id": data._id, // or any other filtering mechanism to identify a doc
          "lastModified": { "$lt": data.lastModified }
        },
        "update": { "$set": data },
        "upsert": true
      }
    };
  });

  collection.bulkWrite(ops, function(err, r) {
    // do something with result
  });

  return cb(false);
}
If the data from the external service is huge, then consider sending the writes to the server in batches of 500, which gives you better performance since you are not sending every request to the server individually, just one request for every 500 writes.
For bulk operations MongoDB imposes a default internal limit of 1000 operations per batch, so the choice of 500 documents is good in the sense that you have some control over the batch size rather than letting MongoDB impose the default; this matters for larger operations in the magnitude of > 1000 documents. For the above case one could just write the whole array at once since it is small, but the 500-document batching is for larger arrays.
var ops = [],
    counter = 0;

data_array.forEach(function(data) {
  ops.push({
    "updateOne": {
      "filter": {
        "_id": data._id,
        "lastModified": { "$lt": data.lastModified }
      },
      "update": { "$set": data },
      "upsert": true
    }
  });
  counter++;

  if (counter % 500 === 0) {
    collection.bulkWrite(ops, function(err, r) {
      // do something with result
    });
    ops = [];
  }
});

if (counter % 500 !== 0) {
  collection.bulkWrite(ops, function(err, r) {
    // do something with result
  });
}
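For newer versions of the driver, where bulkWrite returns a promise, the same batching idea can be written without nested callbacks. A rough sketch under the same assumptions about data_array and lastModified as above:

// Minimal sketch, assuming a modern mongodb driver and the same data shape as above.
async function mongoUpsertBatched(collection, data_array, batchSize) {
  batchSize = batchSize || 500;
  for (var i = 0; i < data_array.length; i += batchSize) {
    var ops = data_array.slice(i, i + batchSize).map(function(data) {
      return {
        updateOne: {
          filter: { _id: data._id, lastModified: { $lt: data.lastModified } },
          update: { $set: data },
          upsert: true
        }
      };
    });
    var result = await collection.bulkWrite(ops);
    // result.upsertedCount / result.modifiedCount describe what happened in this batch
  }
}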

Using TTL in MongoDB [duplicate]

I have a very specific thing I want to accomplish, and I wanted to make sure it is not already possible in mongoose/MongoDB before I go and code the whole thing myself.
I checked mongoose-ttl for nodejs and several forums and didn't find quite what I need.
Here it is:
I have a schema with a date field createDate. Now I wish to place a TTL on that field; so far so good, I can do it like so (expiration in 5000 seconds):
createDate: {type: Date, default: Date.now, expires: 5000}
But I would like my users to be able to "up vote" documents they like, so those documents will get a longer period of time to live, without changing the other documents in my collection.
So, can I somehow change the TTL of a SINGLE document once a user tells me he likes that document, using mongoose or other existing npm modules?
Thank you.
It has been more than a year, but this may be useful for others, so here is my answer:
I was trying accomplish this same thing, in order to allow a grace period after an entry deletion, so the user can cancel the operation afterwards.
As stated by Mike Bennett, you can use a TTL index making documents expire at a specific clock time.
You have to create an index, setting expireAfterSeconds to zero:
db.yourCollection.createIndex({ "expireAt": 1 }, { expireAfterSeconds: 0 });
This will not affect any of the documents in your collection, unless you set expireAfterSeconds on a particular document like so:
db.log_events.insert( {
  "expireAt": new Date('July 22, 2013 14:00:00'),
  "logEvent": 2,
  "logMessage": "Success!"
} )
Example in mongoose
Model
var BeerSchema = new Schema({
  name: {
    type: String,
    unique: true,
    required: true
  },
  description: String,
  alcohol: Number,
  price: Number,
  createdAt: { type: Date, default: Date.now },
  expireAt: { type: Date, default: undefined } // you don't need to set this default, but I like it there for semantic clearness
});

BeerSchema.index({ "expireAt": 1 }, { expireAfterSeconds: 0 });
Deletion with grace period
Uses moment for date manipulation
exports.deleteBeer = function(id) {
  var deferred = q.defer();
  Beer.update({ _id: id }, { expireAt: moment().add(10, 'seconds') }, function(err, data) {
    if (err) {
      deferred.reject(err);
    } else {
      deferred.resolve(data);
    }
  });
  return deferred.promise;
};
Revert deletion
Uses moment for date manipulation
exports.undeleteBeer = function(id) {
  var deferred = q.defer();
  // Set expireAt to undefined
  Beer.update({ _id: id }, { $unset: { expireAt: 1 } }, function(err, data) {
    if (err) {
      deferred.reject(err);
    } else {
      deferred.resolve(data);
    }
  });
  return deferred.promise;
};
You could use the expire at clock time feature in mongodb. You will have to update the expire time each time you want to extend the expiration of a document.
http://docs.mongodb.org/manual/tutorial/expire-data/#expire-documents-at-a-certain-clock-time
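Tying this back to the original question: with the expireAt approach above, an "up vote" can simply push that one document's expireAt further into the future, leaving the rest of the collection untouched. A minimal sketch, reusing the Beer model from the example above (the 5000-second window is just a placeholder):

// Hypothetical up-vote handler: extend a single document's life by pushing
// expireAt further out. Assumes the expireAt field and the
// { expireAfterSeconds: 0 } index from the Beer schema above.
exports.upvoteBeer = function(id) {
  var newExpireAt = new Date(Date.now() + 5000 * 1000); // e.g. another 5000 seconds
  return Beer.update({ _id: id }, { $set: { expireAt: newExpireAt } }).exec();
};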

Query with waterline and sails JS

I am using SailsJS with waterline.
My user model has_many bookings. I need to get all users that have placed more than 3 bookings in the last month.
I managed to write a query that gets all users that made a booking in the last month.
findUsersWhoBookedLastMonth: function(cb) {
  var dateM30J = moment().subtract(30, 'days').format();
  console.log(dateM30J);
  Reservation.find({
    select: ['user'],
    where: {
      date: {
        '>=': dateM30J
      }
    }
  }, function(err, user_ids) {
    if (err) {
      console.log('ERROR: ' + err);
      cb(err, null);
    } else {
      user_ids = _.uniq(user_ids);
      cb(null, user_ids);
    }
  });
},
But I can't figure out how to get all users that have booked more than three times in the last month.
Your Reservation.find(...) returns an array, so just check, for each user, whether their number of reservations is greater than 3.
Or you can use .count(), which has the same effect.
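As a rough sketch of the first suggestion, using lodash (which the question's code already uses for _.uniq) inside the find callback; the reservation's user property is assumed from the select: ['user'] above:

// Hypothetical continuation of the find callback: group reservations by user
// and keep only the users with more than 3 bookings in the period.
var byUser = _.groupBy(user_ids, 'user');
var frequentUserIds = Object.keys(byUser).filter(function(userId) {
  return byUser[userId].length > 3;
});
cb(null, frequentUserIds);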

How do I use new Meteor.Collection.ObjectID() in my mongo queries with meteor?

I have a Collection that has documents with an array of nested objects.
Here is fixture code to populate the database:
if (Parents.find().count() == 0) {
  var parentId = Parents.insert({
    name: "Parent One"
  });

  Children.insert({
    parent: parentId,
    fields: [
      {
        _id: new Meteor.Collection.ObjectID(),
        position: 3,
        name: "three"
      },
      {
        _id: new Meteor.Collection.ObjectID(),
        position: 1,
        name: "one"
      },
      {
        _id: new Meteor.Collection.ObjectID(),
        position: 2,
        name: "two"
      }
    ]
  });
}
You might be asking yourself why I even need an ObjectID when I could just update based off of the names. This is a simplified example of a much more complex schema that I'm currently working on; the nested objects are going to be created dynamically, so the ObjectIDs are definitely going to be necessary to make this work.
Basically, I need a way to save those nested objects with a unique ID and be able to update the fields by their _id.
Here is my Method, and the call I'm making from the browser console:
Meteor.methods({
  upChild: function(options) {
    console.log(new Meteor.Collection.ObjectID());
    Children.update(
      { _id: options._id, "fields._id": options.fieldId },
      { $set: { "fields.$.position": options.position } },
      function(error) {
        if (error) {
          console.log(error);
        } else {
          console.log("success");
        }
      }
    );
  }
});
My call from the console:
Meteor.call('upChild', {
  _id: "5NuiSNQdNcZwau92M",
  fieldId: "9b93aa1ef3868d762b84d2f2",
  position: 1
});
And here is a screenshot of the html where I'm rendering all of the data for the Parents and Children collections:
Just an observation, as I was looking at how to generate unique IDs client-side for a similar reason. I found that calling new Meteor.Collection.ObjectID() returned an object in the form ObjectID("abc..."). By assigning new Meteor.Collection.ObjectID()._str to _id, I got the string 'abc...' instead, which is what I wanted.
I hope this helps, and I'd be curious to know if anyone has a better way of handling this?
Jason
Avoid using the _str because it can change in the future. Use this:
new Meteor.Collection.ObjectID().toHexString() or
new Meteor.Collection.ObjectID().valueOf()
You can also use the official random package:
Random.id()
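For example, the fixture from the question could store plain string IDs for the nested objects, which makes the value passed as options.fieldId from the client match what is stored in the document (this is just a sketch reusing the names from the question):

// Minimal sketch: use Random.id() strings for the nested field IDs so the
// string sent in Meteor.call('upChild', { fieldId: ... }) matches what is stored.
Children.insert({
  parent: parentId,
  fields: [
    { _id: Random.id(), position: 3, name: "three" },
    { _id: Random.id(), position: 1, name: "one" },
    { _id: Random.id(), position: 2, name: "two" }
  ]
});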