Long running Mongo queries in Meteor - mongodb

How would one go about updating thousands of documents in a collection in Meteor, where forEach has to be used to first calculate the changes for each individual document?
There's a timeout of 10 minutes or so, as well as a limit of a certain number of megabytes. What I've done in the past is split the updates into groups of 300 and update like that. But is there a simpler way to do it in Meteor that allows the forEach loop to run for an hour if needed?

Using percolate:synced-cron you could easily do this in batches.
SyncedCron.add({
  name: 'Update mass quantities',
  schedule: function(parser) {
    // parser is a later.parse object
    return parser.text('every 1 minute'); // or at any interval you wish
  },
  job: function() {
    var query = { notYetProcessed: true }; // or whatever your criteria are
    var batchSize = { limit: 300 }; // for example
    myCollection.find(query, batchSize).forEach(function(doc) {
      var update = { $set: { notYetProcessed: false }}; // along with everything else you want to update
      myCollection.update(doc._id, update);
    });
  }
});
This will run every minute until there are no more records to be processed. It will continue running after that of course but won't find anything to update.
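Note that synced-cron jobs only run once the cron has been started, which is typically done once in server startup code (the same SyncedCron.start() call mentioned in the answer further down):
Meteor.startup(function () {
  // Required once on the server for any SyncedCron.add jobs to actually run.
  SyncedCron.start();
});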

Related

Faster way to get delta of 2 collections in Mongo

I have a complete list of catalog inventory data in Mongo
The basic schema is:
productSku (string)
inventory (number)
This collection consists of approximately 14 million records.
I have another list of actively sold products with a similar schema.
Right now I have it as a json file.
It consists of approximately 23,000 records.
Every 5 hours the 14 million records are updated with the latest inventory data.
Once that happens I need to create a CSV of the 23,000 products' latest inventory.
I'm doing it like this:
const inventoryModel = require('../data/inventoryModel');
const activeProducts = require('./activeProducts.json');

const inventoryUpdate = [];

for (const product of activeProducts) {
  let latest = await inventoryModel.findOne({ productSku: product.sku }).exec();
  latest = latest ? latest._doc : null;

  // If there's no current inventory record for the product
  if (!latest) {
    // If there was previously an inventory greater than 0
    if (product.inventory) {
      // Set the latest inventory to zero
      inventoryUpdate.push({ sku: product.sku, inventory: 0 });
    }
  } else {
    // If there's a change in inventory
    if (latest.inventory != product.inventory) {
      inventoryUpdate.push({ sku: product.sku, inventory: latest.inventory });
    }
  }
}
This gives me an array inventoryUpdate that I can use to create a CSV for a mass update. This works fine but it's very slow. It takes about an hour to complete!
I was thinking about maybe adding activeProducts to Mongo as well; if I could somehow keep the execution of the logic within Mongo, this would be a lot faster. But if that's possible, it's beyond my current understanding and ability.
Anyone have any suggestions?
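One direction worth sketching (not a drop-in answer): import activeProducts into its own collection and let MongoDB do the join with $lookup, so only the ~23,000 changed rows ever leave the server. The collection names ('activeproducts', 'inventories'), the use of $expr (MongoDB 3.6+), and the field mapping below are assumptions based on the schema described above; an index on productSku is needed for the lookup to perform well.
db.activeproducts.aggregate([
  // Join each active product to its latest inventory record by SKU.
  { $lookup: {
      from: 'inventories',
      localField: 'sku',
      foreignField: 'productSku',
      as: 'latest'
  } },
  // Normalize to a single value: null when the SKU is absent from the feed.
  { $addFields: {
      latestInventory: { $ifNull: [{ $arrayElemAt: ['$latest.inventory', 0] }, null] }
  } },
  // Keep products whose inventory changed, or that dropped out of the feed
  // while previously having stock (those should be reported as 0).
  { $match: { $expr: { $or: [
      { $and: [{ $eq: ['$latestInventory', null] }, { $gt: ['$inventory', 0] }] },
      { $and: [{ $ne: ['$latestInventory', null] }, { $ne: ['$latestInventory', '$inventory'] }] }
  ] } } },
  { $project: { _id: 0, sku: 1, inventory: { $ifNull: ['$latestInventory', 0] } } }
]);
Even without the aggregation, replacing the 23,000 sequential findOne calls with a handful of batched find({ productSku: { $in: [...] } }) queries would already remove most of the round-trip latency.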

MongoDB bulkWrite multiple updateOne vs updateMany

I have cases where I build bulkWrite operations in which some documents have the same update object. Is there any performance benefit to merging the filters and sending one updateMany with those filters instead of multiple updateOnes in the same bulkWrite?
It's obviously better to use updateMany over multiple updateOnes when using the normal methods, but with bulkWrite, since it's a single command, are there any significant gains of preferring one over the other?
Example:
I have 200K documents that I need to update, with 10 unique status values in total across all 200K documents, so my options are:
Solutions:
A) Send one single bulkWrite with 10 updateMany operations, and each one of those operations will affect 20K documents.
B) Send one single bulkWrite with 200K updateOne each operations holding its filter and status.
As @AlexBlex noted, I have to look out for accidentally updating more than one document with the same filter. In my case I use _id as my filter, so accidentally updating other documents is not a concern for me, but it is definitely something to look out for when considering the updateMany option.
Thanks @AlexBlex.
Short answer:
Using updateMany is at least twice as fast, but it might accidentally update more documents than you intended. Keep reading to learn how to avoid this and gain the performance benefits.
Long answer:
We ran the following experiment to answer this question; these are the steps:
Create a bankaccounts mongodb collection, each document contains only one field (balance).
Insert 1 million documents into the bankaccounts collection.
Randomize the order in memory of all 1 million documents to avoid any possible optimizations from the database using ids that are inserted in the same sequence, simulating a real-world scenario.
Build write operations for bulkWrite from the documents with a random number between 0 and 100.
Execute the bulkWrite.
Log the time the bulkWrite took.
The variation lies in the 4th step.
In one variation of the experiment we build an array consisting of 1 million updateOne operations, each updateOne having a filter for a single document and its respective update object.
In the second variation, we build 100 updateMany operations, each including a filter for 10K document ids and their respective update.
Results:
updateMany with batches of document ids is roughly 2.4× as fast as the equivalent updateOne operations. This cannot be used everywhere though; please read the "The risk" section to learn when it should be used.
Details:
We ran the script 5 times for each variation; the detailed results are as follows:
With updateOne: 51.28 seconds on average.
With updateMany: 21.04 seconds on average.
The risk:
As many people have already pointed out, updateMany is not a direct substitute for updateOne, since it can incorrectly update multiple documents when the intention was to update only one.
This approach is only valid when you filter on a unique field such as _id; if the filter depends on fields that are not unique, multiple documents will be updated and the results will not be equivalent.
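To tie this back to the 200K-documents / 10-statuses scenario from the question, a safe way to get the updateMany benefit is to group the _ids by the update they should receive and issue one updateMany per group. This is only a sketch; docsToUpdate and collection are hypothetical names standing in for your own data and driver collection.
// docsToUpdate: [{ _id, status }, ...] — the updates you have already computed.
const byStatus = new Map();
for (const { _id, status } of docsToUpdate) {
  if (!byStatus.has(status)) byStatus.set(status, []);
  byStatus.get(status).push(_id);
}

// One updateMany per distinct status; filtering on _id keeps it safe.
const operations = [...byStatus.entries()].map(([status, ids]) => ({
  updateMany: {
    filter: { _id: { $in: ids } },
    update: { $set: { status } }
  }
}));

await collection.bulkWrite(operations, { ordered: false });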
65831219.js
// 65831219.js

'use strict';

const mongoose = require('mongoose');
const { Schema } = mongoose;

const DOCUMENTS_COUNT = 1_000_000;
const UPDATE_MANY_OPERATIONS_COUNT = 100;
const MINIMUM_BALANCE = 0;
const MAXIMUM_BALANCE = 100;
const SAMPLES_COUNT = 10;

const bankAccountSchema = new Schema({
  balance: { type: Number }
});

const BankAccount = mongoose.model('BankAccount', bankAccountSchema);

mainRunner().catch(console.error);

async function mainRunner () {
  for (let i = 0; i < SAMPLES_COUNT; i++) {
    await runOneCycle(buildUpdateManyWriteOperations).catch(console.error);
    await runOneCycle(buildUpdateOneWriteOperations).catch(console.error);
    console.log('-'.repeat(80));
  }

  process.exit(0);
}

/**
 *
 * @param {buildUpdateManyWriteOperations|buildUpdateOneWriteOperations} buildBulkWrite
 */
async function runOneCycle (buildBulkWrite) {
  await mongoose.connect('mongodb://localhost:27017/test', {
    useNewUrlParser: true,
    useUnifiedTopology: true
  });

  await mongoose.connection.dropDatabase();

  const { accounts } = await createAccounts({ accountsCount: DOCUMENTS_COUNT });

  const { writeOperations } = buildBulkWrite({ accounts });

  const writeStartedAt = Date.now();

  await BankAccount.bulkWrite(writeOperations);

  const writeEndedAt = Date.now();

  console.log(`Write operations took ${(writeEndedAt - writeStartedAt) / 1000} seconds with \`${buildBulkWrite.name}\`.`);
}

async function createAccounts ({ accountsCount }) {
  const rawAccounts = Array.from({ length: accountsCount }, () => ({ balance: getRandomInteger(MINIMUM_BALANCE, MAXIMUM_BALANCE) }));
  const accounts = await BankAccount.insertMany(rawAccounts);

  return { accounts };
}

function buildUpdateOneWriteOperations ({ accounts }) {
  const writeOperations = shuffleArray(accounts).map((account) => ({
    updateOne: {
      filter: { _id: account._id },
      update: { balance: getRandomInteger(MINIMUM_BALANCE, MAXIMUM_BALANCE) }
    }
  }));

  return { writeOperations };
}

function buildUpdateManyWriteOperations ({ accounts }) {
  shuffleArray(accounts);
  const accountsChunks = chunkArray(accounts, accounts.length / UPDATE_MANY_OPERATIONS_COUNT);
  const writeOperations = accountsChunks.map((accountsChunk) => ({
    updateMany: {
      filter: { _id: { $in: accountsChunk.map(account => account._id) } },
      update: { balance: getRandomInteger(MINIMUM_BALANCE, MAXIMUM_BALANCE) }
    }
  }));

  return { writeOperations };
}

function getRandomInteger (min = 0, max = 1) {
  min = Math.ceil(min);
  max = Math.floor(max);

  return min + Math.floor(Math.random() * (max - min + 1));
}

function shuffleArray (array) {
  let currentIndex = array.length;
  let temporaryValue;
  let randomIndex;

  // While there remain elements to shuffle...
  while (0 !== currentIndex) {
    // Pick a remaining element...
    randomIndex = Math.floor(Math.random() * currentIndex);
    currentIndex -= 1;

    // And swap it with the current element.
    temporaryValue = array[currentIndex];
    array[currentIndex] = array[randomIndex];
    array[randomIndex] = temporaryValue;
  }

  return array;
}

function chunkArray (array, sizeOfTheChunkedArray) {
  const chunked = [];

  for (const element of array) {
    const last = chunked[chunked.length - 1];

    if (!last || last.length === sizeOfTheChunkedArray) {
      chunked.push([element]);
    } else {
      last.push(element);
    }
  }

  return chunked;
}
Output
$ node 65831219.js
Write operations took 20.803 seconds with `buildUpdateManyWriteOperations`.
Write operations took 50.84 seconds with `buildUpdateOneWriteOperations`.
----------------------------------------------------------------------------------------------------
Tests were run using MongoDB version 4.0.4.
At a high level, if you have the same update object, you can use updateMany rather than bulkWrite.
Reason:
bulkWrite is designed to send multiple different commands to the server, as the documentation notes.
If you have the same update object, updateMany is best suited.
Performance:
If you have 10k update commands in a bulkWrite, they will be executed in batches internally, which may affect the execution time.
Exact lines from the reference about batching:
Each group of operations can have at most 1000 operations. If a group exceeds this limit, MongoDB will divide the group into smaller groups of 1000 or less. For example, if the bulk operations list consists of 2000 insert operations, MongoDB creates 2 groups, each with 1000 operations.
Thanks @Alex
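For completeness, when every matched document really does get the identical update, you can skip bulkWrite entirely: a plain updateMany is a single command with a single filter. The collection and field names below are made up for illustration (shell syntax).
// Every matched document receives the same update object,
// so one updateMany replaces thousands of individual updateOnes.
db.orders.updateMany(
  { status: "pending" },            // hypothetical filter
  { $set: { status: "processed" } } // identical update for all matches
);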

Meteor - How to automatically remove a single item from a collection after a specific time period from within a server side method?

I wrote a click event which calls a method. This method pushes single items (InfoId) into the collection called userManagement, so these items are assigned to that user.
Event handler:
Template.available.events({
  "click .push": function(e) {
    e.preventDefault();
    var InfoId = this.InfoId;
    Meteor.call('pushInfo', InfoId);
  }
});
And the method:
Meteor.methods({
  'pushInfo': function(InfoId) {
    if (this.userId) {
      userManagement.update({
        '_id': this.userId
      }, {
        $push: {
          'activeInfos': InfoId
        }
      });
    }
  }
});
However, now I need to automatically remove exactly this previously added single item (InfoId) from 'activeInfos' after a specific time period, e.g. three months.
Is there any way to do that?
To do this you can use a cron job.
Just install it using meteor add percolate:synced-cron.
With the cron you need to do two things: first add a task to the cron, then start the cron.
SyncedCron.add({
  name: 'your cron name',
  schedule: function(parser) {
    // parser is a later.parse object
    return parser.text('every 2 hours');
  },
  job: function() {
    console.log("hello");
  }
});
Here schedule: is used to set the timing, and in job: we add the code we want to run at the interval set in schedule.
After this, start your cron by adding:
SyncedCron.start();
For more info check https://github.com/percolatestudio/meteor-synced-cron.
For the schedule timing syntax, read http://bunkat.github.io/later/parsers.html#overview.
I hope this helps.
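Applied to the question above, the job body could pull expired entries out of every user's activeInfos. This is only a sketch: it assumes each pushed item is stored as an object with an addedAt timestamp (e.g. $push: { activeInfos: { infoId: InfoId, addedAt: new Date() } }) rather than a bare id, since the expiry time has to be recorded somewhere, and it approximates three months as 90 days.
SyncedCron.add({
  name: 'Remove expired activeInfos',
  schedule: function(parser) {
    return parser.text('every 1 day');
  },
  job: function() {
    // Pull any entry older than ~3 months out of every user's activeInfos array.
    var cutoff = new Date(Date.now() - 90 * 24 * 60 * 60 * 1000);
    userManagement.update(
      {},
      { $pull: { activeInfos: { addedAt: { $lt: cutoff } } } },
      { multi: true }
    );
  }
});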
One approach to problems like this is to use a Mongo TTL index to have Mongo remove documents automatically. TTL indexes only work on documents (not on subdocuments), so if you wanted to go down that route, you'd need to separate activeInfos out into its own collection and use the aggregation pipeline's $lookup stage during finds to recreate your original documents.
db.active_infos.createIndex( { "createdAt": 1 }, { expireAfterSeconds: appropriateNumberOfSeconds } )
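A sketch of what that separation could look like (collection and field names here are illustrative, and someUserId/someInfoId are placeholders): each info becomes its own document carrying the createdAt field the TTL index expires on, and a $lookup reassembles a user's active infos at read time.
// One document per active info; the TTL index above expires it automatically.
db.active_infos.insert({
  userId: someUserId,     // the owning user's _id
  infoId: someInfoId,     // what used to be pushed into activeInfos
  createdAt: new Date()   // the field the TTL index is built on
});

// Reassemble "user plus their current active infos" at read time.
db.userManagement.aggregate([
  { $lookup: {
      from: "active_infos",
      localField: "_id",
      foreignField: "userId",
      as: "activeInfos"
  } }
]);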

How do I publish two random items from a Meteor collection?

I'm making an app where two random things from a collection are displayed to the user. Every time the user refreshes the page or clicks on a button, she would get another random pair of items.
For example, if the collection were of fruits, I'd want something like this:
apple vs banana
peach vs pineapple
banana vs peach
The code below is for the server side and it works except for the fact that the random pair is generated only once. The pair doesn't update until the server is restarted. I understand it is because generate_pair() is only called once. I have tried calling generate_pair() from one of the Meteor.publish functions but it only sometimes works. Other times, I get no items (errors) or only one item.
I don't mind publishing the entire collection and selecting random items from the client side. I just don't want to crash the browser if Items has 30,000 entries.
So to conclude, does anyone have any ideas of how to get two random items from a collection appearing on the client side?
var first_item, second_item;
// This is the best way I could find to get a random item from a Meteor collection
// Every item in Items has a 'random_number' field with a randomly generated number between 0 and 1
var random_item = function() {
  return Items.find({
    random_number: {
      $gt: Math.random()
    }
  }, {
    limit: 1
  });
};

// Generates a pair of items and ensure that they're not duplicates.
var generate_pair = function() {
  first_item = random_item();
  second_item = random_item();
  // Regenerate second item if it is a duplicate
  while (first_item.fetch()[0]._id === second_item.fetch()[0]._id) {
    second_item = random_item();
  }
};

generate_pair();

Meteor.publish('first_item', function() {
  return first_item;
});

// Is this good Meteor style to have two publications doing essentially the same thing?
Meteor.publish('second_item', function() {
  return second_item;
});
The problem with your approach is that subscribing to the same publication with the same arguments (no arguments in this case) over and over on the client will only get you subscribed once to the server-side logic; this is because Meteor optimizes its internal Pub/Sub mechanism.
To truly discard the previous subscription and get the server-side publish code to re-execute and send two new random documents, you need to introduce a throwaway random argument to your publication. Your client-side code will subscribe over and over with a random number, and each time you'll get unsubscribed and resubscribed to new random documents.
Here is a full implementation of this pattern:
server/server.js
function randomItemId(){
  // get the total items count of the collection
  var itemsCount = Items.find().count();
  // get a random number (N) between [0 , itemsCount - 1]
  var random = Math.floor(Random.fraction() * itemsCount);
  // choose a random item by skipping N items
  var item = Items.findOne({},{
    skip: random
  });
  return item && item._id;
}

function generateItemIdPair(){
  // return an array of 2 random items ids
  var result = [
    randomItemId(),
    randomItemId()
  ];
  // regenerate the second id until the two ids differ
  while(result[0] == result[1]){
    result[1] = randomItemId();
  }
  // return the pair of distinct ids
  return result;
}

Meteor.publish("randomItems", function(random){
  var pair = generateItemIdPair();
  // publish the 2 items whose ids are in the random pair
  return Items.find({
    _id: {
      $in: pair
    }
  });
});
client/client.js
// every 5 seconds subscribe to 2 new random items
Meteor.setInterval(function(){
  Meteor.subscribe("randomItems", Random.fraction(), function(){
    console.log("fetched these random items :", Items.find().fetch());
  });
}, 5000);
You'll need to meteor add random for this code to work.
Meteor.publish 'randomDocs', ->
  ids = _(Docs.find().fetch()).pluck '_id'
  randomIds = _(ids).sample 2
  Docs.find _id: $in: randomIds
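For anyone not using CoffeeScript, the same publication in plain JavaScript would look roughly like this (it assumes underscore is available, as the _ calls above imply):
Meteor.publish('randomDocs', function () {
  // pick two random _ids from the collection and publish those documents
  var ids = _.pluck(Docs.find().fetch(), '_id');
  var randomIds = _.sample(ids, 2);
  return Docs.find({ _id: { $in: randomIds } });
});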
Here's another approach, which uses the excellent publishComposite package to populate matches in a local (client-only) collection so it doesn't conflict with other uses of the main collection:
if (Meteor.isClient) {
  randomDocs = new Mongo.Collection('randomDocs');
}

if (Meteor.isServer) {
  Meteor.publishComposite("randomDocs", function(select_count) {
    return {
      collectionName: "randomDocs",
      find: function() {
        let self = this;
        _.sample(baseCollection.find({}).fetch(), select_count).forEach(function(doc) {
          self.added("randomDocs", doc._id, doc);
        }, self);
        self.ready();
      }
    }
  });
}
In onCreated: this.subscribe("randomDocs", 3);
(then in a helper): return randomDocs.find({}, { limit: 3 });

Linear funnel from a collection of events with MongoDB aggregation, is it possible?

I have a number of event documents, each event has a number of fields, but the ones that are relevant for my query are:
person_id - a reference to the person that triggered the event
event - a string key to identify the event
occurred_at - the utc of the time the event occurred
What I want to achieve is:
for a list of event keys, e.g. ['event_1', 'event_2', 'event_3']
get counts of the number of people that performed each event and all the events previous to that event, in order, i.e.:
the number of people who performed event_1
the number of people who performed event_1, and then event_2
the number of people who performed event_1, and then event_2, and then event_3
etc
a secondary goal is to be able to get the average occurred_at date for each event so that I can calculate the average time between each event
The best I have got is the following two map reduces:
db.events.mapReduce(function () {
  emit(this.person_id, {
    e: [{
      e: this.event,
      o: this.occurred_at
    }]
  })
}, function (key, values) {
  return {
    e: [].concat.apply([], values.map(function (x) {
      return x.e
    }))
  }
}, {
  query: {
    account_id: ObjectId('52011239b1b9229f92000003'),
    event: {
      $in: ['event_a', 'event_b', 'event_c','event_d','event_e','event_f']
    }
  },
  out: 'people_funnel_chains',
  sort: { person_id: 1, occurred_at: 1 }
})
And then:
db.people_funnel_chains.mapReduce(function() {
  funnel = ['event_a', 'event_b', 'event_c','event_d','event_e','event_f']
  events = this.value.e;
  for (var e in funnel) {
    e = funnel[e];
    if ((i = events.map(function (x) {
      return x.e
    }).indexOf(e)) > -1) {
      emit(e, { c: 1, o: events[i].o })
      events = events.slice(i + 1, events.length);
    } else {
      break;
    }
  }
}, function(key,values) {
  return {
    c: Array.sum(values.map(function(x) { return x.c })),
    o: new Date(Array.sum(values.map(function(x) { return x.o.getTime() }))/values.length)
  };
}, { out: {inline: 1} })
I would like to achieve this in real time using the aggregation framework, but I can see no way to do it. For tens of thousands of records this takes tens of seconds. I can run it incrementally, which means it's fast enough for new data coming in, but if I want to modify the original query (e.g. change the event chain) it can't be done in a single request, which I would love it to be able to do.
Update using Cursor.forEach()
Using Cursor.forEach() I've managed to get huge improvement on this (essentially removing the requirement for the first map reduce).
var time = new Date().getTime(),
    funnel_event_keys = ['event_a', 'event_b', 'event_c','event_d','event_e','event_f'],
    looking_for_i = 0,
    looking_for = funnel_event_keys[0],
    funnel = {},
    last_person_id = null;

for (var i in funnel_event_keys) { funnel[funnel_event_keys[i]] = [0,null] };

db.events.find({
  account_id: ObjectId('52011239b1b9229f92000003'),
  event: {
    $in: funnel_event_keys
  }
}, { person_id: 1, event: 1, occurred_at: 1 }).sort({ person_id: 1, occurred_at: 1 }).forEach(function(e) {
  var current_person_id = e['person_id'].str;
  if (last_person_id != current_person_id) {
    looking_for_i = 0;
    looking_for = funnel_event_keys[0]
  }
  if (e['event'] == looking_for) {
    var funnel_event = funnel[looking_for]
    funnel_event[0] = funnel_event[0] + 1;
    funnel_event[1] = ((funnel_event[1] || e['occurred_at'].getTime()) + e['occurred_at'].getTime())/2;
    looking_for_i = looking_for_i + 1;
    looking_for = funnel_event_keys[looking_for_i]
  }
  last_person_id = current_person_id;
})
funnel;
new Date().getTime() - time;
I wonder if something custom with the data in memory would be able to improve on this? Getting hundreds of thousands of records out of MongoDB into memory (on a different machine) is going to be a bottleneck. Is there a technology I'm not aware of that could do this?
I wrote up a complete answer on my MongoDB blog, but as a summary: project the actions you care about, mapping values of the action field into appropriate key names; group by person, aggregating for the three actions when they did them (and optionally how many times); then project new fields which check whether action2 was done after action1, and action3 after action2. The last phase just sums up the number of people who did just 1, or 1 and then 2, or 1 and then 2 and then 3.
Using a function to generate the aggregation pipeline, it's possible to generate results based on the array of actions passed in.
In my test case, the entire pipeline ran in under 200ms for a collection of 40,000 documents (this was on my small laptop).
As was correctly pointed out, the general solution I describe assumes that while an actor can take any action multiple times, they can only advance from action1 to action2 and cannot skip directly from action1 to action3 (interpreting action order as describing prerequisites, where you cannot do action3 until you've done action2).
As it turns out, the aggregation framework can be used even for sequences of events where the order is completely arbitrary but you still want to know how many people at some point did the sequence action1, action2, action3.
The main adjustment to make to the original answer is to add an extra two-stage step in the middle. This step unwinds the per-person document collected earlier and re-groups it, finding the first occurrence of the second action that comes after the first occurrence of the first action.
Once we have that, the final comparison becomes: action1, followed by the earliest occurrence of action2 after it, compared against the latest occurrence of action3.
It can probably be generalized to handle an arbitrary number of events, but every additional event past two would add two more stages to the aggregation.
Here is my write-up of the modification of the pipeline to achieve the answer you are looking for.
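To make that summary concrete, here is a rough sketch of the simpler (prerequisite-order) variant of the pipeline, written against the schema from the question. The event keys are placeholders and the stage shapes are an approximation of the approach described above, not the exact pipeline from the blog post; handling fully arbitrary event order would need the extra unwind/re-group step described in the answer, which this sketch omits.
var funnel = ['event_1', 'event_2', 'event_3']; // placeholder event keys

db.events.aggregate([
  { $match: { event: { $in: funnel } } },
  // Earliest time each person performed each event ($min ignores the nulls).
  { $group: {
      _id: '$person_id',
      t1: { $min: { $cond: [{ $eq: ['$event', funnel[0]] }, '$occurred_at', null] } },
      t2: { $min: { $cond: [{ $eq: ['$event', funnel[1]] }, '$occurred_at', null] } },
      t3: { $min: { $cond: [{ $eq: ['$event', funnel[2]] }, '$occurred_at', null] } }
  } },
  // Did the person reach each step of the funnel, in order?
  { $project: {
      did1: { $cond: [{ $ne: ['$t1', null] }, 1, 0] },
      did2: { $cond: [{ $and: [{ $ne: ['$t1', null] }, { $ne: ['$t2', null] },
                               { $gt: ['$t2', '$t1'] }] }, 1, 0] },
      did3: { $cond: [{ $and: [{ $ne: ['$t1', null] }, { $ne: ['$t2', null] }, { $ne: ['$t3', null] },
                               { $gt: ['$t2', '$t1'] }, { $gt: ['$t3', '$t2'] }] }, 1, 0] }
  } },
  // Count how many people reached each step.
  { $group: {
      _id: null,
      step1: { $sum: '$did1' },
      step2: { $sum: '$did2' },
      step3: { $sum: '$did3' }
  } }
]);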