Mongoose plugin not working when used in combination with Feathersjs - mongodb

I have a very annoying problem with a Mongoose plugin in combination with Feathers.
It is a straightforward plugin, taken from the Mongoose documentation here, that updates the document version (__v) on every type of update. It works fine as a standalone plugin, but when combined with Feathersjs it fails.
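The plugin is essentially the versioning middleware example from the Mongoose docs wrapped as a plugin (a sketch; the exact code is in the repo linked below, and the array form of schema.pre assumes Mongoose 5+):

// Sketch of the plugin, following the Mongoose docs versioning example.
module.exports = function alwaysUpdateVersionKey(schema) {
  schema.pre(['update', 'updateOne', 'updateMany', 'findOneAndUpdate'], function () {
    const update = this.getUpdate();
    if (update.__v != null) {
      delete update.__v; // never let callers set the version key directly
    }
    for (const key of ['$set', '$setOnInsert']) {
      if (update[key] != null && update[key].__v != null) {
        delete update[key].__v;
        if (Object.keys(update[key]).length === 0) {
          delete update[key];
        }
      }
    }
    update.$inc = update.$inc || {};
    update.$inc.__v = 1; // bump __v on every kind of update
  });
};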
Somehow Mongoose is not converting the object id strings to ObjectIDs correctly when running updates and patches with the plugin in combination with Feathersjs. If I disable the plugin, updates and patches do work.
As far as I have been able to determine while debugging, the data sent in the call from Feathersjs to Mongoose is the same whether the plugin is enabled or disabled. So I'm inclined to say that Feathersjs is not the culprit, but then again, why does the plugin work without Feathersjs?
I have set up an example repo here. Unfortunately this is the minimum setup required to recreate the problem. The setup is as follows:
src/models/schema/categories.schema.js - The example Mongoose schema definition
src/mongoose/always-update-version-key.js - The plugin
test/mongoose/always-update-version-key.test.js - Tests the schema and plugin without Feathers
test/services/category.test.js - Tests the schema and plugin with Feathers
I have added tests that exercise the plugin both with and without Feathersjs. To run them, execute npm run test:unit. MongoDB must be running on the default port (27017).
I hope someone can help me solve this very frustrating problem or point me towards the code in Mongoose where the id strings are converted to ObjectIDs.

feathers-mongoose defaults to the lean option set to true for faster queries. When Mongoose model plugins should be used (which I think is the case here), set lean: false in the service options:
const options = {
  Model,
  paginate,
  lean: false
};

// Initialize our service with any options it requires
app.use('/category', createService(options));


Is meteor using the Mongo Oplog?

How can I check if Meteor is using the oplog of my Mongo?
I have a Mongo cluster and have set two environment variables for my Meteor app:
MONGO_URL=mongodb://mongo/app?replicaSet=rs0
MONGO_OPLOG_URL=mongodb://mongo/local?authSource=app
How can I check if the oplog is actually in use? Meteor can fall back to query polling, which is very inefficient, but I would like to see if it's working properly with the oplog.
Any ideas?
Quoting the relevant bits from Meteor's OplogObserveDriver docs:
How to tell if your queries are using OplogObserveDriver
For now, we only have a crude way to tell how many observeChanges calls are using OplogObserveDriver, and not which calls they are.
This uses the facts package, an internal Meteor package that exposes real-time metrics for the current Meteor server. In your app, run meteor add facts, and add the {{> serverFacts}} template to your app. If you are using the autopublish package, Meteor will automatically publish all metrics to all users. If you are not using autopublish, you will have to tell Meteor which users can see your metrics by calling Facts.setUserIdFilter in server code; for example:
Facts.setUserIdFilter(function (userId) {
  var user = Meteor.users.findOne(userId);
  return user && user.admin;
});
(When running your app locally, Facts.setUserIdFilter(function () { return true; }); may be good enough!)
Now look at your app. The facts template will render a variety of metrics; the ones we're looking for are observe-drivers-oplog and observe-drivers-polling in the mongo-livedata section. If observe-drivers-polling is zero or not rendered at all, then all of your observeChanges calls are using OplogObserveDriver!
To set up oplog tailing, you need to set up a user on my_database, and an oplog_user on local. Then, specify the following URIs to connect to your replica set named test-shard (e.g. if there are 3 hosts named test-shard-[0-2]):
MONGO_URL="mongodb://user:PASS#test-shard-0.mongodb.net:27017,test-shard-1.mongodb.net:27017,test-shard-2.mongodb.net:27017/my_database?ssl=true&replicaSet=test-shard&authSource=admin"
MONGO_OPLOG_URL="mongodb://oplog_user:PASS#test-shard-0.mongodb.net:27017,test-shard-1.mongodb.net:27017,test-shard-2.mongodb.net:27017/local?ssl=true&replicaSet=test-shard&authSource=admin"
On MongoDB Atlas they require ssl=true, and all users authenticate through the admin database. On another deployment you might just authenticate through my_database, in which case you'd remove the authSource=admin from MONGO_URL and write authSource=my_database for MONGO_OPLOG_URL. See this post for another example.
With MongoDB 3.6 and the Mongo node driver 3.0+, you may be able to use a succinct notation for DNS seedlist connections, e.g. on MongoDB Atlas, to specify the environment variables:
MONGO_URL="mongodb+srv://user:PASS#foo.mongodb.net/my_database"
MONGO_OPLOG_URL="mongodb+srv://oplog_user:PASS#foo.mongodb.net/local"
The link above explains how this notation fills in the ssl, replicaSet, and authSource arguments. This is a lot nicer than the long strings above, and also means you can scale your replica set up and down without needing to reconfigure anything.
As hwillson mentioned, use the facts-ui and facts-base packages (formerly facts) to see if there are any oplogObserveDrivers running in your app. If they are all pollingObserveDriver, then oplog is not set up correctly.
If you are using Kadira APM to monitor your app's performance, you can see if oplogs are working by navigating to the "Live Queries" section and having a look at the "Oplog notifications" chart.
You can see in my screenshot that oplogs are working, as values appear in the chart (bottom right). If oplogs weren't working then this chart would be empty.
This may be very late, but this is the only way that worked for me:
someCollection._driver.mongo._oplogHandle
If this is set to null, then the oplog is not enabled; otherwise, you can use this handle to check for more details.
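For example, from Meteor server code (a minimal sketch; _driver and _oplogHandle are internal, undocumented APIs and may change between Meteor versions):

import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

const Things = new Mongo.Collection('things'); // any collection will do

Meteor.startup(() => {
  const oplogHandle = Things._driver.mongo._oplogHandle;
  console.log(oplogHandle
    ? 'Oplog tailing is active'
    : 'Oplog is NOT enabled; Meteor is falling back to polling');
});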

Titan - How to Use 'Lucene' Search Backend

I am attempting to use the Lucene search backend with Titan. I am setting the index.search.backend property to lucene like so:
TitanFactory.Builder config = TitanFactory.build();
config.set("storage.backend", "hbase");
config.set("storage.hostname", "node1");
config.set("storage.hbase.table", "titan");
config.set("index.search.backend", "lucene");
config.set("index.search.directory", "/tmp/foo");
TitanGraph graph = config.open();
GraphOfTheGodsFactory.load(graph);
graph.getVertices().forEach(v -> System.out.println(v.toString()));
Of course, this does not work because this setting is of the GLOBAL_OFFLINE variety. The logs make me aware of this. Titan ignores my 'lucene' setting and then attempts to use Elasticsearch as the search backend.
WARN com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration
- Local setting index.search.backend=lucene (Type: GLOBAL_OFFLINE)
is overridden by globally managed value (elasticsearch). Use
the ManagementSystem interface instead of the local configuration to control
this setting.
After some reading, I understand that I need to use the Management System to set the index.search.backend. I need some code that looks something like the following.
TitanManagement mgmt = graph.getManagementSystem();
mgmt.set("index.search.backend", "lucene");
mgmt.set("index.search.directory", "/tmp/foo");
mgmt.commit();
I am confused about how to integrate this into my original example code above. Since this is a GLOBAL_OFFLINE setting, I cannot set it on an open graph. At the same time, I do not know how to get a graph unless I open one first. How do I set the search backend correctly?
There is no in-memory search backend. The supported search backends are Lucene, Solr, and Elasticsearch.
Lucene is a good option for a small-scale, single-machine search backend. You need to set two properties to do this, index.search.backend and index.search.directory:
index.search.backend=lucene
index.search.directory=/path/to/titansearchindexdir
As you've noted, the search backend is a GLOBAL_OFFLINE setting, so you should configure it before initially creating your graph. Since you've already created a titan table in your HBase, either disable and drop the titan table, or point your graph configuration at a new storage.hbase.table.

Import "normal" MongoDB collections into DerbyJS 0.6

Same situation like this question, but with current DerbyJS (version 0.6):
Using imported docs from MongoDB in DerbyJS
I have a MongoDB collection with data that was not saved through my Derby app. I want to query against that and pull it into my Derby app.
Is this still possible?
The accepted answer there contains a dead link. The newest working link is https://github.com/derbyjs/racer/blob/0.3/lib/descriptor/query/README.md, which refers to the 0.3 branch of Racer (the current master version is 0.6).
What I tried
Searching the internets
The naïve way:
var query = model.query('projects-legacy', { public: true });
model.fetch(query, function () {
  query.ref('_page.projects');
});
(doesn't work)
A utility was written for this purpose: https://github.com/share/igor
You may need to modify it to run against only a single collection instead of the whole database, but it essentially goes through every document, adds the necessary livedb metadata, and creates a default operation for each document as well.
In livedb, every collection has a corresponding operations collection; for example, profiles will have a profiles_ops collection which holds all the operations for profiles.
You will have to convert the collection to use it with Racer/livedb because of the metadata on the document itself.
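To give an idea of the shape of that metadata, here is a hedged sketch of the per-document change in the mongo shell (_type, _v and _m are the snapshot fields livedb-mongo expects, as I understand it; igor also writes a default operation into the _ops collection, which this sketch omits):

db.projects_legacy.find().forEach(function (doc) {
  var now = Date.now();
  db.projects_legacy.update({ _id: doc._id }, {
    $set: {
      _type: 'http://sharejs.org/types/JSONv0', // OT type used by racer/livedb
      _v: 1,                                    // document version
      _m: { ctime: now, mtime: now }            // create / modify timestamps
    }
  });
});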
An alternative, if you don't want to convert, is to use traditional AJAX/REST to get the data from your Mongo database and then just put it in your local model. This will not be real-time or synced to the server, but it will allow you to drive your templates from data that you don't want to convert for some reason.

How to properly handle mongoose schema migrations?

I'm completely new to MongoDB & Mongoose and can't seem to find an answer as to how to handle migrations when a schema changes.
I'm used to running migration SQL scripts that alter table structure and any underlying data that needs to be changed. This typically involves DB downtime.
How is this typically handled within MongoDB/Mongoose? Any gotchas that I need to be aware of?
Coming to this with a reasonable understanding of how migrations work on a relational database, I found MongoDB makes this a little simpler. I've come to two ways to break this down. The things to consider when dealing with data migrations in MongoDB (not all that uncommon from RDBs) are:
Ensuring local test environments do not break when a developer merges the latest from the project repository
Ensuring any data is correctly updated on the live version regardless of whether a user is logged in or out, if authentication is used. (Of course, if everyone is automatically logged out when an upgrade is made, then you only need to worry about when a user logs in.)
1) If your change will log everyone out or application downtime is expected, then the simple way to do this is to have a migration script connect to the local or live MongoDB and upgrade the correct data. An example where a user's name is changed from a single string to an object with given and family names (very basic, of course, and it would need to be put into a script to run for all developers):
Using the CLI:
mongo
use myDatabase
db.myUsers.find().forEach(function (user) {
  var curName = user.name.split(' '); // need some more checks..
  user.name = { given: curName[0], family: curName[1] };
  db.myUsers.save(user);
});
2) You want the application to migrate the schemas up and down based on the application version being run. This will obviously be less of a burden on a live server and not require downtime, since users are only upgraded when they use the upgraded / downgraded version for the first time.
If you're using middleware in Express.js for Node.js:
Set an app variable in your root app script via app.set('schemaVersion', 1), which will later be compared to the user's schema version.
Now ensure all the user schemas have a schemaVersion property as well, so we can detect a change between the application schema version and the current MongoDB schema for THAT PARTICULAR USER only.
Next we need to create simple middleware to detect the config and user version:
app.use(function (req, res, next) {
  // If we're not on an authenticated route
  if (!req.user) {
    next();
    return;
  }
  // Retrieving the user info will be server dependent
  if (req.user.schemaVersion === app.get('schemaVersion')) {
    next();
    return;
  }
  // Handle upgrade if user version is less than app version
  // Handle downgrade if user version is greater than app version
  // Save the user version to your session / auth token / MongoDB where necessary
});
For the upgrade / downgrade I would make simple JS files under a migrations directory with exported upgrade / downgrade functions that accept the user model and run the migration changes on that particular user in MongoDB, as sketched below. Lastly, ensure the user's version is updated in your MongoDB so they don't run the changes again unless they move to a different version.
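A migration module might look like this (a hypothetical layout and signature of my own, not a library API):

// migrations/2.js - hypothetical example: name string -> { given, family }
module.exports = {
  up: function (user, done) {
    var parts = (user.name || '').split(' ');
    user.name = { given: parts[0], family: parts.slice(1).join(' ') };
    user.schemaVersion = 2;
    user.save(done);
  },
  down: function (user, done) {
    user.name = [user.name.given, user.name.family].join(' ');
    user.schemaVersion = 1;
    user.save(done);
  }
};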
If you're used to SQL-style migrations or Rails-like migrations, then you'll find my CLI tool migrate-mongoose the right fit for you.
It allows you to write migrations with an up and a down function and manages the state for you based on success and failure of your migrations.
It also supports ES6 (ES2015) syntax.
You get access to your mongoose models via the this object, making it easy to make the changes you need to your models and schemas.
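A migration file is a sketch along these lines (the up/down export shape follows the project README; treat the bodies as illustrative):

// ES6 migration file for migrate-mongoose
export async function up() {
  // make the forward change here, e.g. via your mongoose models
}

export async function down() {
  // revert the change here
}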
There are 2 types of migrations:
Offline: Will require you to take your service down for maintenance, then iterate over the entire collection and make the changes you need.
Online: Does not require you to take your service down for maintenance. When you read a document, you check its version and run a version-specific migration routine for each version between the old and the new. Then you load the resulting document.
Not all services can afford an offline migration, so I recommend the online approach.
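A sketch of the online approach (the version field name and registry shape are illustrative, not from any library):

// Map each old version to a routine that migrates a doc one step forward.
var migrations = {
  1: function (doc) { // v1 -> v2: split "name" into given/family
    var parts = doc.name.split(' ');
    doc.name = { given: parts[0], family: parts.slice(1).join(' ') };
    doc.version = 2;
    return doc;
  }
  // 2: function (doc) { /* v2 -> v3 */ }
};

// Run every routine between the stored version and the current one.
function migrateToCurrent(doc, currentVersion) {
  while ((doc.version || 1) < currentVersion) {
    doc = migrations[doc.version || 1](doc);
  }
  return doc;
}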

How to rollback transaction in Grails integration tests on MongoDB

How can (or should) I configure Grails integration tests to roll back transactions automatically when using MongoDB as the datasource?
(I'm using Grails 2.2.1 + mongodb plugin 1.2.0)
For Spock integration tests I defined a MongoIntegrationSpec that gives some control over cleaning up test data.
dropDbOnCleanup = true // will drop the entire DB after every feature method is executed.
dropDbOnCleanupSpec = true // will drop the entire DB after the spec is complete.
dropCollectionsOnCleanup = ["collectionA", "collectionB", ...] // drops collections after every feature method is executed.
dropCollectionsOnCleanupSpec = ["collectionA", "collectionB", ...] // drops collections after the spec is complete.
dropNewCollectionsOnCleanup = true // after every feature method is executed, all new collections are dropped
dropNewCollectionsOnCleanupSpec = true // after the spec is complete, all new collections are dropped
Here's the source
https://github.com/onetribeyoyo/mtm/tree/dev/src/test/integration/com/onetribeyoyo/util/MongoIntegrationSpec.groovy
And the project has a couple of usage examples too.
I don't think that it's even possible, because MongoDB doesn't support transactions. You could use the suggested static transactional = 'mongo', but it helps only if you didn't flush your data (a rare situation, I think).
Instead you could clean up the database in setUp() manually. You can drop the collection for a domain that you're going to test, like:
MyDomain.collection.drop()
and (optionally) fill it with all the data required for your test.
You can use static transactional = 'mongo' in your integration test and/or service class.
Refer to the MongoDB plugin documentation for more details.
MongoDB does not support transactions! Hence you cannot use them. The options you have are:
1. Go around and drop the collections for the DomainClasses you used.
MyDomain.collection.drop()       // if you use the mongoDB plugin alone, without Hibernate
MyDomain.mongo.collection.drop() // if you use the mongoDB plugin with Hibernate
The drawback is that you have to do it for each domain you used.
2. Drop the whole database. (You don't need to create it explicitly, but you can.)
String host = grailsApplication.config.grails.mongo.host
Integer port = grailsApplication.config.grails.mongo.port
String databaseName = grailsApplication.config.grails.mongo.databaseName
def mongo = new GMongo(host, port)
mongo.getDB(databaseName).dropDatabase() // this takes 0.3-0.5 seconds on my machine
The second option is easier and faster. To make this work for all your tests, extend IntegrationSpec and add the code to drop the database in the cleanup block (I am assuming you are using the Spock test framework), or do a similar thing for JUnit-like tests!
Hope this helps!