share backbone code on client and server side with mongodb storage - mongodb

I'm looking for a solution to code only once the models for a backbone, mongodb, nodejs based app.
The storage can be only server side, but I still need proper model definitions both on the server and the client. On the server side I've decided to go with mongodb.
After all the only thing I've found is https://github.com/donedotcom/backbone-mongodb.
I think I've understood backbone quite well, but I have never used mongodb before, and I can't figure out how to really use backbone-mongodb. Could someone tell me how it complements backbone, what Document and EmbeddedDocument are meant for, and how they relate to Backbone.Model? Does this have anything to do with code sharing b/w client and server?
Of course, my idea would be to share the model definitions and validation (done mostly with backbone-validation) b/w the server and the client.
thanks, Viktor

I've just finished rewriting backbone-mongodb
there is an example todo application (stay with commit eb935ae7480c18c9d6fcf2f5a2187cdff3d17a13) available as well

TL;DR
Document <-> Backbone.Model
Read and write data on Node.js by overriding Backbone.sync.
EmbeddedDocument no exact match: probably possible to implement via Backbone-relational, some assembly required.
Long read
Since MongoDB is a document-centric database, Backbone.Models will fit Mongo's Documents quite nicely. You can think of MongoDB's Documents as searchable JSON blobs (..oversimplification for the sake of getting started, but still). They will more or less be an exact match to Backbone's Models. EmbeddedDocuments correspond somewhat (..oversimplification again, same reason) to related tables in traditional relational systems. They don't have an exact match in the Backbone world, but you could possibly use Backbone-relational to handle them in your Node application. I haven't tried it but I'm making a qualified guess that it will need a certain amount of hand-holding.
On the Node side, you'll want to override Backbone.sync, probably globally, to read and write Model objects to MongoDB Documents.
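For illustration, here's a rough sketch of what such a global override could look like, using the native mongodb driver directly. This is not backbone-mongodb's actual code; the `db` handle and the use of `urlRoot` as the collection name are assumptions.

```javascript
// Hypothetical sketch of a global Backbone.sync override on Node.
// Assumes a connected `db` handle from the native mongodb driver
// (pre-4.x callback style) and that each model's urlRoot names its collection.
var Backbone = require('backbone');
var ObjectID = require('mongodb').ObjectID;

Backbone.sync = function (method, model, options) {
  var collection = db.collection(model.urlRoot);

  function done(err, result) {
    if (err) { return options.error(err); }
    options.success(result);
  }

  switch (method) {
    case 'create':
      collection.insertOne(model.toJSON(), done);
      break;
    case 'read':
      collection.findOne({ _id: new ObjectID(model.id) }, done);
      break;
    case 'update':
      var attrs = model.toJSON();
      delete attrs._id;                       // _id is immutable in MongoDB
      collection.updateOne({ _id: new ObjectID(model.id) }, { $set: attrs }, done);
      break;
    case 'delete':
      collection.deleteOne({ _id: new ObjectID(model.id) }, done);
      break;
  }
};
```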
Also, embedded documents are just that - they are the actual data stored inside another object, not a link to that data stored independently (docs). It's also possible to do links, which are more like traditional relations (see same link).
To be able to correctly program something with this combination, I think you should read at least a bit more on MongoDB; here are some pointers:
Getting started with MongoDB and Python, Python-centric but still a very good introduction to MongoDB.
Have you checked out this MongoDB port of the typical Backbone Todo?
Here's another example of someone describing a webapp using Node & MongoDB. It's not Backbone-driven but it'll still show you a lot about how to work with MongoDB from Node.js.

Related

MongoDB + Postgres (or do I need a graph db?)

I'm planning to build a wiki/resource app which, by itself, makes sense to be using Mongo for. However, the primary purpose of the app is to have associative tables showing connections between individual content items. A majorly simplified example: Odin, Zeus, and Jupiter would be a row within an "Allfather" association. The problem is that these tables could grow indefinitely, and it seems like developing this type of network in Mongo would be a rather complex and frustrating experience.
I've been thinking about using Mongo for the pages and just maintaining a small Postgres database for these associations, but something intuitively feels wrong about that. However, I'm an experienced frontend dev that's just starting to dabble in backend/database, so I'm not willing to trust my intuitions on databases yet haha.
Is postgres + mongo a good solution for the above problem, or is this where something like a graph database (which I only learned about yesterday) would come into play?
After spending the last several hours researching further, it does appear that a graph database is the right solution to managing the "association" feature that I'm looking to develop here because the actual relationships will be rather multidimensional in nature.
Furthermore, I've decided to go with ArangoDB as it merges key-value (ie. Redis or Postgres' hstore IIRC), document store (ie. Mongo's documents or Postgres' JSONB), and also does graph database functionality. Arango can do joins between documents and, even better, it has a single, unified query language that works across all 3 types of models. It also has a rather robust tooling environment around it already that seems pretty promising.
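Just to illustrate the "unified query language" point, here's a hypothetical sketch of how the Allfather example could be queried as a graph traversal from Node with the arangojs driver (the database URL, graph name, and vertex key are all made up):

```javascript
// Hypothetical sketch: querying an "associations" graph in ArangoDB via arangojs.
const { Database, aql } = require("arangojs");
const db = new Database({ url: "http://localhost:8529" });

async function allfathers() {
  // Traverse one hop out from the "Allfather" vertex to its associated figures.
  const cursor = await db.query(aql`
    FOR figure IN 1..1 OUTBOUND "figures/allfather" GRAPH "mythology"
      RETURN figure.name
  `);
  return cursor.all(); // e.g. ["Odin", "Zeus", "Jupiter"]
}
```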
I found this youtube video rather enlightening as well if anyone wants a nice introduction to understanding why you might want to use a "multi-model database" like ArangoDB.

MongoDB + Elasticsearch or only Elasticsearch?

We have a new project for indexing a large amount of data and providing real-time access. I also have complex searches with facets, full text, geospatial...
The first prototype is to index into MongoDB and then into Elasticsearch, because I had read that Elasticsearch does not apply a checksum on stored files and the index can't be fully trusted.
But since recent versions (1.5), there is now a checksum, and I'm wondering whether we can use Elasticsearch as the primary data store. And what is the benefit of using MongoDB in addition to Elasticsearch?
I can't find an up-to-date answer about those features in Elasticsearch.
Thanks a lot
Talking about arguments to use Mongo instead of/together with ES:
User/role management.
Built into MongoDB. It may not fit all your needs and may be clumsy in places, but it exists and was implemented a pretty long time ago.
The only security option for ES is Shield, but it ships only with a Gold/Platinum subscription for production use.
Schema
ES is schemaless, but it's built on top of Lucene and written in Java. The core idea of this tool is to index and search documents, and working this way requires index consistency. On the back end, all documents have to fit into a flat Lucene index, which requires some understanding of how ES should deal with your nested documents and values, and how you should organize your indexes to maintain a balance between speed and data completeness/consistency. Working with ES requires you to keep some things about the schema in mind constantly. I.e.: you can index almost anything into ES without putting a corresponding mapping in place beforehand, and ES will "guess" the mapping on the fly, but sometimes it guesses wrong, and sometimes the implicit mapping is evil, because once it's in place it can't be changed without reindexing the whole index. So it's better not to treat ES as a schemaless store, because you can step on a rake some time (and this will be pain :) ), but rather treat it as schema-intensive, at least when you work with documents that can be sliced into concrete fields.
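To illustrate the "put the mapping in place beforehand" point, here's a minimal sketch using the official Node.js client. Index and field names are illustrative, and the exact body shape differs between ES versions (older versions nest mappings under a type name):

```javascript
// Sketch: create an index with an explicit mapping instead of letting ES
// guess field types on the fly.
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

async function createIndex() {
  await client.indices.create({
    index: "posts",
    body: {
      mappings: {
        properties: {
          title:     { type: "text" },
          body:      { type: "text" },
          author_id: { type: "keyword" },  // exact match only, not analyzed
          created:   { type: "date" }
        }
      }
    }
  });
}
```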
Mongo, on the other hand, can "chew and leave no crumbs" out of almost anything you put in it. And most of your queries will work fine, `til you remember how Mongo will deal with your data from a JavaScript perspective. And as JS is weakly typed, you can work with a really schemaless workflow (if you actually need one).
Handling non-table-like data.
ES is limited when handling data you don't put into the search index. That's good enough when you need to store and retrieve some extra data (compared to the data you want to search against).
MongoDB supports GridFS. This gives you the ability to handle large chunks of data behind the same interface. I.e., you can store binary data in Mongo and retrieve it within the same interface, from your code's perspective.
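For example, here's a minimal sketch of writing and reading a file through GridFS with the Node.js driver's GridFSBucket (database, bucket, and file names are illustrative):

```javascript
// Sketch: store and read binary data through GridFS via the Node.js driver.
const fs = require("fs");
const { MongoClient, GridFSBucket } = require("mongodb");

async function demo() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const db = client.db("app");
  const bucket = new GridFSBucket(db, { bucketName: "uploads" });

  // Store a large file; GridFS splits it into chunks behind the scenes.
  fs.createReadStream("./report.pdf")
    .pipe(bucket.openUploadStream("report.pdf"))
    .on("finish", () => {
      // Read it back through the same interface.
      bucket.openDownloadStreamByName("report.pdf")
        .pipe(fs.createWriteStream("./report-copy.pdf"))
        .on("finish", () => client.close());
    });
}

demo();
```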
Well, choose the right tool for the right job. If you require searching capabilities such as full-text search, faceting etc., then nothing can beat a full-fledged search engine. Elasticsearch (ES) or Solr is just a matter of choice.
You can actually feed (index) documents into ES for searching and then fetch the complete details for a particular entry from MongoDB or any other database.
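A rough sketch of that pattern in Node.js (the `es` and `mongoDb` handles, index, collection, and field names are assumptions for illustration):

```javascript
// Sketch: search in Elasticsearch, then load the full records from MongoDB.
const { ObjectId } = require("mongodb");

async function searchPosts(es, mongoDb, text) {
  // 1. Search the lightweight copy kept in Elasticsearch.
  const result = await es.search({
    index: "posts",
    body: { query: { match: { title: text } } }
  });
  // Depending on the client version, hits may live under result.body instead.
  const hits = result.hits ? result.hits.hits : result.body.hits.hits;
  const ids = hits.map(hit => new ObjectId(hit._id));

  // 2. Fetch the complete details from MongoDB by _id.
  return mongoDb.collection("posts").find({ _id: { $in: ids } }).toArray();
}
```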
I can make your task easier, do take a look at my open source work that's using MongoDB, ES, Redis and RabbitMQ, all integrated at one place, here on github
Please note that the application is built in .Net C#.
After having used Elasticsearch in production, I can add a few notes to this thread:
We secured our Elasticsearch cluster via a reverse proxy which checks client certificate authenticity at request time before letting the query through: it proves that there are multiple ways to add authentication anyway. (If you need finer-grained security, like roles, there are a few plugins that can be added to manage permissions.)
Elasticsearch mappings and settings (tuning) are really important concepts to fully understand before going into production with it, and it's not that easy to quickly get how everything works.
Clustering and horizontal scaling are very flexible and easy to set up.
The suite of tools (Kibana, Beats, etc.) is a very convenient way to gather logs, expose key data, etc.
Search features are extremely advanced; you can really do amazing things once you master a bit how full-text search works (fuzziness, boosting, scoring, stemming, tokenizers, analyzers, and so on).
APIs are a bit scattered and there is no single way to achieve something. And some APIs are really WTF to use, like the bulk insert API: you need to pass binary data in JSON format (and of course don't forget the end-of-line characters), repeating some fields multiple times. This is very verbose, and I guess it's legacy code like we all have in our projects ;).
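For the curious, this is roughly the newline-delimited format being described (index name and ids are illustrative; older ES versions also expect a _type in the metadata line):

```javascript
// Sketch of a bulk API body: one action/metadata line, then one source line,
// per document, with a trailing newline at the end.
const body =
  '{"index":{"_index":"posts","_id":"1"}}\n' +
  '{"title":"First post","author":"alice"}\n' +
  '{"index":{"_index":"posts","_id":"2"}}\n' +
  '{"title":"Second post","author":"bob"}\n';

// POST this string to /_bulk with Content-Type: application/x-ndjson.
```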
Last thing: if you develop a Java project, do not use Hibernate Search to duplicate data from a datasource into your ES cluster; we had so many issues with Hibernate Search that if we had to do it again, we'd do it manually.
Now about the real question :
To my mind, using only Elasticsearch is sufficient and may reduce the complexity of having multiple NoSQL storage systems.
I think it's worthwhile when you pair a relational/transactional database with a NoSQL search engine, but having two systems which roughly serve the same purpose is a bit overkill.
I have recently developed a feature at my company where we wanted to perform some searches and rank the results by their relevance across multiple factors and conditions.
In my application we were already using MongoDB as the DB, so into the Elasticsearch index I exported some of the fields from MongoDB that I wanted to search and filter on.
According to the required conditions I prepared both my Mongo query and my Elasticsearch query and performed the search, then filtered and sorted the results according to my needs.
The whole flow was designed in such a way that even if there is an error from ES, Mongo will fetch the records. If I get a result from ES, then the Mongo result depends on the ES result.
This is how I used mongo and ES in combination.
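A hypothetical sketch of that flow (not the poster's actual code; all names are made up and string _ids are assumed for brevity):

```javascript
// Sketch: try Elasticsearch first, let the ES hits drive the Mongo query when
// ES succeeds, and fall back to a plain MongoDB query if ES errors out.
async function search(es, mongoDb, criteria) {
  try {
    const esResult = await es.search({
      index: "items",
      body: { query: { match: { description: criteria.text } } }
    });
    const hits = esResult.hits ? esResult.hits.hits : esResult.body.hits.hits;
    const ids = hits.map(h => h._id);

    // Mongo result depends on the ES result: fetch only the matching records.
    return mongoDb.collection("items").find({ _id: { $in: ids } }).toArray();
  } catch (err) {
    // Error from ES: fall back to fetching the records from MongoDB directly.
    return mongoDb.collection("items")
      .find({ description: new RegExp(criteria.text, "i") })
      .toArray();
  }
}
```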
Also, don't forget to properly handle all updates, deletes and new record insertions.
And just so you know, the results for me were really good.

ElasticSearch vs. MongoDB for Caching User Data

Up to this point, I have been using MongoDB (Node.js + Mongoose) to save posts which belong to a user, so that I can later retrieve them to display in a stream (just like Facebook, Twitter, etc.)
It recently became necessary to allow the user to deeply search his stream; MongoDB's search was insufficient, so I implemented ElasticSearch on my servers (Amazon EC2 m1.large instances running CentOS, FWIW).
My question: I'm now in a position that I'm duplicating the data between MongoDB (where the user's stream is cached) and ElasticSearch (where it is searched).
Is there any disadvantage to moving my cache ENTIRELY into ElasticSearch, getting rid of MongoDB altogether? It seems a waste to double the storage, and there's no other place that I'm accessing this data (it is only used when presenting/searching the stream of posts).
Specifically, I want to make sure I'm not overlooking anything re: performance. I like the idea of reducing MongoDB as a bottleneck, yet I worry about the memory overhead of ElasticSearch. MongoDB runs on its own server in my cloud setup, whereas ElasticSearch is running on the same instances as node.js. This means I would have MORE ElasticSearch servers (the node.js servers are in an auto-scaling array), but they each are not DEDICATED servers (unlike MongoDB).
The only big obstacle to using ES as a "primary datasource" is that there isn't a good backup mechanism right now. The ES team is working on it and expect it to be out by the end of the year, but in the mean time, you'll have to implement your own backup scripts.
As far as performance, it's really hard to say because almost every situation is unique. ES benefits from memory - so more is always better. In particular, sorts/filters/facets/geo all like to eat memory. If you aren't doing much in the way of faceting, for example, you may be fine with less memory.
ES doesn't need to run on a dedicated node...but it will happily use as many resources as you give it.
Another option is to use just the Elasticsearch indexes. You can choose not to store the data in a readable format, so you search in ES and then retrieve the documents from MongoDB for your user as needed.
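For example, here's a sketch of creating such a search-only index with `_source` disabled, shown with the modern Node.js client purely for illustration (index and field names are made up, the exact syntax varies between ES versions, and disabling `_source` also disables features like highlighting and reindex-from-source):

```javascript
// Sketch: an index that keeps no readable copy of the documents. With
// _source disabled, ES returns matching ids/scores only, and the actual
// documents are fetched from MongoDB afterwards.
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

async function createSearchOnlyIndex() {
  await client.indices.create({
    index: "posts",
    body: {
      mappings: {
        _source: { enabled: false },   // store the inverted index only
        properties: {
          title: { type: "text" },
          body:  { type: "text" }
        }
      }
    }
  });
}
```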
The question below comments on exactly that.
Storing only selected fields and not storing _all in pyes/elasticsearch

MongoDB for personal non-distributed work

This might be answered here (or elsewhere) before but I keep getting mixed/no views on the internet.
I have never used anything except SQL-like databases, and then I came across NoSQL DBs (MongoDB, specifically). I tried my hand at it. I was doing it just for fun, but everywhere the talk is that it is really great when you are using it across distributed servers. So I wonder: is it helpful (in a non-trivial way) for doing small projects and things mainly on a personal computer? Are there some real advantages when there is just one server?
Although it would be cool to use MapReduce (and talk about it to peers :d), won't it be overkill when used for small projects run on single servers? Or are there other advantages to it? I need some clear thought. Sorry if I sounded naive here.
Optional: Some examples where/how you have used would be great.
Thanks.
IMHO, MongoDB is perfectly valid for use for single server/small projects and it's not a pre-requisite that you should only use it for "big data" or multi server projects.
If MongoDB solves a particular requirement, it doesn't matter on the scale of the project so don't let that aspect sway you. Using MapReduce may be a bit overkill/not the best approach if you truly have low volume data and just want to do some basic aggregations - these could be done using the group operator (which currently has some limitations with regard to how much data it can return).
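For instance, a basic aggregation like "posts per author" can be sketched in the mongo shell like this (collection and field names are made up; current MongoDB versions would use the aggregation pipeline rather than the group operator the answer mentions):

```javascript
// Sketch: count published posts per author with the aggregation pipeline.
db.posts.aggregate([
  { $match: { published: true } },                      // filter first
  { $group: { _id: "$author", count: { $sum: 1 } } },   // count per author
  { $sort: { count: -1 } }
]);
```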
So I guess what I'm saying in general is, use the right tool for the job. There's nothing wrong with using MongoDB on small projects/single PC. If a RDBMS like SQL Server provides a better fit for your project then use that. If a NoSQL technology like MongoDB fits, then use that.
+1 on AdaTheDev - but there are 3 more things to note here:
Durability: From version 1.8 onwards, MongoDB has single server durability when started with --journal, so now it's more applicable to single-server scenarios
Choosing a NoSQL DB over, say, an RDBMS shouldn't be decided by the single- or multi-server setting, but based on how the data is modelled. See for example 1 and 2 - it's easy to store comment-like structures in MongoDB.
MapReduce: again, it depends on the data modelling and the operation/calculation that needs to occur. Depending on the way you model your data you may or may not need to use MapReduce.

MongoDb vs CouchDb: write speeds for geographically remote clients

I would like all of my users to be able to read and write to the datastore very quickly. It seems like MongoDb has blazing reads, but the writes seem like they could be very very slow if the one master db needs to be located very far away from the client.
CouchDB seems to have slow reads, but what about writes in the case when the client is very far away from the master?
With couchdb, we can have multiple masters, meaning we can always have a write node close to the client. Could couchdb actually be faster for writes than mongodb in the case when our user base is spread very far out geographically?
I would love to use mongoDb due to its blazing fast speed, but some of my users very far away from the only master will have a horrible experience.
For worldwide types of systems, wouldn't CouchDB be better? Isn't MongoDB completely ruled out in the case where you have users all around the world?
MongoDb, if you're listening, why don't you do some simple multi-master setups, where conflict resolution can be part of the update semantic?
This seems to be the only thing standing in between mongoDb completely dominating the nosql marketshare. Everything else is very impressive.
Disclosure: I am a MongoDB fan and user, I have zero experience with CouchDB.
I have a heavy-duty app that is very read/write intensive. I'd say reads outnumber writes by a factor of around 30:1. The way mongo is designed, reads are always going to be much faster than writes; the trick (in my experience) is to make your writes so efficient that you can dedicate a higher percentage of your system resources to the writes.
When building a product on top of mongo, the key thing to remember is the _id field. This field is automatically generated and added to all of your JSON objects, and it will look something like 47cc67093475061e3d95369d. When you design your queries (finds), try to query on this field wherever possible, as it contains the machine location (and I think also the disk location??? - I should check this) where the object lives, so using this field in a find or update will really speed things up. Consider this in the design of your system.
Example:
2 of the collections in my database are "users" and "posts". A user can create multiple posts. These two collections have to reference each other a lot in the implementation of my app.
In each post object I store the _id of the parent user.
In each user object I store an array of all the posts the user has authored.
Now on each user page I can generate a list of all the authored posts without a resource-stressful query, just with a direct lookup by _id. The bigger the mongo cluster, the bigger the difference this is going to make.
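In mongo-shell terms, that structure looks roughly like this (field and value names are illustrative, not the poster's actual schema):

```javascript
// Sketch: each post stores its parent user's _id, and each user stores an
// array of authored post _ids.
var userId = ObjectId();
var postId = ObjectId();

db.users.insert({ _id: userId, name: "alice", post_ids: [postId] });
db.posts.insert({ _id: postId, author_id: userId, title: "Hello" });

// The user page can then list the authored posts with a direct _id lookup
// instead of a heavier query:
var user = db.users.findOne({ _id: userId });
db.posts.find({ _id: { $in: user.post_ids } });
```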
If you're at all familiar with Oracle's physical rowids you may understand this concept, only in mongo it is much more awesome and powerful.
I was scared last year when we decided to finally ditch MySQL for mongo but I can tell you the following about my experience:
- Data porting is always horrible but it went as well as I could have imagined.
- Mongo is probably the best documented NoSQL DB out there and the Open Source community is fantastic.
- When they say fast and scalable they're not kidding, it flies.
- Schema design is very easy and much more natural and orderly than key/value type db's in my opinion.
- The whole system seems designed for minimal user complexity, adding nodes etc is a breeze.
Ok, seriously I swear mongo didn't pay me to write this (I wish) but apologies for the love fest.
Whatever your choice, best of luck.
Here is a detailed article that 10gen has created, and gives examples of when you should choose MongoDB or CouchDB, with reasons as well.
http://www.mongodb.org/display/DOCS/Comparing+Mongo+DB+and+Couch+DB
Edit
The above link was removed, but can be viewed here: http://web.archive.org/web/20120614072025/http://www.mongodb.org/display/DOCS/Comparing+Mongo+DB+and+Couch+DB
Your question, as of now, is full of speculation and guessing.
...why can't we opt out of consistency for certain writes, so long as we're sure that the person that wrote the data will be able to read it consistently, whereas others will observe eventual consistency
What if those writes affect other writes? What if those writes would prevent other people from doing stuff? It's hard to tell the possible side effects since you didn't give us any specifics.
My main suggestion to you is that you do some testing. Unless you've tested it, speculation about bottlenecks is a complete waste of time. You don't need to test it via remote machines; set up some local DBs, add some artificial lag, then run your tests.
This way you can test the different options you've got and see where MongoDB is better or where CouchDB excels. Then you can either take one of them and live with the cons, or you can try to tweak your database model itself and do more tests.
Nobody here will be able to give you a general solution to your specific problem (well, unless you give us all your code and pay us to work on it :P ). Databases aren't easy, especially if you need to scale them under certain requirements.
For worldwide types of systems, wouldn't CouchDB be better? Isn't MongoDB completely ruled out in the case where you have users all around the world?
MongoDB supports sharding. So you don't need a single master. In fact, it looks like you have a ready shard key (region).
MongoDB also supports replica sets along with sharding. So if you need to run in multiple data centers (DCs) you put a master and one of the replicas in the same DC. In fact, they also suggest adding a 3rd node to a separate DC as a hot backup failover.
You will have to drill into the more detailed configuration of MongoDB, but you can definitely control where data is stored and you can prioritize that other replicas in a DC are "next in line" for promotion to Master.
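As a sketch of what that configuration can look like in the mongo shell, here's tag-aware (zone) sharding used to pin region ranges to shards in particular data centers (database, collection, shard, and tag names are all made up):

```javascript
// Sketch: shard on a region field and route each region's key range to
// shards that live in a particular data center.
sh.enableSharding("app");
sh.shardCollection("app.users", { region: 1, _id: 1 });

// Associate shards with a tag/zone...
sh.addShardTag("shard-eu-0", "EU");
sh.addShardTag("shard-us-0", "US");

// ...and route each region's range of the shard key to that tag.
sh.addTagRange("app.users",
  { region: "eu", _id: MinKey }, { region: "eu", _id: MaxKey }, "EU");
sh.addTagRange("app.users",
  { region: "us", _id: MinKey }, { region: "us", _id: MaxKey }, "US");
```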
At this point however, you're well into the details of MongoDB and you'll need to dig around and "play" quite a bit. However, you'll need lots of "play time" for any solution that's really going to handle masters across data centers.