Efficiency of indexing a frequently updated collection in MongoDB

I am a newbie with MongoDB, so my questions might be trivial ... I want to allow my users to upload their address book. The document has the following structure:
{
    "_id" : "56f29ecc2a00001800dbdf54",
    "contacts" : [
        {
            "name" : "John",
            "phoneNumber" : [
                "+18144040000"
            ]
        },
        {
            "name" : "Andrew",
            "phoneNumber" : [
                "+14129123456"
            ]
        }
    ]
}
I would like to run a search by phone number in order to find users with mutual contacts, i.e.:
{ "contacts.phoneNumber" : "+14129123456" }
My question is: will it be efficient to add this index?
db.addresses.createIndex( { "contacts.phoneNumber": 1 }, { unique: false, background: true } )
considering the fact that the user will frequently update his address book from his phone, which either overwrites the current data or inserts new data. Each upload is a single document with an array of contacts, each holding an array of phone numbers.
Each upload/update will contain hundreds or thousands of records.

Your index makes sense. Regarding efficiency, there is a trade-off between reads and writes. Typically, a user interface is expected to respond quickly to any search (i.e. read), so creating the index on a specific field is inevitable. On that basis, indexing on the phone number is fine, considering that the use case requires a search or query on phone numbers directly.
Indexing the documents will degrade write performance; however, this particular index shouldn't degrade it drastically. That said, if uploads take more time, you may want to reconsider the UI design and show a progress bar for the upload, which is a typical UI design for any large upload.
Also, you can check the write concern options available in MongoDB. You can configure whether or not you expect an acknowledgement from the driver.
If you go with a write concern without acknowledgement, you will get better write performance. However, most applications expect an acknowledgement on writes to ensure that the write was successful.
https://docs.mongodb.com/manual/reference/write-concern/
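Conceptually, a multikey index on "contacts.phoneNumber" behaves like an inverted map from each phone number to the ids of the documents containing it, which is what makes the read side cheap. A plain-Python sketch of the idea (a toy model, not MongoDB internals; the sample data follows the question):

```python
# Toy model of a multikey index: map each phone number to the set of
# document _ids whose "contacts" array contains it. Lookups then avoid
# scanning every document; the cost moves to index maintenance on writes.
def build_phone_index(documents):
    index = {}
    for doc in documents:
        for contact in doc["contacts"]:
            for number in contact["phoneNumber"]:
                index.setdefault(number, set()).add(doc["_id"])
    return index

docs = [
    {"_id": "u1", "contacts": [{"name": "John", "phoneNumber": ["+18144040000"]}]},
    {"_id": "u2", "contacts": [{"name": "Andrew", "phoneNumber": ["+14129123456", "+18144040000"]}]},
]

index = build_phone_index(docs)
print(sorted(index["+18144040000"]))  # ['u1', 'u2']
```

Every bulk upload effectively redoes this maintenance work for the uploaded document's numbers, which is the write-side cost described above.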

Related

DB Design for high volume chat messages

I am new to Couchbase, and I would like to understand how to model the storage of billions of chat messages originating from a typical IM app in Couchbase. What would be the correct way to model this? Assume 10000 new message inserts per second and 40000 updates per second on those messages. Assume one-to-one chat as the primary use case, although each person would have many buddies, pretty much like WhatsApp.
Thanks, appreciate all feedback.
**Update:**
Thanks for your reply, here is my database design:
Sample data store on Couchbase (document store):
document User:
123_user => {"id" : 123, "friend_ids" : [456, 789, ...], "session": "123asdcas123123qsd"}
document History Message (channel_name = userId1 + "-to-" + userId2)
123-to-456_history => {"channel_name": "123-to-456", "message_ids" => ["545_message, 999_message, ...."]}
document Message:
545_message => {"id" : 545, "client_id" : 4143413, "from_uid" : 123, "to_uid" : 456, "body" : "Hello world", "create_time" : 1243124124, "state" : 1}
There is a problem here: when the message_ids field on a History Message document stores millions or billions of message ids, reading and writing the message history becomes really expensive.
Can anyone give me a solution to this problem?
First of all, we need to put Couchbase aside. The key problem is how to model this application scenario; then we will know whether Couchbase is your best choice.
A one-to-one chat application can use each pair of chatters as a primary key.
For example, Bob-to-Jack, they chat:
1."hello!";
2."go for rest?";
3."no, i'm busy now.";
...
You will insert a new record with primary key "Bob-Jack", and value "hello; go for rest; no,....".
If the conversation stops, this record stops growing and is stored for future use.
If the two chat again the next day, your application fetches this record by the key "Bob-Jack", displays yesterday's conversation (the value), and updates the value by appending the new chat content to the end.
The length of the value grows; if it exceeds some threshold, you split it into two records, since many DB systems have a size limit for a single record.
One guy has many buddies, so there are billions of pairs (keys) in the real world, each pair with a long conversation (value). NoSQL solutions are a good choice for this data volume.
Then you will know whether Couchbase is capable of this kind of task. I think it's OK, but not the only choice.
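The append-and-split step described above can be sketched with a plain dict standing in for the key/value store. MAX_LEN and the chunk-key scheme ("Bob-Jack", "Bob-Jack:2", ...) are illustrative assumptions, not Couchbase specifics:

```python
# Sketch of append-and-split: messages for a chat pair accumulate in one
# value until it exceeds MAX_LEN, then a new chunk record is started.
MAX_LEN = 20  # tiny limit for demonstration; real stores limit document size

def append_message(store, base_key, message):
    n = store.get(base_key + ":count", 1)            # current chunk number
    key = base_key if n == 1 else f"{base_key}:{n}"
    value = store.get(key, "")
    if value and len(value) + len(message) + 1 > MAX_LEN:
        n += 1                                       # split into a new chunk
        store[base_key + ":count"] = n
        key = f"{base_key}:{n}"
        value = ""
    store[key] = value + ";" + message if value else message

store = {}
for msg in ["hello!", "go for rest?", "no, i'm busy now."]:
    append_message(store, "Bob-Jack", msg)

print(store["Bob-Jack"])    # hello!;go for rest?
print(store["Bob-Jack:2"])  # no, i'm busy now.
```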

MongoDB Indexing Issue

I have a collection in which below is the data:
"sel_att" : {
    "Technical Specifications" : {
        "In Sales Package" : "Charger, Handset, User Manual, Extra Ear Buds, USB Cable, Headset",
        "Warranty" : "1 year manufacturer warranty for Phone and 6 months warranty for in the box accessories"
    },
    "General Features" : {
        "Brand" : "Sony",
        "Model" : "Xperia Z",
        "Form" : "Bar",
        "SIM Size" : "Micro SIM",
        "SIM Type" : "Single Sim, GSM",
        "Touch Screen" : "Yes, Capacitive",
        "Business Features" : "Document Viewer, Pushmail (Mail for Exchange, ActiveSync)",
        "Call Features" : "Conference Call, Hands Free, Loudspeaker, Call Divert",
        "Product Color" : "Black"
    },
    "Platform/Software" : {
        "Operating Frequency" : "GSM - 850, 900, 1800, 1900; UMTS - 2100",
        "Operating System" : "Android v4.1 (Jelly Bean), Upgradable to v4.4 (KitKat)",
        "Processor" : "1.5 GHz Qualcomm Snapdragon S4 Pro, Quad Core",
        "Graphics" : "Adreno 320"
    }
}
The data mentioned above is huge and the fields are all inserted dynamically; how can I index such fields to get faster results?
It seems to me that you have not fully understood the power of document based databases such as MongoDB.
Below are just a few thoughts:
you have 1 million records
you have 1 million index values for that collection
you have to have enough RAM available to store 1 million index values in memory, otherwise the benefits of indexing will not show up
yes, you can use sharding, but you need lots of hardware to accommodate basic needs
What you certainly need is something that can dynamically link arbitrary text to useful indexes and that allows you to search vast amounts of text very fast. And for that you should use a tool like ElasticSearch.
Note that you can and should store your content in a NoSQL database and yes MongoDB is a viable option. And for the indexing part ElasticSearch has plugins available to enhance the communication between the two.
P.S. If I recall correctly the plugin is called MongoDB River
EDIT:
I've also added a more comprehensive definition for ElasticSearch. I won't take credit for it since I've grabbed it from Wikipedia:
Elasticsearch is a search server based on Lucene. It provides a
distributed, multitenant-capable full-text search engine with a
RESTful web interface and schema-free JSON documents
EDIT 2:
I've scaled down a bit on the numbers since it might be far-fetched for most projects. But the main idea remains the same. Indexes are not recommended for the use-case described in the question.
Based on what you want to query, you will end up indexing those fields. You can also have secondary indexes in MongoDB. But beware: creating too many indexes may improve your query performance, but it will consume additional disk space and make inserts slower due to index maintenance.
MongoDB indexes
Short answer: you can't. Use Elastic Search.
Here is a good tutorial on setting up MongoDB River on Elastic Search
The reason is simple: MongoDB does not work like that. It helps you store complex schemaless sets of documents, but you cannot index dozens of different fields and hope to get good performance. Generally a maximum of 5-6 indexes is recommended per collection.
Elastic Search is commonly used in the fashion described above in many other use-cases, so it is an established pattern. For example, Titan Graph DB has the built-in option to use ES for this purpose. If I were you, I would just use that and would not try to make MongoDB do something it is not built to do.
If you have the time, and if your data structure lends itself to it (I think it might, judging from the JSON above), then you could also use an RDBMS to break these pieces down and store them on the fly with an EAV-like pattern. Elastic Search would be easier to start with, and probably easier to achieve performance with quickly.
Well, there are lots of problems with having many indexes, as discussed here. But if you do need to add indexes for dynamic fields, you can actually create the index from your MongoDB driver.
So, let's say you are using the MongoDB Java driver; then you could create an index like below: http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-java-driver/#creating-an-index
coll.createIndex(new BasicDBObject("i", 1)); // create index on "i", ascending
PYTHON
http://api.mongodb.org/python/current/api/pymongo/collection.html#pymongo.collection.Collection.create_index
So, when you are populating data using any of the drivers and you find that a new field has come through, you can fire the index creation using the driver itself and not have to do it manually.
P.S.: I have not tried this and it might not be suitable or advisable.
Hope this helps!
Indexing of dynamic fields is tricky. There is no such thing as a wildcard index. Your options would be:
Option A: Whenever you insert a new document, call ensureIndex with the option sparse: true for each of its fields. This does nothing when the index already exists and creates a new one when the field is new. The drawback is that you will end up with a very large number of indexes, and that inserts could get slow because of all the indexes which need to be created and updated.
Option B: Forget about the field-names and refactor your documents to an array of key/value pairs. So
"General Features" : {
    "Brand" : "Sony",
    "Form" : "Bar"
},
"Platform/Software" : {
    "Processor" : "1.5 GHz Qualcomm",
    "Graphics" : "Adreno 320"
}
becomes
properties: [
    { category: "General Features", key: "Brand", value: "Sony" },
    { category: "General Features", key: "Form", value: "Bar" },
    { category: "Platform/Software", key: "Processor", value: "1.5 GHz Qualcomm" },
    { category: "Platform/Software", key: "Graphics", value: "Adreno 320" }
]
This allows you to create a single compound index on properties.category and properties.key to cover all the array entries.
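Flattening the nested attribute document into the properties array is mechanical. A Python sketch of the transformation (field names taken from the example above):

```python
# Flatten {category: {key: value}} into a list of
# {category, key, value} entries that one compound index can cover.
def to_properties(sel_att):
    return [
        {"category": category, "key": key, "value": value}
        for category, attributes in sel_att.items()
        for key, value in attributes.items()
    ]

doc = {
    "General Features": {"Brand": "Sony", "Form": "Bar"},
    "Platform/Software": {"Processor": "1.5 GHz Qualcomm", "Graphics": "Adreno 320"},
}

props = to_properties(doc)
print(props[0])  # {'category': 'General Features', 'key': 'Brand', 'value': 'Sony'}
```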

API pagination best practices

I'd love some help handling a strange edge case with a paginated API I'm building.
Like many APIs, this one paginates large results. If you query /foos, you'll get 100 results (i.e. foo #1-100), and a link to /foos?page=2 which should return foo #101-200.
Unfortunately, if foo #10 is deleted from the data set before the API consumer makes the next query, /foos?page=2 will offset by 100 and return foos #102-201.
This is a problem for API consumers who are trying to pull all foos - they will not receive foo #101.
What's the best practice to handle this? We'd like to make it as lightweight as possible (i.e. avoiding handling sessions for API requests). Examples from other APIs would be greatly appreciated!
I'm not completely sure how your data is handled, so this may or may not work, but have you considered paginating with a timestamp field?
When you query /foos you get 100 results. Your API should then return something like this (assuming JSON, but if it needs XML the same principles can be followed):
{
    "data" : [
        { data item 1 with all relevant fields },
        { data item 2 },
        ...
        { data item 100 }
    ],
    "paging" : {
        "previous" : "http://api.example.com/foo?since=TIMESTAMP1",
        "next" : "http://api.example.com/foo?since=TIMESTAMP2"
    }
}
Just a note, only using one timestamp relies on an implicit 'limit' in your results. You may want to add an explicit limit or also use an until property.
The timestamp can be dynamically determined using the last data item in the list. This seems to be more or less how Facebook paginates in its Graph API (scroll down to the bottom to see the pagination links in the format I gave above).
One problem may be if you add a data item, but based on your description it sounds like they would be added to the end (if not, let me know and I'll see if I can improve on this).
If you've got pagination, you also sort the data by some key. Why not let API clients include the key of the last element of the previously returned collection in the URL, and add a WHERE clause to your SQL query (or something equivalent if you're not using SQL), so that it returns only those elements whose key is greater than this value?
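That keyset idea can be sketched with an in-memory list; the SQL equivalent would be WHERE id > :last_key ORDER BY id LIMIT :page_size:

```python
# Keyset pagination: the client passes the last key it saw; the server
# returns only rows with a strictly greater key, so deletions in earlier
# pages cannot shift later pages.
def next_page(rows, last_key, page_size):
    ordered = sorted(rows, key=lambda r: r["id"])
    return [r for r in ordered if r["id"] > last_key][:page_size]

rows = [{"id": i} for i in range(1, 301)]
page1 = next_page(rows, last_key=0, page_size=100)      # ids 1..100

# Deleting an already-served row does not shift later pages:
rows = [r for r in rows if r["id"] != 10]
page2 = next_page(rows, last_key=page1[-1]["id"], page_size=100)
print(page2[0]["id"], page2[-1]["id"])  # 101 200
```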
You have several problems.
First, you have the example that you cited.
You also have a similar problem if rows are inserted, but in this case the user gets duplicate data (arguably easier to manage than missing data, but still an issue).
If you are not snapshotting the original data set, then this is just a fact of life.
You can have the user make an explicit snapshot:
POST /createquery
filter.firstName=Bob&filter.lastName=Eubanks
Which results:
HTTP/1.1 301 Here's your query
Location: http://www.example.org/query/12345
Then you can page that all day long, since it's now static. This can be reasonably lightweight, since you can just capture the actual document keys rather than the entire rows.
If the use case is simply that your users want (and need) all of the data, then you can simply give it to them:
GET /query/12345?all=true
and just send the whole kit.
There may be two approaches, depending on your server-side logic.
Approach 1: when the server is not smart enough to handle object states.
You could send all cached record unique ids to the server, for example ["id1","id2","id3","id4","id5","id6","id7","id8","id9","id10"], and a boolean parameter indicating whether you are requesting new records (pull to refresh) or old records (load more).
Your server should be responsible for returning the new records (via load more or pull to refresh) as well as the ids of any records from ["id1","id2","id3","id4","id5","id6","id7","id8","id9","id10"] that have been deleted.
Example:
If you are requesting load more, your request should look something like this:
{
    "isRefresh" : false,
    "cached" : ["id1","id2","id3","id4","id5","id6","id7","id8","id9","id10"]
}
Now suppose you are requesting old records (load more), the "id2" record has been updated by someone, and the "id5" and "id8" records have been deleted from the server; then your server response should look something like this:
{
    "records" : [
        {"id" : "id2", "more_key" : "updated_value"},
        {"id" : "id11", "more_key" : "more_value"},
        {"id" : "id12", "more_key" : "more_value"},
        {"id" : "id13", "more_key" : "more_value"},
        {"id" : "id14", "more_key" : "more_value"},
        {"id" : "id15", "more_key" : "more_value"},
        {"id" : "id16", "more_key" : "more_value"},
        {"id" : "id17", "more_key" : "more_value"},
        {"id" : "id18", "more_key" : "more_value"},
        {"id" : "id19", "more_key" : "more_value"},
        {"id" : "id20", "more_key" : "more_value"}
    ],
    "deleted" : ["id5", "id8"]
}
But in this case, if you have a lot of locally cached records, say 500, then your request string will be too long, like this:
{
    "isRefresh" : false,
    "cached" : ["id1","id2","id3","id4","id5","id6","id7","id8","id9","id10",………,"id500"] // too long a request
}
Approach 2: when the server is smart enough to handle object states by date.
You could send the id of the first record, the id of the last record, and the epoch time of the previous request. This way your request is always small, even if you have a large number of cached records.
Example:
If you are requesting load more, your request should look something like this:
{
    "isRefresh" : false,
    "firstId" : "id1",
    "lastId" : "id10",
    "last_request_time" : 1421748005
}
Your server is responsible for returning the ids of records deleted after last_request_time, as well as the records between "id1" and "id10" updated after last_request_time.
{
    "records" : [
        {"id" : "id2", "more_key" : "updated_value"},
        {"id" : "id11", "more_key" : "more_value"},
        {"id" : "id12", "more_key" : "more_value"},
        {"id" : "id13", "more_key" : "more_value"},
        {"id" : "id14", "more_key" : "more_value"},
        {"id" : "id15", "more_key" : "more_value"},
        {"id" : "id16", "more_key" : "more_value"},
        {"id" : "id17", "more_key" : "more_value"},
        {"id" : "id18", "more_key" : "more_value"},
        {"id" : "id19", "more_key" : "more_value"},
        {"id" : "id20", "more_key" : "more_value"}
    ],
    "deleted" : ["id5", "id8"]
}
It may be tough to find best practices, since most systems with APIs don't accommodate this scenario; either it is an extreme edge case, or they don't typically delete records (Facebook, Twitter). Facebook actually says each "page" may not have the number of results requested, due to filtering done after pagination.
https://developers.facebook.com/blog/post/478/
If you really need to accommodate this edge case, you need to "remember" where you left off. jandjorgensen's suggestion is just about spot on, but I would use a field guaranteed to be unique, like the primary key. You may need to use more than one field.
Following Facebook's flow, you can (and should) cache the pages already requested, and if a client requests a page it has already requested, return that cached page with the deleted rows filtered out.
Option A: Keyset Pagination with a Timestamp
In order to avoid the drawbacks of offset pagination that you have mentioned, you can use keyset-based pagination. Usually, entities have a timestamp that states their creation or modification time. This timestamp can be used for pagination: just pass the timestamp of the last element as the query parameter for the next request. The server, in turn, uses the timestamp as a filter criterion (e.g. WHERE modificationDate >= receivedTimestampParameter).
{
    "elements": [
        {"data": "data", "modificationDate": 1512757070},
        {"data": "data", "modificationDate": 1512757071},
        {"data": "data", "modificationDate": 1512757072}
    ],
    "pagination": {
        "lastModificationDate": 1512757072,
        "nextPage": "https://domain.de/api/elements?modifiedSince=1512757072"
    }
}
This way, you won't miss any element. This approach should be good enough for many use cases. However, keep the following in mind:
You may run into endless loops when all elements of a single page have the same timestamp.
You may deliver elements to the client multiple times when elements with the same timestamp overlap two pages.
You can make those drawbacks less likely by increasing the page size and using timestamps with millisecond precision.
Option B: Extended Keyset Pagination with a Continuation Token
To handle the mentioned drawbacks of the normal keyset pagination, you can add an offset to the timestamp and use a so-called "Continuation Token" or "Cursor". The offset is the position of the element relative to the first element with the same timestamp. Usually, the token has a format like Timestamp_Offset. It's passed to the client in the response and can be submitted back to the server in order to retrieve the next page.
{
    "elements": [
        {"data": "data", "modificationDate": 1512757070},
        {"data": "data", "modificationDate": 1512757072},
        {"data": "data", "modificationDate": 1512757072}
    ],
    "pagination": {
        "continuationToken": "1512757072_2",
        "nextPage": "https://domain.de/api/elements?continuationToken=1512757072_2"
    }
}
The token "1512757072_2" points to the last element of the page and states "the client already got the second element with the timestamp 1512757072". This way, the server knows where to continue.
Please mind that you have to handle cases where the elements got changed between two requests. This is usually done by adding a checksum to the token. This checksum is calculated over the IDs of all elements with this timestamp. So we end up with a token format like this: Timestamp_Offset_Checksum.
For more information about this approach check out the blog post "Web API Pagination with Continuation Tokens". A drawback of this approach is the tricky implementation as there are many corner cases that have to be taken into account. That's why libraries like continuation-token can be handy (if you are using Java/a JVM language). Disclaimer: I'm the author of the post and a co-author of the library.
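To make the token format concrete, here is a hedged sketch of building a Timestamp_Offset_Checksum token for a page like the example above. The checksum function and the 8-character truncation are illustrative assumptions, not what the continuation-token library actually does:

```python
import hashlib

def make_token(elements):
    # Offset = how many elements of this page share the last timestamp.
    last_ts = elements[-1]["modificationDate"]
    same_ts_ids = [e["id"] for e in elements if e["modificationDate"] == last_ts]
    offset = len(same_ts_ids)
    # Checksum over the ids sharing the timestamp, to detect changes
    # between two requests.
    checksum = hashlib.sha1(",".join(same_ts_ids).encode()).hexdigest()[:8]
    return f"{last_ts}_{offset}_{checksum}"

page = [
    {"id": "a", "modificationDate": 1512757070},
    {"id": "b", "modificationDate": 1512757072},
    {"id": "c", "modificationDate": 1512757072},
]

token = make_token(page)
print(token)  # begins with "1512757072_2_"
```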
Pagination is generally a "user" operation, and to prevent overload both for computers and for the human brain you generally give a subset. However, rather than assuming that we must get the whole list, it may be better to ask: does it matter?
If an accurate live scrolling view is needed, REST APIs, which are request/response in nature, are not well suited for this purpose. For this you should consider WebSockets or HTML5 Server-Sent Events to let your front end know about changes.
Now if there's a need to get a snapshot of the data, I would just provide an API call that provides all the data in one request with no pagination. Mind you, you would need something that would do streaming of the output without temporarily loading it in memory if you have a large data set.
For my case, I implicitly designate some API calls to allow getting the whole information (primarily reference table data). You can also secure these APIs so they won't harm your system.
Just to add to this answer by Kamilk: https://www.stackoverflow.com/a/13905589
It depends a lot on how large a dataset you are working with. Small datasets work effectively with offset pagination, but large realtime datasets require cursor pagination.
I found a wonderful article on how Slack evolved its API's pagination as their datasets increased, explaining the positives and negatives at every stage: https://slack.engineering/evolving-api-pagination-at-slack-1c1f644f8e12
I think your API is currently responding the way it should: the first 100 records on the page, in the overall order of the objects you are maintaining. Your explanation suggests that you are using some kind of ordering ids to define the order of your objects for pagination.
Now, if you want page 2 to always start at 101 and end at 200, then you must make the number of entries on the page variable, since they are subject to deletion.
You should do something like the below pseudocode:
page_max = 100

def get_page_results(page_no):
    start = (page_no - 1) * page_max + 1
    end = page_no * page_max
    return fetch_results_by_id_between(start, end)
Another option for pagination in RESTful APIs is to use the Link header introduced here. For example, GitHub uses it as follows:
Link: <https://api.github.com/user/repos?page=3&per_page=100>; rel="next",
<https://api.github.com/user/repos?page=50&per_page=100>; rel="last"
The possible values for rel are: first, last, next, previous. But by using the Link header, it may not be possible to specify total_count (the total number of elements).
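Consuming such a Link header is straightforward; a sketch in Python handling only the simple <url>; rel="name" form shown above:

```python
import re

def parse_link_header(value):
    # Extract <url>; rel="name" pairs into a dict keyed by rel.
    return {rel: url for url, rel in re.findall(r'<([^>]+)>;\s*rel="([^"]+)"', value)}

header = ('<https://api.github.com/user/repos?page=3&per_page=100>; rel="next", '
          '<https://api.github.com/user/repos?page=50&per_page=100>; rel="last"')

links = parse_link_header(header)
print(links["next"])  # https://api.github.com/user/repos?page=3&per_page=100
```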
I've thought long and hard about this and finally ended up with the solution I'll describe below. It's a pretty big step up in complexity but if you do make this step, you'll end up with what you are really after, which is deterministic results for future requests.
Your example of an item being deleted is only the tip of the iceberg. What if you are filtering by color=blue but someone changes item colors in between requests? Fetching all items in a paged manner reliably is impossible... unless... we implement revision history.
I've implemented it and it's actually less difficult than I expected. Here's what I did:
I created a single table changelogs with an auto-increment ID column
My entities have an id field, but this is not the primary key
The entities have a changeId field which is both the primary key as well as a foreign key to changelogs.
Whenever a user creates, updates or deletes a record, the system inserts a new record in changelogs, grabs the id and assigns it to a new version of the entity, which it then inserts in the DB
My queries select the maximum changeId (grouped by id) and self-join that to get the most recent versions of all records.
Filters are applied to the most recent records
A state field keeps track of whether an item is deleted
The max changeId is returned to the client and added as a query parameter in subsequent requests
Because only new changes are created, every single changeId represents a unique snapshot of the underlying data at the moment the change was created.
This means that you can cache the results of requests that have the parameter changeId in them forever. The results will never expire because they will never change.
This also opens up exciting features such as rollback/revert, syncing client caches, etc. Any feature that benefits from change history.
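The changelog scheme above can be sketched with in-memory lists. The names (write, snapshot, change_id) are mine, for illustration only:

```python
# In-memory sketch of the changelog scheme: every write creates a new
# entity version keyed by a fresh change_id; a snapshot "as of" a change_id
# picks, per entity id, the latest version not newer than it.
changelog = []   # stands in for the auto-increment changelogs table
entities = []    # every version of every entity

def write(entity_id, data, state="active"):
    change_id = len(changelog) + 1
    changelog.append(change_id)
    entities.append({"change_id": change_id, "id": entity_id,
                     "data": data, "state": state})
    return change_id

def snapshot(as_of):
    latest = {}
    for e in entities:
        if e["change_id"] <= as_of:
            prev = latest.get(e["id"])
            if prev is None or e["change_id"] > prev["change_id"]:
                latest[e["id"]] = e
    return [e for e in latest.values() if e["state"] != "deleted"]

write("foo1", {"color": "blue"})
snap = write("foo2", {"color": "red"})
write("foo1", {}, state="deleted")  # deleted after the snapshot was taken

print([e["id"] for e in snapshot(snap)])  # ['foo1', 'foo2']
```

Because a snapshot is addressed purely by change id, responses for a given changeId parameter can be cached indefinitely, as noted above.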
Referring to API Pagination Design, we could design a pagination API using a cursor.
They have this concept, called a cursor: it's a pointer to a row. So you can say to a database "return me 100 rows after that one". And it's much easier for a database to do, since there is a good chance that you'll identify the row by a field with an index. And suddenly you don't need to fetch and skip those rows; you'll go directly past them.
An example:
GET /api/products
{"items": [...100 products],
"cursor": "qWe"}
API returns an (opaque) string, which you can use then to retrieve the next page:
GET /api/products?cursor=qWe
{"items": [...100 products],
"cursor": "qWr"}
Implementation-wise there are many options. Generally, you have some ordering criteria, for example, product id. In this case, you’ll encode your product id with some reversible algorithm (let’s say hashids). And on receiving a request with the cursor you decode it and generate a query like WHERE id > :cursor LIMIT 100.
Advantages:
Database query performance can be improved by using a cursor
Handles new content being inserted into the db while querying
Disadvantages:
It's impossible to generate a previous-page link with a stateless API

Can one make a relational database using MongoDB?

I am going to make a student management system using MongoDB. I will have one table for students and another for attendance records. Can I have a key in the attendance table to reach the students table, as pictured below? How?
The idea behind MongoDB is to eliminate (or at least minimize) relational data. Have you considered just embedding the attendance data directly into each student record? This is actually the preferred design pattern for MongoDB and can result in much better performance and scalability.
If you truly need highly relational and normalized data, you might want to reconsider using MongoDB.
The answer depends on how you intend to use the data. You really have 2 options, embed the attendance table, or link it. More on these approaches is detailed here: http://www.mongodb.org/display/DOCS/Schema+Design
For the common use-case, you would probably embed this particular collection, so each student record would have an embedded "attendance" table. This would work because attendance records are unlikely to be shared between students, and retrieving the attendance data is likely to require the student information as well. Retrieving the attendance data would be as simple as:
db.student.find( { login : "sean" } )
{
    login : "sean",
    first : "Sean",
    last : "Hodges",
    attendance : [
        { class : "Maths", when : Date("2011-09-19T04:00:10.112Z") },
        { class : "Science", when : Date("2011-09-20T14:36:06.958Z") }
    ]
}
Yes. There are no hard and fast rules. You have to look at the pros and cons of embedding versus referencing data. This video will definitely help (https://www.youtube.com/watch?v=-o_VGpJP-Q0&t=21s). In your example, the phone number attribute should live in the same document (in a document database), because a person's phone number rarely changes.

How do document databases deal with changing relationships between objects (or do they at all)?

Say, at the beginning of a project, I want to store a collection of Companies, and within each company, a collection of Employees.
Since I'm using a document database (such as MongoDB), my structure might look something like this:
+ Customers[]
  +-- Customer
      +-- Employees[]
          +-- Employee
          +-- Employee
  +-- Customer
      +-- Employees[]
          +-- Employee
What happens if, later down the track, a new requirement is to have some Employees work at multiple Companies?
How does one manage this kind of change in a document database?
Doesn't the simplicity of a document database become your worst enemy, since it creates brittle data structures which can't easily be modified?
In the example above, I'd have to run modify scripts to create a new 'Employees' collection, and move every employee into that collection, while maintaining some sort of relationship key (e.g. a CompanyID on each employee).
If I did the above thoroughly enough, I'd end up with many collections, and very little hierarchy, and documents being joined by means of keys.
In that case, am I still using the document database as I should be?
Isn't it becoming more like a relational database?
Speaking about MongoDB specifically: because the database doesn't enforce relationships the way a relational database does, you're on the hook for maintaining any sort of data integrity like this. It's wonderfully helpful in many cases, but you end up writing more application code to handle these sorts of things.
Having said all of that, the key to using a system like MongoDB is modeling your data to fit MongoDB. What you have above makes complete sense if you're using MySQL; using Mongo, you'd absolutely get in trouble if you structure your data as if it were a relational database.
If you have Employees who can work at one or more Companies, I would structure it as:
// company records
{ _id: 12345, name : 'Apple' }
{ _id: 55555, name : 'Pixar' }
{ _id: 67890, name : 'Microsoft' }
// employees
{ _id : ObjectId('abc123'), name : "Steve Jobs", companies : [ 12345, 55555 ] }
{ _id : ObjectId('abc456'), name : "Steve Ballmer", companies : [ 67890 ] }
You'd add an index on employees.companies, which would make it very fast to get all of the employees who work for a given company, regardless of how many companies they work for. Maintaining a short list of companies per employee will be much easier than maintaining a large list of employees for a company. To get all of the data for a company and all of its employees would be two (fast) queries.
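With the documents above, the two-query pattern looks like this as an in-memory sketch (in MongoDB the second lookup is the one the employees.companies index accelerates):

```python
# Two-query pattern: one lookup for the company, one for all employees
# whose companies array contains its _id.
companies = [
    {"_id": 12345, "name": "Apple"},
    {"_id": 55555, "name": "Pixar"},
    {"_id": 67890, "name": "Microsoft"},
]
employees = [
    {"name": "Steve Jobs", "companies": [12345, 55555]},
    {"name": "Steve Ballmer", "companies": [67890]},
]

company = next(c for c in companies if c["_id"] == 12345)           # query 1
staff = [e for e in employees if company["_id"] in e["companies"]]  # query 2
print(company["name"], [e["name"] for e in staff])  # Apple ['Steve Jobs']
```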
Doesn't the simplicity of a document database become your worst enemy, since it creates brittle data structures which can't easily be modified?
The simplicity can bite you, but it's very easy to update and change at a later time. You can script changes in JavaScript and run them via the Mongo shell.
My recent answer for this question covers this in the RavenDb context:
How would I model data that is heirarchal and relational in a document-oriented database system like RavenDB?