A fairly common requirement in database applications is to track changes to one or more specific entities in a database. I've heard this called row versioning, a log table or a history table (I'm sure there are other names for it). There are a number of ways to approach it in an RDBMS--you can write all changes from all source tables to a single table (more of a log) or have a separate history table for each source table. You also have the option to either manage the logging in application code or via database triggers.
I'm trying to think through what a solution to the same problem would look like in a NoSQL/document database (specifically MongoDB), and how it would be solved in a uniform way. Would it be as simple as creating version numbers for documents, and never overwriting them? Creating separate collections for "real" vs. "logged" documents? How would this affect querying and performance?
Anyway, is this a common scenario with NoSQL databases, and if so, is there a common solution?
Good question; I was looking into this myself as well.
Create a new version on each change
I came across the Versioning module of the Mongoid driver for Ruby. I haven't used it myself, but from what I could find, it adds a version number to each document. Older versions are embedded in the document itself. The major drawback is that the entire document is duplicated on each change, which will result in a lot of duplicate content being stored when you're dealing with large documents. This approach is fine though when you're dealing with small-sized documents and/or don't update documents very often.
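For illustration, here's a minimal sketch of that pattern in pymongo (not Mongoid itself); the connection, collection name, and the save_with_version helper are all hypothetical:

from pymongo import MongoClient

posts = MongoClient().blog.posts  # assumed connection and names

def save_with_version(doc_id, changes):
    """Snapshot the current state into an embedded array, then apply changes."""
    current = posts.find_one({"_id": doc_id})
    version = current.get("version", 1)
    snapshot = {k: v for k, v in current.items() if k not in ("_id", "versions")}
    snapshot["version"] = version
    posts.update_one(
        {"_id": doc_id},
        {
            "$push": {"versions": snapshot},        # full copy of the old state
            "$set": {**changes, "version": version + 1},
        },
    )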
Only store changes in a new version
Another approach would be to store only the changed fields in a new version. Then you can 'flatten' your history to reconstruct any version of the document. This is rather complex though, as you need to track changes in your model and store updates and deletes in a way that your application can reconstruct the up-to-date document. This might be tricky, as you're dealing with structured documents rather than flat SQL tables.
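A rough sketch of the flattening step, assuming each stored version is a diff with hypothetical "set"/"unset" keys, and handling only top-level fields (nested structures are where the real complexity starts):

def reconstruct(base, diffs, target_version):
    """Replay per-version diffs onto the base document, oldest first."""
    doc = dict(base)
    for diff in sorted(diffs, key=lambda d: d["version"]):
        if diff["version"] > target_version:
            break
        doc.update(diff.get("set", {}))      # changed or added fields
        for field in diff.get("unset", []):  # deleted fields
            doc.pop(field, None)
    return doc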
Store changes within the document
Each field can also have an individual history. Reconstructing documents to a given version is much easier this way. In your application you don't have to explicitly track changes, but just create a new version of the property when you change its value. A document could look something like this:
{
  _id: "4c6b9456f61f000000007ba6",
  title: [
    { version: 1, value: "Hello world" },
    { version: 6, value: "Foo" }
  ],
  body: [
    { version: 1, value: "Is this thing on?" },
    { version: 2, value: "What should I write?" },
    { version: 6, value: "This is the new body" }
  ],
  tags: [
    { version: 1, value: [ "test", "trivial" ] },
    { version: 6, value: [ "foo", "test" ] }
  ],
  comments: [
    {
      author: "joe", // Unversioned field
      body: [
        { version: 3, value: "Something cool" }
      ]
    },
    {
      author: "xxx",
      body: [
        { version: 4, value: "Spam" },
        { version: 5, deleted: true }
      ]
    },
    {
      author: "jim",
      body: [
        { version: 7, value: "Not bad" },
        { version: 8, value: "Not bad at all" }
      ]
    }
  ]
}
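Reconstructing a field at a given version then amounts to picking the most recent entry whose version number doesn't exceed the target. A minimal sketch in Python (the value_at helper is hypothetical):

def value_at(field_history, version):
    """Pick the most recent entry with a version <= the requested one."""
    candidates = [e for e in field_history if e["version"] <= version]
    if not candidates:
        return None  # the field did not exist yet at that version
    latest = max(candidates, key=lambda e: e["version"])
    return None if latest.get("deleted") else latest.get("value")

# value_at(doc["title"], 3) would yield "Hello world" for the document above.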
Marking part of the document as deleted in a version is still somewhat awkward though. You could introduce a state field for parts that can be deleted/restored from your application:
{
  author: "xxx",
  body: [
    { version: 4, value: "Spam" }
  ],
  state: [
    { version: 4, deleted: false },
    { version: 5, deleted: true }
  ]
}
With each of these approaches you can store an up-to-date and flattened version in one collection and the history data in a separate collection. This should improve query times if you're only interested in the latest version of a document. But when you need both the latest version and historical data, you'll need to perform two queries, rather than one. So the choice of using a single collection vs. two separate collections should depend on how often your application needs the historical versions.
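As a sketch of the two-collection variant in pymongo (collection names and the update_with_history helper are my own; note the two writes are not atomic, so a failure between them could lose a history entry):

from pymongo import MongoClient

db = MongoClient().blog  # assumed connection and names
posts, posts_history = db.posts, db.posts_history

def update_with_history(doc_id, changes):
    """Apply changes to the live document and archive its previous state."""
    previous = posts.find_one_and_update({"_id": doc_id}, {"$set": changes})
    if previous is not None:
        previous["post_id"] = previous.pop("_id")  # let history grow its own _ids
        posts_history.insert_one(previous)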
Most of this answer is just a brain dump of my thoughts; I haven't actually tried any of this yet. Looking back on it, the first option is probably the easiest and best solution, unless the overhead of duplicate data is very significant for your application. The second option is quite complex and probably isn't worth the effort. The third option is basically an optimization of option two and should be easier to implement, but probably isn't worth the implementation effort unless you really can't go with option one.
Looking forward to feedback on this, and other people's solutions to the problem :)
Why not a variation on "Store changes within the document"?
Instead of storing versions against each key pair, the current key/value pairs in the document always represent the most recent state, and a 'log' of changes is stored within a history array. Only those keys that have changed since creation will have an entry in the log.
{
  _id: "4c6b9456f61f000000007ba6",
  title: "Bar",
  body: "Is this thing on?",
  tags: [ "test", "trivial" ],
  comments: [
    { key: 1, author: "joe", body: "Something cool" },
    { key: 2, author: "xxx", body: "Spam", deleted: true },
    { key: 3, author: "jim", body: "Not bad at all" }
  ],
  history: [
    {
      who: "joe",
      when: 20160101,
      what: { title: "Foo", body: "What should I write?" }
    },
    {
      who: "jim",
      when: 20160105,
      what: { tags: ["test", "test2"], comments: { key: 3, body: "Not baaad at all" } }
    }
  ]
}
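An update under this scheme reads the current values, then sets the new ones and pushes the old ones onto the log in one update_one call. A pymongo sketch (connection and helper names are hypothetical):

from pymongo import MongoClient

posts = MongoClient().blog.posts  # assumed connection and names

def update_and_log(doc_id, changes, who, when):
    """Overwrite current fields and push their old values onto the history log."""
    current = posts.find_one({"_id": doc_id})
    old_values = {k: current[k] for k in changes if k in current}
    posts.update_one(
        {"_id": doc_id},
        {
            "$set": changes,
            "$push": {"history": {"who": who, "when": when, "what": old_values}},
        },
    )

# e.g. update_and_log("4c6b9456f61f000000007ba6", {"title": "Bar"}, who="jim", when=20160105)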
We have partially implemented this on our site, using the "store revisions in a separate document" approach (and a separate database). We wrote a custom function to return the diffs, and we store those. Not so hard, and it allows for automated recovery.
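The answer doesn't show the function itself, but a minimal sketch of such a diff for flat documents might look like this (the set/unset shape is my assumption, not theirs):

def diff(old, new):
    """Return the top-level fields that changed between two documents."""
    changed = {k: v for k, v in new.items() if k != "_id" and old.get(k) != v}
    removed = [k for k in old if k not in new and k != "_id"]
    return {"set": changed, "unset": removed}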
One can have a current NoSQL database and a historical NoSQL database, with a nightly ETL run every day. This ETL records every value with a timestamp, so instead of plain values it always stores tuples (versioned fields). It only records a new value if the current value has changed, saving space in the process. For example, a document in this historical NoSQL database could look like this:
{
  _id: "4c6b9456f61f000000007ba6",
  title: [
    { date: 20160101, value: "Hello world" },
    { date: 20160202, value: "Foo" }
  ],
  body: [
    { date: 20160101, value: "Is this thing on?" },
    { date: 20160102, value: "What should I write?" },
    { date: 20160202, value: "This is the new body" }
  ],
  tags: [
    { date: 20160101, value: [ "test", "trivial" ] },
    { date: 20160102, value: [ "foo", "test" ] }
  ],
  comments: [
    {
      author: "joe", // Unversioned field
      body: [
        { date: 20160301, value: "Something cool" }
      ]
    },
    {
      author: "xxx",
      body: [
        { date: 20160101, value: "Spam" },
        { date: 20160102, deleted: true }
      ]
    },
    {
      author: "jim",
      body: [
        { date: 20160101, value: "Not bad" },
        { date: 20160102, value: "Not bad at all" }
      ]
    }
  ]
}
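A sketch of the per-field ETL step under this scheme (database and collection names are hypothetical); it appends a dated entry only when the value actually changed since the last run:

from pymongo import MongoClient

client = MongoClient()  # assumed connection
current, historical = client.current_db.items, client.historical_db.items

def etl_field(doc_id, field, today):
    """Append a dated entry for `field` only if its value changed."""
    live = current.find_one({"_id": doc_id})
    hist = historical.find_one({"_id": doc_id}) or {}
    entries = hist.get(field, [])
    if not entries or entries[-1].get("value") != live.get(field):
        historical.update_one(
            {"_id": doc_id},
            {"$push": {field: {"date": today, "value": live.get(field)}}},
            upsert=True,
        )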
For Python users (Python 3+), there's HistoricalCollection, an extension of pymongo's Collection object.
Example from the docs:
from historical_collection.historical import HistoricalCollection
from pymongo import MongoClient

class Users(HistoricalCollection):
    PK_FIELDS = ['username', ]  # <<= This is the only requirement
    # ...

db = MongoClient().example_database
users = Users(database=db)

users.patch_one({"username": "darth_later", "email": "darthlater@example.com"})
users.patch_one({"username": "darth_later", "email": "darthlater@example.com", "laser_sword_color": "red"})

list(users.revisions({"username": "darth_later"}))

# [{'_id': ObjectId('5d98c3385d8edadaf0bb845b'),
#   'username': 'darth_later',
#   'email': 'darthlater@example.com',
#   '_revision_metadata': None},
#  {'_id': ObjectId('5d98c3385d8edadaf0bb845b'),
#   'username': 'darth_later',
#   'email': 'darthlater@example.com',
#   '_revision_metadata': None,
#   'laser_sword_color': 'red'}]
Full disclosure, I am the package author. :)
Related
I am learning MongoDB and I've encountered a thing that mildly annoys me.
Let's say I got this collection:
[
  {
    _id: ObjectId("XXXXXXXXXXXXXX"),
    name: "Tom",
    followers: 10,
    active: true
  },
  {
    _id: ObjectId("XXXXXXXXXXXXXX"),
    name: "Rob",
    followers: 109,
    active: true
  },
  {
    _id: ObjectId("XXXXXXXXXXXXXX"),
    name: "Jacob",
    followers: 2,
    active: false
  }
]
and I rename the name column to username with the command:
db.getCollection('users').update({}, { $rename: { "name" : "username" }}, false, true)
now the username property is at the end of the document. For example:
[
  // ... rest of collection has the same structure
  {
    _id: ObjectId("XXXXXXXXXXXXXX"),
    followers: 109,
    active: true,
    username: "Rob"
  }
  // ... rest of collection has the same structure
]
How do I prevent this from happening, or how do I place fields in a specific order? This is infuriating to work with in Robo 3T/Studio 3T; I've got a collection with about 15 columns which are now out of order in the GUI because of this.
The $rename operator logically performs an $unset of both the old name and the new name, and then performs a $set operation with the new name. As such, the operation may not preserve the order of the fields in the document; i.e. the renamed field may move within the document.
Documentation
This has been the behaviour since version 2.6.
Since it is JSON-based, you can get at any field easily regardless of its position, and you have very few columns anyway.
Keys in JSON objects are by their very nature unordered. See RFC 4627, which defines JSON, section 1 "Introduction":
An object is an unordered collection of zero or more name/value
pairs, where a name is a string and a value is a string, number,
boolean, null, object, or array.
(Emphasis mine)
Therefore, it would even be correct if you wrote
{
  "name": "Joe",
  "city": "New York"
}
and got back
{
  "city": "New York",
  "name": "Joe"
}
Which of these document models would be better to use in MongoDB?
Animal:
{
  _id: "1",
  name: "abc",
  locations_spotted: [
    {
      locId: "1",
      dates: ["1-1-2009", "12-4-2013"...]
    },
    {
      locId: "2",
      dates: ["3-1-2012", "12-3-2013"...]
    }
    ...
  ]
}
Animal:
{
  _id: "1",
  name: "abc",
  loc1spotdates: ["1-1-2009", "12-4-2013"...],
  loc2spotdates: ["3-1-2012", "12-3-2013"...],
  ...
}
There are a limited number of locations and only a few might get added in the future.
The first variant is better in my opinion, because if you want to record additional information in the future, you can add it right into the objects inside the array. You leave your options open; for example, you could later add the time of day of each sighting.
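To illustrate the point, with the array form a location is addressable by a query, so an update needs no knowledge of ad-hoc field names like loc2spotdates. A pymongo sketch (connection and names assumed):

from pymongo import MongoClient

animals = MongoClient().wildlife.animals  # assumed connection and names

# Add a new sighting date to location "2" via the positional operator
animals.update_one(
    {"_id": "1", "locations_spotted.locId": "2"},
    {"$push": {"locations_spotted.$.dates": "1-5-2014"}},
)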
I am designing a document structure and use spring-data-mongodb to access it. The document structure stores device profiles. Each device contains modules of different types, and can contain multiple modules of the same type. Module types are dynamic, as new types of modules are created from time to time.
Please note: I try not to write custom queries to avoid boilerplate code. But, some custom queries should be fine.
I come out with two designs:
The first one uses a dynamic field (i.e. a map). The semantics are better, but it seems harder to query/update using spring-data-mongodb.
{
  deviceId: "12345",
  instanceTypeMap: {
    "type1": {
      moduleMap: {
        "1": { field1: "value", field2: "value" },
        "2": { field1: "value", field2: "value" }
      }
    },
    "type2": {
      moduleMap: {
        "30": { fielda: "value", fieldb: "value" },
        "45": { fielda: "value", fieldb: "value" }
      }
    }
  }
}
The second one uses arrays, and querying/updating seems more in line with spring-data-mongodb.
{
  deviceId: "12345",
  allInstances: [
    {
      type: 1,
      modules: [
        { id: 1, field1: "value", field2: "value" },
        { id: 2, field1: "value", field2: "value" }
      ]
    },
    {
      type: 2,
      modules: [
        { id: 30, fielda: "value", fieldb: "value" },
        { id: 45, fielda: "value", fieldb: "value" }
      ]
    }
  ]
}
I am inclined to use the array. Is it better to use an array instead of a dynamic field with spring-data-mongodb? I did some searching online and found people mentioning that querying by key (i.e. in a map) is not as easy in spring-data-mongodb. Is that a correct statement? Am I missing anything? Thank you in advance.
I ended up with the design below, using one device-instance-type per document, because in some scenarios updates are done on many modules of the same instance type, and those updates can then be aggregated into just one database update. The redundant "moduleId" field is also added for query purposes.
{
  deviceId: "12345",
  instanceTypeId: "type1",
  moduleMap: {
    "1": { moduleId: "1", field1: "value", field2: "value" },
    "2": { moduleId: "2", field1: "value", field2: "value" }
  }
}
Now I can use spring-data-mongodb's derived queries:
findByDeviceId("12345");
findByDeviceIdAndInstanceTypeId("12345","type1");
findByDeviceIdAndInstanceTypeIdAndModuleMapModuleId("12345","type1","1");
From the docs:
MongoDB supports no more than 100 levels of nesting for BSON documents.
But what is meant by "nesting for BSON documents"? For example, if I want to implement "Embedding All Comments in One Document" (for threading):
{
  _id: ObjectId(...),
  ... lots of topic data ...
  replies: [
    {
      posted: ISODateTime(...),
      author: { id: ObjectId(...), name: 'Rick' },
      text: 'This is so bogus ... ',
      replies: [
        { author: { ... }, ... },
        ...
      ]
    }
  ]
}
and then if I want to access a comment at the 101st level, I can't do this?
replies_1.0.replies_2.0.replies_3.0.replies_4....0.replies_101
I'm trying to design a schema paradigm in MongoDB which would support multilingual values for variable attributes in documents.
For example, I would have a product catalog where each product may require storing its name, title or any other attribute in various languages.
This same paradigm should probably hold for other locale-specific properties, such as price/currency variations.
I've been considering a key-value approach where key is the language code and value is the corresponding value:
{
  sku: "1011",
  name: { "en": "cheese", "de": "Käse", "es": "queso", etc... },
  price: { "usd": 30.95, "eur": 20, "aud": 40, etc... }
}
The problem is that I believe this would prevent me from using indexes on the multilingual fields.
Eventually, I'd like a generic, yet intuitive, index-able design.
Any suggestion would be appreciated, thanks.
Wholesale recommendations on your schema design may be too broad a topic for discussion here. I can, however, suggest that you consider putting the elements you are showing into an array of sub-documents, rather than a singular sub-document with a field for each item.
{
  sku: "1011",
  name: [ { "en": "cheese" }, { "de": "Käse" }, { "es": "queso" }, etc... ],
  price: [ { "usd": 30.95 }, { "eur": 20 }, { "aud": 40 }, etc... ]
}
The main reason for this is consideration of the access paths to your elements, which should make things easier to query. I went through this in some detail here, which may be worth reading.
It could also be a possibility to expand on this for something like your name field:
name: [
  { "lang": "en", "value": "cheese" },
  { "lang": "de", "value": "Käse" },
  { "lang": "es", "value": "queso" },
  etc...
]
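One upside of this last shape is that it plays well with compound multikey indexes, since every language entry shares the same key names. A pymongo sketch (connection and names assumed):

from pymongo import MongoClient

products = MongoClient().catalog.products  # assumed connection and names

# Compound multikey index over the embedded entries supports lookups in any language
products.create_index([("name.lang", 1), ("name.value", 1)])
german_cheese = products.find_one(
    {"name": {"$elemMatch": {"lang": "de", "value": "Käse"}}}
)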
It all depends on your indexing and access requirements, and on what exactly your application needs; the beauty of MongoDB is that it allows you to structure your documents to your needs.
P.S. As for anything where you are storing money values, I suggest you do some reading, starting maybe with this post:
MongoDB - What about Decimal type of value?