I have a set of documents I need to persist. Because of the way MongoDB handles multi-document operations, I need to embed this set of documents inside a container document in order to keep my operations atomic.
The data lends itself heavily to key-value pairing. Is there any way instead of doing this:
var container = new mongoose.Schema({
// meta information here
subdocs: [{key: String, value: String}]
})
I can instead have subdocs be an associative array (i.e. an object) that applies the subdoc validations? So a container instance would look something like:
{
// meta information
subdocs: {
<key1>: <value1>,
<key2>: <value2>,
...
<keyN>: <valueN>,
}
}
Thanks
Using Mongoose, I don't believe that there is a way to do what you are describing. To explain, let's take an example where your keys are dates and the values are high temperatures, to form pairs like { "2012-05-31" : 88 }.
Let's look at the structure you're proposing:
{
// meta information
subdocs: {
"2012-05-30" : 80,
"2012-05-31" : 88,
...
"2012-06-15": 94,
}
}
Because you must pre-define the schema in Mongoose, you have to know your key names ahead of time. In this use case, we would probably not know ahead of time which dates we would collect data for, so this is not a good option.
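To illustrate, the only way to express this shape directly in a Mongoose schema would be to enumerate every key up front, which is clearly impractical for open-ended dates (the keys below are just the ones from this example):
var container = new mongoose.Schema({
    // meta information here
    subdocs: {
        "2012-05-30": Number,
        "2012-05-31": Number
        // ...every date you might ever store would have to be listed here
    }
})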
If you don't use Mongoose, you can do this without any problem at all. MongoDB by itself excels at inserting values with new key names into an existing document:
> db.coll.insert({ type : "temperatures", subdocuments : {} })
> db.coll.update( { type : "temperatures" }, { $set : { 'subdocuments.2012-05-30' : 80 } } )
> db.coll.update( { type : "temperatures" }, { $set : { 'subdocuments.2012-05-31' : 88 } } )
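Reading the document back then shows both keys under subdocuments:
> db.coll.findOne({ type : "temperatures" })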
{
"_id" : ObjectId("5238c3ca8686cd9f0acda0cd"),
"subdocuments" : {
"2012-05-30" : 80,
"2012-05-31" : 88
},
"type" : "temperatures"
}
In this case, adding Mongoose on top of MongoDB takes away some of MongoDB's native flexibility. If your use case is well suited by this feature of MongoDB, then using Mongoose might not be the best choice.
You can achieve this behaviour by using { strict: false } in your Mongoose schema, although you should check the implications for Mongoose's validation and casting mechanisms.
var flexibleSchema = new Schema({}, { strict: false })
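A minimal sketch of how that could be used (the model name and keys below are just placeholders, and this assumes mongoose is already required and connected):
var Container = mongoose.model('Container', flexibleSchema);

var doc = new Container({ subdocs: { "2012-05-30": 80 } });
doc.set('subdocs.2012-05-31', 88); // paths outside the schema are allowed because of strict: false
doc.save(function (err) {
    if (err) return console.error(err);
});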
Another way is to use the schema.add method, but I do not think that is the right solution here.
The last option I see is to fetch the whole array to the client side and work on it there with underscore.js or whatever library you have, but that depends on your app, the size of the documents, the number of round trips, etc.
I have a use case for a nosql data store but I don't know which one to use:
Each document in my data store has a key for _id and another key as an array of objects. Each object element of this array has a key for _elementid and another for color.
I want my server proxy to send an update request to the data store with a substring used as a regex that qualifies all documents whose _id matches the regex. I then want to unshift an element onto the array of each matching document. This new element will have the same color for every document, but the _elementid will be unique for each.
Is there a nosql option out there that offers this kind of stored procedure? Does it have limits on the length of the array?
*** EDIT ***
(1)
DOCUMENT A:
{
_id : "this_is-an-example_10982029822",
dataList : [
{
_elementid : "999999283902830",
color : "blue",
}, {
_elementid : "99999273682763",
color : "red"
}
]
}
DOCUMENT B:
{
_id : "this_is-an-example_209382093820",
dataList : [
{
_elementid : "99999182681762",
color : "yellow"
}
]
}
(2) EXAMPLE OF UPDATE REQUEST
(let [regex_ready_array ["this_is-an-example" "fetcher" "finder"]
fetch_query_regex (str "^" (clojure.string/join "|^" regex_ready_array))
element_template {
:_elementid (rand-int 1000000000000000)
:color "green"
}
updated_sister_objs (mc/bulk-update connection "arrayStore" {:_id {$regex fetch_query_regex }} "unshift" element_template)])
(3)
DOCUMENT A:
{
_id : "this_is-an-example_10982029822",
dataList : [
{
_elementid : "999999146514612",
color : "green",
}, {
_elementid : "999999283902830",
color : "blue",
}, {
_elementid : "99999273682763",
color : "red"
}
]
}
DOCUMENT B:
{
_id : "this_is-an-example_209382093820",
dataList : [
{
_elementid : "9999997298729873",
color : "green",
}, {
_elementid : "9999918262881762",
color : "yellow"
}
]
}
*** EDIT 2 ***
(1) the dataList array could be large (large enough that MongoDB's 16mb document size limit would present an issue);
(2) the _elementid values assigned to the additional dataList elements will be different for each new element, and the store will auto-assign these as random number values;
(3) a single update request should apply all updates, rather than one update per additional element;
(4) the OP is looking for a compare-and-contrast between several 'nosql solutions', with MongoDB, Cassandra, Redis and CouchDB being suggested as possible candidates.
From your question I understand that you are working with JSON documents and Clojure.
Let's see which NoSQL stores are a good fit for JSON. A quick overview of the popular options:
Apache Cassandra: The data model in Cassandra is essentially a hybrid between a key-value and a column-oriented (or tabular) database management system. It is a partitioned row store with tunable consistency.
Redis: Redis maps keys to typed values. Besides strings it has abstract data types such as lists, sets, sorted sets, hashes, and geospatial data.
Apache CouchDB: CouchDB manages a collection of JSON documents.
MongoDB: MongoDB manages a collection of BSON documents. BSON is binary JSON (http://bsonspec.org/spec.html).
If you are working with a lot of JSON payloads, you could use MongoDB or Apache CouchDB. But you also want to update documents based on a regex.
Let's check the regex capabilities of CouchDB and MongoDB.
This can also be done with map-reduce in both CouchDB and MongoDB.
MongoDB: In MongoDB we have regex query operators. I have tried it and it works fine.
Regex select: db.student.find( { f_name: { $regex: 'this_is-an-example.*'} } ).pretty();
Reference
https://docs.mongodb.com/manual/reference/operator/query/regex/
mongoDB update statement using regex
https://www.w3resource.com/mongodb/mongodb-regex-operators.php
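To illustrate the update from the question in mongo shell terms, a rough sketch could look like this (the collection and field names are taken from the question, the random _elementid generation is an assumption, and it issues one update per matched document):
db.arrayStore.find({ _id: { $regex: /^this_is-an-example|^fetcher|^finder/ } }).forEach(function (doc) {
    db.arrayStore.update(
        { _id: doc._id },
        { $push: { dataList: {
            $each: [ { _elementid: String(Math.floor(Math.random() * 1e15)), color: "green" } ],
            $position: 0   // insert at the front of the array, i.e. an "unshift"
        } } }
    );
});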
CouchDB: I haven't tried CouchDB with regex, but as far as I know it is possible. A regex operator is available per the CouchDB documentation:
{
"selector": {
"afieldname": {"$regex": "^A"}
}
}
Reference
http://docs.couchdb.org/en/2.0.0/api/database/find.html
Temporary couchdb view of documents with doc_id matching regular expression
You could use either of these, MongoDB or CouchDB. Lots of resources are available for MongoDB.
I'm making a database of theses/arguments. They are related to other arguments, which I've placed in an object with a dynamic, completely random key.
{
_id : "aeokejXMwGKvWzF5L",
text : "test",
relations : {
cF6iKAkDJg5eQGsgb : {
type : "interpretation",
originId : "uFEjssN2RgcrgiTjh",
ratings: [...]
}
}
}
Can I find this document if I only know what the value of type is? That is I want to do something like this:
db.theses.find({ relations['anything']: { type: "interpretation" } })
This could've been done easily with the positional operator, if relations had been an array. But then I cannot make changes to the objects in ratings, as mongo doesn't support those updates. I'm asking here to see if I can keep from having to change the database structure.
Though you seem to have arrived at this structure because of a problem with updates when using nested arrays, you have really just traded it for another problem by doing something else which is not really supported: there is no "wildcard" concept for searching unspecified keys using the standard query operators, which are the ones that can be optimised.
The only way you can really search for such data is by using JavaScript code on the server to traverse the keys using $where. This is clearly not a really good idea as it requires brute force evaluation rather than using useful things like an index, but it can be approached as follows:
db.theses.find(function() {
    var relations = this.relations;
    // check whether any of the dynamic keys holds an object with type "interpretation"
    return Object.keys(relations).some(function(rel) {
        return relations[rel].type == "interpretation";
    });
})
While this will return those objects from the collection that contain the required nested value, it must inspect each object in the collection in order to do the evaluation. This is why such evaluation should really only be used when paired with something that can directly use an index instead as a hard value from the object in the collection.
Still, the better solution is to consider remodelling the data to take advantage of indexes in search. Where it is necessary to update the "ratings" information, basically "flatten" the structure so that each "rating" element is the only array data instead:
{
"_id": "aeokejXMwGKvWzF5L",
"text": "test",
"relationsRatings": [
{
"relationId": "cF6iKAkDJg5eQGsgb",
"type": "interpretation",
"originId": "uFEjssN2RgcrgiTjh",
"ratingId": 1,
"ratingScore": 5
},
{
"relationId": "cF6iKAkDJg5eQGsgb",
"type": "interpretation",
"originId": "uFEjssN2RgcrgiTjh",
"ratingId": 2,
"ratingScore": 6
}
]
}
Now searching is of course quite simple:
db.theses.find({ "relationsRatings.type": "interpretation" })
And of course the positional $ operator can now be used with the flatter structure:
db.theses.update(
{ "relationsRatings.ratingId": 1 },
{ "$set": { "relationsRatings.$.ratingScore": 7 } }
)
Of course this means duplication of the "related" data for each "ratings" value, but this is generally the cost of being able to update by matched position, as only a single level of array nesting is supported.
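For instance, appending a new rating means repeating those relation fields in the new element (the values here are illustrative):
db.theses.update(
    { "_id": "aeokejXMwGKvWzF5L" },
    { "$push": { "relationsRatings": {
        "relationId": "cF6iKAkDJg5eQGsgb",
        "type": "interpretation",
        "originId": "uFEjssN2RgcrgiTjh",
        "ratingId": 3,
        "ratingScore": 4
    } } }
)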
So you can force the logic to match with the way you have it structured, but it is not a great idea to do so and will lead to performance problems. If however your main need here is to update the "ratings" information rather than just append to the inner list, then a flatter structure will be of greater benefit and of course be a lot faster to search.
So, my schema design requires that I use an embedded document format. While I recognize that what I'm about to ask could be made easier by redesigning the schema, the current design meets all of the other requirements in place so I'm doing my best to make it work.
Consider the following rudimentary schema:
{
"_id" : "01234ABCD,
"type" : "thing",
"resources" : {
foo : [
{
"herp" : "derp",
},
],
bar : [
{
"herp" : "derp",
},
{
"derp" : "herp"
}
]
},
}
Obviously the value that corresponds to the "resources" key is an embedded document. I would like to be able to efficiently calculate the count of keys in that document, and derive results based upon tests on that value. It's important to note that the length and content of the embedded doc is an unknown quantity - hence my reason for wanting to be able to query this meta. Being a complete js idiot, I've managed to cobble together the following query. For example, if I were to look for documents with more than 3 keys in the "resources" document:
db.coll.find({$where: function(){
    var total = 0;
    // count the keys of the embedded "resources" document
    for(var i in this['resources']){
        ++total;
        if(total > 3){
            return true;
        }
    }
    return false;
}})
As I'm pretty new to Mongo and terrible at js, I feel like there may be a smarter way to do this. I'm also very curious to hear opinions on whether or not this goes against the Mongo ethos a bit by not pushing this processing to the client. Any feedback or criticism of this approach and implementation are most welcome.
Thanks for reading.
You can use an aggregate pipeline to assemble metadata about the docs and then filter on them.
db.coll.aggregate([
{$project: {
// Compute a total count of the keys in the resources docs
keys: {$add: [{$size: '$resources.foo'}, {$size: '$resources.bar'}]},
// Project the original doc
doc: '$$ROOT'
}},
// Only include the docs that have more than 3 keys
{$match: {keys: {$gt: 3}}}
])
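If the set of keys under resources is genuinely unknown, and assuming a MongoDB version that supports $objectToArray (3.4.4 or later), the key count itself could be computed along the same lines; this is only a sketch:
db.coll.aggregate([
  {$project: {
    // Count the actual keys of the resources subdocument
    keys: {$size: {$objectToArray: '$resources'}},
    // Project the original doc
    doc: '$$ROOT'
  }},
  // Only include the docs that have more than 3 keys
  {$match: {keys: {$gt: 3}}}
])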
I have recently changed one of my fields from object to array of objects.
In my production I have only 14 documents with this field, so I decided to change those fields.
Is there any best practices to do that?
Since this is in production, I need to do it in the best way possible.
I have the document _ids of those documents, like ['xxx','yyy','zzz', ...]
my doc structure is like
_id:"xxx",option1:{"op1":"value1","op2":"value2"},option2:"some value"
and I want to change it like(converting object to array of objects)
_id:"xxx",option1:[{"op1":"value1","op2":"value2"},
{"op1":"value1","op2":"value2"}
],option2:"some value"
Can I use upsert? If so How to do it?
Since you need to create the new value of the field based on the old value, you should retrieve each document with a query like
db.collection.find({ "_id" : { "in" : [<array of _id's>] } })
then iterate over the results and $set the value of the field to its new value:
db.collection.find({ "_id" : { "in" : [<array of _id's>] } }).forEach(function(doc) {
oldVal = doc.option1
newVal = compute_newVal_from_oldVal(oldVal)
db.collection.update({ "_id" : doc._id }, { "$set" : { "option" : newVal } })
})
The document structure is rather schematic, so I omitted putting in actual code to create newVal from oldVal.
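Going by the sample documents in the question, where the old embedded object simply becomes an element of the new array, that function might be as simple as this (an assumption based on the example shown, not a requirement):
function compute_newVal_from_oldVal(oldVal) {
    // wrap the existing embedded object in an array
    return [ oldVal ];
}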
Since it is an embedded document type, you could use a $push query:
db.collectionname.update({_id:"xxx"},{$push:{option1:{"op1":"value1","op2":"value2"}}})
This will append a document to the embedded array. Hope it helps.
I am developing an application with mongodb and nodejs
I should also mention that I am new to both so please help me through this question
My database has a categories collection, and in each category I am storing products as subdocuments, just like below:
{
_id : ObjectId(),
name: String,
type: String,
products : [{
_id : ObjectId(),
name : String,
description : String,
price : String
}]
}
When it comes to store the orders in database the orders collection will be like this:
{
receiver : String,
status : String,
subOrders : [
{
products :[{
productId : String,
name : String,
price : String,
status : String
}],
tax : String,
total : String,
status : String,
orderNote : String
}
]
}
As you can see, we are storing the _id of products (which are subdocuments of categories) in orders.
When storing there is obviously no issue. When fetching this data, if we just need a limited set of fields like name or price, there is no issue either; but if later on we need some extra fields from products, like description, ... they are not stored in orders.
My question is this:
Is there any easy way to access other fields of products apart from looping through the whole categories collection in MongoDB? Namely, I need sample code for querying the description of a product when I only have its _id.
or our design and implementation was wrong and I have to re-design it from scratch and separate the products from categories into another collection?
Please don't post links to websites or weblogs that talk about MongoDB and its collection design in general, unless they focus on a very similar issue to mine.
thanks in advance
I'd assume that you'd want to return as many product descriptions as match the current list of products, so first note that there isn't a query that returns only the matching array elements: using $elemMatch you can return a specific element or the first match, but not every matching array element. With that caveat, $elemMatch can be used as a projection operator:
db.categories.find(
    { "products._id" : "PID1" },
    { "products" : { $elemMatch : { "_id" : "PID1" } } }
)
You'd definitely want to index the "products._id" field to achieve reasonable performance.
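For example, in the shell:
db.categories.createIndex({ "products._id" : 1 })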
You might consider instead creating a products collection where each document contains a category identifier, much like you would in a relational database. This is a common pattern in MongoDb when embedding doesn't make sense, or complicates queries and aggregations.
Assuming that is true:
You'll need to load the data from the second collection manually. There are no joins in MongoDB. You might consider using $in, which takes a list of values for a field and loads all matching documents.
Depending on the driver you're using to access MongoDb, you should be able to use the projection feature of find, which can limit the fields returned for a document to just those you've specified.
As product descriptions aren't likely to change frequently, you might also consider caching the values for a period on the client (a web server, for example).
db.products.find({ _id: { $in : [ 'PID1', 'PID2'] } }, { description : 1 })
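For completeness, a product document in such a separate collection might look roughly like this (field names are illustrative, mirroring the embedded schema from the question):
{
    _id : "PID1",
    categoryId : ObjectId("..."),
    name : "Some product",
    description : "A longer description",
    price : "9.99"
}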