Creating Meteor-friendly ids in Mongo? - mongodb

In meteor, when I create an item in a collection, the generated id for the item usually looks like:
"_id" : "vxqbpic8yLdc6Ehor"
However, when I insert a row directly in Mongo, its generated id looks like:
"_id" : ObjectId("549af35926cee46520611838")
Is there a way for me to insert data directly into mongo generating an id similar to the way meteor does, or is that something special with meteor? I'd be happy with just dropping the "ObjectId()" wrap around the value, if that's possible.

What Meteor actually does is create a random 17-character string made up of characters from the set 23456789ABCDEFGHJKLMNPQRSTWXYZabcdefghijkmnopqrstuvwxyz
The code that does this is https://github.com/meteor/meteor/blob/devel/packages/random/random.js, specifically the RandomGenerator.prototype.id(17) function.
So you can include that in your own code, or use any other piece of code that generates 17-character random strings from the characters listed above, and use its output as your ID.
In fact, any other random string would suffice, as long as it is effectively universally unique, which is what Meteor's implementation aims for.
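If you are inserting from code that can load Meteor's random package, here is a minimal sketch; the Items collection is a hypothetical name, not something from the question:
// Minimal sketch, assuming a Meteor app where the random package is available
// (in older Meteor versions Random is a global instead of an import).
import { Random } from 'meteor/random';

const id = Random.id(); // e.g. "vxqbpic8yLdc6Ehor"
Items.insert({ _id: id, note: 'Meteor-style id' }); // "Items" is a hypothetical collection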

Here is a function that uses Math.random() to generate Meteor-style ids:
const UNMISTAKABLE_CHARS = '23456789ABCDEFGHJKLMNPQRSTWXYZabcdefghijkmnopqrstuvwxyz';

function newMeteorId() {
  return [...Array(17).keys()]
    .map(() =>
      UNMISTAKABLE_CHARS[
        Math.floor(Math.random() * UNMISTAKABLE_CHARS.length)
      ]
    )
    .join('');
}
My understanding is that the ids generated using Meteor's random.js are more truly random, whereas this is not, but I believe for most use cases it will be sufficiently random.
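If you run that function in a recent mongo shell (one that supports the ES6 syntax used above), a minimal insert sketch might look like this; the items collection name is just a placeholder:
// Minimal sketch (mongo shell); "items" is a placeholder collection name.
db.items.insert({
  _id: newMeteorId(),
  name: 'inserted directly with a Meteor-style id'
});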

Related

Need to remove ObjectID() from _id using Meteor

I am retrieving records from Mongo using Meteor. I use the {{_id}} placeholder in my Meteor template to show the _id field of the record, but it renders this into my template...
ObjectID("54f27a1adfe0c11c824e04e9")
... and I would like just to have...
54f27a1adfe0c11c824e04e9
How do I do this?
Just add a global helper:
Template.registerHelper('formatId', function(data) {
  return (data && data._str) || data;
});
You can also do it like this with ES6 syntax:
Template.registerHelper('formatId', (id) => (id && id._str) || id);
And use it in any template like this:
{{formatId _id}}
This will work for both mongo-style ObjectIds and meteor-style random strings.
Since your documents are using the MongoDB ID format rather than the default Meteor ID format (simply a randomly generated string), you will need to use the MongoDB ObjectId.toString() function described here. But unfortunately, this simply results in your ObjectID being printed out as a string like ObjectID("54f27a1adfe0c11c824e04e9").
I would recommend creating a document ID template helper that cleans your document IDs before including them in the template. Since this issue is most likely related to all of your documents in all of your collections, I would further suggest creating a global template helper. This can be done like so:
Template.registerHelper('cleanDocumentID', function(objectID) {
  var objectIDString = objectID.toString();
  var cleanedString = objectIDString.slice(objectIDString.indexOf('"') + 1, -2);
  return cleanedString;
});
This template helper slices out just the actual object ID string from the string provided by the ObjectId.toString() function. You can use this template helper in your templates like so:
{{cleanDocumentID _id}}
I know that this is a lot messier than simply using the document ID in the template like {{_id}}, but it is necessary due to the fact that your documents have the MongoDB ObjectID-type document ID rather than the simple randomly generated string as used by Meteor by default.
If you would like to learn more about how to set your MongoDB collections to use randomly generated strings for document IDs and avoid this mess, check out this.
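For reference, Meteor collections can be told which id style to use via the idGeneration option of Mongo.Collection; a minimal sketch:
// Minimal sketch (Meteor): 'STRING' (the default) produces random-string _ids,
// while 'MONGO' produces ObjectIds.
const Posts = new Mongo.Collection('posts', { idGeneration: 'STRING' });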
In Blaze templates, just add this {{_id.valueOf}}, and you will have a string value of the actual Object Id.
Mongo is capable of storing many id types, including UUIDs and custom ones; the field is usually a self-describing object comprising the type and the id.
Your record is using the default Mongo format, as evidenced by the "ObjectId" prefix.
Try ObjectId("507f191e810c19729de860ea").str

Mongo find unique results

What's the easiest way to get all the documents from a collection that are unique based on a single field.
I know I can use db.collection.distinct() to get an array of all the distinct values of a field, but I want to get the first (or really any one) document for every distinct value of one field.
e.g. if the database contained:
{number:1, data:'Test 1'}
{number:1, data:'This is something else'}
{number:2, data:'I\'m bad at examples'}
{number:3, data:'I guess there\'s room for one more'}
it would return (based on number being unique):
{number:1, data:'Test 1'}
{number:2, data:'I\'m bad at examples'}
{number:3, data:'I guess there\'s room for one more'}
Edit: I should add that the server is running Mongo 2.0.8, so no aggregation, and there are more results than group will support.
Update to 2.4 and use aggregation :)
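For reference, a hedged sketch of the 2.4+ aggregation approach; the coll collection name is a placeholder, and you may want a $sort stage first if you care which document wins per group:
// Minimal sketch (MongoDB 2.4+): keep one document per distinct "number".
db.coll.aggregate([
  { $group: {
      _id: "$number",
      number: { $first: "$number" },
      data: { $first: "$data" }
  } }
]);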
If you really need to stick with the old version of MongoDB because of red tape, you can use MapReduce.
In MapReduce, the map function transforms each document of the collection into a new document and a distinctive key. The reduce function is used to merge documents with the same distinctive key into one.
Your map function would emit your documents as-is and with the number-field as unique key. It would look like this:
var mapFunction = function(document) {
  emit(document.number, document);
};
Your reduce-function receives arrays of documents with the same key, and is supposed to somehow turn them into one document. In this case it would just discard all but the first document with the same key:
var reduceFunction = function(key, documents) {
  return documents[0];
};
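You then pass both functions to mapReduce; a minimal sketch, where the coll and unique_by_number names are placeholders and the optional query pre-filters the input:
// Minimal sketch (mongo shell); output documents look like { _id: <number>, value: <doc> }.
db.coll.mapReduce(
  mapFunction,
  reduceFunction,
  {
    out: "unique_by_number",
    query: {} // e.g. { number: { $gte: 1 } } to limit which documents are mapped
  }
);
db.unique_by_number.find(); // one document per distinct number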
Unfortunately, MapReduce has some problems. It can't use indexes, so at least two JavaScript functions are executed for every single document in the collection (this can be limited by pre-excluding some documents with the query argument to the mapReduce command). When you have a large collection, this can take a while. You also can't fully control how the documents created by MapReduce are formed. They always have two fields: _id with the key, and value with the document you returned for the key.
MapReduce is also hard to debug and troubleshoot.
tl;dr: Update to 2.4

MongoDB: Speed of field ("inside record") search in comparison with speed of search in "global scope"

My question may not be very well formulated because I haven't worked with MongoDB yet, so I'd like to know one thing.
I have an object (record/document/whatever) in my database, in the global scope.
And that object holds a really huge array of other objects.
So, how does the speed of a search in the global scope compare with a search "inside" the object? Is it possible to index all the "inner" records?
Thanks in advance.
So, like this:
users: {
  ..
  user_maria: {
    age: "18",
    best_comments: {
      goodnight: "23rr",
      sleeptired: "dsf3"
      ..
    }
  },
  user_ben: {
    age: "18",
    best_comments: {
      one: "23rr",
      two: "dsf3"
      ..
    }
  }
}
So, how can I make it fast to find user_maria -> best_comments -> goodnight (i.e. index the contents of the "best_comments" objects)?
First of all, your example schema is very questionable. If you want to embed comments (which is a big if), you'd want to store them in an array for appropriate indexing. Also, post your schema in JSON format so we don't have to parse the whole name/value thing:
db.users {
  name: "maria",
  age: 18,
  best_comments: [
    {
      title: "goodnight",
      comment: "23rr"
    },
    {
      title: "sleeptired",
      comment: "dsf3"
    }
  ]
}
With that schema in mind you can put an index on name and best_comments.title, for example like so:
db.users.ensureIndex({name: 1, 'best_comments.title': 1})
Then, when you want the query you mentioned, simply do:
db.users.find({name: "maria", 'best_comments.title': "goodnight"})
And the database will hit the index and will return this document very fast.
Now, all that said, your schema is very questionable. You mention you want to query specific comments, but that requires either keeping comments in a separate collection or filtering the comments array app-side. Additionally, having huge, ever-growing embedded arrays in documents can become a problem. Documents have a 16 MB limit, and if documents keep growing in size Mongo will have to continuously move them on disk.
My advice:
Put comments in a separate collection (see the sketch after this list)
Either use one document per comment or make comment bucket documents (say, 100 comments per document)
Read up on Mongo/NoSQL schema design. You always query for root documents, so if you end up needing a small part of a large embedded structure you need to re-examine your schema or you'll be pumping huge documents over the connection and doing app-side filtering.
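A minimal sketch of the separate-collection approach; the comments collection and its field names are assumptions, not from the question:
// Minimal sketch (mongo shell); one document per comment.
db.comments.insert({
  user_name: "maria",
  title: "goodnight",
  comment: "23rr",
  created_at: new Date()
});
// Compound index so lookups by user and title can use the index.
db.comments.ensureIndex({ user_name: 1, title: 1 });
db.comments.find({ user_name: "maria", title: "goodnight" });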
I'm not sure I understand your question but it sounds like you have one record with many attributes.
record = {'attr1':1, 'attr2':2, etc.}
You can create an index on any single attribute or any combination of attributes. Also, you can create any number of indices on a single collection (MongoDB collection == MySQL table), whether or not each record in the collection has the attributes being indexed on.
edit: I don't know what you mean by 'global scope' within MongoDB. To insert any data, you must define a database and collection to insert that data into.
Database 'Example':
Collection 'table1':
records: {a:1,b:1,c:1}
{a:1,b:2,d:1}
{a:1,c:1,d:1}
indices:
ensureIndex({a:ascending, d:ascending}) <- this will index on a, then by d; the fact that record 1 doesn't have an attribute 'd' doesn't matter, and this will increase query performance
edit 2:
Well first of all, in your table here, you are assigning multiple values to the attribute "name" and "value". MongoDB will ignore/overwrite the original instantiations of them, so only the final ones will be included in the collection.
I think you need to reconsider your schema here. You're trying to use it as a series of key value pairs, and it is not specifically suited for this (if you really want key value pairs, check out Redis).
Check out: http://www.jonathanhui.com/mongodb-query

How do I describe a collection in Mongo?

So this is Day 3 of learning Mongo Db. I'm coming from the MySql universe...
A lot of times when I need to write a query for a MySql table I'm unfamiliar with, I would use the "desc" command - basically telling me what fields I should include in my query.
How would I do that for a Mongo db? I know, I know...I'm searching for a schema in a schema-less database. =) But how else would users know what fields to use in their queries?
Am I going at this the wrong way? Obviously I'm trying to use a MySql way of doing things in a Mongo db. What's the Mongo way?
Type the query below in the editor / mongo shell:
var col_list = db.emp.findOne();
for (var col in col_list) { print(col); }
The output will give you the names of the fields in the collection:
_id
name
salary
There is no good answer here. Because there is no schema, you can't 'describe' the collection. In many (most?) MongoDb applications, however, the schema is defined by the structure of the object hierarchy used in the writing application (java or c# or whatever), so you may be able to reflect over the object library to get that information. Otherwise there is a bit of trial and error.
This is my day 30 or something like that of playing around with MongoDB. Unfortunately, we have switched back to MySQL after working with MongoDB because of my company's current infrastructure issues. But having implemented the same model on both MongoDB and MySQL, I can clearly see the difference now.
Of course, there is a schema involved when dealing with schema-less databases like MongoDB, but the schema is dictated by the application, not the database. The database will shove in whatever it is given. As long as you know that admins are not secretly logging into Mongo and making changes, and all access to the database is controlled through some wrapper, the only place you should look for the schema is your model classes. For instance, in our Rails application, these are two of the models we have in Mongo:
class Consumer
  include MongoMapper::Document

  key :name, String
  key :phone_number, String
  one :address
end

class Address
  include MongoMapper::EmbeddedDocument

  key :street, String
  key :city, String
  key :state, String
  key :zip, String
  key :country, String
end
Now after switching to MySQL, our classes look like this,
class Consumer < ActiveRecord::Base
  has_one :address
end

class Address < ActiveRecord::Base
  belongs_to :consumer
end
Don't get fooled by the brevity of the classes. In the latter version with MySQL, the fields are being pulled from the database directly. In the former example, the fields are right there in front of our eyes.
With MongoDB, if we had to change a particular model, we simply add, remove, or modify the fields in the class itself and it works right off the bat. We don't have to worry about keeping the database tables/columns in-sync with the class structure. So if you're looking for the schema in MongoDB, look towards your application for answers and not the database.
Essentially I am saying exactly the same thing as @Chris Shain :)
While factually correct, you're all making this too complex. I think the OP just wants to know what his/her data looks like. If that's the case, you can just
db.collectionName.findOne()
This will show one document (aka. record) in the database in a pretty format.
I had this need too, Cavachon. So I created an open source tool called Variety which does exactly this: link
Hopefully you'll find it to be useful. Let me know if you have questions, or any issues using it.
Good luck!
AFAIK, there isn't a way and it is logical for it to be so.
MongoDB being schema-less allows a single collection to have documents with different fields. So there can't really be a description of a collection, like the description of a table in relational databases.
Though this is the case, most applications do maintain a schema for their collections and as said by Chris this is enforced by your application.
As such you wouldn't have to worry about first fetching the available keys to make a query. You can just ask MongoDB for any set of keys (i.e. the projection part of the query) or query on any set of keys. In both cases, if the keys specified exist on a document they are used; otherwise they aren't. You will not get any error.
For instance (in the mongo shell):
If this is a sample document in your people collection and all documents follow the same schema:
{
  name : "My Name",
  place : "My Place",
  city : "My City"
}
The following are perfectly valid queries :
These two will return the above document :
db.people.find({name : "My Name"})
db.people.find({name : "My Name"}, {name : 1, place :1})
This will not return anything, but will not raise an error either :
db.people.find({first_name : "My Name"})
This will match the above document, but you will have only the default "_id" property on the returned document.
db.people.find({name : "My Name"}, {first_name : 1, location :1})
print('\n--->', Object.getOwnPropertyNames(db.users.findOne())
  .toString()
  .replace(/,/g, '\n---> ') + '\n');
---> _id
---> firstName
---> lastName
---> email
---> password
---> terms
---> confirmed
---> userAgent
---> createdAt
This is an incomplete solution because it doesn't give you the exact types, but useful for a quick view.
const doc = db.collectionName.findOne();
for (const x in doc) {
  print(`${x}: ${typeof doc[x]}`);
}
If you're OK with running a Map / Reduce, you can gather all of the possible document fields.
Start with this post.
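A rough sketch of that approach, with the things collection and the output collection name as placeholders:
// Rough sketch (mongo shell); the map step emits every top-level field name it sees.
var mapFn = function () {
  for (var key in this) {
    emit(key, null);
  }
};
var reduceFn = function (key, values) {
  return null;
};
db.things.mapReduce(mapFn, reduceFn, { out: "things_keys" });
db.things_keys.distinct("_id"); // all field names observed across documents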
The only problem here is that you're running a Map / Reduce, which can be resource intensive. Instead, as others have suggested, you'll want to look at the code that writes the actual data.
Just because the database doesn't have a schema doesn't mean that there is no schema. Generally speaking the schema information will be in the code.
I wrote a small mongo shell script that may help you.
https://gist.github.com/hkasera/9386709
Let me know if it helps.
You can use the UI tool MongoDB Compass for MongoDB. It shows all the fields in the collection and also shows the variation of the data in them.
If you are using Node.js and want to get all the field names in an API request handler, this code works for me:
let arrayResult = [];
db.findOne().exec(function (err, docs) {
  if (err) {
    // show / return the error
    return callback(err);
  }
  const JSONobj = JSON.parse(JSON.stringify(docs));
  for (let key in JSONobj) {
    arrayResult.push(key);
  }
  return callback(null, arrayResult);
});
arrayResult will give you all the field/column names.
Output-
[
"_id",
"emp_id",
"emp_type",
"emp_status",
"emp_payment"
]
Hope this works for you!
Suppose you have a collection called people and you want to find its fields and their data types; you can use the query below:
function printSchema(obj) {
  for (var key in obj) {
    print(key, typeof obj[key]);
  }
}

var obj = db.people.findOne();
printSchema(obj);
The result lists each field name together with its JavaScript type.
You can use Object.keys, just as in JavaScript:
Object.keys(db.movies.findOne())

id autoincrement/sequence emulation with CassandraDB/MongoDB etc

I'm trying to build a small web system (URL shortening) using the NoSQL Cassandra DB, and the problem I'm stuck on is id auto-generation.
Has anyone already dealt with this problem?
Thanks.
P.S. UUIDs do not work for me; I need to use ALL the numbers from 0 to Long.MAX_VALUE (Java), so I need something that works exactly like an SQL sequence.
UPDATED:
The reason I'm not OK with GUID ids lies in the scope of my application.
My app has a URL-shortening part, and I need to make the URLs as short as possible. So I follow this approach: I take numbers starting from 0 and convert them to base64 strings. As a result I get URLs like mysite.com/QA (where QA is a base64 string).
This was very easy to implement with an SQL DB: I just took the auto-incremented ID, converted it to a URL, and was 100 percent sure the URL was unique.
Don't know about Cassandra, but with Mongo you can have an atomic sequence (it won't scale, but it will work the way it should, even in a sharded environment if the query includes the shard key).
It can be done by using the findandmodify command.
Let's consider we have a special collection named sequences and we want to have a sequence for post numbers (named postid), you could use code similar to this:
> db.runCommand( { "findandmodify" : "sequences",
"query" : { "name" : "postid"},
"update" : { $inc : { "id" : 1 }},
"new" : true } );
This command atomically returns the updated (new) document together with the operation status. The value field contains the returned document if the command completed successfully.
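A minimal sketch of using the returned value when creating a document; the posts collection name is an assumption:
// Minimal sketch (mongo shell); result.value holds the updated sequence document,
// e.g. { name: "postid", id: 42 }.
var result = db.runCommand({
  "findandmodify": "sequences",
  "query": { "name": "postid" },
  "update": { $inc: { "id": 1 } },
  "new": true
});
db.posts.insert({ _id: result.value.id, title: "Hello" });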
Autoincrement IDs inherently don't scale well as they need a single source to generate the numbers. This is why shardable/replicatable databases such as MongoDB use longer, GUID-like identifiers for objects. Why do you need LONG values so badly?
You might be able to do it using atomic increments, retaining the old value, but I'm not sure. This would be limited to single server setups only.
I'm not sure I follow you. What language are you using? Are we talking about UUIDs?
The following is how you generate UUIDs in some languages:
java.util.UUID.randomUUID(); // (Java) variant 2, version 4
import uuid  # (Python)
uuid.uuid1()  # version 1