Mongoose schema: 'unique' not being respected - mongodb

I'm trying to use Mongoose with this schema
var RestoSchema = new Schema({
  "qname" : {type: String, required: true, unique: true},
  ...
});
The problem is that this still permits new entries with an existing qname to be created in the database. From what I can see below, the index has been created, but it has no demonstrable effect when I use the .save method. What am I misunderstanding?
> db.restos.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "af.restos",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "qname" : 1
        },
        "ns" : "af.restos",
        "name" : "qname_1",
        "background" : true,
        "safe" : null
    }
]

The getIndexes output shows that the index on qname wasn't created as a unique index. Mongoose doesn't alter an existing index, so you'll have to manually drop the index and then restart your app so that Mongoose can re-create it as unique.
In the shell:
db.restos.dropIndex('qname_1')
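Note that even once the unique index is rebuilt, a duplicate save surfaces as a MongoDB driver error (E11000), not a Mongoose validation error, so the save callback still has to detect it. A minimal sketch — the helper name is my own, not a Mongoose API:

```javascript
// Detect a MongoDB duplicate-key error from a save/insert callback.
// (Helper name is illustrative, not part of Mongoose.)
function isDuplicateKeyError(err) {
  // 11000 is the duplicate-key error code; older servers also used 11001.
  return !!err && (err.code === 11000 || err.code === 11001);
}

// Usage sketch:
// resto.save(function (err) {
//   if (isDuplicateKeyError(err)) console.log('qname already taken');
// });
```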

ByronC's response here seems to have solved it for me. I even tried dropping the collection and recreating it, and it still didn't work until I used the Node.js npm package "mongoose-unique-validator", which you can learn more about here.
That said, the weird thing is that my unique index worked up until a schema change that implemented the "uuid" plugin, which I now use for setting the _id value. There may have been something else going on, but it is working for me now, so I'm moving on.
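For reference, wiring up that plugin is only a couple of lines. A minimal sketch, assuming the package is installed (npm install mongoose-unique-validator):

```javascript
const mongoose = require('mongoose');
const uniqueValidator = require('mongoose-unique-validator');

const RestoSchema = new mongoose.Schema({
  qname: { type: String, required: true, unique: true }
});

// The plugin turns unique violations into regular Mongoose validation
// errors instead of raw E11000 driver errors, so they can be handled
// the same way as required/type failures.
RestoSchema.plugin(uniqueValidator);
```

Note the plugin performs an extra find before saving, so it is a convenience for error handling, not a replacement for the unique index itself.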

Why can't Mongoose create an index in MongoDB Atlas?

I have a Mongoose Schema which contains a field with a certain index:
const reportSchema = new mongoose.Schema({
  coords: {
    type: [Number],
    required: true,
    index: '2dsphere'
  },
  …
});
It works well on my local machine: when I connect to MongoDB through the shell, I get this output for db.reports.getIndexes():
[
    {
        "v" : 2,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "weatherApp.reports"
    },
    {
        "v" : 2,
        "key" : {
            "coords" : "2dsphere"
        },
        "name" : "coords_2dsphere",
        "ns" : "weatherApp.reports",
        "background" : true,
        "2dsphereIndexVersion" : 3
    }
]
Then I deploy this app to Heroku and connect to MongoDB Atlas instead of my local database. Saving and retrieving data works well, but the indexes were not created (only the default one exists):
[
    {
        "v" : 2,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "weatherApp.reports"
    }
]
What may be causing this problem? Atlas allows creating indexes through the web GUI, and that works fine, as does creating indexes from the shell. But Mongoose fails at this operation for some reason.
I had the same issue when I was on Mongoose v5.0.16. However, since I updated to v5.3.6 it is now creating the (compound) indexes for me on MongoDB Atlas. (I just wrote a sample app with both versions to verify this is the case.)
I'm not sure which version fixed the issue, but it's somewhere between v5.0.16 and v5.3.6, where v5.3.6 is working.
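If upgrading alone doesn't help, Mongoose (since 5.2) also exposes Model.syncIndexes(), which drops out-of-sync indexes and builds the ones declared in the schema. A minimal sketch, assuming a hypothetical Report model built from the reportSchema above and an Atlas connection string in ATLAS_URI:

```javascript
const mongoose = require('mongoose');

// Hypothetical model compiled from the schema in the question.
const Report = mongoose.model('Report', reportSchema);

mongoose.connect(process.env.ATLAS_URI)
  .then(() => Report.syncIndexes())   // builds coords_2dsphere if missing
  .then(() => console.log('indexes in sync'))
  .catch(console.error);
```

This forces index creation explicitly instead of relying on the automatic index build that runs when the model first connects.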

Mongo TTL not removing documents

I'm toying with auto-expiring documents from a collection. The Java application creates an index per the MongoDB TTL docs.
coll.createIndex(new Document("Expires", 1).append("expireAfterSeconds", 0));
When inserting my document, I set the Expires field to a future Date. For this testing I've been setting it 1 minute in the future.
I've verified that the date exists properly and the index appears to be correct, and I've waited 10+ minutes (even though the TTL monitor runs every sixty seconds), but the document remains.
{
    "_id" : ObjectId("569847baf7794c44b8f2f17b"),
    // my data
    "Created" : ISODate("2016-01-15T02:02:30.116Z"),
    "Expires" : ISODate("2016-01-15T02:03:30.922Z")
}
What else could I have missed? Here are the indexes:
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "prism.prismEventRecord"
    },
    {
        "v" : 1,
        "key" : {
            "Location.X" : 1,
            "Location.Z" : 1,
            "Location.Y" : 1,
            "Created" : -1
        },
        "name" : "Location.X_1_Location.Z_1_Location.Y_1_Created_-1",
        "ns" : "prism.prismEventRecord"
    },
    {
        "v" : 1,
        "key" : {
            "Created" : -1,
            "EventName" : 1
        },
        "name" : "Created_-1_EventName_1",
        "ns" : "prism.prismEventRecord"
    },
    {
        "v" : 1,
        "key" : {
            "Expires" : 1,
            "expireAfterSeconds" : 0
        },
        "name" : "Expires_1_expireAfterSeconds_0",
        "ns" : "prism.prismEventRecord"
    }
]
I wonder if it makes sense to take the Java Mongo client out of the picture for a minute.
I have created a similar collection, and made the following call in the shell.
db.weblog.createIndex({"expireAt":1},{expireAfterSeconds:0})
When I do, and then I call db.weblog.getIndexes(), this is what the expiring index looks like:
{
    "v" : 1,
    "key" : {
        "expireAt" : 1
    },
    "name" : "expireAt_1",
    "ns" : "logs.weblog",
    "expireAfterSeconds" : 0
}
I think your Java call may be "appending" a new column to your index key (not setting the property you were hoping to set). Take a look; your index def looks like this:
{
    "v" : 1,
    "key" : {
        "Expires" : 1,
        "expireAfterSeconds" : 0
    },
    "name" : "Expires_1_expireAfterSeconds_0",
    "ns" : "prism.prismEventRecord"
}
See what I mean? "expireAfterSeconds" is part of the key, not a property. Now, how do you do THAT with the Java driver? Umm... don't yell at me, but I'm a C# guy. I found a post or two that punt on the question of TTL indexes from the Java client, but they're old-ish.
Maybe the Java client has gotten better and now supports options? Hopefully, knowing what the problem is gives a guy with your stellar coding skills enough to take it from here ;-)
EDIT: Java driver code (untested):
IndexOptions options = new IndexOptions()
    .name("whocareswhatwecallthisindex")
    .expireAfter(1L, TimeUnit.DAYS);
coll.createIndex(new Document("Expires", 1), options);
EDIT2: C# driver code to create the same index:
var optionsIdx = new CreateIndexOptions() { ExpireAfter = new TimeSpan(0)};
await coll.Indexes.CreateOneAsync(Builders<MyObject>.IndexKeys.Ascending("expiresAt"), optionsIdx);
In case you run into this question as a Golang user, you have two choices:
1. Use structs: this works when you know the payload structure, and is documented extensively.
2. Introduce an actual date object into your JSON payload: only use this if your payload structure absolutely can't be known ahead of time.
In my case, the source data comes from a system whose structure is a black box. I first tried to introduce an ISO-compliant date string whose format matched Mongo's, but it was still read as text. This led me to deduce that the driver had not been instructed to format it properly.
I believe a deep dive into the specifics of Mongo's Golang driver to manipulate this process could only give us a short-lived solution (since the implementation details are subject to change). So instead, I suggest introducing a real date property into your payload and letting the driver adapt it for Mongo (adapting the principle in this snippet to your own code structure):
err = json.Unmarshal([]byte(objectJsonString.String()), &objectJson)
if err == nil && objectJson != nil {
    // Must introduce the date as a native time.Time, or it is stored as text
    objectJson["createdAt"] = time.Now()
    // Your other code
    insertResult, err := collection.InsertOne(context.TODO(), objectJson)
}
So basically, create your JSON or BSON object normally with the rest of the data, and then populate the TTL field with a real date value rather than attempting to have your JSON parser do that work for you.
I look forward to corrections, just please be civil and I'll make sure to update and improve this answer with any observations made.

In a MongoDB index, what do the options "safe" and "force" mean?

I'm looking at our Mongo (2.4.10) indexes using collection.getIndexes(). I see options that aren't discussed in any doc I can find, specifically "safe" and "force". For example:
{
    "v" : 1,
    "name" : "status_1",
    "key" : {
        "status" : NumberLong(1)
    },
    "ns" : "db.mycoll",
    "force" : true,
    "background" : true
}
What do "force" and "safe" mean?
The options you mention ("force" and "safe") are not valid index options for MongoDB 2.4.
They likely resulted from a developer accidentally creating an index with these as index options (perhaps having intended those fields to be part of the index key?).
You can reproduce this outcome in the mongo shell:
> db.foo.ensureIndex({foo: true}, {force: true, safe: true})
{
    "createdCollectionAutomatically" : true,
    "numIndexesBefore" : 1,
    "numIndexesAfter" : 2,
    "ok" : 1
}
> db.foo.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "stack.foo"
    },
    {
        "v" : 1,
        "key" : {
            "foo" : true
        },
        "name" : "foo_true",
        "ns" : "stack.foo",
        "force" : true,
        "safe" : true
    }
]
>
Unknown index options are ignored (at least as of MongoDB 3.0), so while this is confusing, the impact is currently benign. Unfortunately, the only way to remove the invalid options is to drop and rebuild the affected index(es), as there is no API for changing an existing index.
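For example, a minimal mongo shell sketch of that rebuild, using the status_1 index from the question:

```javascript
// mongo shell: drop the index carrying the bogus options, then recreate it
// with only valid options (keys and options are unchanged otherwise).
db.mycoll.dropIndex("status_1")
db.mycoll.ensureIndex({ status: 1 }, { background: true })
```

Rebuilding on a large collection takes time, so schedule this during a quiet period even with background: true.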
It's possible that index option validation may be added in a future MongoDB release, but that would be noted as a compatibility change in the release notes. For example, MongoDB 2.6 has several index changes, including better field name validation and enforcement of index key length.

convert default index to hashed mongodb

I'd like to shard my existing users collection. The users collection already has the default single ascending index {"_id" : 1}. I want to convert this index to "hashed" and shard based on this hashed key, according to the documentation.
I've tried the "brute-force" solution of deleting the default index and then recreating it with the "hashed" parameter, but it doesn't allow me to do that.
UPDATE: I've also tried db.users.ensureIndex({_id: "hashed"}). But after I run this command nothing really happens.
switched to db bg_shard_single
mongos> db.users.ensureIndex({_id:"hashed"});
mongos> db.users.getIndexes();
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "bg_shard_single.users",
        "name" : "_id_"
    }
]
It does not allow you to do so because you cannot drop or modify the default index on the _id field. Instead, you can do something like db.collection.ensureIndex( { _id: "hashed" } ) to create an additional hashed index on that field.
Then you will see "name" : "_id_hashed" as your hashed index, which you can use for sharding purposes later.
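Once the hashed index exists, sharding on it is a separate step. A minimal sketch run against a mongos shell, using the database and collection names from the question:

```javascript
// Enable sharding for the database, then shard the collection
// on the hashed _id key created above.
sh.enableSharding("bg_shard_single")
sh.shardCollection("bg_shard_single.users", { _id: "hashed" })
```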
I've found what the problem was. Apparently, I was using an old version of MongoDB; that's why mongos wouldn't let me index '_id' as "hashed". After I updated to 2.4.8, running the command as @Salvador-Dali mentions produces "name" : "_id_hashed".

Calling ensureIndex with compound key results in _id field in index object

When I call ensureIndex from the mongo shell on a collection for a compound index an _id field of type ObjectId is auto-generated in the index object.
> db.system.indexes.find();
{ "name" : "_id_", "ns" : "database.coll", "key" : { "_id" : 1 } }
{ "_id" : ObjectId("4ea78d66413e9b6a64c3e941"), "ns" : "database.coll", "key" : { "a.b" : 1, "a.c" : 1 }, "name" : "a.b_1_a.c_1" }
This makes intuitive sense, as all documents in a collection need an _id field (even system.indexes, right?), but when I check the indexes generated by Morphia's ensureIndex call for the same collection *there is no _id property*.
Looking at Morphia's source code, it's clear that it's calling the same code the shell uses, but for some reason (whether it's the fact that I'm creating a compound index, or indexing an embedded document, or both) they produce different results. Can anyone explain this behavior to me?
Not exactly sure how you managed to get an _id field in the indexes collection, but both shell- and Morphia-originated ensureIndex calls for compound indexes do not put an _id field in the index object:
> db.test.ensureIndex({'a.b':1, 'a.c':1})
> db.system.indexes.find({})
{ "v" : 1, "key" : { "_id" : 1 }, "ns" : "test.test", "name" : "_id_" }
{ "v" : 1, "key" : { "a.b" : 1, "a.c" : 1 }, "ns" : "test.test", "name" : "a.b_1_a.c_1" }
>
Upgrade to 2.x if you're running an older version to avoid running into now-resolved issues. And judging from your output (your index documents have no "v" field), you are running 1.8 or earlier.