mongodb TTL not working

I executed this command to set a TTL index in MongoDB:
db.sessions.ensureIndex({'expiration': 1}, {"expireAfterSeconds": 30})
but after 4 days I found that these documents had not been removed.
I have confirmed that the command and the document's field are correct.
I don't know how to fix it.
After executing db.serverStatus(), I got
localTime is 2015-01-16 11:03:05.554+08:00
and the following is some info about my collection:
db.sessions.getIndexes()
{
    "0" : {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "meta.sessions"
    },
    "1" : {
        "v" : 1,
        "key" : {
            "expiration" : 1
        },
        "name" : "expiration_1",
        "ns" : "meta.sessions",
        "expireAfterSeconds" : 30
    }
}
db.sessions.find()
/* 0 */
{
    "_id" : ObjectId("54b4c2e0f840238ca1436788"),
    "data" : ...,
    "expiration" : ISODate("2015-01-13T16:02:33.947+08:00"),
    "sid" : "..."
}
/* 1 */
{
    "_id" : ObjectId("54b4c333f840238ca1436789"),
    "data" : ...,
    "expiration" : ISODate("2015-01-13T16:06:56.942+08:00"),
    "sid" : ".."
}
/* ... */

To expire data from a collection (tested in version 3.2), you must create a TTL index:
db.my_collection.createIndex( { "createdAt": 1 }, { expireAfterSeconds: 3600 } )
After that, every document that you insert into this collection must have a "createdAt" field set to the current date:
db.my_collection.insert( {
    "createdAt": new Date(),
    "dataExample": 2,
    "Message": "Success!"
} )
The document will be removed once the time createdAt + expireAfterSeconds has passed.
Note: the background task in MongoDB that removes expired documents runs, by default, once every 60 seconds.
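If you want to confirm that this background task is actually doing work, one option (a sketch only; the metrics.ttl counters come from the serverStatus output and their exact availability depends on your server version) is:
// Sketch: inspect the TTL monitor's activity counters.
// "passes" is how many times the TTL thread has run; "deletedDocuments"
// is how many expired documents it has removed so far.
var ttlStats = db.serverStatus().metrics.ttl;
print("TTL passes: " + ttlStats.passes);
print("TTL deleted documents: " + ttlStats.deletedDocuments);
Running this twice, a minute or so apart, should show the passes counter increasing if the monitor is running.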

When you create a TTL index in the foreground (as you did), MongoDB begins removing expired documents as soon as the index finishes building. It is best to tail -f mongod.log during index creation to track the progress. You may wish to drop and recreate the index if something went wrong.
If the index was created in the background, the TTL thread can begin deleting documents while the index is still building.
The TTL thread that removes expired documents runs every 60 seconds.
If you created the index on a replica that was taken out of the replica set and is running in standalone mode, the index WILL be created but documents will NOT be removed until you rejoin the replica set (or remove the replica set configuration). If this is the case you may see something similar to this in mongod.log:
** WARNING: mongod started without --replSet yet 1 documents are
** present in local.system.replset
** Restart with --replSet unless you are doing maintenance and no other
** clients are connected.
** The TTL collection monitor will not start because of this.
** For more info see http://dochub.mongodb.org/core/ttlcollections
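A related check, if you suspect the TTL monitor itself has been switched off (for example on a node started for maintenance): the server exposes a ttlMonitorEnabled parameter. A minimal sketch, assuming you have the required privileges:
// Sketch: check whether the TTL monitor thread is enabled on this node.
db.adminCommand({ getParameter: 1, ttlMonitorEnabled: 1 })
// A reply of { "ttlMonitorEnabled" : true, "ok" : 1 } means the monitor is on.
// If it was disabled, it can be turned back on with:
db.adminCommand({ setParameter: 1, ttlMonitorEnabled: true })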

Related

Cursor with index on a large collection seems not to be detected

I'm running a standalone MongoDB 3.6 Docker container. I have a collection that contains very small documents, with a very simple index on the "Date" field, descending:
> db.collection.getIndexes()
[
    {
        "v" : 2,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "myApp.collection"
    },
    {
        "v" : 2,
        "key" : {
            "Date" : -1
        },
        "name" : "Date_-1",
        "ns" : "myApp.collection",
        "sparse" : true
    }
]
I'm using the MongoCSharpDriver to perform a query from which I get a cursor, and I'm getting the following error:
Command find failed: Executor error during find command :: caused by :: errmsg: "Sort operation used more than the maximum 33554432 bytes of RAM. Add an index, or specify a smaller limit."
I'm specifying a BatchSize of 100 documents; however, I'm not setting a limit on the records to be returned, since I think that will be handled by the cursor itself (so both Skip and Limit are set to zero).
My question is: could it be that the actual index is already greater than 32 MB? If so, do I have to extend the RAM allocated for this? Otherwise, how do you solve this kind of issue? Note that I have 46132 documents right now, each with a size of approx. 2.52 KB.
You don't need to extend the RAM. Set allowDiskUse to true: instead of aborting, the operation will continue using disk storage rather than RAM.
db.getCollection('movies').aggregate(
    [
        { $sort : { year : 1 } }
    ],
    { allowDiskUse: true }
)
Also, I can't see your query, but you said you have an index: before sorting like that, you always need to create an index on the sort field.
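If you want to see whether the sort is actually backed by the Date index (rather than the blocking in-memory SORT stage that triggers the 32 MB limit), a quick check from the shell is to look at the query plan; the collection and field names below are the ones from the question, and the rest is only a sketch:
// Sketch: inspect the winning plan for the sorted query.
// An index-backed sort walks an IXSCAN on { Date: -1 }; a blocking
// in-memory sort shows a SORT stage instead.
db.collection.find({}).sort({ Date: -1 }).explain("executionStats")
One thing worth noting about this particular index: it is declared sparse, and MongoDB will generally not use a sparse index to satisfy a sort over the whole collection (documents missing the field would be skipped) unless the index is explicitly hinted, which could explain the fallback to an in-memory sort.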

Unable to recreate a mongo index with the same name

I have an index which I need to modify:
{
    "v" : 1,
    "key" : {
        "expectedDateTime" : 1
    },
    "name" : "expectedDateTime_1",
    "ns" : "expectation.expectation_data",
    "expireAfterSeconds" : 43200
}
The expireAfterSeconds is incorrect and needs to be changed to 432000.
When I dropped the index it seemed fine:
db.expectation_data.dropIndex({"expectedDateTime":1})
{ "nIndexesWas" : 4, "ok" : 1 }
getIndexes() shows that the index no longer exists.
Then when I try to recreate the index I get this error:
db.expectation_data.createIndex({"expectedDateTime":1},
    {expireAfterSeconds:432000, name:"expectedDateTime"});
{
    "ok" : 0,
    "errmsg" : "Index with name: expectedDateTime already exists with different options",
    "code" : 85
}
Now on running getIndexes(), I see that the index seems to have been recreated with the old TTL. I tried repeating this process multiple times, but ran into the same issue again and again.
I cannot find any documentation which says that I cannot recreate an index of the same name. If I use a different name it works fine:
db.expectation_data.createIndex({"expectedDateTime":1}, {expireAfterSeconds:432000});
.
.
>db.expectation_data.getIndexes()
.
.
{
    "v" : 1,
    "key" : {
        "expectedDateTime" : 1
    },
    "name" : "expectedDateTime_1",
    "ns" : "expectation.expectation_data",
    "expireAfterSeconds" : 432000
}
Is there any restriction on recreating indexes with the same name?
It looks like the index is being recreated automatically after deletion. Make sure that no applications using ensureIndex or #Index annotations are connecting to the database.
As it turned out, this was due to an #Index annotation on the entity with the old timeout. The application was still running when I made the index changes.
When I stopped the application, I was able to create the index as I originally expected.
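As a side note, depending on the server version you may not need to drop and recreate the index at all: the collMod command can change expireAfterSeconds on an existing TTL index in place. A sketch using the names from this question:
// Sketch: change the TTL of an existing index without dropping it.
// keyPattern must match the existing index definition exactly.
db.runCommand({
    collMod: "expectation_data",
    index: {
        keyPattern: { expectedDateTime: 1 },
        expireAfterSeconds: 432000
    }
})
// The reply should report something like
// { "expireAfterSeconds_old" : 43200, "expireAfterSeconds_new" : 432000, "ok" : 1 }
This avoids the window in which the collection has no TTL index at all, and sidesteps the race with an application that recreates the index.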

Make _id index unique in a mongodb collection

I recently saw this error in a Mongo 2.6 replica set:
WARNING: the collection 'mydatabase.somecollection' lacks a unique index on _id. This index is needed for replication to function properly.
I assumed the _id index would be unique by default, but I am trying to check / set it. getIndexes() shows there is no unique option set.
> db.somecollection.getIndexes()[0]
{
    "v" : 1,
    "key" : {
        "_id" : 1
    },
    "ns" : "mydatabase.somecollection",
    "name" : "_id_"
}
> db.somecollection.ensureIndex({"_id":1},{unique:true})
{ "numIndexesBefore" : 3, "note" : "all indexes already exist", "ok" : 1 }
> db.somecollection.getIndexes()[0]
{
    "v" : 1,
    "key" : {
        "_id" : 1
    },
    "ns" : "mydatabase.somecollection",
    "name" : "_id_"
}
I have tried .validate(true):
...
    "valid" : true,
    "errors" : [ ],
    "ok" : 1
}
and also .reIndex(), which runs without error. I am unable to remove the _id index in order to recreate it - how can I set the index to unique, or what should I do to ensure data consistency in the RS? Note the RS was upgraded as per the upgrade instructions from 2.2 --> 2.4 --> 2.6. I have found MongoDB - Collection lacks a unique index on _id but there is nothing in there that resolves my issue.
I have seen this in the past when a new member was added to the replica set with a different feature compatibility version. Run db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } ) on all of the nodes in your replica set, and if one is different, stop replication on that node, change the compatibility version, and then re-add it to the replica set.
So it turns out that the error came up when a new member was added to the existing replica set, and was only shown on that member. If I connect to the database and try to add a duplicate _id, I get the usual E11000 duplicate key error index: ... even though getIndexes() doesn't indicate the unique constraint on the index (assuming it is implicit).
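For anyone who wants to reproduce that check, a minimal sketch (the _id value here is arbitrary and purely for testing):
// Sketch: confirm the implicit uniqueness of _id by inserting the same value twice.
db.somecollection.insert({ _id: "uniqueness-check", x: 1 })  // first insert succeeds
db.somecollection.insert({ _id: "uniqueness-check", x: 2 })  // second fails with E11000 duplicate key
// Remove the test document afterwards:
db.somecollection.remove({ _id: "uniqueness-check" })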

How to update _id in Mongodb Replica Set configuration?

I had 5 mongo members in a replica set. Then I deleted 3 of them.
How can I change the "_id" in the remaining members to the values "0", "1" and "2"?
rs.conf()
{
    "_id" : "rs0",
    "version" : 151261,
    "members" : [
        {
            "_id" : 3,
            "host" : "mongodb3:27017"
        },
        {
            "_id" : 4,
            "host" : "mongodb4:27017"
        },
        {
            "_id" : 5,
            "host" : "ok:27017",
            "arbiterOnly" : true
        }
    ]
}
Directly editing the replica set configuration may not be an elegant way. Instead, use the rs.remove(hostname) command to remove a member from the replica set; this way you do not have to bring down the primary during reconfiguration, and ascending "_id" values will be assigned automatically.
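For reference, a sketch of that rs.remove()/rs.add() sequence, run against the primary; the host name is taken from the rs.conf() output above, and the server assigns the new "_id" values itself when the member is re-added:
// Sketch: remove a member and add it back so the server reassigns its "_id".
rs.remove("mongodb4:27017")
rs.add("mongodb4:27017")
rs.conf()   // inspect the resulting member list and "_id" values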
Try dropping the slaves collection as described here: http://docs.mongodb.org/manual/tutorial/troubleshoot-replica-sets/#duplicate-key-error-on-local-slaves
The master will recreate the collection the next time it is required.
You could try this in the Mongo console:
conf = rs.conf()
conf.members[0]._id = 0
conf.members[1]._id = 1
conf.members[2]._id = 2
rs.reconfig(conf)

mongodb TTL not removing documents

I have a simple schema like:
{
    _id: String,       // auto generated
    key: String,       // there is a unique index on this field
    timestamp: Date()  // set to current time
}
Then I set the TTL index like so:
db.sess.ensureIndex( { "timestamp": 1 }, { expireAfterSeconds: 3600 } )
I expect the record to be removed after 1 hour, but it is never removed.
I flipped on verbose logging and I see the TTLMonitor running:
Tue Sep 10 10:42:37.081 [TTLMonitor] TTL: { timestamp: 1.0 } { timestamp: { $lt: new Date(1378823557081) } }
Tue Sep 10 10:42:37.081 [TTLMonitor] TTL deleted: 0
When I run that query myself I see all my expired records coming back:
db.sess.find({ timestamp: { $lt: new Date(1378823557081) }})
...
Any ideas? I'm stumped.
EDIT - Example document below
{ "_id" : "3971446b45e640fdb30ebb3d58663807", "key" : "6XTHYKG7XBTQE9MJH8", "timestamp" : ISODate("2013-09-09T18:54:28Z") }
Can you show us what the inserted records actually look like?
How long is "never"? Because there's a big warning:
Warning: The TTL index does not guarantee that expired data will be deleted immediately. There may be a delay between the time a document expires and the time that MongoDB removes the document from the database.
Does the timestamp field have an index already?
This was my issue:
I had created the index incorrectly, like this:
{
    "v" : 1,
    "key" : {
        "columnName" : 1,
        "expireAfterSeconds" : 172800
    },
    "name" : "columnName_1_expireAfterSeconds_172800",
    "ns" : "dbName.collectionName"
}
When it should have been this (expireAfterSeconds is a top-level property):
{
    "v" : 1,
    "key" : {
        "columnName" : 1
    },
    "expireAfterSeconds" : 172800,
    "name" : "columnName_1_expireAfterSeconds_172800",
    "ns" : "dbName.collectionName"
}
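In other words, the fix is to drop the wrongly defined index and recreate it with expireAfterSeconds passed as an index option rather than as part of the key document. A sketch, keeping the placeholder names from the answer above:
// Sketch: recreate the TTL index with expireAfterSeconds as an index *option*,
// not as a field inside the key document.
db.collectionName.dropIndex("columnName_1_expireAfterSeconds_172800")
db.collectionName.createIndex({ columnName: 1 }, { expireAfterSeconds: 172800 })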