MongoDB TTL index behaves unexpectedly

We currently have a TTL index on a timestamp field in MongoDB.
Originally the index had expireAfterSeconds: 604800 seconds -> 1 week of documents.
A few weeks ago we changed this to a TTL of 31536000 seconds -> 365 days.
But MongoDB still removes documents after 7 days instead of 365.
MongoDB version: 4.2.18
MongoDB running hosted Atlas / AWS
Indexes in the database:
[
  {
    v: 2,
    key: { _id: 1 },
    name: '_id_',
    ns: '<DB>.<COLLECTION>'
  },
  {
    v: 2,
    key: { id: 1, ts: 1 },
    name: 'id_1_ts_1',
    ns: '<DB>.<COLLECTION>'
  },
  {
    v: 2,
    key: { ts: 1 },
    name: 'ts_1',
    ns: '<DB>.<COLLECTION>',
    expireAfterSeconds: 31536000
  }
]
We have an environment variable that sets the TTL value on each start of the application;
the code that creates the TTL index looks like this:
try {
  await collection.createIndex(
    { ts: 1 },
    {
      expireAfterSeconds: ENV.<Number>
    }
  );
} catch (e) {
  if ((e as any).codeName == "IndexOptionsConflict") {
    await collection.dropIndex("ts_1");
    await collection.createIndex(
      { ts: 1 },
      {
        expireAfterSeconds: ENV.<Number>
      }
    );
  }
}
As seen from the indexes, the TTL index should remove documents after one year, shouldn't it?
Why does it behave like this? Any insights?

I would look into the collMod command, which changes expireAfterSeconds on an existing TTL index in place instead of dropping and recreating it.
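As a minimal sketch in mongosh, assuming the collection and TTL value from the question above, collMod avoids the drop/recreate window entirely:

```javascript
// Update the TTL of the existing ts_1 index in place.
// "<COLLECTION>" is the placeholder collection name from the question.
db.runCommand({
  collMod: "<COLLECTION>",
  index: {
    keyPattern: { ts: 1 },
    expireAfterSeconds: 31536000  // 365 days
  }
})
```

The IndexOptionsConflict that the try/catch in the question works around cannot arise here, because the index itself is never dropped.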

Related

Creation of Unique Index not working using Mongo shell

I have created a small MongoDB database and wanted to make the username field unique, so I used the createIndex() command to create an index on that field with the UNIQUE property.
I tried creating the unique index using the command below in mongosh.
db.users.createIndex({'username':'text'},{unqiue:true,dropDups: true})
To check the current indexes, I used the getIndexes() command. Below is the output:
newdb> db.users.getIndexes()
[
  { v: 2, key: { _id: 1 }, name: '_id_' },
  {
    v: 2,
    key: { _fts: 'text', _ftsx: 1 },
    name: 'username_text',
    weights: { username: 1 },
    default_language: 'english',
    language_override: 'language',
    textIndexVersion: 3
  }
]
Now the index is created, so for confirmation I checked the same in MongoDB Compass, but I cannot see the UNIQUE property assigned to my newly created index. Please refer to the screenshot below.
MongoDB Screenshot
I tried deleting the old index, as it was not showing the UNIQUE property, and created it again using the MongoDB Compass GUI; now I can see the UNIQUE property assigned to the index.
MongoDB Screenshot 2
And below is the output of the getIndexes() command in mongosh.
newdb> db.users.getIndexes()
[
  { v: 2, key: { _id: 1 }, name: '_id_' },
  {
    v: 2,
    key: { _fts: 'text', _ftsx: 1 },
    name: 'username_text',
    unique: true,
    sparse: false,
    weights: { username: 1 },
    default_language: 'english',
    language_override: 'language',
    textIndexVersion: 3
  }
]
I tried searching for similar topics but didn't find anything related. Is there anything I am missing or doing wrong here?
I had misspelled the property unique as unqiue, which led to this issue.
I tried again with the correct spelling, and it is working now.
Sorry for a dumb question.
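For reference, a minimal sketch of the corrected command. Note that a plain ascending index is enough for a uniqueness constraint; a 'text' index is only needed for full-text search, and the dropDups option was removed in MongoDB 3.0:

```javascript
// Unique ascending index on username (1 = ascending).
db.users.createIndex({ username: 1 }, { unique: true })
```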

MongoDB balancing very slow

We are experiencing very slow balancing in our cluster; our logs show that the chunk migration barely makes progress:
2016-01-25T22:21:15.907-0600 I SHARDING [conn142] moveChunk data transfer progress: { active: true, ns: "music.fav_artist_score", from: "rs1/MONGODB01-SRV:27017,MONGODB05-SRV:27017", min: { _id.u: -9159729253516193447 }, max: { _id.u: -9157438072680830290 }, shardKeyPattern: { _id.u: "hashed" }, state: "clone", counts: { cloned: 128, clonedBytes: 12419, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
2016-01-25T22:21:16.932-0600 I SHARDING [conn142] moveChunk data transfer progress: { active: true, ns: "music.fav_artist_score", from: "rs1/MONGODB01-SRV:27017,MONGODB05-SRV:27017", min: { _id.u: -9159729253516193447 }, max: { _id.u: -9157438072680830290 }, shardKeyPattern: { _id.u: "hashed" }, state: "clone", counts: { cloned: 128, clonedBytes: 12419, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
2016-01-25T22:21:17.957-0600 I SHARDING [conn142] moveChunk data transfer progress: { active: true, ns: "music.fav_artist_score", from: "rs1/MONGODB01-SRV:27017,MONGODB05-SRV:27017", min: { _id.u: -9159729253516193447 }, max: { _id.u: -9157438072680830290 }, shardKeyPattern: { _id.u: "hashed" }, state: "clone", counts: { cloned: 128, clonedBytes: 12419, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
Also, when we shard a new collection, it initially starts with only 8 chunks, all on the same primary replica set, and does not migrate chunks to the other shards.
Our configuration is 4 replica sets of (primary, secondary, arbiter) and 3 config servers in a replica set. Both sh.getBalancerState() and sh.isBalancerRunning() return true.
In MongoDB, sharding performance depends on the key chosen to shard the database. Since your chunks always end up on a single node, it is highly probable that the shard key you have chosen is monotonically increasing. To avoid this issue, hash the key to allow proper balancing of chunks across all the shards. Use the following command for hashed sharding (note that the namespace must include the collection name):
sh.shardCollection( "<your-db>.<your-collection>", { <shard-key>: "hashed" } )
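As a sketch (the namespace is taken from the logs above; the chunk count is a hypothetical choice), hashed sharding can also pre-split an empty collection via the numInitialChunks option, so chunks are spread across shards from the start rather than all landing on the primary shard. Depending on server version, the option may need to be passed through the raw shardCollection admin command instead:

```javascript
// Shard on the hashed key and pre-split into 32 chunks up front;
// the balancer can then distribute them across shards immediately.
sh.shardCollection(
  "music.fav_artist_score",
  { "_id.u": "hashed" },
  false,                     // unique; must be false for hashed keys
  { numInitialChunks: 32 }
)
```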

Find And Update in array of subdocuments

I'm a newbie in the MongoDB world, and I've already spent a lot of time trying to solve the following problem.
My collection is called 'routes' and contains these documents:
{
  idGateway: 0,
  priority: 1,
  channels: [
    {
      id: 'chanel_1',
      control: {
        timestamp: ISODate("2015-09-19T16:17:12.393Z"),
        qty: 3
      }
    },
    {
      id: 'chanel_2',
      control: {
        timestamp: ISODate("2015-09-18T16:17:12.393Z"),
        qty: 5
      }
    }
  ]
},
{
  idGateway: 1,
  priority: 2,
  channels: [
    {
      id: 'chanel_3',
      control: {
        timestamp: ISODate("2015-09-19T16:17:12.393Z"),
        qty: 3
      }
    },
    {
      id: 'chanel_4',
      control: {
        timestamp: ISODate("2015-09-18T16:17:12.393Z"),
        qty: 5
      }
    }
  ]
}
What I need is to select a document and at the same time update its timestamp/qty fields (like a findAndModify). The selection criteria are:
Get the document with the lowest priority
From that document, select the channel subdocument with the MAX(timestamp)
Update its timestamp and qty
And return only that channel subdocument (chanel_1).
So, once I've selected chanel_1, I have to prevent other processes from selecting it again before the update. How can I do this?
Thanks
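One possible approach, as a hedged sketch in mongosh (the collection and field names come from the question; the read-then-guarded-write pattern is an assumption, not a verified solution): select the target first, then make the update conditional on the channel's old timestamp, so a concurrent process that already claimed the channel turns this call into a no-op that can be retried.

```javascript
// 1) Pick the lowest-priority document and its newest channel.
var doc = db.routes.find().sort({ priority: 1 }).limit(1).next();
var chan = doc.channels.reduce(function (a, b) {
  return a.control.timestamp > b.control.timestamp ? a : b;
});

// 2) Atomically update that channel only if its timestamp is unchanged.
//    If another process updated it first, the filter matches nothing,
//    res is null, and the caller retries from step 1.
var res = db.routes.findOneAndUpdate(
  { _id: doc._id,
    channels: { $elemMatch: { id: chan.id,
                              "control.timestamp": chan.control.timestamp } } },
  { $set: { "channels.$.control.timestamp": new Date() },
    $inc: { "channels.$.control.qty": 1 } },
  { returnDocument: "after" }
);
```

res still contains the whole document; extracting just the matched channel can be done client-side, or with a channels.$ projection in the options.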

Server crashed when using MongoDB mapReduce

I'm using a replica set with 3 members; this is the code that fires the mapReduce:
var db=new mongodb.Db('sns',replSet,{"readPreference":"secondaryPreferred", "safe":true});
....
collection.mapReduce(Account_Map, Account_Reduce, { out: { 'replace': 'log_account' }, query: queryObj }, function(err, collection){});
Then my primary died and restarted, but after the election it became a secondary, and there remains a collection sns.tmp.mr.account_0 instead of log_account. I'm very new to MongoDB and really want to figure out what the problem is. The relevant log lines:
2015-02-06T14:04:34.443+0800 [conn87299] build index on: sns.tmp.mr.account_0_inc properties: { v: 1, key: { 0: 1 }, name: "_temp_0", ns: "sns.tmp.mr.account_0_inc" }
2015-02-06T14:04:34.443+0800 [conn87299] added index to empty collection
2015-02-06T14:04:34.457+0800 [conn87299] build index on: sns.tmp.mr.account_0 properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "sns.tmp.mr.account_0" }

Find document with its surroundings in MongoDB

I have a mongo collection with documents like below:
{
  _id: [ObjectId]
  writeDate: [DateTime]
  publishDate: [DateTime]
  ...
}
I usually display a list of such documents sorted by publishDate first and then by writeDate.
Now, given a document _id, I need to fetch a list containing the 2 previous documents, the document itself, and the 2 next documents. So if the full sorted list of _ids looks as follows:
[1,2,4,3,6,7,8,5,9,0]
if given id is 6 I should get
[4,3,6,7,8]
and if id is 4 I should get
[1,2,4,3,6]
The thing is that publish dates may be equal (then I additionally sort by writeDate), so I suppose I can't just order using $gte and $lte with the given document's date. Also, the _ids are not guaranteed to be in order.
Do you have any clues on how to do this?
You cannot do this in one query; you will have to use three instead:
// current
r = db.so.findOne( { _id: 6 } );
// previous 2
db.so
  .find( { publishDate: { $lte: r.publishDate }, _id: { $ne: 6 } } )
  .sort( { publishDate: -1 } )
  .limit( 2 );
// next 2
db.so
  .find( { publishDate: { $gte: r.publishDate }, _id: { $ne: 6 } } )
  .sort( { publishDate: 1 } )
  .limit( 2 );
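To assemble the final window from those three queries, note that the "previous" batch comes back newest-first and has to be reversed before concatenating. A sketch, continuing the db.so example above:

```javascript
// Reverse the descending "previous" batch, then stitch the window together.
var prev = db.so.find({ publishDate: { $lte: r.publishDate }, _id: { $ne: 6 } })
               .sort({ publishDate: -1 }).limit(2).toArray().reverse();
var next = db.so.find({ publishDate: { $gte: r.publishDate }, _id: { $ne: 6 } })
               .sort({ publishDate: 1 }).limit(2).toArray();
var window = prev.concat([r], next);   // [prev2, prev1, current, next1, next2]
```

As the question notes, ties on publishDate would additionally need writeDate in both the filters and the sorts to be fully correct.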