MongoDB Shard Zone Range Overlapping

I am configuring a test mongo setup with 2 shards, and I'm trying to add zone ranges to each shard:
sh.updateZoneKeyRange('mydb.test', {id:MinKey,ts:MinKey}, {id:MaxKey,ts:1548787704000}, 'cold')
sh.updateZoneKeyRange('mydb.test', {id:MinKey,ts:1548787704000}, {id:MaxKey,ts:MaxKey}, 'hot')
The first command runs fine, but the second tells me:
Zone range: { id: MinKey, ts: 1548787704000.0 } -->> { id: MaxKey, ts: MaxKey } on hot is overlapping with existing: { id: MinKey, ts: MinKey } -->> { id: MaxKey, ts: 1548787704000.0 } on cold
I thought the maximum bounds were exclusive and minimum bounds inclusive?
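They are (min inclusive, max exclusive), but the bounds are compared lexicographically across the whole compound key, not field by field. Since both ranges span MinKey to MaxKey on the leading id field, a document such as { id: 5, ts: <anything> } sorts above the 'hot' minimum and below the 'cold' maximum, so it falls in both ranges, which is why the overlap check fires. For two ranges to tile, the upper bound of one must equal the lower bound of the next. A minimal sketch of that, assuming the goal is to tier purely on ts and that ts can be made the leading shard key field (hypothetical key { ts: 1, id: 1 }):

sh.updateZoneKeyRange('mydb.test', { ts: MinKey, id: MinKey }, { ts: 1548787704000, id: MinKey }, 'cold')
sh.updateZoneKeyRange('mydb.test', { ts: 1548787704000, id: MinKey }, { ts: MaxKey, id: MaxKey }, 'hot')

Here the 'cold' range ends exactly where the 'hot' range begins, so any given ts value lands in exactly one zone.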

Related

mongodb slow queries from time to time - reason unknown

For the past few days I have seen occasionally delayed requests on my single-shard cluster database, but I'm not sure where they come from. Everything seems fine on storage/CPU/memory, and normally these index-backed queries execute in < 50ms even at 10000 requests/sec. Any ideas are highly welcome.
Example delayed query:
2022-04-19T11:11:34.702+0200 I COMMAND [conn7420156] command testdb.testcol command: find { find: "testcol", readConcern: { level: "linearizable" }, filter: { a.p.prid: "37011" }, maxTimeMS: 16000, $db: "test", $clusterTime: { clusterTime: Timestamp(1650359489, 2054), signature: { hash: BinData(0, EFB437ED731ED3ADA61B61AAAD45EB516A82A8A3), keyId: 7036792998071370037 } }, lsid: { id: UUID("6e8e0cbb-a52e-4c92-a259-0e836bde61f3") } } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:66731 protocol:op_msg 5196ms
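One generic way to catch queries like this in the act (a sketch, not specific to this cluster; the profiler is per-mongod, so run it on the shard holding the data rather than on a mongos) is to enable the database profiler for slow operations and then inspect the captured entries:

// profile operations slower than 100 ms
db.setProfilingLevel(1, 100)
// later: look at recent entries that took more than a second
db.system.profile.find({ millis: { $gt: 1000 } }).sort({ ts: -1 }).limit(5)

The profile documents often carry more detail (lock, yield, and storage information) than the one-line log entry above.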

MongoDB TTL index behaves unexpectedly

We currently have a TTL index on a timestamp field in MongoDB.
Before, we had a TTL of expireAfterSeconds: 604800 seconds -> 1 week of documents.
A few weeks ago we changed this to a TTL index of 31536000 seconds -> 365 days.
But MongoDB still removes documents after 7 days instead of 365.
MongoDB version: 4.2.18
MongoDB running hosted Atlas / AWS
Indexes in database:
[
  {
    v: 2,
    key: { _id: 1 },
    name: '_id_',
    ns: '<DB>.<COLLECTION>'
  },
  {
    v: 2,
    key: { id: 1, ts: 1 },
    name: 'id_1_ts_1',
    ns: '<DB>.<COLLECTION>'
  },
  {
    v: 2,
    key: { ts: 1 },
    name: 'ts_1',
    ns: '<DB>.<COLLECTION>',
    expireAfterSeconds: 31536000
  }
]
We have an environment variable that sets the TTL index on each start of the application;
the code to set the TTL index looks like this:
try {
  await collection.createIndex(
    { ts: 1 },
    {
      expireAfterSeconds: ENV.<Number>
    }
  );
} catch (e) {
  // createIndex fails with IndexOptionsConflict when an index on { ts: 1 }
  // already exists with a different expireAfterSeconds, so drop and recreate it.
  if ((e as any).codeName == "IndexOptionsConflict") {
    await collection.dropIndex("ts_1");
    await collection.createIndex(
      { ts: 1 },
      {
        expireAfterSeconds: ENV.<Number>
      }
    );
  }
}
As seen from the indexes, the TTL index should remove documents after one year.
Why does it behave like this? Any insights?
I would look into the collMod command.
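With collMod you can update expireAfterSeconds on the existing index in place instead of dropping and recreating it, which also avoids the window where the old TTL is still in effect. A minimal sketch, using the ts_1 index from the listing above (run against the database that owns <COLLECTION>):

db.runCommand({
  collMod: "<COLLECTION>",
  index: {
    keyPattern: { ts: 1 },
    expireAfterSeconds: 31536000
  }
})

The keyPattern must match the existing index exactly; alternatively, you can identify the index by name instead of key pattern.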

MongoDB balancing very slow

We are experiencing very slow balancing in our cluster. From our logs, it seems the migration barely makes progress:
2016-01-25T22:21:15.907-0600 I SHARDING [conn142] moveChunk data transfer progress: { active: true, ns: "music.fav_artist_score", from: "rs1/MONGODB01-SRV:27017,MONGODB05-SRV:27017", min: { _id.u: -9159729253516193447 }, max: { _id.u: -9157438072680830290 }, shardKeyPattern: { _id.u: "hashed" }, state: "clone", counts: { cloned: 128, clonedBytes: 12419, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
2016-01-25T22:21:16.932-0600 I SHARDING [conn142] moveChunk data transfer progress: { active: true, ns: "music.fav_artist_score", from: "rs1/MONGODB01-SRV:27017,MONGODB05-SRV:27017", min: { _id.u: -9159729253516193447 }, max: { _id.u: -9157438072680830290 }, shardKeyPattern: { _id.u: "hashed" }, state: "clone", counts: { cloned: 128, clonedBytes: 12419, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
2016-01-25T22:21:17.957-0600 I SHARDING [conn142] moveChunk data transfer progress: { active: true, ns: "music.fav_artist_score", from: "rs1/MONGODB01-SRV:27017,MONGODB05-SRV:27017", min: { _id.u: -9159729253516193447 }, max: { _id.u: -9157438072680830290 }, shardKeyPattern: { _id.u: "hashed" }, state: "clone", counts: { cloned: 128, clonedBytes: 12419, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
Also, when we shard a new collection, it initially starts with only 8 chunks, all on the same primary replica set; it does not migrate chunks to the other shards.
Our configuration is 4 replica sets of (primary, secondary, arbiter) and 3 config servers in a replica set. Both sh.getBalancerState() and sh.isBalancerRunning() return true.
In MongoDB, sharding performance depends on the key chosen for sharding the database. Since your chunks are always stored on a single node, it is highly probable that the shard key you have chosen is monotonically increasing. To avoid this issue, hash the key to allow proper balancing of chunks across all the shards. Use the following command for hashed sharding:
sh.shardCollection( "<your-db>.<your-collection>", { <shard-key>: "hashed" } )
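To verify how data and chunks are actually spread for the collection, a quick check (a sketch, run from a mongos against the music database; getShardDistribution is a standard shell helper) is:

db.fav_artist_score.getShardDistribution()

This prints per-shard data size, document count, and chunk count for the collection.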

how to enable sharding in a test environment

How do I enable sharding in a test environment? Here I am sharing what I have done so far. I have one config server:
Config server 1: Host-a:27019
one mongos instance on the same machine on port 27017,
and two mongod shard instances:
Host-a:27020
host-b:27021
When I enable sharding on the collection, it gives me this error:
2016-01-12T10:31:07.522Z I SHARDING [Balancer] ns: feedproductsdata.merchantproducts going to move { _id: "feedproductsdata.merchantproducts-product_id_MinKey", ns: "feedproductsdata.merchantproducts", min: { product_id: MinKey }, max: { product_id: 0 }, version: Timestamp 1000|0, versionEpoch: ObjectId('5694d57ebe78315b68519c38'), lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5694d57ebe78315b68519c38'), shard: "shard0001" } from: shard0001 to: shard0000 tag []
2016-01-12T10:31:07.523Z I SHARDING [Balancer] moving chunk ns: feedproductsdata.merchantproducts moving ( ns: feedproductsdata.merchantproducts, shard: shard0001:192.168.1.12:27021, lastmod: 1|0||000000000000000000000000, min: { product_id: MinKey }, max: { product_id: 0 }) shard0001:192.168.1.12:27021 -> shard0000:192.168.1.8:27020
2016-01-12T10:31:08.530Z I SHARDING [Balancer] moveChunk result: { errmsg: "exception: socket exception [CONNECT_ERROR] for cfg1.server.com:27019", code: 11002, ok: 0.0 }
2016-01-12T10:31:08.531Z I SHARDING [Balancer] balancer move failed: { errmsg: "exception: socket exception [CONNECT_ERROR] for cfg1.server.com:27019", code: 11002, ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { product_id: MinKey } max: { product_id: 0 }
2016-01-12T10:31:08.604Z I SHARDING [Balancer] distributed lock 'balancer/Knowledgeops-PC:27017:1452594322:41' unlocked.
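The actual failure is the CONNECT_ERROR to cfg1.server.com:27019 in the moveChunk result: the donor shard cannot resolve or reach the config server under that hostname, even if mongos can. A quick check (a sketch; substitute your real hostnames) is to try connecting from each shard host:

mongo --host cfg1.server.com --port 27019

Shards connect to the config servers directly during chunk migration, so every shard host needs to be able to resolve and reach the config server names used in the cluster configuration.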

MongoDB balancer does not move chunks because of obscure error

After carefully following this guide on how to migrate a 3-member replica set to a shard containing 2 x 3-member replica sets, nothing happened after I sharded one of my collections. Looking into the logs, I got this:
2014-06-22T16:49:55.467+0000 [Balancer] distributed lock 'balancer/vt-mongo-6:27018:1403429707:1804289383' unlocked.
2014-06-22T16:50:01.830+0000 [Balancer] distributed lock 'balancer/vt-mongo-6:27018:1403429707:1804289383' acquired, ts : 53a70939025c137381ef2126
2014-06-22T16:50:01.945+0000 [Balancer] ns: database.emailevents going to move { _id: "database.emailevents-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('53a6d810089f342ad32992d4'), ns: "database.emailevents", min: { _id: MinKey }, max: { _id: -9199836772564449863 }, shard: "replicaset2" } from: replicaset2 to: replicaset4 tag []
2014-06-22T16:50:01.945+0000 [Balancer] moving chunk ns: database.emailevents moving ( ns: database.emailevents, shard: replicaset2:replicaset2/vt-mongo-4:27017,vt-mongo-5:27017,vt-mongo-6:27017, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -9199836772564449863 }) replicaset2:database_replicaset2/vt-mongo-4:27017,vt-mongo-5:27017,vt-mongo-6:27017 -> replicaset4:replicaset4/vt-mongo-1:27017,vt-mongo-2:27017,vt-mongo-3:27017
2014-06-22T16:50:01.954+0000 [Balancer] moveChunk result: { errmsg: "exception: no primary shard configured for db: config", code: 8041, ok: 0.0 }
2014-06-22T16:50:01.955+0000 [Balancer] balancer move failed: { errmsg: "exception: no primary shard configured for db: config", code: 8041, ok: 0.0 } from: replicaset2 to: replicaset4 chunk: min: { _id: MinKey } max: { _id: -9199836772564449863 }
2014-06-22T16:50:02.166+0000 [Balancer] distributed lock 'balancer/vt-mongo-6:27018:1403429707:1804289383' unlocked.
Generally, this error:
result: { errmsg: "exception: no primary shard configured for db: config", code: 8041, ok: 0.0 }
is the result of the machines not being able to talk to one another, but that isn't the case here. I have manually checked, and all the machines are able to talk to each other.
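One way to see what the cluster metadata actually records (a sketch, run from a mongos) is to list the entries in the config database and check the primary shard assigned to each database:

use config
db.databases.find()

Each entry's primary field names the shard that owns that database's unsharded collections; the error above complains about exactly this lookup failing for the config database itself.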