I am trying to remove a shard from a MongoDB sharded cluster (v3.2). I cannot see any database names that I need to remove, but the draining process is not ending, which I am guessing is because it is waiting for user action to fix something. Based on the output below, can anyone suggest how to troubleshoot this further?
db.runCommand( { removeShard : "shard0001" } )
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"note" : "you need to drop or movePrimary these databases",
"dbsToMove" : [ ],
"ok" : 1
}
Deleting all the test databases/collections that I did not need resolved the issue. Doing that in a non-test environment would not be the best approach, so I am still looking for a better solution.
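If anyone hits the same thing: the chunk that refuses to drain can at least be identified through the config database on a mongos (a sketch, assuming the shard name passed to removeShard):

use config
// Chunks still owned by the draining shard; "ns", "min" and "max"
// identify the collection and key range the balancer cannot move
db.chunks.find({ shard: "shard0001" })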
Since 2022-04-26, when we updated to MongoDB 5.0 with feature compatibility version 4.4, we have been having an issue with mongodump and other MongoDB tools and commands.
We investigated and figured it might boil down to an index on a single collection in one of our databases.
Mongodump error: mongodump: read data: make dump: error dumping metadata: (Location5254501) Could not parse catalog entry while replying to listIndexes.
We validated all indexes in our cmd database and got an error on one of them.
db.tickets.validate()
{
"ns" : "cmd.tickets",
"nInvalidDocuments" : 0,
"nrecords" : 88638415,
"nIndexes" : 42,
"keysPerIndex" : {
},
"indexDetails" : {
},
"valid" : false,
"repaired" : false,
"warnings" : [ ],
"errors" : [
"The index specification for index 'interaction.networkItemId_1' contains invalid field names. The field 'safe' is not valid for an index specification. Specification: { v: 1, key: { interaction.networkItemId: 1 }, name: \"interaction.networkItemId_1\", ns: \"cmd.tickets\", background: true, safe: null }. Run the 'collMod' command on the collection without any arguments to remove the invalid index options"
],
"extraIndexEntries" : [ ],
"missingIndexEntries" : [ ],
"corruptRecords" : [ ],
"advice" : "A corrupt namespace has been detected. See http://dochub.mongodb.org/core/data-recovery for recovery steps.",
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1651416503, 80),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1651407947, 2)
}
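For reference, the fix that the error message itself suggests is to run collMod on the collection without any further options, which would be invoked along these lines:

// As suggested by the validate error: collMod with no extra options
// rewrites the catalog entry and strips invalid index options like "safe"
use cmd
db.runCommand({ collMod: "tickets" })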
We dropped the faulty index and ran a full validate again, which reported all collections in that database as valid.
But mongodump still fails with the same error, and db.runCommand({listIndexes: "tickets"}) fails as well.
There are 42 indexes on the tickets collection. When executing db.runCommand({listIndexes: "tickets", cursor: {batchSize: 19}}) and then issuing getMore on the returned cursor with an arbitrary batchSize, we can list all indexes in that collection.
But when we want to list all, or a significant number of, indexes in a single batch, the command fails. The magic index seems to be number 20: db.runCommand({listIndexes: "tickets", cursor: {batchSize: 19}}) works, but db.runCommand({listIndexes: "tickets", cursor: {batchSize: 20}}) fails with
MongoServerError: Could not parse catalog entry while replying to listIndexes
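For reference, the paging that does work looks roughly like this (a sketch of the commands described above):

// First batch of 19 index specs succeeds
var res = db.runCommand({ listIndexes: "tickets", cursor: { batchSize: 19 } })
var indexes = res.cursor.firstBatch

// The remaining specs can then be fetched through the cursor
var more = db.runCommand({ getMore: res.cursor.id, collection: "tickets" })
indexes = indexes.concat(more.cursor.nextBatch)

indexes.length  // 42, all index specs retrieved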
The full validate produced two warnings, but they do not seem to be much of a problem:
"warnings" : [
"Could not complete validation of table:collection-31-1751203610025669779. This is a transient issue as the collection was actively in use by other operations.",
"Could not complete validation of table:index-32-1751203610025669779. This is a transient issue as the collection was actively in use by other operations."
We plan to raise the feature compatibility version to 5.0 this week. We are also interested in your opinion on whether this might worsen or resolve our current issue.
Kind regards
Michael
I have an index which I need to modify.
{
"v" : 1,
"key" : {
"expectedDateTime" : 1
},
"name" : "expectedDateTime_1",
"ns" : "expectation.expectation_data",
"expireAfterSeconds" : 43200
}
The expireAfterSeconds is incorrect and needs to be changed to 432000.
When I dropped the index, it seemed fine:
db.expectation_data.dropIndex({"expectedDateTime":1})
{ "nIndexesWas" : 4, "ok" : 1 }
getIndexes() shows that the index no longer exists.
Then when I try to recreate the index, I get this error:
db.expectation_data.createIndex({"expectedDateTime":1},
{expireAfterSeconds:432000,name:"expectedDateTime"});
{
"ok" : 0,
"errmsg" : "Index with name: expectedDateTime already exists with different options",
"code" : 85
}
Now on running getIndexes(), I see that the index seems to have been recreated with the old TTL. I tried repeating this process multiple times, but I ran into the same issue again and again.
I cannot find any documentation which says that I cannot recreate an index with the same name. If I use a different name, it works fine:
db.expectation_data.createIndex({"expectedDateTime":1}, {expireAfterSeconds:432000});
.
.
>db.expectation_data.getIndexes()
.
.
{
"v" : 1,
"key" : {
"expectedDateTime" : 1
},
"name" : "expectedDateTime_1",
"ns" : "expectation.expectation_data",
"expireAfterSeconds" : 432000
}
Is there any restriction on recreating indexes with the same name?
This looks like the index is being recreated automatically after deletion. Make sure that no applications using ensureIndex or @Index annotations are connecting to the database.
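For example, if the application (or its object mapping layer) still executes something equivalent to the following on startup, the index will silently come back with the old TTL (a hypothetical call, mirroring the original index options):

// Hypothetical application startup code: quietly recreates the index
// with the OLD TTL value as soon as the application reconnects
db.expectation_data.ensureIndex({ expectedDateTime: 1 }, { expireAfterSeconds: 43200 })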
As it turned out, this was due to an @Index annotation in the entity that still carried the old timeout. The application was still running when I made the index changes.
When I stopped the application, I was able to create the index as I originally expected.
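As a side note, for changing only the TTL value there is an alternative that avoids dropping and recreating the index altogether: the collMod command can update expireAfterSeconds in place (a sketch, not what I ran above):

// Change only the TTL of the existing index, no drop/recreate needed
db.runCommand({
  collMod: "expectation_data",
  index: { keyPattern: { expectedDateTime: 1 }, expireAfterSeconds: 432000 }
})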
When I use removeShard like this:
db.runCommand({removeShard:"shard0001"})
the shell shows the following:
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"note" : "you need to drop or movePrimary these databases",
"dbsToMove" : [
],
"ok" : NumberInt(1)
}
There are no databases listed in dbsToMove, only one chunk still to remove.
But I have been waiting on this process for two hours, for only one record in the collection in this chunk, and nothing seems to change.
How can I remove this chunk?
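If the balancer never moves that last chunk on its own, one option is to locate it in the config database and move it manually (a sketch; the namespace, shard key value and target shard below are placeholders you would take from the config.chunks output):

use config
// Find the chunk still owned by the draining shard
db.chunks.find({ shard: "shard0001" })

// Manually move it to another shard; "mydb.mycoll" and the shard key
// value are placeholders taken from the chunk document above
db.adminCommand({ moveChunk: "mydb.mycoll", find: { myShardKey: 1 }, to: "shard0000" })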
I have a MongoDB collection with a lot of indexes.
Would it bring any benefit to delete indexes that are barely used?
Is there any way or tool that can tell me (in numbers) how often an index is used?
EDIT: I'm using version 2.6.4
EDIT2: I'm now using version 3.0.3
Right, so this is how I would do it.
First you need a list of all your indexes for a certain collection (this is done collection by collection). Let's say we are monitoring the user collection to see which indexes are useless.
So I run db.user.getIndexes(), and this results in parsable JSON output (from a driver you can run the equivalent listIndexes command to integrate this with a script).
So you now have a list of your indexes. It is merely a case of understanding which queries use which indexes; if an index is never hit, you know it is useless.
Now you need to run every query with explain(); from that output you can judge which index is used and match it to an index returned by getIndexes().
So here is a sample output:
> db.user.find({religion:1}).explain()
{
"queryPlanner" : {
"plannerVersion" : 1,
"namespace" : "meetapp.user",
"indexFilterSet" : false,
"parsedQuery" : {
"religion" : {
"$eq" : 1
}
},
"winningPlan" : {
"stage" : "FETCH",
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"religion" : NumberLong(1)
},
"indexName" : "religion_1",
"isMultiKey" : false,
"direction" : "forward",
"indexBounds" : {
"religion" : [
"[1.0, 1.0]"
]
}
}
},
"rejectedPlans" : [ ]
},
"serverInfo" : {
"host" : "ip-172-30-0-35",
"port" : 27017,
"version" : "3.0.0",
"gitVersion" : "a841fd6394365954886924a35076691b4d149168"
},
"ok" : 1
}
There is a set of rules that the queryPlanner field follows; you will need to discover them and write code around them, but this first one is simple enough.
As you can see, the winning plan (in winningPlan) is a single IXSCAN (index scan) stage (remember, a plan could contain multiple input stages, which you will need to code around), and the key pattern for the index used is:
"keyPattern" : {
"religion" : NumberLong(1)
},
Great, now we can match that against the key field in the output of getIndexes():
{
"v" : 1,
"key" : {
"religion" : NumberLong(1)
},
"name" : "religion_1",
"ns" : "meetapp.user"
},
which tells us that the religion index is not useless and is in fact used.
Unfortunately this is the best way I can see. MongoDB used to keep a statistic for the number of times each index was hit, but it seems that data has been removed.
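For what it's worth, MongoDB 3.2 and later bring such a counter back in the form of the $indexStats aggregation stage, so on those versions you can ask the server directly:

// One result document per index, with accesses.ops counting the
// operations that have used the index since the counter was started
db.user.aggregate([ { $indexStats: {} } ])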
So, on older versions, you would just rinse and repeat this process for every collection you have until you have removed the indexes that are useless.
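A rough shell helper for that rinse-and-repeat step could look like this (a sketch; you supply a representative list of your application's query shapes per collection, and only single-input plans are handled):

// Return the names of the indexes that win the plans for a list of queries
function usedIndexes(coll, queries) {
    var used = {}
    queries.forEach(function (q) {
        // Walk down the winning plan looking for IXSCAN stages
        var stage = coll.find(q).explain().queryPlanner.winningPlan
        while (stage) {
            if (stage.indexName) used[stage.indexName] = true
            stage = stage.inputStage
        }
    })
    return Object.keys(used)
}

// Anything listed by getIndexes() but absent here is a removal candidate
usedIndexes(db.user, [ { religion: 1 } ])  // e.g. [ "religion_1" ]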
One other way of doing this, of course, is to remove all indexes and then re-add them as you test your queries, though that might be a bad idea if you have to do it in production.
On a side note: the best way to fix this problem is to not have it at all.
I make this easier for myself by using an indexing function within my active record. Every so often I run (from PHP) something of the sort: ./yii index/rebuild, which essentially goes through my active record models, detects which indexes I no longer use and have removed from my app, and removes them in turn. It will, of course, also create new indexes.
I have a database which consists of a few collections, and I tried copying from one collection to another.
In the process the connection was lost and I had to recopy them.
Now I find around 40,000 duplicate records.
Format of my data:
{
"_id" : ObjectId("555abaf625149715842e6788"),
"reviewer_name" : "Sudarshan A",
"emp_name" : "Wilson Erica",
"evaluation_id" : NumberInt(550056),
"teamleader_id" : NumberInt(17199),
"reviewer_id" : NumberInt(1659),
"team_manager" : "Las Vegas",
"teammanager_id" : NumberInt(12245),
"team_leader" : "Thomas Donald",
"emp_id" : NumberInt(7781)
}
Here only the evaluation_id is unique.
Queries that I have tried:
ensureIndex({id:1}, {unique:true, dropDups:true})
dropDups was removed during the MongoDB 2.7.x development series, so it is no longer available as of MongoDB 3.0.
Here is another way to do it,
though I have not tested it.
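An untested sketch of that approach, assuming the collection is called evaluations and evaluation_id is the field that is supposed to be unique:

// Group by evaluation_id, collect the _ids of each duplicate group,
// keep one document per group and remove the rest
db.evaluations.aggregate([
    { $group: { _id: "$evaluation_id", dups: { $push: "$_id" }, count: { $sum: 1 } } },
    { $match: { count: { $gt: 1 } } }
]).forEach(function (doc) {
    doc.dups.shift()  // keep the first copy
    db.evaluations.remove({ _id: { $in: doc.dups } })
})

// Afterwards the unique index can be built without dropDups
db.evaluations.ensureIndex({ evaluation_id: 1 }, { unique: true })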