MongoError additional information - mongodb

I'm interested in Mongo errors; the only link I found is not actually that useful:
http://www.mongodb.org/about/contributors/error-codes/
"name": "MongoError",
"err": "E11000 duplicate key error index: AyolanDB.users.$email_1 dup key: { : \"copain6#gmail.com\" }",
"code": 11000,
"n": 0,
"connectionId": 838,
"ok": 1
For example, what does "ok" mean? That the DB is still running? And what is "n"?
If anyone has more information about these fields, please share; the available documentation is quite sparse.

ok - means the command completed successfully
n - the number of documents affected (insert | update | remove)
For more info, see getLastError
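To make the fields concrete, here is a small sketch in plain JavaScript (summarize is a made-up helper, not part of any driver API) that interprets a response document like the one above:

```javascript
// Sketch: interpret a getLastError-style response document.
// Note that ok: 1 only means the command completed; the write itself
// can still have failed, which is reported through err/code and n.
function summarize(res) {
  return {
    commandCompleted: res.ok === 1,          // "ok": the command ran
    documentsAffected: res.n || 0,           // "n": docs the write touched
    writeError: res.err
      ? { code: res.code, message: res.err } // set when the write failed
      : null
  };
}

var res = {
  name: "MongoError",
  err: "E11000 duplicate key error index: AyolanDB.users.$email_1",
  code: 11000,
  n: 0,
  connectionId: 838,
  ok: 1
};
var s = summarize(res);
// The command completed (ok: 1), but 0 documents were affected because
// the insert failed with duplicate key error 11000.
```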

Related

Compose Transporter throws error when collection_filters is set to sync data for current day from DocumentDB/MongoDB to file/ElasticSearch

I am using Compose Transporter to sync data from DocumentDB to an ElasticSearch instance in AWS. After a one-time sync, I added the following collection_filters in pipeline.js to sync incremental data daily:
// pipeline.js
var source = mongodb({
  "uri": "mongodb <URI>",
  "ssl": true,
  "collection_filters": '{ "mycollection": { "createdDate": { "$gt": new Date(Date.now() - 24*60*60*1000) } }}',
})
var sink = file({
  "uri": "file://mongo_dump.json"
})
t.Source("source", source, "^mycollection$").Save("sink", sink, "/.*/")
I get the following error:
$ transporter run pipeline.js
panic: malformed collection_filters [recovered]
panic: Panic at 32: malformed collection_filters [recovered]
panic: Panic at 32: malformed collection_filters
goroutine 1 [running]:
github.com/compose/transporter/vendor/github.com/dop251/goja.(*Runtime).RunProgram.func1(0xc420101d98)
/Users/JP/gocode/src/github.com/compose/transporter/vendor/github.com/dop251/goja/runtime.go:779 +0x98
When I change collection_filters so that the value of the "$gt" key is a single string token (see below), the malformed error vanishes, but it doesn't fetch any documents:
'{ "mycollection": { "createdDate": { "$gt": "new Date(Date.now() - 24*60*60 * 1000)" } }}',
To check whether something is fundamentally wrong with the way I am querying, I tried a simple string filter, and that works well:
"collection_filters": '{ "articles": { "createdBy": "author name" }}',
I tried various ways to pass the createdDate filter, but I get either the malformed error or no data. However, the same filter in the mongo shell gives the expected output. Note that I tried both ES and file as the sink before asking here.
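A plausible cause (an assumption, not verified against Transporter's source) is that collection_filters must be strict JSON, so a JavaScript expression such as new Date(...) inside the quoted string is either rejected as malformed or, once quoted, compared as a literal string. One workaround along those lines is to compute the date in pipeline.js itself and embed it as MongoDB extended JSON; whether Transporter's Mongo adaptor accepts the $date form is also an assumption:

```javascript
// Sketch: build the filter string in pipeline.js so the date expression is
// evaluated before the string is handed to Transporter, leaving strict JSON.
var since = new Date(Date.now() - 24 * 60 * 60 * 1000);
var filters = JSON.stringify({
  mycollection: {
    createdDate: { "$gt": { "$date": since.toISOString() } }
  }
});
// filters is now plain JSON with no JavaScript expressions inside it.
```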

Where is the "code" field in the Problems pane?

VS Code supports capturing error codes in custom problem matchers. What use are they? They don't seem to be displayed anywhere.
As described by the documentation:
code the match group index for the problem's code. Can be omitted if no code value is provided by the compiler.
For example, take the following tslint error:
ERROR: (comment-format) C:/Users/Kendall/Source/ncre/src/ncre.ts[84, 5]: comment must start with uppercase letter
Use this problem matcher:
"problemMatcher": {
"owner": "tslint",
"fileLocation": "absolute",
"pattern": {
"regexp": "^(ERROR|WARNING): \\((.+?)\\) (.+?)\\[(\\d+), (\\d+)\\]: (.+)$",
"severity": 1,
"code": 2,
"file": 3,
"line": 4,
"column": 5,
"message": 6
}
}
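The pattern and its group indices can be sanity-checked outside VS Code; here is a quick plain-JavaScript check against the sample tslint line:

```javascript
// Verify that the matcher's regexp captures what the group indices claim:
// severity=1, code=2, file=3, line=4, column=5, message=6.
var re = /^(ERROR|WARNING): \((.+?)\) (.+?)\[(\d+), (\d+)\]: (.+)$/;
var sample = "ERROR: (comment-format) C:/Users/Kendall/Source/ncre/src/ncre.ts[84, 5]: comment must start with uppercase letter";
var m = sample.match(re);
// m[1] → "ERROR" (severity), m[2] → "comment-format" (code),
// m[3] → the file path, m[4]/m[5] → "84"/"5", m[6] → the message
```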
The error is displayed in the Problems pane, but the code comment-format doesn't appear there.
I've verified that it's correctly captured by copying the error and the code does appear in the result:
file: 'file:///c%3A/Users/Kendall/Source/ncre/src/ncre.ts'
severity: 'Error'
message: 'comment must start with uppercase letter'
at: '84,5'
source: ''
code: 'comment-format'
Am I missing something, or is capturing the error code mostly pointless?
Update: it's implemented in the VS Code September 2018 release. Hooray!

How can I debug slow MongoDB chunk migration?

I'm trying to move a chunk inside the cluster:
mongos> db.adminCommand({ moveChunk: "db.col", find: { _id: ObjectId("58171b29b9b4ebfb3e8b4e42") }, to: "shard_v2" });
{ "millis" : 428681, "ok" : 1 }
In the log I see the following record:
2016-11-08T20:27:05.972+0300 I SHARDING [conn27] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "db.col", from: "host:27017", min: { _id: ObjectId('58171b29b9b4ebfb3e8b4e42') }, max: { _id: ObjectId('58171f29b9b4eb31408b4b4c') }, shardKeyPattern: { _id: 1.0 }, state: "done", cc, ok: 1.0 }
So 23 MB of data took 430 seconds to migrate, which is really slow.
I've uploaded a sample file to "host" and it uploaded extremely fast (7-8 MB per second), so I do not think it is a disk or network issue (the cluster also has no load; no active queries). What else can I check to improve chunk migration performance?
The performance is almost certainly not limited by your setup. It may be MongoDB's migration policy, which tries not to affect normal database tasks.
There is a great answer on this issue on DBA stack exchange: https://dba.stackexchange.com/questions/81545/mongodb-shard-chunk-migration-500gb-takes-13-days-is-this-slow-or-normal
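If throttling by the migration policy is the suspect, the moveChunk command accepts documented options that relax it; disabling them trades durability guarantees for migration speed, so whether that is acceptable depends on your workload. A sketch of the command document (run via db.adminCommand in a mongos shell):

```javascript
// Sketch: moveChunk command document with migration throttling relaxed.
// _secondaryThrottle and _waitForDelete are documented moveChunk options.
var moveChunkCmd = {
  moveChunk: "db.col",
  find: { _id: "58171b29b9b4ebfb3e8b4e42" }, // an ObjectId(...) in a real shell
  to: "shard_v2",
  _secondaryThrottle: false, // don't wait for secondaries after each document
  _waitForDelete: false      // return before the source shard deletes the range
};
// In a mongos shell: db.adminCommand(moveChunkCmd)
```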

Neo4j: Change legacy index from exact to fulltext

In my Neo4j (2.1.1 Community Edition) database I have a Lucene legacy index in place called node_auto_index:
GET http://localhost:7474/db/data/index/node/
{
  "node_auto_index": {
    "template": "http://localhost:7474/db/data/index/node/node_auto_index/{key}/{value}",
    "provider": "lucene",
    "type": "exact"
  }
}
Now I would like to change the type from "exact" to "fulltext". How can I do that using REST? I tried the following approaches but neither of them worked:
DELETE and recreate
I tried to delete it first before recreating it as "fulltext", but it is read-only:
DELETE http://localhost:7474/db/data/index/node/node_auto_index/node_auto_index
{
"message": "read only index",
"exception": "UnsupportedOperationException",
"fullname": "java.lang.UnsupportedOperationException",
"stacktrace": [
"org.neo4j.kernel.impl.coreapi.AbstractAutoIndexerImpl$ReadOnlyIndexToIndexAdapter.readOnlyIndex(AbstractAutoIndexerImpl.java:254)",
"org.neo4j.kernel.impl.coreapi.AbstractAutoIndexerImpl$ReadOnlyIndexToIndexAdapter.delete(AbstractAutoIndexerImpl.java:290)",
"org.neo4j.server.rest.web.DatabaseActions.removeNodeIndex(DatabaseActions.java:437)",
"org.neo4j.server.rest.web.RestfulGraphDatabase.deleteNodeIndex(RestfulGraphDatabase.java:935)",
"java.lang.reflect.Method.invoke(Unknown Source)",
"org.neo4j.server.rest.transactional.TransactionalRequestDispatcher.dispatch(TransactionalRequestDispatcher.java:139)",
"java.lang.Thread.run(Unknown Source)"
]
}
POST to replace
POST http://localhost:7474/db/data/index/node/
{
"name" : "node_auto_index",
"config" : {
"to_lower_case" : "true",
"type" : "fulltext",
"provider" : "lucene"
}
}
The response:
{
"message": "Supplied index configuration:\n{to_lower_case=true, type=fulltext, provider=lucene}\ndoesn't match stored config in a valid way:\n{provider=lucene, type=exact}\nfor 'node_auto_index'",
"exception": "IllegalArgumentException",
"fullname": "java.lang.IllegalArgumentException",
"stacktrace": [
"org.neo4j.kernel.impl.coreapi.IndexManagerImpl.assertConfigMatches(IndexManagerImpl.java:168)",
"org.neo4j.kernel.impl.coreapi.IndexManagerImpl.findIndexConfig(IndexManagerImpl.java:149)",
"org.neo4j.kernel.impl.coreapi.IndexManagerImpl.getOrCreateIndexConfig(IndexManagerImpl.java:209)",
"org.neo4j.kernel.impl.coreapi.IndexManagerImpl.getOrCreateNodeIndex(IndexManagerImpl.java:314)",
"org.neo4j.kernel.impl.coreapi.IndexManagerImpl.forNodes(IndexManagerImpl.java:302)",
"org.neo4j.server.rest.web.DatabaseActions.createNodeIndex(DatabaseActions.java:398)",
"org.neo4j.server.rest.web.RestfulGraphDatabase.jsonCreateNodeIndex(RestfulGraphDatabase.java:830)",
"java.lang.reflect.Method.invoke(Unknown Source)",
"org.neo4j.server.rest.transactional.TransactionalRequestDispatcher.dispatch(TransactionalRequestDispatcher.java:139)",
"java.lang.Thread.run(Unknown Source)"
]
}
For all future readers of this question: I faced a similar situation and found a much cleaner approach to fix this. Instead of deleting node_auto_index, try the following steps.
Open the Neo4j shell for the database:
neo4j-sh (0)$ index --get-config node_auto_index
==> {
==>   "provider": "lucene",
==>   "type": "exact"
==> }
neo4j-sh (0)$ index --set-config node_auto_index type fulltext
==> INDEX CONFIGURATION CHANGED, INDEX DATA MAY BE INVALID
neo4j-sh (0)$ index --get-config node_auto_index
==> {
==>   "provider": "lucene",
==>   "type": "fulltext"
==> }
neo4j-sh (0)$
Worked perfectly fine for me. Hope this helps someone in need :-)
Neo4j does not allow deleting the auto indexes node_auto_index and relationship_auto_index, neither via REST nor via any other API.
However, there is a dirty trick to do the job. This trick deletes all auto and other legacy indexes; it does not touch the schema indexes. Be warned: it is a potentially dangerous operation, so make sure you have a valid backup in place. Stop the database and then run:
rm -rf data/graph.db/index*
Restart the database and all auto and legacy indexes are gone.

No updatedExisting from getLastError in MongoLab

I am running updates against a database in MongoLab (Heroku) and cannot get information from getLastError.
As an example, below are statements to update a collection in a MongoDB database running locally on my machine (db version v2.0.3-rc1).
ariels-MacBook:mongodb ariel$ mongo
MongoDB shell version: 2.0.3-rc1
connecting to: test
> db.mycoll.insert({'key': '1','data': 'somevalue'});
> db.mycoll.find();
{ "_id" : ObjectId("505bcc5783cdc9e90ffcddd8"), "key" : "1", "data" : "somevalue" }
> db.mycoll.update({'key': '1'},{$set: {'data': 'anothervalue'}});
> db.runCommand('getlasterror');
{
"updatedExisting" : true,
"n" : 1,
"connectionId" : 4,
"err" : null,
"ok" : 1
}
>
All is well locally.
Now I switch to a database in MongoLab and run the same statements to update a document. getLastError does not return an updatedExisting field, so I am unable to tell whether my update was successful.
ariels-MacBook:mongodb ariel$ mongo ds0000000.mongolab.com:00000/heroku_app00000 -u someuser -p somepassword
MongoDB shell version: 2.0.3-rc1
connecting to: ds000000.mongolab.com:00000/heroku_app00000
> db.mycoll.insert({'key': '1','data': 'somevalue'});
> db.mycoll.find();
{ "_id" : ObjectId("505bcf9b2421140a6b8490dd"), "key" : "1", "data" : "somevalue" }
> db.mycoll.update({'key': '1'},{$set: {'data': 'anothervalue'}});
> db.runCommand('getlasterror');
{
"n" : 0,
"lastOp" : NumberLong("5790450143685771265"),
"connectionId" : 1097505,
"err" : null,
"ok" : 1
}
> db.mycoll.find();
{ "_id" : ObjectId("505bcf9b2421140a6b8490dd"), "data" : "anothervalue", "key" : "1" }
>
Has anyone run into this?
If it matters, my resource at MongoLab is running mongod v2.0.7 (my shell is 2.0.3).
Not exactly sure what I am missing.
I am waiting to hear from their support (I will post here when I hear back) but wanted to check with you fine folks here as well just in case.
Thank you.
This looks to be a limitation of not having admin privileges to the mongod process. You might file a ticket with 10gen, as it doesn't seem like a necessary limitation.
When I run Mongo in auth mode on my laptop I need to authenticate as a user in the admin database in order to see an "n" other than 0 or the "updatedExisting" field. When I authenticate as a user in any other database I get similar results to what you're seeing in MongoLab production.
(Full disclosure: I work for MongoLab. As a side note, I don't see the support ticket you mention in our system. We'd be happy to work with you directly if you'd like. You can reach us at support@mongolab.com or http://support.mongolab.com)
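Until the limitation is lifted, one defensive pattern (a sketch, not MongoLab-specific advice; updateSucceeded is a made-up helper) is to treat a getLastError response without updatedExisting as inconclusive and re-query to confirm the write:

```javascript
// Sketch: interpret a getLastError response when updatedExisting may be
// stripped, as in the restricted environment above. Returns true/false when
// the response is conclusive and null when the caller should re-query,
// since n can be 0 there even for a successful update.
function updateSucceeded(gle) {
  if (gle.err) return false;                    // the write itself failed
  if (typeof gle.updatedExisting === "boolean") {
    return gle.updatedExisting;                 // conclusive answer
  }
  if (gle.n > 0) return true;                   // something was affected
  return null;                                  // inconclusive: re-query
}

var localGle  = { updatedExisting: true, n: 1, err: null, ok: 1 };
var hostedGle = { n: 0, err: null, ok: 1 };     // like the MongoLab response
```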