Invalid access on Mongo 3.0 - mongodb

The problem happened on our development site. The following query crashes the server with "Invalid access at 0x20":
db['2015-04-13'].group({
  key: { id: 1 },
  cond: { created_at: { $gte: new Date('2015-04-13') } },
  reduce: function (curr, result) {},
  initial: {}
})
Traceback:
mongod(_ZN2v88internal2OS8AllocateEmPmb+0xD7) [0x11dbe57]
mongod(_ZN2v88internal28CreateTranscendentalFunctionENS0_19TranscendentalCache4TypeE+0x26) [0x12799f6]
mongod(_ZN2v88internal22init_fast_sin_functionEv+0xE) [0x11dca1e]
mongod(_ZN2v88internal14POSIXPostSetUpEv+0x9) [0x11dd009]
mongod(_ZN2v88internal2V828InitializeOncePerProcessImplEv+0x3E) [0x12551de]
mongod(_ZN2v88internal12CallOnceImplEPlPFvPvES2_+0x52) [0x11c2c12]
mongod(_ZN2v88internal2V810InitializeEPNS0_12DeserializerE+0x11) [0x1255911]
mongod(_ZN2v86LockerC1EPNS_7IsolateE+0x61) [0x12597c1]
So far I know:
The problem occurs only when mongod runs as its own user (mongod).
If mongod is started as root on the same data folder, the query passes and returns results. The number of documents in the collection is fairly small (around 20k), but each document has a decent number of keys: 50 on average and 300 at most, most of them strings with very few embedded BSON values. The MongoDB version is 3.0.2. The query was sent both from a local client with the same version as the server and from a 2.4.0 Robomongo client on a remote machine; the error appears in both cases.
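In case it helps to sidestep the server-side JavaScript engine that the stack trace points at, the same grouping can be expressed with the aggregation framework, which 3.0 supports. A rough equivalent of the query above (a sketch, not a fix for the crash itself; since the original reduce is a no-op it simply returns one document per distinct id among matching documents):
db['2015-04-13'].aggregate([
  { $match: { created_at: { $gte: new Date('2015-04-13') } } },
  { $group: { _id: '$id' } }
])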

Related

mongo MongoCursorNotFoundException in long-running query loop

I have a simple query loop that gets a MongoCursorNotFoundException after processing about 44,000 of 96,945 documents in around 93 minutes.
MongoIterable<MasterDocument> query = masterCollection.find().noCursorTimeout(true);
for (MasterDocument masterDocument : query) { ... do some stuff ... }
The "do some stuff" part takes a while, which is why the entire loop takes so long.
My problem is that I get this exception after handling maybe half of the documents in the collection.
I am running both the client application and the mongod server locally on my Windows 10 laptop, accessing the server via localhost.
The server log shows lots of messages like this:
{"t":{"$date":"2021-01-04T20:21:35.510-08:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn27","msg":"Slow query","attr":{"type":"command","ns":"master_database.MasterCollection","command":{"find":"MasterCollection","filter":{"hashCode":1753339282},"$db":"master_database","lsid":{"id":{"$uuid":"6a252f51-2c6e-4c01-ae03-1a80aab109e0"}}},"planSummary":"COLLSCAN","keysExamined":0,"docsExamined":96944,"cursorExhausted":true,"numYields":96,"nreturned":0,"queryHash":"DBC59907","planCacheKey":"DBC59907","reslen":121,"locks":{"ReplicationStateTransition":{"acquireCount":{"w":97}},"Global":{"acquireCount":{"r":97}},"Database":{"acquireCount":{"r":97}},"Collection":{"acquireCount":{"r":97}},"Mutex":{"acquireCount":{"r":1}}},"storage":{},"protocol":"op_msg","durationMillis":147}}
The last of these messages is followed by:
{"t":{"$date":"2021-01-04T20:21:35.521-08:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn27","msg":"Connection ended","attr":{"remote":"127.0.0.1:58990","connectionId":27,"connectionCount":14}}
{"t":{"$date":"2021-01-04T20:21:35.522-08:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn26","msg":"Connection ended","attr":{"remote":"127.0.0.1:58989","connectionId":26,"connectionCount":13}}
{"t":{"$date":"2021-01-04T20:21:35.922-08:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn25","msg":"Interrupted operation as its client disconnected","attr":{"opId":310196}}
I have tried:
Using "noCursorTimeout(true)" on the query cursor (as shown above)
Starting the server with "mongod --setParameter localLogicalSessionTimeoutMinutes=240". The latter seems to have caused additional log messages that say "error":"Location13111: wrong type for field (expireAfterSeconds) long != int"
I am using mongod 4.4 and the latest MongoDB Java driver.
You may need to increase the default cursor idle timeout to a bigger value on all shards and mongos instances.
Check the parameter (the default is 10 min = 600000 ms):
use admin
db.runCommand({getParameter:1, cursorTimeoutMillis: 1})
and update it to a bigger value:
use admin
db.runCommand({setParameter:1, cursorTimeoutMillis: 600000000 })
Also, the COLLSCAN in your logs indicates that your query does not use an index; you may need to create one on "hashCode", as sketched below.
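For example, assuming the database and collection names from the log above, the index could be created from the mongo shell like this (adjust the field and options to your schema):
use master_database
db.MasterCollection.createIndex({ hashCode: 1 })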
Thanks for the response.
It turned out that my application ran to completion once I started mongod with "--setParameter localLogicalSessionTimeoutMinutes=240", despite the error message that I saw in the console log.
You are absolutely right that I should have an index on "hashCode". (I had one before but forgot to recreate it after recreating the collection.)

Spring Data Embedded Mongo: 'unknown top level operator: $expr' on server

When I run any query containing the $expr operator against Embedded Mongo, I get the following error:
UncategorizedMongoDbException: Query failed with error code 2 and error message 'unknown top level operator: $expr' on server
The command runs fine against my local instance of mongo.
This is the version of embedded mongo I'm using: testCompile('de.flapdoodle.embed:de.flapdoodle.embed.mongo:2.1.1')
This is the query for reference:
Criteria.where("$expr").ne(Arrays.asList("$val.a", "$val.b"))
Found it.
Flapdoodle was downloading a version of MongoDB that didn't support that operator by default ($expr was introduced in MongoDB 3.6).
You can override the default version by specifying the following in your
src/test/resources/application.properties
spring.mongodb.embedded.version=3.6.4
spring.mongodb.embedded.features=SYNC_DELAY,NO_HTTP_INTERFACE_ARG,ONLY_WITH_SSL

How to read from a replica set with mongodb-erlang

1. {ok,P}= mongoc:connect({rs, <<"dev_mongodb">>, [ "dev_mongodb001:27017", "dev_mongodb002:27017"]}, [{name, mongopool}, {register, mongotopology}, { rp_mode, primary},{ rp_tags, [{tag,1}]}], [{login, <<"root">>}, {password, <<"mongoadmin">>}, {database, <<"admin">>}]).
2. {ok, Pool} = mc_topology:get_pool(P, []).
3. mongoc:find(Pool, {<<"DoctorLBS">>, <<"mongoMessage">>}, #{<<"type">> => <<"5">>}).
I used the latest version from GitHub and got an error at step 3.
It seems my selector is not valid. Is there any example of how to use mongodb-erlang?
My MongoDB version is 3.2.6; the auth type is SCRAM-SHA-1.
mongoc:find(Pool, <<"mongoMessage">>, #{<<"type">> => <<"5">>}).
I tried this in both rs and single mode and still got the error.
Is there any other simple way to connect and read?
I just need to read some data once from mongo when my erlang program start, no other actions.
Today's version of the driver does not support a {Database, Collection} tuple as the collection argument, due to the new query API introduced in MongoDB 2.6.
You should connect to the DoctorLBS database instead, and then use:
mongoc:find(Pool, <<"mongoMessage">>, #{<<"type">> => <<"5">>}).

mongodb grails simple application times out

I'm having an issue with MongoDB 2.6.5 and Grails 2.4.4 that I can't resolve. To isolate the problem I created a simple Grails 2.4.4 app, installed the Grails MongoDB plugin (compile ":mongodb:3.0.2"), commented out the Hibernate dependencies, added my MongoDB datasource, and set up a simple domain class (com.nerds.Nerd). When I run generate-all, start the app, and navigate to the NerdController CRUD page, I get the following error every time:
MongoTimeoutException occurred when processing request: [GET] /MONGO/nerd/index
Timed out while waiting to connect after 10000 ms. Stacktrace follows:
com.mongodb.MongoTimeoutException: Timed out while waiting to connect after 10000 ms
I can access mongo via http using http://localhost:28017/
I have also tested manually adding data and querying from mongo. This all works fine.
In the debug log prior to the timeout it looks like GORM acquired a Mongo session and then tried rolling back a transaction.
DatastoreTransactionManager:128 - Found thread-bound Session [org.grails.datastore.mapping.mongo.MongoSession#e47ee6] for Datastore transaction
DatastoreTransactionManager:128 - Creating new transaction with name [null]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,readOnly
DatastoreTransactionManager:128 - Initiating transaction rollback
DatastoreTransactionManager:128 - Rolling back Datastore transaction on Session [org.grails.datastore.mapping.mongo.MongoSession#e47ee6]
DatastoreTransactionManager:128 - Resuming suspended transaction after completion of inner transaction
Any insight would be helpful. Thanks
edit: The mongo datasource is pretty simple. I'm using the correct port.
From the mongo log:
2014-11-18T13:10:13.388-0900 [initandlisten] MongoDB starting : pid=17275 port=27017 dbpath=/var/lib/mongodb 32-bit host=enterprise
from DataSource.groovy
grails {
    mongo {
        host = 'localhost'
        port = 27017
        databaseName = 'mydb'
    }
}
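As a quick sanity check outside Grails, you can confirm the server answers on that host and port from the mongo shell (a sketch, unrelated to the plugin itself):
// connect with: mongo localhost:27017/mydb
db.runCommand({ ping: 1 })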
I'm fairly certain the issue was on the mongod side. I stopped the mongo daemon, put it into high-verbosity debug mode (using the mongod -vvvv command), and when I tried to replicate the issue while watching the console output, it did not happen. I'm not entirely sure what the exact cause of the timeout was, but it's not happening now. Thanks for the responses.

Mongodump and mongorestore; field not found

I'm trying to dump a database from another server (this works fine), then restore it on a new server (this does not work fine).
I first run:
mongodump --host -d
This creates a folder dump/db which contains all of the bson documents.
Then in the dump folder, I'm running:
mongorestore -d dbname db
This works and iterates through the files, but I get this error on dbname.system.users
Wed May 23 02:08:05 { key: { _id: 1 }, ns: "dbname.system.users", name: "_id_" }
Error creating index dbname.system.usersassertion: 13111 field not found, expected type 16
Any ideas how to resolve this?
If they really are different versions, use the --noIndexRestore option and create all the indexes after that.
Any chance the source and destination are different versions?
In any case, to get around this, restore the collections individually using the -c flag against the target DB and then build the indexes afterward. The system collection is the one used for indexes, so it is fairly easy to recreate: try it last once everything else has been restored, and if it still fails you can always just recreate the relevant indexes.
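As a sketch of the "recreate the indexes afterward" step (the collection and field names here are made-up placeholders; older shells use ensureIndex instead of createIndex):
use dbname
db.items.createIndex({ name: 1 })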
The issue could also be caused by this bug in older versions of Mongo (in my case it was 2.0.8):
https://jira.mongodb.org/browse/SERVER-7181
Basically, you get the 13111 field not found, expected type 16 error when it should actually be prompting you to enter your authentication details.
An example of how I fixed it:
root#precise64:/# mongorestore /backups/demand/ondemand.05-24-2013T114223/
connected to: 127.0.0.1
[REDACTED]
Fri May 24 11:48:15 going into namespace [test.system.indexes]
Fri May 24 11:48:15 { key: { _id: 1 }, ns: "test.system.users", name: "_id_" }
Error creating index test.system.usersassertion: 13111 field not found, expected type 16
# Error when not giving username and password
root#precise64:/# mongorestore -u fakeuser -p fakepassword /backups/demand/ondemand.05-24-2013T114223/
connected to: 127.0.0.1
[REDACTED]
Fri May 24 11:57:11 /backups/demand/ondemand.05-24-2013T114223/test/system.users.bson
Fri May 24 11:57:11 going into namespace [test.system.users]
1 objects found
# Works fine when giving username and password! :)
Hope that helps anyone whose issue doesn't get fixed by the previous two replies!
This can also happen if you are trying to mongorestore into MongoDB 2.6+ and the dump you are trying to restore contains a system.users collection in any database other than admin. In MongoDB 2.2 and 2.4 the system.users collections could occur in any database. The auth schema migration associated with MongoDB 2.6 moved all users into the system.users collection in the admin database, but left behind the system.users collections in the other databases (MongoDB 2.6 just ignores these). This seems to cause this assertion when importing into MongoDB 2.6.
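A quick way to check for these leftovers before restoring is something like the following in the mongo shell (a sketch; only drop the leftover collections if you are sure the users have already been migrated to admin.system.users):
db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
  if (d.name === 'admin') return;
  // list collections in each non-admin database and flag stray system.users
  var names = db.getSiblingDB(d.name).getCollectionNames();
  if (names.indexOf('system.users') !== -1) {
    print('leftover system.users in ' + d.name);
  }
});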