mongoexport or mongodump on a large collection

When I run mongodump on an existing collection, the dump stops at 23%. The resulting file is about 1.2 GB.
I also tried mongoexport, but it stops at 23% as well.
The dump/export always stops at 21257107 records.
Is this normal?
(OS is Ubuntu 14.04 LTS)
mongodump -h localhost:27020 -d SensorData -c Values -o Dump
connected to: localhost:27020
2016-02-15T09:41:25.099+0100 DATABASE: SensorData to Dump/SensorData
2016-02-15T09:41:25.101+0100 SensorData.Values to Dump/SensorData/Values.bson
2016-02-15T09:41:28.178+0100 Collection File Writing Progress: 434100/91183458 0% (documents)
2016-02-15T09:41:31.051+0100 Collection File Writing Progress: 940300/91183458 1% (documents)
-------------------
2016-02-15T09:47:34.002+0100 Collection File Writing Progress: 21155100/91183458 23% (documents)
2016-02-15T09:47:37.339+0100 21257107 documents
2016-02-15T09:47:37.339+0100 Metadata for SensorData.Values to Dump/SensorData/Values.metadata.json
mongoexport -host localhost:27020 --db SensorData --collection Values -vv --out filename.json
2016-02-15T10:51:33.507+0100 creating new connection to:localhost:27020
2016-02-15T10:51:33.507+0100 [ConnectBG] BackgroundJob starting: ConnectBG
2016-02-15T10:51:33.508+0100 connected to server localhost:27020 (127.0.0.1)
2016-02-15T10:51:33.508+0100 connected connection!
connected to: localhost:27020
exported 21257107 records
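If both tools consistently stop at the same document, a hedged first diagnostic step (not from the thread) is to ask the server about the collection itself from the mongo shell. The helpers below are standard shell commands; SensorData and Values are the names from the question.
// Hedged diagnostic sketch, run in the mongo shell against localhost:27020.
use SensorData
db.Values.stats()        // what the server reports for document count and storage size
db.Values.count()        // compare against the 91183458 documents mongodump expected
db.Values.validate(true) // full validation scan; structural problems would show up here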

Related

Mongorestore Timeout

I am executing mongorestore using this command.
 
mongorestore --uri mongodb+srv://{mongoUser}:{mongoPass}@{mongoHost} --ssl --authenticationDatabase {mongoAuth} --db {db} --gzip --archive={fileLoc} --drop
 
I am restoring multiple collections at a time. Three other collections restored successfully, but on the last one, which is also the biggest, I get this error.
Failed: webhook-activity.user: error creating indexes for webhook-activity.user: createIndex error: connection() error occurred during connection handshake: dial tcp: i/o timeout
Details
mongorestore version: 100.5.4
mongo version: 5.0.12
I am using the same command for all my collections.
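A hedged workaround sketch (not from the thread): since the failure happens while creating indexes for the biggest collection, the restore can be split so the timeout-prone index build is taken out of the critical path. --noIndexRestore is a standard mongorestore flag; the placeholders are the same ones used in the question.
# Hedged sketch: restore the data only, skipping index builds.
mongorestore --uri mongodb+srv://{mongoUser}:{mongoPass}@{mongoHost} --ssl --authenticationDatabase {mongoAuth} --db {db} --gzip --archive={fileLoc} --drop --noIndexRestore
# Afterwards, the indexes can be created manually from the shell, e.g.:
# db.user.createIndex({ ... })   // field spec taken from the dump's metadata; illustrative here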
 

MongoImport error: Failed: error connecting to db server: no reachable servers, openssl error: Host validation error

When I try to import JSON into my MongoDB instance, which is password-authenticated, encrypted, and accessed over a TLS/SSL connection, I get an error.
This is the mongoimport command I'm running:
mongoimport --verbose --ssl --sslCAFile "C:\server\cert\rootCA.pem" --sslPEMKeyFile "C:\server\cert\server.pem" --sslFIPSMode --host 127.0.0.1 --port 27017 --username databaseAdmin --password password123 --authenticationDatabase admin --db test_coll --collection blocks --file "C:\data\blocks.json"
And I got the following error message:
2018-07-20T15:21:27.365+0530 filesize: 6392 bytes
2018-07-20T15:21:27.366+0530 using fields:
2018-07-20T15:21:30.368+0530 [........................] test_coll.blocks  0B/6.24KB (0.0%)
2018-07-20T15:21:30.928+0530 [........................] test_coll.blocks  0B/6.24KB (0.0%)
2018-07-20T15:21:30.928+0530 Failed: error connecting to db server: no reachable servers, openssl error: Host validation error
2018-07-20T15:21:30.928+0530 imported 0 documents
The hostname specified on the command line has to match the hostname in the server's certificate, so I updated the hostname to localhost.
Now my mongoimport command looks like this:
mongoimport --verbose --ssl --sslCAFile "C:\server\cert\rootCA.pem" --sslPEMKeyFile "C:\server\cert\server.pem" --sslFIPSMode --host localhost --port 27017 --username databaseAdmin --password password123 --authenticationDatabase admin --db test_coll --collection blocks --file "C:\data\blocks.json"
And now it works.
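As a hedged sanity check (not part of the original post), the names a certificate actually covers can be listed with openssl before choosing the --host value; this assumes openssl is available on the machine and uses the certificate path from the question.
REM Hedged sketch: print the certificate subject and the full details (including any
REM subjectAltName entries). The value passed to --host must match one of these names.
openssl x509 -in "C:\server\cert\server.pem" -noout -subject
openssl x509 -in "C:\server\cert\server.pem" -noout -text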

MongoDB: User authentication is working fine, but "Unauthorized not authorized on admin to execute command" keeps appearing in the logs

I've followed the steps described in How do I add an admin user to Mongo in 2.6?
At first, "auth=true" in /etc/mongod.conf is commented out so that authentication is disabled, which let me create the following users in their respective databases.
Admin:
use admin;
db.createUser({user: "mongoRoot", pwd: "password", roles: [{role: "root", db: "admin"}]});
db.createUser({user: "mongoAdmin", pwd: "password", roles: ["readWrite"]});
db.createUser({user: "siteUserAdmin", pwd: "password", roles: [{role: "userAdminAnyDatabase", db: "admin"}]});
db.createUser({user: "mongoDBAdmin", pwd: "password", roles: [{role: "dbAdmin", db: "admin"}]});
db.createUser({user: "mongoDBOwner", pwd: "password", roles: [{role: "dbOwner", db: "admin"}]});
db.createUser({user: "mongoWrite", pwd: "password", roles: [{role: "readWrite",db: "mongo_database"}]}); (Added in admin so that by giving the command from the command-line 'mongo mongo_database --port 27018 -u mongoWrite -p password --authenticationDatabase admin', the user mongoWrite is able to login as done in https://gist.github.com/tamoyal/10441108)
db.createUser({user: "mongoRead", pwd: "password", roles: [{role: "read", db: "mongo_database"}]}); (Added in admin so that by giving the command from the command-line 'mongo mongo_database --port 27018 -u mongoRead -p password --authenticationDatabase admin', the user mongoRead is able to login as done in https://gist.github.com/tamoyal/10441108)
Config:
use config;
db.createUser({user: "mongoConfig", pwd: "password", roles: [{role: "readWrite", db: "config"}]});
Test:
use test;
db.createUser({user: "mongoTest", pwd: "password", roles: [{role: "readWrite", db: "test"}]});
mongo_database:
use mongo_database;
db.createUser({user: "mongoWrite", pwd: "password", roles: [{role: "readWrite",db: "mongo_database"}]});
db.createUser({user: "mongoRead", pwd: "password", roles: [{role: "read", db: "mongo_database"}]});
db.createUser({user: "mongoAdmin", pwd: "password", roles: [{role: "readWrite", db: "mongo_database"}]});
After making sure that all the required users were added, I turned authentication on by uncommenting "auth=true" in /etc/mongod.conf and restarted mongod.
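For reference, a hedged sketch of what that looks like in the old ini-style config file that 2.6 packages ship (the restart command may differ per distribution):
# /etc/mongod.conf (2.6-era ini format)
auth = true            # uncomment this line to enforce authentication
# then restart the daemon, e.g.:
sudo service mongod restart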
[ec2-user@ip-xxx-xx-xx-xx ~]$ mongo mongo_database --port 27018 -u mongoWrite -p password --authenticationDatabase admin
MongoDB shell version: 2.6.10
connecting to: 127.0.0.1:27018/mongo_database
rs0:PRIMARY> db.test.insert({"Hello":"World"});
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> exit
bye
[ec2-user@ip-xxx-xx-xx-xx ~]$ mongo mongo_database --port 27018 -u mongoRead -p password --authenticationDatabase admin
MongoDB shell version: 2.6.10
connecting to: 127.0.0.1:27018/mongo_database
rs0:PRIMARY> db.test.insert({"Hello":"World"});
WriteResult({
    "writeError" : {
        "code" : 13,
        "errmsg" : "not authorized on mongo_database to execute command { insert: \"test\", documents: [ { _id: ObjectId('559bba6ead81843e121c5ac7'), Hello: \"World\" } ], ordered: true }"
    }
})
rs0:PRIMARY>
Everything works fine up to this point. The only issue I am encountering is that my log file is getting bombarded with the following two lines, at tens of thousands of lines per minute, and my disk quickly runs out of space.
2015-07-07T11:40:28.340+0000 [conn3] Unauthorized not authorized on admin to execute command { writebacklisten: ObjectId('55913d82b47aa336e4f971c2') }
2015-07-07T11:40:28.340+0000 [conn2] Unauthorized not authorized on admin to execute command { writebacklisten: ObjectId('55923232e292bbe6ca406e4e') }
Just to give an idea: in a span of 10 seconds, about 10 MB of log is generated, consisting of just the two lines mentioned above.
[ec2-user@ip-xxx-xx-xx-xx ~]$ date
Tue Jul 7 11:44:01 UTC 2015
[ec2-user@ip-xxx-xx-xx-xx ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdh       4.8G  388M  4.2G   9% /log
[ec2-user@ip-xxx-xx-xx-xx ~]$ date
Tue Jul 7 11:44:14 UTC 2015
[ec2-user@ip-xxx-xx-xx-xx ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdh       4.8G  398M  4.2G   9% /log
To my knowledge, authentication itself is working fine; it's just that the logs are filling up at supersonic speed. What am I doing wrong?
The excessive logging was coming from the config servers, and even after enabling authentication on the config servers it wouldn't stop. After upgrading the replica sets to MongoDB 3.0.4, turning authentication on there, and upgrading the config servers to 3.0.4 as well, everything worked without any issues (the same steps on 2.6.x would reproduce the problem described above). So we decided to move to 3.0.4 to get past this issue. Hope this helps someone.
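For anyone hitting the same flood on a sharded deployment, a hedged sketch of what "adding authentication to the config servers" typically involves: one shared key file distributed to every mongod and mongos, config servers included. The path below is a placeholder.
# Hedged sketch: generate and distribute one key file for internal authentication.
openssl rand -base64 741 > /etc/mongodb-keyfile
chmod 600 /etc/mongodb-keyfile
# 2.6-era ini config on each mongod/mongos (keyFile implies auth):
keyFile = /etc/mongodb-keyfile
# 3.0+ YAML config equivalent:
# security:
#   keyFile: /etc/mongodb-keyfile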

Mongoexport with cluster throws i/o timeout error

We just upgraded to MongoDB 3.0, but mongoexport gives us the following error after outputting some documents (not always the same amount): "Failed: read tcp 127.0.0.1:27020: i/o timeout". mongoexport is connecting to a sharded cluster of 4 standalone mongod servers with 3 mongod config servers.
[root@SRV]$ mongoexport --host localhost:27022,localhost:27021,localhost:27020 --db horus --collection users --type json --fields _id | wc -l
2015-03-09T12:41:19.198-0600 connected to: localhost:27022,localhost:27021,localhost:27020
2015-03-09T12:41:22.570-0600 Failed: read tcp 127.0.0.1:27020: i/o timeout
15322
The versions we are using are:
[root@MONGODB01-SRV]# mongo --version
MongoDB shell version: 3.0.0
[root@SRV]$ mongoexport --version
mongoexport version: 3.0.0 git version: e35a2e87876251835fcb60f5eb0c29baca04bc5e
[root@SRV]$ mongos --version
MongoS version 3.0.0 starting: pid=47359 port=27017 64-bit host=SRV (--help for usage)
git version: a841fd6394365954886924a35076691b4d149168
OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
build sys info: Linux ip-10-181-61-91 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49
I tried a 2.6 mongoexport from another server against our mongod 3.0 and mongos 3.0, and it works fine.
This is an old question, but I wanted to answer it anyway; maybe it will help someone. The error can be caused by someone else writing to the collection you are reading from. I had a similar problem: after a lot of research I realised that a user with a higher-priority role was writing to the same collection at the same time, so their requests were served while mine got the I/O exception.
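A hedged way to check that theory (not from the answer): while mongoexport is running, list the active operations on the same namespace from a mongo shell connected to the mongos; db.currentOp() accepts a filter document.
// Hedged sketch: long-running writes against horus.users showing up here
// while the export runs would support the concurrent-writer explanation.
db.currentOp({ "active": true, "ns": "horus.users" })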
Try freeing up the ports first, e.g. killall -9 node.
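Another hedged workaround sketch, independent of the answers above: export in smaller slices so no single cursor stays open long enough to hit the timeout. --sort, --skip and --limit are standard mongoexport options; the 500000-document batch size is arbitrary.
# Hedged sketch: pull the collection in chunks and append to one file.
mongoexport --host localhost:27022,localhost:27021,localhost:27020 --db horus --collection users --fields _id --sort '{_id: 1}' --skip 0 --limit 500000 >> users_ids.json
# ...then repeat with --skip 500000, --skip 1000000, and so on.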

Importing TSV to MongoDB on meteor servers

I am experimenting with Meteor and have deployed my app to Meteor's servers. The app is a simple dynamic filtering engine whose data comes from a TSV file. On my home machine, I used mongoimport (localhost:3001) to import the TSV into a local db, but on the Meteor servers I received a fresh, empty database. I'm wondering how to import a TSV into a database hosted on Meteor's servers. I'm sure there are security peculiarities to dealing with a publicly provided hosting service, and I guess that's my issue.
I have logged into my developer account on the meteor servers via the terminal on OSX, yielding server information like: production-db-c1.meteor.io:27017/myApp_meteor_com
With that server info, I run ./mongoimport like so (not a cut and paste, so ignore typos):
./mongoimport --host myApp_meteor_com/production-db-c1.meteor.io:27017 --collection myCollection -u User -p Pass --type tsv --headerline --file blah.tsv
I obtained the User/Pass info by first adding myself as a user with 'readWrite' access and then querying the db.system.users collection on the Meteor servers for my app. There were three users in the collection: one named "myApp_meteor_com", the one I added myself, and another that apparently was the connected terminal client. I tried each user/pass combination in the ./mongoimport string above.
*cough It's not really called 'myApp' but you get the idea
Here's the long-winded terminal output after executing mongoimport:
Thu Mar 6 12:38:38.412 kern.sched unavailable
Thu Mar 6 12:38:38.415 starting new replica set monitor for replica set myApp_meteor_com with seed of production-db-c1.meteor.io:27017
Thu Mar 6 12:38:38.467 successfully connected to seed production-db-c1.meteor.io:27017 for replica set myApp_meteor_com
Thu Mar 6 12:38:38.518 warning: node: production-db-c1.meteor.io:27017 isn't a part of set: myApp_meteor_com ismaster: { setName: "production-c", ismaster: true, secondary: false, hosts: [ "production-db-c1.meteor.io:27017", "production-db-c3.meteor.io:27017", "production-db-c2.meteor.io:27017" ], arbiters: [ "production-dbarb-c2.meteor.io:27017", "production-dbarb-c1.meteor.io:27017" ], primary: "production-db-c1.meteor.io:27017", me: "production-db-c1.meteor.io:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1394131118501), ok: 1.0 }
Thu Mar 6 12:38:40.518 warning: No primary detected for set myApp_meteor_com
Thu Mar 6 12:38:40.519 All nodes for set myApp_meteor_com are down. This has happened for 1 checks in a row. Polling will stop after 29 more failed checks
Thu Mar 6 12:38:40.519 replica set monitor for replica set myApp_meteor_com started, address is myApp_meteor_com/
couldn't connect to [myApp_meteor_com/production-db-c1.meteor.io:27017] connect failed to replica set myApp_meteor_com/production-db-c1.meteor.io:27017
Any help from meteor/mongodb experts is greatly appreciated.
Highlighting these to make it clear:
Thu Mar 6 12:38:38.518 warning: node: production-db-c1.meteor.io:27017 isn't a part of set: myApp_meteor_com ismaster: { setName: "production-c", ismaster: true, (...)
Thu Mar 6 12:38:40.518 warning: No primary detected for set myApp_meteor_com
You are connecting to the primary, so that is okay. But you have the wrong name for the replica set. The first warning tells you the name of the replica set ("production-c") that this host is a member of and also dumps the configuration list of members.
A better connection argument would be:
./mongoimport --host production-c/production-db-c1.meteor.io,production-db-c3.meteor.io,production-db-c2.meteor.io --collection myCollection -u User -p Pass --type tsv --headerline --file blah.tsv
This includes seed-list info, in case your current primary has been switched to a secondary.
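A hedged way to double-check the set name before building that --host string (db.isMaster() is a standard shell helper, and its output is the same document that appears in the warning above):
// Hedged sketch: from a mongo shell connected to production-db-c1.meteor.io:27017,
// isMaster reports the replica set name ("setName") and the member list ("hosts").
db.isMaster()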
That answer helped me out quite a bit, but I ran into authentication issues. After a bit more research into how the Meteor servers issue temporary user/passwords, I was finally able to import a TSV file into my Meteor-deployed app's MongoDB.
The "key" for me was to use:
meteor mongo --url myApp
in order to gain a user/pass. It is my understanding that each time this command is run, a new user/pass is created on the Meteor servers and is only good for a very short while (60 secs?). When I ran it, I received this output at the command prompt:
mongodb://client-faddb071:a74c4b2a-15bc-2dcf-dc58-b5369c9ebee3@production-db-c1.meteor.io:27017/myApp_meteor_com
From that output I extracted the username "client-faddb071" and the password "a74c4b2a-15bc-2dcf-dc58-b5369c9ebee3".
Then, in another terminal window (because the user/pass doesn't last long), I was ready to go with the mongoimport command:
> ./mongoimport --host production-db-c1.meteor.io:27017 --username client-faddb071 --password a74c4b2a-15bc-2dcf-dc58-b5369c9ebee3 --db myApp_meteor_com --collection myCollection --drop --type tsv --headerline --file /path/to/file.tsv
That worked verbatim for me, and 3888 records from the TSV were successfully loaded into my Meteor-hosted MongoDB. Thanks all for the input; it all contributed to my final knowledge and success.
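Building on that, a hedged convenience sketch (not from the thread): because the credentials expire so quickly, the URL can be parsed and fed straight into mongoimport in one step. The sed expressions and file path are illustrative and assume the mongodb://user:pass@host:port/db shape shown above.
# Hedged sketch: fetch a fresh URL and run the import before the temporary credentials expire.
URL=$(meteor mongo --url myApp)
MUSER=$(echo "$URL" | sed -e 's|mongodb://||' -e 's|:.*||')
MPASS=$(echo "$URL" | sed -e 's|mongodb://[^:]*:||' -e 's|@.*||')
MHOST=$(echo "$URL" | sed -e 's|.*@||' -e 's|/.*||')
MDB=$(echo "$URL" | sed -e 's|.*/||')
./mongoimport --host "$MHOST" --username "$MUSER" --password "$MPASS" --db "$MDB" --collection myCollection --drop --type tsv --headerline --file /path/to/file.tsv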