While performing a simple mongodump, I encountered these error/warning/info messages:
Tue May 8 04:22:30 skipping collection: myDb.myCollection.$myCollection.myProp_1
Tue May 8 04:22:30 skipping collection: myDb.myCollection2.$myCollection2._id_1
I was wondering what this means, and whether I should worry that my dump is not a 100% copy of the source?
The command line used was:
./mongodump -v -h localhost
Related
We are facing an error during a Postgres upgrade from version 11 to 12.
Please suggest what I can do to resolve this issue.
Error:
Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions ok
old and new pg_controldata WAL segment sizes are invalid or do not match
Failure, exiting
bash-4.1$
command:
/usr/pgsql-12/bin/pg_upgrade --old-datadir /var/lib/pgsql/11/data/ --new-datadir /var/lib/pgsql/12/data/ --old-bindir /usr/pgsql-11/bin/ --new-bindir /usr/pgsql-12/bin/ --check
C:\Users\krishnava\Downloads\git>heroku pg:backups:restore "https://s3.amazonaws.com/backup_xxx"
DATABASE_URL
! WARNING: Destructive Action
! This command will affect the app gcesalem
! To proceed, type gcesalem or re-run this command with --confirm appname
> appname
Starting restore of https://s3.amazonaws.com/backup_xxx
to postgresql-round-xxx... done
Use Ctrl-C at any time to stop monitoring progress; the backup will continue restoring.
Use heroku pg:backups to check progress.
Stop a running restore with heroku pg:backups:cancel.
Restoring... !
! An error occurred and the backup did not finish.
!
! waiting for restore to complete
! pg_restore finished with errors
! waiting for download to complete
! download finished with errors
! please check the source URL and ensure it is publicly accessible
!
! Run heroku pg:backups:info r006 for more details.
Info
C:\Users\krishnava\Downloads\git>heroku pg:backups:info r006
=== Backup r006
Database: BACKUP
Started at: 2019-07-16 15:34:40 +0000
Finished at: 2019-07-16 15:34:40 +0000
Status: Failed
Type: Manual
Backup Size: 0.00B (0% compression)
=== Backup Logs
2019-07-16 15:34:40 +0000 pg_restore: [archiver] did not find magic string in file header
2019-07-16 15:34:40 +0000 waiting for restore to complete
2019-07-16 15:34:40 +0000 pg_restore finished with errors
2019-07-16 15:34:40 +0000 waiting for download to complete
2019-07-16 15:34:40 +0000 download finished with errors
2019-07-16 15:34:40 +0000 please check the source URL and ensure it is publicly accessible
Instead of doing it like this, you can export a copy of the local database and import it into Heroku.
To export from the local database:
pg_dump <DATABASE_NAME> > <FILENAME>.sql
This will ask you to enter your database password. On Windows, however, it will ask for the system user's password, because the default user name is the system user name. In that case you have to specify your username explicitly:
pg_dump -U <USER_NAME> <DATABASE_NAME> > <FILENAME>.sql
In your case the command will be like this:
pg_dump -U postgres gce > gce.sql
After exporting the local database, you can upload it directly to Heroku:
heroku pg:psql --app <APP_NAME> < gce.sql
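If you want to sanity-check that the import went through, one option (a sketch, reusing the same app name as above) is to open a psql session against the Heroku database and list the tables with the standard \dt meta-command:
heroku pg:psql --app gcesalem
\dt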
We just upgraded to Mongo 3.0, but mongoexport gives us the following error after outputting some documents (not always the same amount): "Failed: read tcp 127.0.0.1:27020: i/o timeout". mongoexport is connecting to a sharded cluster of 4 standalone mongod servers with 3 mongod config servers.
[root@SRV]$ mongoexport --host
localhost:27022,localhost:27021,localhost:27020 --db horus
--collection users --type json --fields _id | wc -l
2015-03-09T12:41:19.198-0600 connected to:
localhost:27022,localhost:27021,localhost:27020
2015-03-09T12:41:22.570-0600 Failed: read tcp 127.0.0.1:27020: i/o
timeout
15322
The versions we are using are:
[root@MONGODB01-SRV]# mongo --version
MongoDB shell version: 3.0.0
[root@SRV]$ mongoexport --version
mongoexport version: 3.0.0
git version: e35a2e87876251835fcb60f5eb0c29baca04bc5e
[root@SRV]$ mongos --version
MongoS version 3.0.0 starting: pid=47359 port=27017 64-bit host=SRV (--help for usage)
git version: a841fd6394365954886924a35076691b4d149168
OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
build sys info: Linux ip-10-181-61-91 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49
We tried a 2.6 mongoexport on another server against our mongod 3.0 and mongos 3.0, and it works fine.
This is an old question, but I wanted to answer; maybe it will help someone. The error might be caused by someone else trying to write to the collection you are writing to. I had a similar problem. After a lot of research I realised that a user with a higher role was writing at the same time, and because their role takes precedence over mine, their requests went through and mine got the I/O exception.
Try closing the ports first, e.g. killall -9 node
I am experimenting with Meteor and have deployed my app to Meteor's servers. The app is a simple dynamic filtering engine whose data came from a TSV file. On my home machine, I used mongoimport (localhost:3001) to import the TSV into a local db, but on the Meteor servers I received a fresh, empty database. I'm wondering how to import a TSV into a database hosted on Meteor's servers. I'm sure there are security peculiarities to dealing with a publicly provided hosting service, and I guess that's my issue.
I have logged into my developer account on the meteor servers via the terminal on OSX, yielding server information like: production-db-c1.meteor.io:27017/myApp_meteor_com
With that server info, I follow on with ./mongoimport like so (not a cut-and-paste, so ignore typos):
./mongoimport --host myApp_meteor_com/production-db-c1.meteor.io:27017 --collection myCollection -u User -p Pass --type tsv --headerline --file blah.tsv
I obtained the User/Pass info by first adding myself as a user with 'readWrite' access and then querying the db.system.users collection on the Meteor servers for my app. There were three users in the collection: one named "myApp_meteor_com", one I added myself, and another that apparently was the connected terminal client. I tried each user/pass combination in the ./mongoimport string above.
*cough It's not really called 'myApp' but you get the idea
Here's the long-winded echo from the terminal after executing mongoimport:
Thu Mar 6 12:38:38.412 kern.sched unavailable
Thu Mar 6 12:38:38.415 starting new replica set monitor for replica set myApp_meteor_com with seed of production-db-c1.meteor.io:27017
Thu Mar 6 12:38:38.467 successfully connected to seed production-db-c1.meteor.io:27017 for replica set myApp_meteor_com
Thu Mar 6 12:38:38.518 warning: node: production-db-c1.meteor.io:27017 isn't a part of set: myApp_meteor_com ismaster: { setName: "production-c", ismaster: true, secondary: false, hosts: [ "production-db-c1.meteor.io:27017", "production-db-c3.meteor.io:27017", "production-db-c2.meteor.io:27017" ], arbiters: [ "production-dbarb-c2.meteor.io:27017", "production-dbarb-c1.meteor.io:27017" ], primary: "production-db-c1.meteor.io:27017", me: "production-db-c1.meteor.io:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1394131118501), ok: 1.0 }
Thu Mar 6 12:38:40.518 warning: No primary detected for set myApp_meteor_com
Thu Mar 6 12:38:40.519 All nodes for set myApp_meteor_com are down. This has happened for 1 checks in a row. Polling will stop after 29 more failed checks
Thu Mar 6 12:38:40.519 replica set monitor for replica set myApp_meteor_com started, address is myApp_meteor_com/
couldn't connect to [myApp_meteor_com/production-db-c1.meteor.io:27017] connect failed to replica set myApp_meteor_com/production-db-c1.meteor.io:27017
Any help from meteor/mongodb experts is greatly appreciated.
Highlighting these to make it clear:
Thu Mar 6 12:38:38.518 warning: node: production-db-c1.meteor.io:27017 isn't a part of set: myApp_meteor_com ismaster: { setName: "production-c", ismaster: true, (...)
Thu Mar 6 12:38:40.518 warning: No primary detected for set myApp_meteor_com
You are connecting to the primary, so that is okay. But you have the wrong name for the replica set. The first warning tells you the name of the replica set ("production-c") that this host is a member of and also dumps the configuration list of members.
A better connection argument would be:
./mongoimport --host production-c/production-db-c1.meteor.io,production-db-c3.meteor.io,production-db-c2.meteor.io --collection myCollection -u User -p Pass --type tsv --headerline --file blah.tsv
This includes some seed list info, in case your current primary has been switched to a secondary.
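If you want to confirm the replica set name yourself before running mongoimport, one way (a sketch; isMaster does not normally require authentication) is to ask the host directly from the mongo shell and look at the setName field in the output:
./mongo production-db-c1.meteor.io:27017/test --eval "printjson(db.isMaster())"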
The above helped me out quite a bit, but I ran into authentication issues. So after a bit more research into how the Meteor servers issue temporary user/passwords, I finally was able to import a TSV file into my Meteor-deployed app's MongoDB.
The "key" for me was to use:
meteor mongo --url myApp
in order to gain a user/pass. It is my understanding that when this command is run, a new user/pass is created each time on the Meteor servers and is only good for a very short while (60 secs?). When I ran it, I received this echo at the command prompt:
mongodb://client-faddb071:a74c4b2a-15bc-2dcf-dc58-b5369c9ebee3@production-db-c1.meteor.io:27017/myApp_meteor_com
From that info I was able to extract the username "client-faddb071" and the password "a74c4b2a-15bc-2dcf-dc58-b5369c9ebee3", and then in another terminal window (because the user/pass doesn't last long) I was ready to go with the mongoimport command:
> ./mongoimport --host production-db-c1.meteor.io:27017 --username client-faddb071 --password a74c4b2a-15bc-2dcf-dc58-b5369c9ebee3 --db myApp_meteor_com --collection myCollection --drop --type tsv --headerline --file /path/to/file.tsv
That worked verbatim for me, and 3888 records from the TSV were successfully loaded into my Meteor-hosted MongoDB. Thanks all for the input; it all contributed to my final knowledge and success.
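For anyone following along, a quick way to verify the import afterwards (a sketch; you would need a fresh user/pass from meteor mongo --url, since the previous one expires) is to connect with the mongo shell and count the documents in the collection:
./mongo production-db-c1.meteor.io:27017/myApp_meteor_com -u <USER> -p <PASS> --eval "db.myCollection.count()"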
I started seeing this in my mongodb production database logs
ns:my_app_production.artists query:{ $query: {}, $orderby: { semester: 1, name: 1 } }
Wed Jul 25 19:20:59 [conn199] Assertion: 10334:Invalid BSONObj spec size: -286331154 (EEEEEEEE) first element:_id: "agelio-batle"
Wed Jul 25 19:20:59 [conn199] assertion 10334 Invalid BSONObj spec size: -286331154 (EEEEEEEE) first element:_id: "agelio-batle"
First I tried running a repair:
mongod --repair
...
Wed Jul 25 22:20:39 [initandlisten] Assertion: 10334:Invalid BSONObj spec size: -286331154 (EEEEEEEE) first element:_id: "agelio-batle"
0x467eaa 0x4183ca 0x62dd82 0x643478 0x532b22 0x64d196 0x6578b7 0x65ac31 0x65cd75 0x65d6d9 0x51a419 0x6195f5 0x61b0c5 0x61bd3d 0x4914c8 0x47ad9a 0x5e2e7c 0x5e60b9 0x5e78ad 0x6346a2
[0x467eaa]
# several more of what I assume are memory addresses omitted
[0x6346a2]
Wed Jul 25 22:20:39 [initandlisten] assertion 10334 Invalid BSONObj spec size: -286331154 (EEEEEEEE) first element:_id: "agelio-batle" ns:my_app_production.artists query:{}
Wed Jul 25 22:20:39 [initandlisten] exception in initAndListen std::exception: nextSafe(): { $err: "Invalid BSONObj spec size: -286331154 (EEEEEEEE) first element:_id: "a...", code: 10334 }, terminating
Wed Jul 25 22:20:39 dbexit:
...
I also tried running db.repairDatabase(); from the mongo shell, with the same results.
I have never seen this before. Typically with MongoDB a repair fixes most problems, so I'm not sure how to proceed or troubleshoot this. Any ideas?
If an overall repair fails, you can mongodump out the individual collections with the --repair option and attempt to isolate the issues. You can even pass queries in to filter out the corrupt data from a corrupted collection (there is a sketch of this below the commands), essentially working around the bad data, but it is an incremental and often slow process. This is why it is always recommended to take backups and run in replica sets, to avoid the scenario where you are left with a potentially corrupt data set.
That said, if you have no way to restore from a backup or another replica set member, then you can try something like (with the database shut down):
mongodump --dbpath /path/to/source/data/files --repair --db <dbname> --out /path/to/repaired
If that does not work, then to skip the index read (which might be tripping you up):
mongodump --forceTableScan --dbpath /path/to/source/data/files --repair --db <dbname> --out /path/to/repaired
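The query filtering mentioned above would look something like this (a sketch only; I'm using the _id from the error messages as a hypothetical boundary, and you would adjust the ranges and output directories until the pieces around the bad document dump cleanly):
mongodump --dbpath /path/to/source/data/files --db my_app_production --collection artists -q '{ "_id": { "$lt": "agelio-batle" } }' --out /path/to/partial1
mongodump --dbpath /path/to/source/data/files --db my_app_production --collection artists -q '{ "_id": { "$gt": "agelio-batle" } }' --out /path/to/partial2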
Thanks to Ren in the comments, the other thing you can try is dropping/rebuilding the index. The table scan option (mongodump walks the _id index by default) will avoid using the indexes for the dump. So assuming you get the path and the database names right, the second option should work.
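If the data files are readable enough for mongod to start, the index drop/rebuild suggested above would look roughly like this from the mongo shell (a sketch; the database and collection names are taken from the question):
use my_app_production
db.artists.dropIndexes()   // removes all secondary indexes (the _id index stays)
db.artists.reIndex()       // rebuilds every index on the collection, including _id
And once you do have a clean dump from either approach, you would restore it into a fresh data directory with mongorestore, for example:
mongorestore --dbpath /path/to/new/data/files /path/to/repaired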