I am using mongodump (version 2.4.14) to create a db backup that I restore on another system using mongorestore (version 2.4.14), but not all of the records are being dumped and restored in the target mongo (2.4.14) instance.
I have tried:
Restoring the db to two separate instances of mongo; the problem persists on both.
mongoexport with queries - the number of documents exported by mongoexport does not match db.collection.count().
While trying to debug this, I came across this link, where others describe the same problem, but no solution is mentioned.
I am looking for any help in finding out what the problem might be and how I can debug this further.
Update - Exporting the response to a particular query with mongoexport and importing it into a freshly created database works fine. The issue arises only when importing into an existing db.
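One thing worth noting (an assumption on my part, not confirmed by anything above): by default mongorestore only inserts, so when restoring into an existing db, documents whose _id already exists in the target collection are skipped, which would produce exactly this kind of count mismatch. A minimal sketch with hypothetical host and db names, using --drop so each target collection is replaced rather than merged into:

mongodump --host source-host --db mydb --out /backup/dump
# --drop removes each collection in the target before restoring it,
# so existing documents can't collide with the ones in the dump
mongorestore --host target-host --db mydb --drop /backup/dump/mydb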
Related
I am trying to restore my database dump using the mongorestore command. I noticed that some ObjectIds change automatically after restoring, and I am facing data-dependency issues because of that. Do ObjectIds update themselves while restoring? If so, is there any way to prevent that?
We are creating Meteor-based Mongo database manager and we need the ability to "unmount" (remove from system) all collections when we switch databases.
Example:
I'm managing a database called dbA. We have all collections for that database created using Mongo.Collection() on the server and on the client side.
I want to switch the database to dbB. I need to unmount all collections of dbA and mount those of dbB. Reason: dbB could have a collection with the same name as one in dbA (and usually does).
Is there a way to do this?
Thanks!
You may be able to accomplish this by publishing the necessary data from the new database.
Here's a discussion from a similar question on the Meteor Forums (note the proposed solution at the end):
https://forums.meteor.com/t/switch-database-while-meteor-is-running/4361/5
Hi, I think you can do this with
db.copyDatabase()
Run the shell command in the backend from the Meteor server and execute the copy database command. After the database is copied, you can remove the previous collections.
More detail about copyDatabase() is here:
https://docs.mongodb.org/manual/reference/method/db.copyDatabase/
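A minimal sketch of that approach, with hypothetical database names (the exact invocation from the Meteor backend is an assumption; here it is run through the mongo shell):

# copy dbA into dbB on the same mongod
mongo localhost:27017/admin --eval "db.copyDatabase('dbA', 'dbB')"
# once the copy is verified, drop the original
mongo localhost:27017/dbA --eval "db.dropDatabase()"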
I want to copy from one mongo db to another db on the same server. Mongo version is 2.6.3 on Win 2008 64bit.
I ran the command:
mongo localhost:27017/admin -u <> -p <> --eval "db.copyDatabase('db_master','db_copy1')"
This worked and created db_copy1 with all the users in it. I did db.getUsers() on db_copy1 and it returned all users. All was fine.
Then I went on to copy db_copy1 to db_copy2 using the same command as above (with different database names, obviously). But the resulting db_copy2 had no users in it.
I'm fairly new to mongo, so it's quite possible I have missed something.
Thanks in advance for all your help!
Vikram
One of the things I love about MongoDB is that rather than messing about with commands like that, you can just copy the files.
Just go to the directory containing the data files and copy them to the dbpath of your new database. If you don't want a certain database, don't copy the files with that database's name!
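A hedged sketch of the file-copy approach, assuming the MMAPv1 storage engine without --directoryperdb (so a database named db_master lives in files db_master.ns, db_master.0, db_master.1, ...) and hypothetical paths:

# stop mongod first so the files are in a consistent state
# (on Windows, e.g. net stop MongoDB)
cp /data/db/db_master.* /new/dbpath/
# note: the copied files keep the original database name, so the
# instance pointed at /new/dbpath will see a database called db_master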
I am using MongoDB version 1.6.5
One of my collections has 973525 records.
When I try to export this collection, mongodb exports only 101 records.
I can't figure out the problem.
Does anyone know the solution for it?
This sounds like corruption. If your server has not shut down cleanly, that could be the cause. Have you had system crashes after which you didn't run a repair?
You can try to do a dump with mongodump --dbpath if you shut down the server first.
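A minimal sketch of the offline dump, with hypothetical paths:

# with the server stopped, mongodump can read the data files directly
mongodump --dbpath /data/db --out /backup/dump
# if corruption is suspected, a repair can also be attempted first
mongod --repair --dbpath /data/db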
Note: mongoexport/mongoimport will not be able to restore all the data, since JSON can't represent all possible BSON data types.
When using Cucumber with Capybara, I have to load the test database data from an SQL dump.
Unfortunately this takes 10s for each scenario, which slows down the tests.
I have found something like: http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial#How_to_Replicate
Do you think binary replication would be quicker than using SQL files?
Is there anything I can do to make the restore quicker (I restore just data, not structure)?
What approaches would you recommend to try?
You could try to put your test data into a "template" database (e.g. mydb_template)
To prepare the test scenario, you simply drop your database using DROP DATABASE mydb and recreate it based on the template: CREATE DATABASE mydb TEMPLATE = mydb_template;.
Of course you'll need to connect to e.g. template0 or the postgres database in order to be able to drop mydb.
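A minimal sketch of the drop-and-recreate cycle, with hypothetical names, connecting to the postgres maintenance database as described above:

psql -U postgres -d postgres -c "DROP DATABASE IF EXISTS mydb;"
psql -U postgres -d postgres -c "CREATE DATABASE mydb TEMPLATE = mydb_template;"

Since CREATE DATABASE ... TEMPLATE copies the template at the file level, this is usually much cheaper than replaying an SQL dump statement by statement.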
I think this could be faster than importing a dump.
I recall a discussion on the PG mailing list regarding this approach and some performance problems with large "templates" that were fixed with 9.0.
(I restore just data, not structure)
COPY is always fastest for importing just data. The other answer deals with restoring a whole database.
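A hedged sketch, with hypothetical table and file names (\copy runs client-side through psql, so the data file only needs to exist on the machine running the tests; the WITH (FORMAT csv) option syntax assumes PostgreSQL 9.0+):

# bulk-load rows in one statement instead of many INSERTs
psql -U postgres -d mydb -c "\copy mytable FROM 'mytable.csv' WITH (FORMAT csv)"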