Getting my head around Mongo at the moment. I am trying to copy a complete database from a server to my PC:
db.copyDatabase(fromdb, todb, fromhost)
The fromhost db contains 4 collections with documents in them. For some reason the local version of this db has all the collections, but they are empty:
db1 0.000GB
db2 0.000GB
What am I missing? Why are the collections empty?
Q: Why are the collections empty?
A: It looks like something went wrong.
If you haven't already, I would try db.getLastError() to see if there's any error message.
I would also look at this link:
How do I copy a database from one MongoDB server to another?
If you are using --auth, you'll need to include your username/password in there...
Also you must be on the "destination" server when you run the command.
db.copyDatabase(<from_db>, <to_db>, <from_hostname>, <username>, <password>);
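For example, run from the mongo shell on the destination machine it might look like this (hostname, database name, and credentials are made-up placeholders, not values from the question):
// Copy "mydb" from the remote server into a local "mydb", authenticating against the source host.
db.copyDatabase("mydb", "mydb", "source.example.com", "admin", "secret");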
If all that doesn't work, you might want to try something like creating a slave of the database you want to copy ...
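If you do go that route, the old-style master/slave setup looked roughly like this (only a sketch; the dbpath and hostname are assumptions, and newer MongoDB versions use replica sets instead):
# Start a local mongod that slaves from the source server, let it finish syncing, then stop it.
mongod --dbpath /data/slave --slave --source source.example.com:27017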
Finally, review the materials on the MongoDB "copyDatabase" reference page:
https://docs.mongodb.org/manual/reference/method/db.copyDatabase/
Please post back with any additional details (e.g. error message).
And, if you get it working, please post back what was wrong, and how you fixed it!
Good luck!
Is there a way to write database migrations for Parse Server?
My use case is: I want some tables with pre-populated data whenever I connect my application to a fresh MongoDB server, for example when setting up a staging environment or a local development environment.
I could not really find anything in the docs.
Am I going in the right direction, or am I missing something?
Part of the Parse.com -> Parse-Server migration is that you now have to manage your own database. Parse-Server gives you the necessary tools to connect to your database, but you have to do things like manage indexes (mlab gives you weekly tips on what indexes you could add for improvement!), uploading large amounts of data, etc.
So, if your question is "Does Parse-Server do this?" No, and they won't.
If your question is "Can this be done?" Well, yes! MongoDB has an import tool (mongoimport) that takes in JSON or CSV, if I recall correctly. I know you can import a single collection; I'm not sure if you can do multiple collections at a time. One caveat is that you need to set createdAt, updatedAt and objectId yourself for this, but yes, you can do it.
If you aren't too familiar with the raw data format Mongo needs, you could always set up the tables you want (if they aren't too big), export all the data, and then use that export as the import for every fresh database thereafter. The only issue is that updatedAt and createdAt will show old dates on new instances.
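As a rough sketch of the import approach (database, collection, and file names below are made up, and the exact bookkeeping field names depend on how your Parse Server version stores objectId/createdAt/updatedAt in Mongo, so double-check them first):
# seed_GameScore.json holds one JSON document per line with the Parse fields filled in by hand, e.g.:
#   {"_id": "abc123XYZ0", "score": 100, "_created_at": {"$date": "2016-01-01T00:00:00Z"}, "_updated_at": {"$date": "2016-01-01T00:00:00Z"}}
mongoimport --db myParseDB --collection GameScore --file seed_GameScore.json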
I have a mongo database whose files are db.ns, db.0, db.1, ... db.7
I accidentally removed all the data from a collection, but when I explore the database files with vim, all (or part) of the data is still there.
I tried to recover the data by moving the files to another mongodb instance and by running mongorestore, and I also tried mongodump, but the collection appears empty.
I also tried to recover from scratch, directly from the files: with bsondump on each one, and on a single concatenated file (cat db.ns db.1 ... > bigDB), but nothing.
I don't know what other ways there are to recover the data from mongo database files.
Any suggestions? Thanks!
[SOLVED]
I will try to explain what I did to "solve" the problem.
First. Theory.
In this SlideShare you can see a little about how MongoDB database files work:
http://www.slideshare.net/mdirolf/inside-mongodb-the-internals-of-an-opensource-database
Options:
When you accidentally remove a collection:
the first thing you have to do is quickly copy the whole database directory (normally /data/db or /var/lib/mongodb) and stop the service.
Then remove the journal directory, try to recover from this copy, and pray ;D (see the sketch below).
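Something along these lines (paths and service name are assumptions; adapt them to your setup):
# Copy the data files somewhere safe before touching anything, then stop mongod.
cp -a /var/lib/mongodb /root/mongodb-rescue-copy
sudo service mongod stop
# If you try the journal trick, do it on yet another copy, never on the rescue copy itself.
cp -a /root/mongodb-rescue-copy /root/mongodb-work-copy
rm -rf /root/mongodb-work-copy/journal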
You can see more about that, here:
mongodb recovery removed records
In my case, this did not work for me.
With journaling, mongo does not update its database files directly, only their indexes.
So you can access the files (named database.ns, database.0, database.1 ...) and try to recover the data from them.
These files are chopped-up BSON mixed with binary data, so you can open them and see all the information.
In my case, I wrote a simple PHP function that first reads a file and splits it into smaller files.
Then it takes those one by one, applies some regular expressions to remove hexadecimal values, splits the content into records (you can use the "_id" key to do that), and does some other tasks to clean up the data.
And finally, I had to process all of the preprocessed data manually to get the information back.
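A rough command-line equivalent of the same idea (this is not the author's PHP script, and the filenames are assumptions): pull the printable text out of a raw data file and cut it into one chunk per record, using the "_id" key as the boundary.
# Extract printable strings from one data file, then start a new output file at every "_id" marker.
strings database.0 > dump.txt
awk '/_id/ { if (out) close(out); out = sprintf("record_%06d.txt", ++n) } out { print > out }' dump.txt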
I think I have lost at least 15-25% of the information. But I prefer to think that I have recovered 75% of the lost info.
Caution:
This is not an easy or safe way to solve this problem. In my case, the db only received inserts; nothing was modified or updated.
With this method a lot of information will be lost: Mongo IDs, integers, and dates can't be recovered.
The process is 100% manual; you can spend time automating certain tasks, but that will depend on your database structure.
On my PostgreSQL 8.0 database, I started receiving an "ERROR: could not open relation 1663/17269/16691: No such file or directory" message, and now my data is inaccessible.
Any ideas on how to recover at least some of the data? Professional support is an option.
Regards.
RP
If you want your data back in a hurry and it's worth something to you, then the professional support option should be simple enough.
Some things to check, now that you've got a full backup of all your database (that's base, pg_clog, pg_xlog and all the other folders at that level).
Does that file actually exist? It might be a permissions problem rather than the file actually going missing.
Check your anti-virus/security packages - have they mistakenly quarantined the file? If you can exclude PostgreSQL's database directories from scans/active scans that's worthwhile too.
Make a note of everything you can remember about when this happened and what happened just before. This will help with troubleshooting for you or a consultant.
Check the logs likewise - this error will be logged, find the first occurrence and see if there's anything odd before.
Double-check you really do have all your existing files backed up, and restart PostgreSQL.
Try connecting as user postgres to database postgres or database template1. If that works then the file is one of your database files rather than the global list of users or some such.
Try creating an empty file with the right name (and permissions - check the other files). If you are really lucky it's just an index. Otherwise it could be a data table you can live without. Then you can dump other tables individually.
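If the file really is gone, creating the placeholder might look like this (the data directory path is an assumption; 1663/17269/16691 are the tablespace, database, and relation numbers from the error message):
# Create an empty file where PostgreSQL expects the relation, matching the neighbouring files' ownership and permissions.
cd /var/lib/pgsql/data/base/17269
sudo -u postgres touch 16691
sudo chmod 600 16691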
OK - if you're here then you can connect to your DB. Those numbers in the file-path are PostgreSQL's OIDs identifying system objects. You can try a couple of useful queries here. These two queries should give you the IDs of the databases and then the object with the missing file. This is useful information for your professional too.
SELECT oid, datname, dattablespace FROM pg_database;
SELECT * FROM pg_class WHERE relfilenode = 16691;
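If the second query shows the missing file belongs to something you can live without (an index, say), you can then dump the surviving tables one at a time, along these lines (database and table names are placeholders):
pg_dump -U postgres -t some_intact_table mydb > some_intact_table.sql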
Remember: make sure you have the filesystem backup before tinkering.
Today I've been working on a performance test with MongoDB. At one point I used up all the remaining space on my hard disk, so the test halted in the middle. I removed some of the files and restarted the test after a db.dropDatabase();, but I noticed that the results of db.collection.stats(); now seem to be wrong.
My question is, how can I make MongoDB reset / recalculate statistics of a collection?
Sounds like mongodb is keeping space for the data and indexes it "knows" you will need when you run the test again, even though there is no data there at the moment.
What files did you delete? If you really don't need the data, you could stop mongod, and delete the other files corresponding to the database - but this is only safe if you are running in a test environment, and not sharing your database.
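In a throwaway test environment that could look like this (database name, dbpath, and service name are assumptions):
# Test environment only: remove every file belonging to the dropped database, then restart.
sudo service mongod stop
rm /data/db/perftest.*
sudo service mongod start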
I think you're looking for the db.collectionName.drop() function. Then re-import your collection using mongoimport --db dbName --collection collectionName --file fileName and check whether or not those values are correct - my quick guess, though, is that they are.
I'm at an impasse that probably has a simple solution, but I can't see it. I've done everything in the Sphinx documentation up to the Quick Tour, but when I test the search using test.php in PuTTY, it returns zero results.
I've put in all my correct database info in sphinx.conf and I've assembled the SQL query. I'm not getting any errors at all, just that it says it's returning 0 results every time I search.
Is it looking at my databases? Let me know if you need to see any code. searchd is running (as far as I can tell).
Sphinx has 2 different phases:
1) Indexing
2) Searching
I believe from your question that you skipped by mistake the part where you need to index the data (run indexer), so that searching would have data to search through. During indexing, Sphinx takes all the data from your DB; searches then run against that index and not against your DB.
Make sure that indexer --all shows that it found and indexed actual documents.
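For example (if searchd is already running, add --rotate so the rebuilt indexes are picked up without a restart):
# Build every index defined in sphinx.conf and reload them into the running searchd.
indexer --all --rotate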
Besides the API, there is another convenient way to test Sphinx: SphinxQL.
Add the line "listen = 9306:mysql41" to the searchd section of sphinx.conf, as described in http://astellar.com/2011/12/replacing-mysql-full-text-search-with-sphinx/, and start the daemon.
Then run
mysql -h0 -P 9306
and then fire the query against sphinx
SELECT * FROM <your_sphinx_index>;
Hope that helps!