MongoDB: restoring a broken database

I'm running a MongoDB server on Windows.
I had a large database with backups etc., but foolishly, instead of using the shell to delete some entries, I first copied them to another directory and then deleted them via Explorer. Of course, nothing worked afterwards, because MongoDB was missing some files and would not start properly, logging an I/O error (file not found). So I copied the files back where they belonged, again via Explorer, and retried, but I still get the error in the log that some file is missing. The weird thing is, that file never existed in the folders I deleted from...
So now I at least still have a backup dump made with mongodump, but I cannot restore it: to restore, I have to start the MongoDB server, which will not start because some folders of my DB entries are missing (the service starts, but I can't access the server instance), and to recreate the missing data I have to use mongorestore... a bad loop I've got going there...
So I created a new DB and wanted to restore my old dump into the new DB, but now I get an invalid header error when using mongorestore --gzip --archive -d test "dump_path"
Any help resolving my problem?

Solved it... I created a new DB and started the MongoDB server, but this time, instead of writing mongorestore --gzip --archive filename dumppath, which exits with a "too many arguments" error, you have to type:
mongorestore --gzip --archive="filename" dumppath, and then everything works as one would expect...
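To illustrate the difference, a hedged sketch (the archive name and target database are placeholders):

```shell
# Fails: with a bare --archive flag, mongorestore reads the archive from
# stdin and treats the filename as an extra positional argument, so it
# exits with a "too many arguments" error.
mongorestore --gzip --archive mydump.gz

# Works: attach the filename to the flag with '=' so it is parsed as
# the archive path; -d selects the target database for the restore.
mongorestore --gzip --archive="mydump.gz" -d test
```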

Related

Heroku Postgres pg:backups Restore From Copy

I suspect I already know the answer here, but wanted to confirm. When you use the command line interface for Postgres backups, the command heroku pg:backups returns a list of Backups, Restores, and Copies. Apparently, my daily backups routine stopped working somewhere along the way, so my backups list was empty. I did, however, see this:
My suspicion is that this is merely a historical record showing that a copy was performed, however, I was hopeful that maybe I would be able to recover the database from that point in time. The restore command doesn't work with that ID, though. It gives me:
Can anyone confirm that this copy can or cannot be recovered?
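As a hedged note: the restore subcommand generally expects a backup ID (a b-prefixed entry) rather than a copy ID, which would explain the error. The ID and app name below are placeholders:

```shell
# Restore a named backup into the app's primary database.
# Copy entries (c-prefixed IDs) are historical transfer records and are
# generally not restorable this way; only b-prefixed backup IDs are.
heroku pg:backups:restore b001 DATABASE_URL --app my-app
```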

Unable to restore cloud sql backup

I followed this link to restore my backup
https://cloud.google.com/sql/docs/mysql/backup-recovery/restoring
and I've tried restoring on multiple instances too,
but on every instance this error comes up in the logs:
Couldn't repair table: mysql.general_log
Failed to write to mysql.general_log: Incorrect key file for table 'general_log'; try to repair it
First, address the error. Your general query log is enabled, but the installation default is disabled; if you do not need the table enabled, disable it once everything is working. I would suggest taking a fresh backup and then:
A. Repair the table using the mysqlcheck -r YourDB general_log command. (If this is a MyISAM table, use myisamchk instead.)
B. If that does not repair the table, try mysqlcheck -r YourDB to repair the whole database (sometimes more than just the one table needs to be repaired).
C. If the restore still doesn't work, then there are a couple of possibilities: the database may be corrupted or the backup file may be corrupted. You don't mention any other errors, so I do not suspect the whole database is corrupted.
D. To check for a corrupted file, create a fresh database instance and try your restore there. If that does not work, try restoring a single data table to confirm whether the backup file is usable.
Be prepared for the possibility your backup file is corrupt.
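The repair steps above can be sketched as shell commands. Credentials are placeholders, and since the error names mysql.general_log, the sketch targets the mysql system schema:

```shell
# Take a fresh backup before attempting any repair.
mysqldump -u root -p --all-databases > pre_repair_backup.sql

# A. Repair just the general_log table in the mysql system schema.
mysqlcheck -r -u root -p mysql general_log

# B. If that fails, repair every table in the schema.
mysqlcheck -r -u root -p mysql

# If general_log is a MyISAM table, myisamchk can repair the
# underlying file directly -- but only while the server is stopped.
# myisamchk -r /var/lib/mysql/mysql/general_log.MYI
```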

Can I restore data from mongo oplog?

My MongoDB was hacked today; all data was deleted, and the hacker demands a ransom to get it back. I will not pay him, because I know he will not send my database back.
But I had the oplog turned on, and I see it contains over 300,000 documents, recording all operations.
Is there any tool that can restore my data from these logs?
Depending on how far back your oplog is, you may be able to restore the deployment. I would recommend taking a backup of the current state of your dbpath just in case.
Note that there are many variables in play for doing a restore like this, so success is never a guarantee. It can be done using mongodump and mongorestore, but only if your oplog goes back to the beginning of time (i.e. when the deployment was first created). If it does, you may be able to restore your data. If it does not, you'll see errors during the process.
Secure your deployment before doing anything else. This situation arises due to a lack of security. There are extensive security features available in MongoDB. Check out the Security Checklist page for details.
Dump the oplog collection using mongodump --host <old_host> --username <user> --password <pwd> -d local -c oplog.rs -o oplogDump.
Check the content of the oplog to determine the timestamp at which the offending drop operation occurred, using bsondump oplogDump/local/oplog.rs.bson. You're looking for a line that looks approximately like this:
{"ts":{"$timestamp":{"t":1502172266,"i":1}},"t":{"$numberLong":"1"},"h":{"$numberLong":"7041819298365940282"},"v":2,"op":"c","ns":"test.$cmd","o":{"dropDatabase":1}}
This line means that a dropDatabase() command was executed on the test database.
Keep note of the t value in {"$timestamp":{"t":1502172266,"i":1}}.
Restore to a secure new deployment using mongorestore --host <new_host> --username <user> --password <pwd> --oplogReplay --oplogLimit=1502172266 --oplogFile=oplogDump/local/oplog.rs.bson oplogDump
Note the parameter to oplogLimit, which tells mongorestore to stop replaying the oplog once it hits that timestamp (the timestamp of the dropDatabase command found in Step 3).
The oplogFile parameter is new to MongoDB 3.4. For older versions, you would need to copy the oplogDump/local/oplog.rs.bson to the root of the dump directory to a file named oplog.bson, e.g. oplogDump/oplog.bson and remove the oplogFile parameter from the example command above.
After Step 4, if your oplog goes back to the beginning of time and you stop the oplog replay at the right time, hopefully you should see your data at the point just before the dropDatabase command was executed.
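The whole procedure can be sketched end to end. Hosts and credentials are placeholders, and the timestamp is taken from the example oplog line above:

```shell
# 1. Dump the oplog from the compromised deployment.
mongodump --host old_host --username user --password pwd \
  -d local -c oplog.rs -o oplogDump

# 2. Inspect it for the offending dropDatabase entry and note its "t" value.
bsondump oplogDump/local/oplog.rs.bson | grep dropDatabase

# 3. Replay the oplog into a secured new deployment, stopping just
#    before the drop (MongoDB 3.4+ syntax with --oplogFile).
mongorestore --host new_host --username user --password pwd \
  --oplogReplay --oplogLimit=1502172266 \
  --oplogFile=oplogDump/local/oplog.rs.bson oplogDump
```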

Restore dump locally without pre-existing database

I have received a backup file from a customer and wish to restore it.
I've tried executing the following on the command line (it runs for ages and seems to get to the end, but does not produce a new database on my localhost server).
pg_restore --create -h localhost c:\temp\myBackup.backup
I've also tried running the restore through pgAdmin 4, but it reports that indexes already exist if I create an empty database and restore into it, and I can't seem to locate the correct options to restore the dump and create a new database (without first having selected an empty database).
Any pointers would be greatly appreciated. I'm happy to use any method.
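As a hedged note: without a -d connection database, pg_restore writes the restore SQL to standard output instead of executing it against a server, which would match the "runs for ages but creates nothing" symptom. A sketch with placeholder names:

```shell
# Connect to the maintenance database "postgres"; --create then issues
# CREATE DATABASE from the dump headers and restores into the newly
# created database, so no empty database needs to exist beforehand.
pg_restore --create -d postgres -h localhost -U postgres c:\temp\myBackup.backup
```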

MongoDb - copy one database to another keeping the users also

I want to copy from one mongo db to another db on the same server. Mongo version is 2.6.3 on Win 2008 64bit.
I ran the command:
mongo localhost:27017/admin -u <> -p <> --eval "db.copyDatabase('db_master','db_copy1')"
This worked and created db_copy1 with all the users in it. I did db.getUsers() on db_copy1 and it returned all users. All was fine.
Then I went on to copy the database db_copy1 to db_copy2 using the same command above (with different database names obviously). But the resultant db_copy2 had no users in it.
Fairly new to mongo, so quite possible I have missed something.
Thanks in advance for all your help!
Vikram
One of the things I love about MongoDB is that, rather than messing about with commands like that, you can just copy the files.
Just go to the directory with the data files in it and copy them to the dbpath for your new database. If you don't want a certain database, don't copy the files with that database name!
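A hedged sketch of that file-level copy on Windows, with placeholder paths. Note this only applies while mongod is stopped, and on MongoDB 2.6 user definitions live in the admin database, so its files must be copied too if you want the users to come along:

```shell
rem Stop the server before touching the data files.
net stop MongoDB

rem Copy the files for the database you want, plus the admin
rem database files, which hold the user definitions.
copy C:\data\db\db_copy1.* C:\data\db2\
copy C:\data\db\admin.*    C:\data\db2\

rem Restart the server (a second deployment would point its
rem --dbpath at C:\data\db2).
net start MongoDB
```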