This morning MAMP would not open; it said the table could not be found. To get everything back, I restored from a Time Machine backup. The databases were restored, but I found that the data I had entered over the last three days was missing from the table. I correctly followed the procedure to restore from Time Machine described here:
https://dba.stackexchange.com/questions/16875/mysql-how-to-restore-table-stored-in-a-frm-and-a-ibd-file
Luckily, not much data had been entered over the last few days, so I could use the old CSV upload files to restore it correctly. When I looked at the MAMP/db/mysql files in Time Machine, ibdata1, ib_logfile0 and ib_logfile1 all had a modified and last-opened date of four days ago.
My question: is there anybody here with experience with MAMP + Time Machine who can explain why last night's backup didn't have the data I entered over the last three days? I'm using InnoDB storage for my local database. I do realize that running mysqldump every night is the best way to prevent this kind of problem.
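Something along these lines is what I mean by a nightly mysqldump (a sketch only; the MAMP binary path, credentials, and database name below are placeholders for my setup):

```python
# Nightly dump sketch. The mysqldump path, credentials, and database name
# are placeholders; adjust them to the actual MAMP installation.
import datetime
import subprocess

MYSQLDUMP = "/Applications/MAMP/Library/bin/mysqldump"  # placeholder path
OUTFILE = f"/Users/me/backups/mydb-{datetime.date.today():%Y%m%d}.sql"

with open(OUTFILE, "w") as out:
    subprocess.run(
        [MYSQLDUMP, "-u", "root", "-proot", "--single-transaction", "mydb"],
        stdout=out,
        check=True,
    )
```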
Thanks
Related
We are creating dump files of multiple databases every hour, but the issue is storage and restoration time. We have multiple locations creating dump files and we use a VPN to fetch them, and the files grow larger day by day. Please suggest a solution.
I want to log test data into a PostgreSQL database. This will be a new table each day, named something like testtable_16122017.
At the end of each day this table will be backed up to another SQL server. So I don't need synchronisation and all that, since the table will be created once, filled with data, and never changed again.
Can this be done with Slony, or should I just add a few lines to the test rig software so that at the end of each day the table is pushed to the backup database?
Why all this? There is a LabVIEW PC at the test rig and an SQL server on the LAN, so if anything else breaks down we still get the test data. For further analysis and backup it will be sent to another server later on. It also means I have to worry less about performance.
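The "few lines in the test rig software" variant I have in mind would be something like this (a sketch only; the host, database, and table names are placeholders):

```python
# Hypothetical end-of-day push of the day's table to the backup server
# via pg_dump piped into psql. Host, database, and table names are
# placeholders.
import datetime
import subprocess

table = f"testtable_{datetime.date.today():%d%m%Y}"  # e.g. testtable_16122017

dump = subprocess.Popen(
    ["pg_dump", "-t", table, "testrig_db"], stdout=subprocess.PIPE
)
subprocess.run(
    ["psql", "-h", "backup-server.example.com", "backup_db"],
    stdin=dump.stdout,
    check=True,
)
dump.stdout.close()
dump.wait()
```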
Thanks!
I just noticed something strange with the MongoDB physical files on Windows.
When I update/insert data in my MongoDB database, the modified date of the related *.0 files in the 'data' folder is not updated.
I understand that the files are pre-allocated so the size stays the same, but why isn't the modified date at least updated?
This is quite an issue for me because I store the Mongo files in Google Drive, so nothing gets synced due to this weird behavior.
Does anyone have an explanation?
Thank you
To answer my own question, I came up with the solution of using **mongodump** to keep my data up to date on Google Drive.
I scheduled a task to dump my Mongo database frequently into the same data folder, so that folder is backed up. It needs some extra work with **mongorestore** on the other machines, but the steps are quite easy.
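Roughly, the scheduled task boils down to something like this (a sketch only; the dump directory and database name are placeholders, not my exact setup):

```python
# Minimal sketch of the scheduled dump. Assumes mongodump is on PATH;
# the dump directory and database name are placeholders.
import subprocess

DUMP_DIR = r"C:\data\dump"   # a folder that Google Drive is syncing
DB_NAME = "mydb"

subprocess.run(
    ["mongodump", "--db", DB_NAME, "--out", DUMP_DIR],
    check=True,
)

# On another machine the data can be brought back with:
#   mongorestore --db mydb C:\data\dump\mydb
```

Scheduling that with the Windows Task Scheduler (or cron) keeps the dump folder changing, so Google Drive actually sees updated files.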
I am restoring a MySQL database with Perl on a remote server with about 30 million records. It's taking more than 2 days, and looking at my network connections I am not fully utilizing my uplink bandwidth. I will need to do this at least once per week. Is there a way to fork a mysqldump (I'm using Perl) so that I can take full advantage of my bandwidth? (I don't mind if I'm choked off for a bit... I just need to get this done faster.)
Can't you upload the whole dump to the remote server and start the restore there?
A restore of a mysqldump is just the execution of a long series of commands that would restore your database from scratch. If the execution path for that is: 1) send command, 2) remote system executes command, 3) remote system replies that the command is complete, 4) send next command, then you are spending most of your time waiting on network latency.
I do know that most SQL hosts will allow you to upload a dump file specifically to avoid the kinds of restore time that you're talking about. The company that takes my money each month even has a web-based form that you can use to restore a database from a file that has been uploaded via sftp. Poke around your hosting service's documentation. They should have something similar. If nothing else (and you're comfortable on the command line) you can upload it directly to your account and do it from a shell there.
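A rough sketch of that approach, assuming you have ssh/scp shell access (the host, paths, and database name are placeholders):

```python
# Copy the dump to the remote host and run the restore there, so each SQL
# statement no longer pays a network round trip. Host, paths, and database
# name are placeholders; assumes MySQL credentials live in ~/.my.cnf on the
# remote account.
import subprocess

HOST = "user@db.example.com"   # placeholder
DUMP = "backup.sql.gz"         # local, gzipped dump file

subprocess.run(["scp", DUMP, f"{HOST}:/tmp/backup.sql.gz"], check=True)
subprocess.run(
    ["ssh", HOST, "gunzip -c /tmp/backup.sql.gz | mysql mydb"],
    check=True,
)
```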
mk-parallel-dump and mk-parallel-restore are designed to do what you want, but in my testing mk-parallel-dump was actually slower than plain old mysqldump. Your mileage may vary.
(I would guess the biggest factor would be the number of spindles your data files reside on, which in my case, 1, was not especially conducive to parallelization.)
First caveat: mk-parallel-* writes a bunch of files, and figuring out when it's safe to start sending them (and when you're done receiving them) may be a little tricky. I believe that's left as an exercise for the reader, sorry.
Second caveat: mk-parallel-dump is specifically advertised as not being for backups. Because "At the time of this release there is a bug that prevents --lock-tables from working correctly," it's really only useful for databases that you know will not change, e.g., a slave that you can STOP SLAVE on with no repercussions, and then START SLAVE once mk-parallel-dump is done.
I think a better solution than parallelizing a dump may be this:
If you're doing your mysqldump on a weekly basis, you can just do it once (dumping with --single-transaction (which you should be doing anyway) and --master-data=n) and then start a slave that connects over an ssh tunnel to the remote master, so the slave is continually updated. The disadvantage is that if you want to clone a local copy (perhaps to make a backup) you will need enough disk to keep an extra copy around. The advantage is that a week's worth of (query-based) replication log is probably quite a bit smaller than resending the data, and also it arrives gradually so you don't clog your pipe.
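The seed dump itself is just something like this (a sketch; the user and database names are placeholders):

```python
# One-time seed dump for setting up the slave, using the options above:
# --single-transaction takes a consistent InnoDB snapshot without locking,
# --master-data=2 records the binlog coordinates in the dump as a comment.
# User and database names are placeholders.
import subprocess

with open("seed.sql", "w") as out:
    subprocess.run(
        ["mysqldump", "--single-transaction", "--master-data=2",
         "-u", "backup_user", "mydb"],
        stdout=out,
        check=True,
    )

# The slave then connects through an ssh tunnel, e.g.:
#   ssh -N -L 3307:127.0.0.1:3306 user@master.example.com
# followed by CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3307, ...
```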
How big is your database in total? What kind of tables are you using?
A big risk with backups using mysqldump has to do with table locking, and updates to tables during the backup process.
The mysqldump backup process basically works as follows:
```
For each table {
    Lock table as read-only
    Dump table to disk
    Unlock table
}
```
The danger is that if you run an INSERT/UPDATE/DELETE query that affects multiple tables while your backup is running, your backup may not capture the results of your query properly. This is a very real risk when your backup takes hours to complete and you're dealing with an active database. Imagine - your code runs a series of queries that update tables A, B, and C. The backup process currently has table B locked.
The update to A will not be captured, as this table was already backed up.
The update to B will not be captured either, as the backup currently holds a read lock on the table, so the write only applies after B has already been dumped.
The update to C will be captured, because the backup has not reached C yet.
This is an easy way to destroy referential integrity in your database.
Your backup process needs to be atomic and transactional. If you can't close the entire database to writes during the backup, you're risking disaster.
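For what it's worth, one way to make the whole dump a single consistent unit, if you can tolerate blocking writes for its duration, is to lock everything up front rather than table by table (a sketch; the database name is a placeholder):

```python
# Dump with a global read lock held for the whole run, so the dump is one
# consistent snapshot (writes block for the duration). Database name is a
# placeholder; assumes credentials in ~/.my.cnf.
import subprocess

with open("full_backup.sql", "w") as out:
    subprocess.run(
        ["mysqldump", "--lock-all-tables", "--databases", "mydb"],
        stdout=out,
        check=True,
    )
```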
Also - there must be something wrong here. At a previous company, we were running nightly backups of a 450 GB MySQL DB (the largest table had 150M rows), and it took less than 6 hours for the backup to complete.
Two thoughts:
Do you have a slave database? Run the backup from there: stop replication (preventing the read/write risk), run the backup, restart replication (a sketch follows below).
Are your tables using InnoDB? Consider investing in InnoDB Hot Backup, which solves this problem, as the backup process leverages the journaling that is part of the InnoDB storage engine.
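A sketch of the first idea (the connection details and dump target are placeholders; it assumes the MySQL client tools are installed on the slave and credentials are in ~/.my.cnf):

```python
# Back up from a replication slave: pause replication, dump, resume.
# Connection details and the output file are placeholders.
import subprocess

def run_sql(statement):
    subprocess.run(["mysql", "-e", statement], check=True)

run_sql("STOP SLAVE;")
try:
    with open("slave_backup.sql", "w") as out:
        subprocess.run(["mysqldump", "--all-databases"], stdout=out, check=True)
finally:
    run_sql("START SLAVE;")   # resume replication even if the dump fails
```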
I forgot to make a backup. Now I have a hard drive with the databases and a new system with an empty PostgreSQL install. Can I somehow restore the databases, e.g. by simply copying the files?
If you have the full data directory of your old PostgreSQL system (and if it was the same version, or differing only in a revision number), you can just try to put it in place of the data directory of your new PostgreSQL installation. (Of course, stop the Postgres server before doing this.)
It's basically the same procedure used when upgrading PostgreSQL in cases where there is no need to do a backup and restore.
Edit: As pointed out in the comments, I assume not only the same (or almost the same) version, but also the same architecture (32 vs. 64 bits, Linux vs. Windows, etc.).
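Roughly, the swap might look like this (a sketch only; the paths, version, and service handling are placeholders and vary by platform):

```python
# Sketch of swapping in the old data directory. Paths and the way the
# server is stopped/started are placeholders; adjust for your platform
# and packaging.
import shutil
import subprocess

OLD_DATA = "/mnt/old_disk/var/lib/postgresql/9.0/main"  # old cluster (placeholder)
NEW_DATA = "/var/lib/postgresql/9.0/main"               # new, empty cluster (placeholder)

subprocess.run(["pg_ctl", "-D", NEW_DATA, "stop"], check=False)  # stop the server
shutil.rmtree(NEW_DATA)                                          # drop the empty cluster
shutil.copytree(OLD_DATA, NEW_DATA)                              # copy the old files over
# The files must end up owned by the postgres user, e.g.:
#   chown -R postgres:postgres /var/lib/postgresql/9.0/main
subprocess.run(["pg_ctl", "-D", NEW_DATA, "start"], check=True)  # start it again
```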
In addition to leonbloy's answer, you could try pg_migrator, especially if you need to upgrade from 8.3 to 8.4 (and 9.0 eventually).
In your case you have the files, but if you didn't, then maybe (only maybe) you could do something with the database logs. You can look at the statement log, normally in /var/log/postgresql/postgresql.log or somewhere close to it; if log_statement = 'mod' or 'all' was set up beforehand, you can recover some of your data.
Go table by table, searching for the INSERT INTO statements for each table in all (or the recent) history of the database. You can cut the text with some Unix tools to get only the statements, put a ";" at the end of each one, and do the same for other important queries like DELETE, etc.
But you have to do it table by table, the data must actually be in the log, and the database must not have run for too long without backups.
In certain cases you just need the last operation or something like this to save the day.
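A rough sketch of that kind of filtering (the log path, table name, and matching are placeholders; the exact line format depends on your log_line_prefix and logging settings, so expect to adjust it):

```python
# Pull INSERT statements for one table out of the statement log and
# terminate each with ";". Log path and table name are placeholders;
# multi-line statements will need extra handling.
import re

LOG = "/var/log/postgresql/postgresql.log"   # placeholder
TABLE = "mytable"                            # placeholder

pattern = re.compile(rf"(INSERT\s+INTO\s+{TABLE}\b.*)", re.IGNORECASE)

with open(LOG) as log, open(f"replay_{TABLE}.sql", "w") as out:
    for line in log:
        match = pattern.search(line)
        if match:
            statement = match.group(1).rstrip()
            if not statement.endswith(";"):
                statement += ";"
            out.write(statement + "\n")
```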
This, however, is just for Apollo 13 disaster moments and can never replace a good backup.