I am trying to restore a database using mysqldump, but nothing seems to happen. I get some output on the screen, but the program stops before it imports anything and does not report any errors.
I am trying to restore a dump using the syntax
mysqldump --log-error=/root/dumplog --verbose --user=myuser mydatabasename < /root/dump.sql
I get no entries in the MySQL log, and in dumplog, all I get is this:
-- Connecting to localhost...
-- Disconnecting from localhost...
The dump file is about 15 MB.
You don't use mysqldump to restore; it only creates dumps. To restore, feed the dump file to the mysql client:
mysql -uUser -p dbname < /path/to/file.sql
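For the record, a minimal round trip looks like this (a sketch reusing the names from the question; --password with no value makes the client prompt for it):

mysqldump --user=myuser --password mydatabasename > /root/dump.sql   # backup: mysqldump writes SQL to stdout
mysql --user=myuser --password mydatabasename < /root/dump.sql       # restore: mysql executes SQL from stdin

In your original command, mysqldump simply ignores the < redirection: it connects, writes a fresh dump of mydatabasename to your screen, and exits, which is why the log shows only a connect and a disconnect.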
My last working database backup of an Odoo 13 CE system was a full one, including the filestore. I'm getting timeouts when trying to restore "a copy" via the Odoo database manager page. I thought I could just do a partial restore (dump.sql & manifest.json), leave out the filestore, recompress and upload, but that brought everything down to its knees (it errored with "no *.dump file found"). So I logged into the server, dropped my failed restore, restarted the odoo service, and all is back to somewhat normal, with the database I want to replace active.
Is there a way to convert that .sql to a .dump, or some other way to get my .sql added to my pgdb? I'm fairly green re: psql, so if I'm missing something simple, please feel free to shove it down my throat.
TIA
To restore an SQL backup file to a new database:
psql YOUR_DATABASE_NAME < YOUR_FILENAME
You can read more about backing up and restoring a Postgres DB here: https://www.postgresql.org/docs/11/backup-dump.html
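If the target database does not exist yet, create it first. A minimal sketch, assuming the standard createdb utility is on your PATH (mydb and dump.sql are placeholder names):

createdb mydb          # create an empty database to restore into
psql mydb < dump.sql   # replay the dumped SQL statements into it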
When restoring a large database (with filestore), you have to raise the server's time limits so the restore can finish. Add these parameters to the Odoo server command line:
--limit-time-cpu=6000 --limit-time-real=12000
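For example, assuming a stock odoo-bin launcher (the path and config file below are placeholders):

./odoo-bin -c /etc/odoo/odoo.conf --limit-time-cpu=6000 --limit-time-real=12000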
Restore the SQL File
psql database_name < your_file.sql
Restore the Dump File
pg_restore -d database_name < your_file.dump
My goal is to have an automatic database backup that will be sent to my S3 bucket.
Jelastic has good documentation on how to run pg_dump inside the database node/container, but in order to obtain the backup file you have to fetch it manually using an FTP add-on!
But as I said earlier, my goal is to send the backup file automatically to my S3 bucket. What I tried is to run pg_dump from my app node instead of the postgresql node (hoping I can have some control from the app side). The command I run basically looks like this:
PGPASSWORD="my_database_password" pg_dump --host "nodeXXXX-XXXXX.jelastic.XXXXX.net" \
    -U my_db_username -p "5432" -f sql_backup.sql "database_name" 2> $LOG_FILE
The output in my log file is:
pg_dump: server version: 10.3; pg_dump version: 9.4.10
pg_dump: aborting because of server version mismatch
The issue here is that the database node has a different pg_dump version than the nginx/app node, so the backup can't be performed. I looked around but can't find an easy way to solve this. I'm open to any alternative that helps achieve my initial goal.
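A quick way to confirm the mismatch from the app node is to compare the two versions directly (a sketch reusing the placeholders from the question):

pg_dump --version   # client version installed on the app node
PGPASSWORD="my_database_password" psql -h "nodeXXXX-XXXXX.jelastic.XXXXX.net" \
    -U my_db_username -d database_name -c 'SELECT version();'   # server version

pg_dump cannot dump from a server whose major version is newer than its own, so the usual fix is to install PostgreSQL 10 client tools on whichever node runs the backup.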
A TYPO3 installation has 57 tables in its database named typo3.
A dump created with the mysqldump program via the command
mysqldump --host=127.0.0.1 --password=<PASSWORD> --protocol=tcp --port=3306 --user=<ROOT-USER> --lock-all-tables --databases typo3 > dump.sql
contains only 47 tables.
The same result occurs if the database connection is made via socket, and also if the --lock-all-tables option is left out.
How can I make a complete dump containing all the tables?
The missing tables are:
index_config
index_debug
index_fulltext
index_grlist
index_phash
index_rel
index_section
index_stat_search
index_stat_word
index_words
I'm not sure why all the index_* tables are not dumped with mysqldump, but I can suggest you use SypexDumper. In most of the cases where mysqldump or phpMyAdmin dumps were not possible, it saved my day.
I use this for dumping: mysqldump -u server_dbuser -p -h localhost server_db > dbdump.sql
Have you tried from phpMyAdmin?
Are the index_* tables needed, or can they be rebuilt anytime anyway?
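One way to see exactly which tables made it into the dump (a diagnostic sketch assuming GNU grep; reuse the connection options from the question) is to diff the live table list against the dump file:

mysql --host=127.0.0.1 --user=<ROOT-USER> --password=<PASSWORD> -N -e 'SHOW TABLES' typo3 | sort > live_tables.txt
grep -oP 'CREATE TABLE `\K[^`]+' dump.sql | sort > dumped_tables.txt
diff live_tables.txt dumped_tables.txt   # lines starting with < exist only in the live database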
Thanks to Urs and Viktor Livakivskyi for their willingness to help! The problem was caused by an individual script triggered by cron.
It might be a dead simple question, yet I still wanted to ask. I've created a Node.js application and deployed it on Heroku. I've also set up the database connection without any trouble.
However, I cannot load the local data from my MongoDB into the MongoLab database I use on Heroku. I've searched Google and could not find a useful solution, so I ended up trying these commands:
mongodump
And:
mongorestore -h mydburl:mydbport -d mydbname -u myusername -p mypassword --db Collect.1
Now when I run the mongorestore command, I receive this error:
ERROR: multiple occurrences
Import BSON files into MongoDB.
When I look at the DB directory I specified and used during local development, I see that there are files Collect.0, Collect.1 and Collect.ns. I know that my db name is 'Collect', since in the shell I always type `use Collect`. So I specified the db as Collect.1 on the command line, but I still receive the same error. Should I remove all the other Collect files, or is there another way around this?
You can't use 'mongorestore' against the raw database files. 'mongorestore' is meant to work off of a dump file generated by 'mongodump'. First use 'mongodump' to dump your local database, then use 'mongorestore' to restore that dump.
If you go to the Tools tab in the MongoLab UI for your database, and click 'Import / Export' you can see an example of each command with the correct params for your database.
Email us at support@mongolab.com if you continue to have trouble.
-will
This can be done in two steps.
1. Dump the database:
mongodump -d mylocal_db_name -o dump/
2. Restore the database:
mongorestore -h xyz.mongolab.com:12345 -d remote_db_name -u username -p password dump/mylocal_db_name/
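If the remote database already contains data from an earlier attempt, mongorestore can fail on duplicate keys. Its --drop flag drops each collection before reinserting (note it discards the existing remote data, so only use it when the dump is the copy you want to keep):

mongorestore -h xyz.mongolab.com:12345 -d remote_db_name -u username -p password --drop dump/mylocal_db_name/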
I've backed up all of my MySQL databases using this command (I'm running MySQL 5 on Debian Linux):
mysqldump --user="root" --password="pass" --routines --triggers --events --all-databases > dbs_backup.sql
Then I shut down my MySQL server to change the InnoDB configuration according to this link. After restarting the server, when I want to import the dump using this command:
mysql -u root --password="pass" < dbs_backup.sql
I get some syntax errors in the middle of the file (it executes lots of queries, and some databases import successfully, but the errors occur only while creating some stored procedures). I wonder why this happens, since the server has had no major changes and the dumped databases were all fine and working before the dump.
What can cause this problem?
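One way to narrow it down: the mysql client reports the failing position as "ERROR ... at line N", so you can inspect the dump around that spot (a sketch; 1234 is a made-up example line number):

sed -n '1229,1239p' dbs_backup.sql   # print the statements surrounding the reported line

A frequent culprit with --routines dumps is the DELIMITER handling: stored procedures are emitted between DELIMITER ;; markers, and anything that strips or rewraps those lines (editors, transfer tools) breaks the CREATE PROCEDURE statements into fragments that no longer parse.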