I need help understanding the syntax for BACKUP in MySQL Workbench

So, I am relatively new to MySQL, and I was recently asked to create a query that uses the BACKUP command to copy a table to a given destination folder. I was given text from an SQL tutorial on w3schools.com, but when I tried to follow that format, MySQL Workbench told me "BACKUP is not valid at this position, expecting: EOF, BEGIN, CATCH, CHECKSUM, COMMIT, DEALLOCATE,..". So I was wondering: what is the proper syntax for using the BACKUP command in a query?
I have attempted several things to resolve the issue, but none of them were successful. I have tried:
1. Executing the query both with and without the underlying table saved in a file folder.
2. Using BACKUP on a database instead, in case the problem was with tables.
3. Starting the statement with BEGIN, DO, and mysqldump.
4. Removing TABLE.
5. Adding an opening parenthesis after the name of the table and a closing parenthesis after the name of the destination.
I do not feel comfortable sharing my own table and destination folder, but here is what I was supposed to use for reference; my code follows the same format:
What I was supposed to use for reference

BACKUP DATABASE is not part of MySQL syntax. I believe you may be thinking of the SQL Server statement of the same name.
For MySQL you will likely want to use the mysqldump utility (which is a separate command-line program, not an SQL statement), or possibly a solution involving the SELECT ... INTO OUTFILE variant of the SELECT ... INTO statement.
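If the goal is just to copy one table's rows out to a file from Workbench, the INTO OUTFILE route looks roughly like this (a sketch; the table name and file path are placeholders, and the file is written on the database server, subject to its secure_file_priv setting):

-- export the contents of my_table (placeholder name) to a CSV file on the server
SELECT *
FROM my_table
INTO OUTFILE '/var/lib/mysql-files/my_table.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';

mysqldump, by contrast, is run from the operating system shell, not from Workbench's query editor.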

Related

How to export executed statements from Oracle SQLDeveloper?

There is a statement log in Oracle SQLDeveloper.
Is there any way to export the logged statements as plain text or write them to a file?
UPD: The reason I want to collect the statements in a file is for easy diffing (to compare the expected export vs the truncated one). I have a schema whose export is not completely performed by 'Tools -> Database export': indexes, constraints, packages and synonyms are missing from the resulting file, even though they are obviously present in the database and visible in SQLDeveloper.
No, just copy and paste.
You could always do a client-side JDBC trace or a database session trace if you wanted that to go to a file.
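If you go the session-trace route, here is a minimal sketch (it assumes you have the necessary privileges, the identifier is a placeholder, and the trace file ends up in the database server's trace directory, not on the client):

-- tag the trace file so it is easy to find in the server's trace directory
ALTER SESSION SET tracefile_identifier = 'my_capture';
-- start tracing the current session, including bind values and wait events
EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
-- ... run the statements you want captured ...
EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;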

Create foreign key to table that doesn't exist in postgres

I've also asked this question on the DBA Stack Exchange, but I'm not sure if that is the right place.
I'm trying to put together a solution where I have a bunch of script files that contain my schema creation. Something similar to Visual Studio's SQL Project, but for Postgres.
My problem is that the files won't necessarily be read in the correct order; for instance, the first file/script/table read may have a dependency on the 2nd file/script/table.
I was hoping there was some way I could disable the check that the referenced table/column exists, create all the tables, and then re-enable the check.
I'm running PostgreSQL 9.2.24.
My 2nd solution would be to order the files in such a way that a table is only created when its dependencies already exist, but the method mentioned above would be preferred.
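One way to approximate that "create all the tables first, check afterwards" idea (a sketch with placeholder table names) is to leave the foreign keys out of the individual table scripts and add them in a final script once every table exists:

-- table scripts: runnable in any order, since they carry no foreign keys
CREATE TABLE orders (
    id          integer PRIMARY KEY,
    customer_id integer NOT NULL
);

CREATE TABLE customers (
    id integer PRIMARY KEY
);

-- final script: run after all tables exist
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (id);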

Command Line Interface (CLI) for SQLDeveloper

I rely on SQLDeveloper to edit and export a schema.
It works like a charm, and I can run the import with sqlplus.
I have tried using sqlplus to generate the same schema export, without success.
I cannot use the Oracle expdp tool, because I need an ASCII file to be able to diff it.
So the only option I have is SQLDeveloper.
I would like to automate the export (data + DDL) with a cron job on a Linux box, but I can't find a way to use SQLDeveloper from a command line to generate the export.
Any clue?
Short answer: no.
For just the schema side of things you may want to check out "show create table equivalent in oracle sql", which will get you the SQL source of the DDL.
Are you sure you want an ASCII file for the automated export of an entire DB, though? I would be surprised if you really want to diff an entire export of a DB. This SO answer may help a little, though.
If you really want a full data dump plus DDL, you will have to write your own script that gets the DDL as described in the first link and then runs SELECT * against each table, turning each result into a SQL INSERT.
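The DDL half of such a script usually leans on DBMS_METADATA; here is a minimal sketch, run from sqlplus as the schema owner (the spool file name is a placeholder):

SET LONG 2000000
SET PAGESIZE 0
SPOOL schema_ddl.sql
-- emit a CREATE TABLE statement for every table in the current schema
SELECT DBMS_METADATA.GET_DDL('TABLE', table_name) FROM user_tables;
SPOOL OFF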

Using Foreign Data Wrappers in PostgreSQL (variable filename)

I'm running PostgreSQL 9.3 and want to import some daily generated CSV files into specific tables.
I started playing with FDW (Foreign Data Wrapper) and pointed it at a specific CSV, which I can query and append/upsert into a table.
But I have two more needs:
- The file generation date and source branch are present in the filename, and only there. I need to get this information and insert it into the table as well.
- As expected, the file names are not fixed, so the FDW doesn't know where to get the information.
I thought about solving this with some Unix-style tools (although my Postgres runs on Windows): for each file in a list (from a previously created index), a script would rename the file and pass the branch and date as parameters to a psql.exe command line, so the import itself always reads from a fixed file name through the FDW.
This would work, but the script sounds like a bit of a hack and not a very "elegant" solution.
Does anyone have a better suggestion?
Thanks!
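For reference, the fixed-name import I have in mind could look roughly like this (a sketch; the server, table, and column names are placeholders, and the wrapping script renames each CSV to the fixed path and passes the branch and date in as psql variables):

CREATE EXTENSION IF NOT EXISTS file_fdw;
CREATE SERVER csv_server FOREIGN DATA WRAPPER file_fdw;
-- the foreign table always points at the same, fixed file name
CREATE FOREIGN TABLE staging_csv (col1 text, col2 numeric)
    SERVER csv_server
    OPTIONS (filename 'C:/import/current.csv', format 'csv', header 'true');

-- import.sql, invoked once per file by the wrapper script, e.g.:
--   psql.exe -v branch=B01 -v filedate=2014-01-31 -f import.sql mydb
INSERT INTO target_table (branch, file_date, col1, col2)
SELECT :'branch', :'filedate'::date, col1, col2
FROM staging_csv;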

How to Recover PostgreSQL 8.0 Database

On my PostgreSQL 8.0 database, I started receiving an "ERROR: could not open relation 1663/17269/16691: No such file or directory" message, and now my data is inaccessible.
Any ideas on how to recover at least some of the data? Professional support is an option.
Regards.
RP
If you want your data back in a hurry and it's worth something to you, then the professional support option should be simple enough.
Some things to check, now that you've got a full backup of all your database files (that's base, pg_clog, pg_xlog and all the other folders at that level):
Does that file actually exist? It might be a permissions problem rather than the file actually going missing.
Check your anti-virus/security packages - have they mistakenly quarantined the file? If you can exclude PostgreSQL's database directories from scans/active scans that's worthwhile too.
Make a note of everything you can remember about when this happened and what happened just before. This will help with troubleshooting for you or a consultant.
Likewise, check the logs - this error will be logged; find the first occurrence and see if there's anything odd before it.
Double-check you really do have all your existing files backed up, and restart PostgreSQL.
Try connecting as user postgres to database postgres or database template1. If that works then the file is one of your database files rather than the global list of users or some such.
Try creating an empty file with the right name (and permissions - check the other files). If you are really lucky it's just an index. Otherwise it could be a data table you can live without. Then you can dump other tables individually.
OK - if you're here then you can connect to your DB. Those numbers in the file path are the OIDs PostgreSQL uses to identify objects. You can try a couple of useful queries here; these two should give you the IDs of the databases and then the object with the missing file. This is useful information for your professional too.
SELECT oid, datname, dattablespace FROM pg_database;
SELECT * FROM pg_class WHERE relfilenode = 16691;
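If you also want to resolve the middle number in the error path to a database name, something like this (using the OID from the error message above) should do it:
SELECT datname FROM pg_database WHERE oid = 17269;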
Remember to make sure you have the filesystem backup before tinkering.