Restore from PostgreSQL custom dump file using Liquibase

Our team currently uses a custom database version control system, and we are contemplating a move to Liquibase.
Currently, after each release, we pg_dump the schema (without production data) into a custom-format (--format=c) data file. This data file is restored into development instances as part of the build (through a Maven plugin provided by the custom DB version control system). We would like to continue using the custom data file format, since it restores faster, resulting in faster development builds.
I get the impression that Liquibase supports restoring from a plain text SQL file but not from a custom format file. Is this correct?

It is correct that it doesn't support custom formats, just CSV. You can use executeCommand, however, to call out to whatever program can load data from those files.
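For illustration, a changeset along these lines could shell out to pg_restore to load the custom-format dump (the file name, database name, and flags here are assumptions, not something from the question):

<changeSet id="restore-schema-dump" author="dev-team">
    <!-- pg_restore makes its own connection, so the changelog's
         credentials do not apply; pass connection details explicitly -->
    <executeCommand executable="pg_restore">
        <arg value="--clean"/>
        <arg value="--no-owner"/>
        <arg value="--dbname=dev_db"/>
        <arg value="schema.dump"/>
    </executeCommand>
</changeSet>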

Related

Export OSB resources without using export wizard on JDeveloper

Using JDeveloper to create and manage Oracle Service Bus 12c resources, I am able to export the required resources into a .jar file using JDeveloper's Resources Export Wizard, selecting the needed ones one by one under the tree of each project.
What I want to do, though, is find a way to export a .jar file based on a resource list given in a file of a commonly used format (JSON, CSV, etc.), as this can save time for a large number of resources. My first thought was to check whether JDeveloper provides such a way, or to attempt this programmatically, but my search has not turned up any how-to information.
Is there an alternative way of doing this?
If you have Oracle OSB 11.1.1.7.0 or higher, you can automate the OSB compilation process at project level using configjar; here's a whole example of an implementation which includes compilation using configjar and automation of the task, retrieving the code from Git using Jenkins and a Python script.
You can also do it using Ant; there's a good Oracle document explaining that. (I've tried it, but found it easier to use configjar; the Ant approach is the only option for versions below 11.1.1.7.0.)
After setting up either of those compilation methods, you can create a CSV file listing the projects, parse it with Python, and loop the compilation, as sketched below.
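As a rough sketch of that last step, assuming a configjar launcher path and a CSV with one OSB project name per row (both of which are my assumptions, not from the answer):

import csv
import subprocess

# Assumed location of the configjar launcher; adjust to your OSB installation.
CONFIGJAR = "/u01/oracle/osb/tools/configjar/configjar.sh"

with open("projects.csv", newline="") as f:
    for row in csv.reader(f):
        if not row:
            continue
        project = row[0]
        # Hypothetical convention: one settings file per project describing
        # which resources to package into the .jar.
        subprocess.run([CONFIGJAR, "-settingsfile", project + "-settings.xml"], check=True)

With check=True the loop stops on the first failed compilation instead of silently continuing.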

Is there an added value for a "file-to-file" Project transfer vs copying the files directly?

We have been using EA's API ProjectTransfer function to back up our projects automatically (we have some projects on the filesystem as well as one project in a DBMS).
However, there are some caveats to this function: we cannot run our scripts unattended (as a daily scheduled task), meaning a user has to be logged on for the script to run, since EA cannot run unattended.
Also, we have noticed a bug in which the Accept Windows Authentication option does not carry over with a project transfer.
This is why we decided to move our scripts to simply copying the files for backup. (And rely on the dbms team for backing up the DBMS repository)
Should we be simply copying the files for backing up the projects? Or is there something important ProjectTransfer is doing?
No, there is no added value, as long as you do a file copy. ProjectTransfer is meant more for the RDBMS-to-EAP level, which cannot simply be done with a file copy. For RDBMS transfers between the same database type you can (and should) also use database backups as the transfer method.
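A minimal sketch of the file-copy approach, which can run unattended from a scheduler since it never starts EA (the paths and naming scheme are examples):

import shutil
from datetime import date

# Copy the .eap project file to a dated backup; copy2 also preserves timestamps.
# Example paths - point these at your real project and backup locations.
src = r"C:\EA\Projects\model.eap"
dst = r"D:\Backups\model-" + str(date.today()) + ".eap"
shutil.copy2(src, dst)

Just make sure nobody has the project open while the copy runs, or the snapshot may be taken mid-write.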

How to upload file to PostgreSQL database using flyway?

On Windows 7 I use IntelliJ IDEA 12, JDK 7, MyBatis, and Spring 3 to create a REST web application (a Maven project with flyway-maven-plugin). I use Flyway to cope with SQL migrations. Now I need to load some files into a PostgreSQL 9.2 database. I've found this thread: https://dba.stackexchange.com/questions/1742/how-to-insert-file-data-into-a-postgresql-bytea-column
I'd like to use bytea_import from that thread. This custom function requires the path to the uploaded file (it is in the resources folder). How can I correctly set a relative path to such a file? What is considered the current folder during migrations?
Not sure about bytea_import (if you get it working, let me know!), but you should be able to achieve this easily using Java-based migrations.
You can use Java-based migrations. If you still want to use SQL-based migrations, then use Flyway placeholders. Save the required path in a placeholder using pom.xml properties. Example:
<flyway.placeholders.rtfPath>${project.build.outputDirectory}/rtf</flyway.placeholders.rtfPath>
Then use ${rtfPath} in your SQL migration file to build the full path to your uploaded file.
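For instance, a migration along these lines (the table, column, and file names are made up, and bytea_import is the custom function from the linked thread):

-- V2__load_rtf_files.sql
-- Flyway substitutes ${rtfPath} before running the statement.
INSERT INTO documents (name, content)
VALUES ('template', bytea_import('${rtfPath}/template.rtf'));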

Any way to deploy database in PhpStorm?

I can deploy files via FTP to the remote host. Is there any way to deploy the database along with the files? I use a CMS, so when I change something in the control panel it is written to the DB. I don't want to do the work twice or do it manually (that's a buggy way, huh).
PhpStorm has a 'Database' tool window which can show the whole structure of your DB. You can also run an SQL script from a local file against the remote DB, and perhaps work with triggers, etc.

What is the best deployment practice when using MODX?

It is convenient to have a DEVELOPMENT version of the application on your local machine, deploy it to a STAGE server for testing (optional), and then deploy it to the PRODUCTION server. You can do this relatively easily when there is a clean separation of code and data in the project (for example, if we store all the code and settings in project files and the data in the database).
MODX stores templates, snippets, etc. in the database. Yes, we can move this code to static files and then use a version control system to track changes to these items. But these items also have corresponding rows in the database, which means we must still update the database whenever we add or remove items.
It also looks like we can get into trouble if we just copy the files of extensions instead of installing them through the package manager (because extensions often have their own tables in the DB).
Another problem is that the applications on DEV and PROD have different settings stored in files (configs) and in the database (user accounts, for example).
I still do not see a clear way to organize an iterative DEV-STAGE-PROD development cycle. So, my questions are:
Which files and database tables should (or must) I copy when deploying?
In which mode (replace, ignore) should I do that?
What is the easiest and fastest way to do that?
My biggest concern here is having to deal with the database.
P.S. I'm talking about "Revolution" version of MODX if it matters.
The database should not store any path information at all; previous versions did, in the modx_workspaces table, but that has since disappeared [as of 2.2.4, I believe].
If you are concerned about the URL changes [dev.mysite.com / stage.mysite.com / production...] don't be - this is all in the .htaccess file [there used to be a site_url system setting, but it also seems to have disappeared].
The only file you need to worry about is core/config/config.inc.php ~ create 3 different files with the different paths, or just replace the paths when you migrate.
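The replace-on-migrate variant can be as small as one copy step in your deploy script (the per-environment file names are just a suggested convention):

# Keep one config per environment next to the live one and
# copy the right one into place when you migrate.
cp core/config/config.inc.prod.php core/config/config.inc.php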
my process for moving/updating/migrating modx sites is:
clear the cache!!
tar cvfz httpdocs.tar.gz httpdocs/
mysqldump -u username -p the_database > export.sql
move the files, untar them (tar xvfz) and import the database.
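Spelled out, that last step would look something like this on the target server (user and database names are placeholders):

tar xvfz httpdocs.tar.gz
mysql -u username -p the_database < export.sql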
It's a good idea to check the modx_workspaces table, and if you have used an older version of Gallery, check that as well, but most plugins & developers seem to be used to NOT storing path information in code & DB tables.
Of course, if you have hardened your installation there are a few more steps, but nothing major. [see the "Hardening MODX" article on rtfm.modx.com]
I think what you're looking for is this plugin (depending on your version of MODX):
https://github.com/digitalbutter/MODX-Mirror
https://github.com/digitalbutter/FEM
All chunks, snippets, etc. are located on disk. Any changes made to the files will trigger the appropriate database changes without the need for a complete SQL import/re-import. This allows for any version control system / distributed development environment / automated deployment.