Stored procedure for PostgreSQL on Liquibase Community

I read on Wikipedia that you need the Commercial version of Liquibase to deal with stored procedures. Can anybody please comment on this?
Thanks
https://en.wikipedia.org/wiki/Liquibase

No, you don't.
I typically put the code that creates the function or procedure into a SQL script and then use <sqlFile> to run it. The changeSet itself is defined with runOnChange="true", so I only need to edit the file to make Liquibase apply the changeSet again:
<changeSet id="1" author="foo" runOnChange="true">
    <sqlFile path="procs/create_function.sql"
             relativeToChangelogFile="true"/>
</changeSet>
I do the same with views and materialized views.
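For illustration, the referenced procs/create_function.sql could contain something along these lines (the function name and the users table are made up); using CREATE OR REPLACE keeps the script safe to re-run whenever the changeSet fires:
CREATE OR REPLACE FUNCTION get_user_count()
    RETURNS bigint
AS $$
    SELECT count(*) FROM users;  -- hypothetical table
$$ LANGUAGE sql;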

Liquibase community manager here.
As described in the answer by a_horse_with_no_name, it is entirely possible to write a Liquibase changelog that creates stored procedures, and it will work just fine in the free version.
To do that you can use the XML changelog syntax with a <sql> or <sqlFile> tag, or you can use a formatted SQL changelog.
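For example, a formatted SQL changelog for the same purpose might look roughly like this (the changeset id and the function are made up for illustration; splitStatements:false stops Liquibase from splitting the function body on its internal semicolons):
--liquibase formatted sql

--changeset foo:2 runOnChange:true splitStatements:false
CREATE OR REPLACE FUNCTION hello_world()
    RETURNS text
AS $$
    SELECT 'hello world'::text;
$$ LANGUAGE sql;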
The thing that the Pro version of Liquibase introduces is the ability to use the generateChangeLog and diffChangeLog commands to "reverse-engineer" stored logic (including stored procedures) from an existing database, generating XML changelogs that use the <createProcedure> tag.

Yes. There are two options available for stored procedures/stored logic: Liquibase Pro or Datical. You can get a free trial of Liquibase Pro at www.liquibase.org to try it out and make sure it works for you.

Related

What happens to postgreSQL when I install pipelineDB extension?

I would like to compare pipelineDB and PostgreSQL.
Reading the documentation, I found out that pipelineDB is an extension of PostgreSQL.
Now I'm curious: what do I have to do with PostgreSQL to compare it with pipelineDB?
Does the system regard Postgres as pipelineDB?
Or is there an option to switch between Postgres and the PipelineDB extension?
Nothing will happen. You will use PipelineDB functionality via new special functions and database objects. That is the great thing about extensions: after installing one you can use PostgreSQL as usual, and if you want, you can use the special functions provided by the extension.
Streams are implemented as foreign tables backed by PipelineDB, so the extension has full control over inserting data into and reading data from these objects.
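To make that concrete, here is a rough sketch of the extension-style workflow (the syntax follows the PipelineDB 1.x documentation and may differ between versions; the stream and view names are made up):
-- enable the extension in an existing PostgreSQL database
CREATE EXTENSION pipelinedb;

-- a stream is declared as a foreign table served by the pipelinedb server
CREATE FOREIGN TABLE clicks_stream (url text) SERVER pipelinedb;

-- a continuous view keeps an aggregate over everything inserted into the stream
CREATE VIEW clicks_per_url WITH (action=materialize) AS
    SELECT url, count(*) FROM clicks_stream GROUP BY url;

-- regular PostgreSQL statements keep working unchanged
INSERT INTO clicks_stream (url) VALUES ('/home');
SELECT * FROM clicks_per_url;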

db2look from SQL

Is it possible to get the table structure like db2look from SQL?
Or is the only way from the command line? By wrapping an external stored procedure written in C I could call db2look, but that is not what I am looking for.
Clarification added later:
I want to know, from SQL, which tables have the not-logged option.
It is possible to create the table structure from regular SQL and the public DB2 catalog - however, it is complex and requires some deeper skills.
The metadata is available in the DB2 catalog views in the SYSCAT schema. For a regular table you would first start off by looking into the values in SYSCAT.TABLES and SYSCAT.COLUMNS. From there you would need to branch off to other views depending on what table and column options you are after, whether time-travel tables, special partitioning rules, or many other options are involved.
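As a hedged starting point, a query along these lines lists the column definitions for a single regular table (the schema and table names are placeholders):
SELECT c.colname, c.typename, c.length, c.scale, c.nulls
FROM syscat.columns c
JOIN syscat.tables t
    ON t.tabschema = c.tabschema
   AND t.tabname   = c.tabname
WHERE t.tabschema = 'MYSCHEMA'   -- placeholder schema
  AND t.tabname   = 'MYTABLE'    -- placeholder table
  AND t.type      = 'T'          -- regular tables only
ORDER BY c.colno;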
Serge Rielau published an article on developerWorks called Backup and restore SQL schemas for DB2 Universal Database that provides a set of stored procedures that will do exactly what you're looking for.
The article is quite old (2006), so you may need to put some time into updating the procedures to handle features added to DB2 since publication, but they may work for you as-is and are a nice jumping-off point.

How to identify recently changed SPs in postgresql?

I want to get the list of stored procedures which were recently changed.
In MS SQL Server, there are system tables which store that information, so we can easily retrieve what has changed. Similarly, I want to find the most recently changed SPs and tables in PostgreSQL.
Thanks
You can use an EVENT TRIGGER for logging. More information about how to create and use event triggers can be found in the manual and at www.youlikeprogramming.com.
You need at least version 9.3.
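As a minimal sketch (the table, function, and trigger names are made up), an event trigger that records every DDL command could look like this; it only captures the command tag, the user, and the time, which is usually enough to see that a function was recently created or replaced:
CREATE TABLE ddl_history (
    changed_at  timestamptz NOT NULL DEFAULT now(),
    changed_by  text        NOT NULL DEFAULT current_user,
    command_tag text        NOT NULL
);

CREATE FUNCTION log_ddl_change() RETURNS event_trigger AS $$
BEGIN
    -- TG_TAG holds the command tag, e.g. 'CREATE FUNCTION'
    INSERT INTO ddl_history (command_tag) VALUES (tg_tag);
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER track_ddl
    ON ddl_command_end
    EXECUTE PROCEDURE log_ddl_change();
From version 9.5 on you can also call pg_event_trigger_ddl_commands() inside the trigger function to record the identity of the affected object.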

What are ways to include sizable Postgres table imports in Flyway migrations?

We have a series of modifications to a Postgres database, which can generally be written all in SQL. So it seems Flyway would be a great fit to automate these.
However, they also include imports from files to tables, such as
COPY mytable FROM '${PWD}/mydata.sql';
And secondarily, we'd like not to rely on Postgres' use of file paths like this, since the file apparently must reside on the server. It should be possible to run any migration from a remote client -- as in Amazon's RDS documentation (last section).
Are there good approaches to handling this kind of scenario already in Flyway? Or alternate approaches to avoid this issue altogether?
Currently, it looks like it'd work to implement the whole migration in Java and use the Postgres driver's CopyManager to import the data. However, that means most of our migrations have to be done in Java, which seems much clumsier. (As far as I can tell, hybrid Java+SQL migrations are not expected?)
I'm new to Flyway, so I thought I'd ask what other alternatives might exist, since I'd expect importing a table during a migration to be pretty common.
Starting with Flyway 3.1, you can use COPY FROM STDIN statements within your migration files to accomplish this. The SQL execution engine will automatically use PostgreSQL's CopyManager to transfer the data.
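A rough sketch of what such a migration file could look like (the file name, table, and rows are made up; the inline rows use PostgreSQL's tab-separated COPY text format and end with a line containing only \.):
-- V2__load_mytable.sql (hypothetical Flyway migration)
COPY mytable (id, name) FROM STDIN;
1	Alpha
2	Beta
\.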

How to migrate database from SAP DB to PostGres?

Any idea how to go about doing that through a tool (preferred)? Are there any alternative ways to do it?
You can check out the migration studio from EnterpriseDB, although I have no experience with it.
There is no comparison to doing it yourself though - if you're not familiar with Postgres then this will get you familiar, and if you are, then aside from the data entry aspect, this should be old hat.
Use the MaxDB tools to generate a SQL text export of the database, then import this file into PostgreSQL. Luckily, you won't need any prior processing of the data dump.