I'm trying to use Liquibase (via dropwizard-migrations) to track changes to a PostgreSQL database. I'd like to be able to run the migration on the existing production database instead of rebuilding from scratch. Right now I'm testing in staging. I've created a changeset with a precondition.
<changeSet id="3" author="me">
<preConditions onFail="CONTINUE">
<not>
<sequenceExists sequenceName="emails_id_seq"/>
</not>
</preConditions>
<createSequence sequenceName="emails_id_seq" startValue="1" incrementBy="1" />
</changeSet>
My goal is to skip applying the changeset if the sequence is already there. Seems straightforward, but it's not working.
ERROR [2013-09-13 22:19:22,564] liquibase: Change Set migrations.xml::3::me failed. Error: Error executing SQL CREATE SEQUENCE emails_id_seq START WITH 1 INCREMENT BY 1: ERROR: relation "emails_id_seq" already exists
! liquibase.exception.DatabaseException: Error executing SQL CREATE SEQUENCE emails_id_seq START WITH 1 INCREMENT BY 1: ERROR: relation "emails_id_seq" already exists
I've tried MARK_RAN instead of CONTINUE too. No luck with that either.
A much simpler way to apply your changesets to an existing database, without execution, is to use the changelogSync command.
The following commands demonstrate how to extract a changelog and then sync it with the current database:
liquibase --changeLogFile=mydb.xml generateChangeLog
liquibase --changeLogFile=mydb.xml changelogSync
What the sync command does is create all the entries in the DATABASECHANGELOG table without executing the changesets, so that the changelog file can then be used as normal to update the database:
liquibase --changeLogFile=mydb.xml update
I solved this problem using the sqlCheck precondition:
<changeSet id="sys-0" context="structural">
<preConditions onFail="MARK_RAN">
<sqlCheck expectedResult="0">SELECT count(c.relname) FROM pg_class c WHERE c.relkind = 'S' and c.relname = 'hibernate_sequence'</sqlCheck>
</preConditions>
<!-- <preConditions><not><sequenceExists schemaName="public" sequenceName="hibernate_sequence"/></not></preConditions> -->
<createSequence schemaName="public" sequenceName="hibernate_sequence"/>
</changeSet>
(tested on Liquibase version 2.0.1)
I did the same thing you want to do, but with a view, and it works for me.
Maybe it gives you some ideas:
<changeSet author="e-ballo" id="DropViewsAndcreateSynonyms" context="dev,int,uat,prod">
<preConditions onFail="CONTINUE" >
<viewExists viewName="PMV_PACKAGE_ITEMS" schemaName="ZON"/>
<viewExists viewName="PMV_SUBSPLAN_INSTALLTYPES" schemaName="ZON"/>
</preConditions>
<dropView schemaName="ZON" viewName="PMV_PACKAGE_ITEMS" />
<dropView schemaName="ZON" viewName="PMV_SUBSPLAN_INSTALLTYPES" />
<sqlFile path="environment-synonyms.sql" relativeToChangelogFile="true" splitStatements="true" stripComments="true"/>
</changeSet>
I hope it helps.
I solved this by running the dropwizard-migrations "fast-forward" command as follows:
java -jar hello-world.jar db fast-forward helloworld.yml
This will mark the next changeset as applied without actually applying it. You may have to run it once per changeset you want to fast-forward. There is also an --all flag if you want to fast-forward everything.
More details can be found here: http://dropwizard.codahale.com/manual/migrations/
Also, remember to check that there are no caches. I was working with OpenMRS modules, and due to caching in OpenMRS my preconditions never took effect, which led me to think my code was failing when in fact it was never executed.
Related
When I execute a rollback from the command line, it reports successful execution, but no changes are applied to the database. I am using a PostgreSQL DB. The command I used for rollback is:
java -jar C:\Users\Ranjith.s\.m2\repository\org\liquibase\liquibase-core\3.5.5\liquibase-core-3.5.5.jar --changeLogFile=src\main\resources\db\changelog\db.changelog-master.xml --url=jdbc:postgresql://localhost/transformation_as_a_service --classpath=C:\softwares\liquibase\lib\postgresql-42.2.11.jar --username=transformation_as_a_service_admin --password=transformation_as_a_service_admin --logLevel=debug rollback version_2020.3.002
I am pasting the changelog file for reference:
<changeSet id="2020-03-UPDATE-PRIMARY-KEY-TO-COLUMN-PID" author="TAAS">
<preConditions onFail="HALT">
<columnExists tableName="TB_TRANSFORMATION" columnName="PID"/>
</preConditions>
<dropPrimaryKey constraintName="pk_tb_transformation" tableName="TB_TRANSFORMATION"/>
<addUniqueConstraint tableName="TB_TRANSFORMATION" columnNames="ID" constraintName="idx_taas_id" />
<addPrimaryKey tableName="TB_TRANSFORMATION" columnNames="PID" constraintName="idx_taas_pid"/>
<rollback>
<dropPrimaryKey tableName="TB_TRANSFORMATION"/>
<dropColumn tableName="TB_TRANSFORMATION" columnName="PID"/>
<addPrimaryKey tableName="TB_TRANSFORMATION" columnNames="ID" constraintName="idx_taas_id"/>
</rollback>
</changeSet>
<changeSet author="TAAS" id="tag_version_2020.3.002">
<tagDatabase tag="version_2020.3.002" />
</changeSet>
If that is your complete changelog, and you ran liquibase update before running the liquibase rollback command, then it is working as designed. The idea is that you run update and it deploys those changes. You then continue on, adding more changesets and deploying them with the update command. But on one of those deployments you discover a problem and decide you need to go back to a known good version, so you use the rollback command with the tag, and it rolls back everything AFTER the tag.
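To make the tag semantics concrete, here is a minimal hypothetical changelog (ids and table names are made up): only changesets that appear after the tagDatabase changeset are undone by a rollback to that tag.

```xml
<!-- Hypothetical minimal changelog illustrating tag-based rollback -->
<changeSet id="1-before-tag" author="example">
    <!-- deployed by "update"; NOT undone by a rollback to the tag below -->
    <createTable tableName="t_before">
        <column name="id" type="int"/>
    </createTable>
</changeSet>

<changeSet id="tag-v1" author="example">
    <tagDatabase tag="version_1"/>
</changeSet>

<changeSet id="2-after-tag" author="example">
    <!-- deployed by a later "update"; undone by "rollback version_1" -->
    <createTable tableName="t_after">
        <column name="id" type="int"/>
    </createTable>
</changeSet>
```

With this changelog, running liquibase rollback version_1 would drop only t_after (createTable has an automatic rollback), leaving t_before and the tag itself in place.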
I have a project where the database is Redshift, and I am using the Postgrex adapter in my Phoenix project. Locally I am using PostgreSQL and everything works fine, but when I deploy and try to run migrations, I get this error:
15:39:27.201 [error] Could not create schema migrations table. This error usually happens due to the following:
* The database does not exist
* The "schema_migrations" table, which Ecto uses for managing
migrations, was defined by another library
* There is a deadlock while migrating (such as using concurrent
indexes with a migration_lock)
To fix the first issue, run "mix ecto.create".
To address the second, you can run "mix ecto.drop" followed by
"mix ecto.create". Alternatively you may configure Ecto to use
another table for managing migrations:
config :my_service, MyService.Repo,
migration_source: "some_other_table_for_schema_migrations"
The full error report is shown below.
▸ Given the following expression: Elixir.MyService.StartupTasks.init()
▸ The remote call failed with:
▸ ** (exit) %Postgrex.Error{connection_id: 5598, message: nil, postgres: %{code: :feature_not_supported, file: "/home/ec2-user/padb/src/pg/src/backend/commands/tablecmds.c", line: "3690", message: "timestamp or timestamp with time zone column do not support precision.", pg_code: "0A000", routine: "xen_type_size_from_attr", severity: "ERROR"}, query: nil}
▸ (ecto_sql) lib/ecto/adapters/sql.ex:629: Ecto.Adapters.SQL.raise_sql_call_error/1
▸ (elixir) lib/enum.ex:1336: Enum."-map/2-lists^map/1-0-"/2
▸ (ecto_sql) lib/ecto/adapters/sql.ex:716: Ecto.Adapters.SQL.execute_ddl/4
▸ (ecto_sql) lib/ecto/migrator.ex:633: Ecto.Migrator.verbose_schema_migration/3
▸ (ecto_sql) lib/ecto/migrator.ex:477: Ecto.Migrator.lock_for_migrations/4
▸ (ecto_sql) lib/ecto/migrator.ex:401: Ecto.Migrator.run/4
▸ (my_service) lib/my_service/startup_tasks.ex:11: MyService.StartupTasks.migrate/0
▸ (stdlib) erl_eval.erl:680: :erl_eval.do_apply/6
It seems that Redshift does not support some of the data types that Postgres supports. Is there a better way to go about this, or can I create my own schema migrations table with a different timestamp type?
There are limitations that the driver cannot overcome, since Redshift works on different principles than a Postgres database; see the documentation for the Ecto adapter.
In documentation is stated:
We highly recommend reading the Designing Tables section from the AWS
Redshift documentation.
If you want to continue to use Postgres locally, then you will need to create two separate repos and, correspondingly, separate migrations; the mix migration tasks accept a repo flag (e.g. mix ecto.migrate -r MyApp.Repo) to target a specific repo.
However, I recommend getting a dev instance of Redshift and using it for development, since working with databases like Redshift is different and you can easily make a mistake.
While trying to generate a database using a Liquibase changelog on Postgres, I am getting the following error:
forIndexName is not allowed on postgresql
Following is my changeset, which generates the error:
<changeSet author="chintan.patel" id="CP0001">
<createIndex indexName="PK_USER" tableName="USER" unique="true">
<column name="FIRSTNAME"/>
<column name="MIDDLENAME"/>
<column name="LASTNAME"/>
</createIndex>
<addPrimaryKey columnNames="FIRSTNAME, MIDDLENAME, LASTNAME" constraintName="PK_USER" forIndexName="PK_USER" tableName="USER"/>
</changeSet>
This same changeset works fine on Oracle.
Please suggest what's wrong here.
Index:
CREATE INDEX guild_name_lower_ops
ON guilds
USING btree
(lower(name::text) COLLATE pg_catalog."default" varchar_pattern_ops);
Generated changeset:
<changeSet author="Vlad (generated)" id="1450262497286-89">
<createIndex indexName="guild_name_lower_ops" tableName="guilds" />
</changeSet>
And it doesn't pass the "status" command check, failing with a "columns is empty" error message.
Why isn't it exported? Are there any workarounds so I can still use Liquibase with my DB?
Liquibase does not currently reverse-engineer functional indexes like this for Postgres. The workaround is to alter the XML after it is generated so that, when creating new databases, the indexes are created properly.
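For example, one way to hand-edit the generated changelog is to replace the empty createIndex with a raw <sql> changeset scoped to Postgres (the changeset id here is made up):

```xml
<changeSet author="Vlad (generated)" id="guild-name-lower-ops-manual" dbms="postgresql">
    <!-- functional index that generateChangeLog could not reverse-engineer -->
    <sql>
        CREATE INDEX guild_name_lower_ops
            ON guilds (lower(name::text) varchar_pattern_ops);
    </sql>
    <rollback>DROP INDEX guild_name_lower_ops;</rollback>
</changeSet>
```

The dbms="postgresql" attribute keeps the raw SQL from running against other database types, and the explicit rollback block preserves rollback support that createIndex would otherwise provide automatically.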
I keep having the following error in MobileFirst Platform 6.3:
Runtime: org.apache.commons.dbcp.SQLNestedException: Cannot create
PoolableConnectionFactory (DB2 SQL Error: SQLCODE=-142,
SQLSTATE=42612, SQLERRMC=null, DRIVER=4.19.26)
This is my adapter code:
var test2 = WL.Server.createSQLStatement("SELECT * FROM WSDIWC.WBPTRR1");
function getCEID(cnum) {
return WL.Server.invokeSQLStatement({
preparedStatement : test2,
parameters : []
});
}
And adapter XML:
<connectivity>
<connectionPolicy xsi:type="sql:SQLConnectionPolicy">
<!-- Example for using a JNDI data source, replace with actual data source
name -->
<!-- <dataSourceJNDIName>${training-jndi-name}</dataSourceJNDIName> -->
<!-- Example for using MySQL connector, do not forget to put the MySQL
connector library in the project's lib folder -->
<dataSourceDefinition>
<driverClass>com.ibm.db2.jcc.DB2Driver</driverClass>
<url>jdbc:db2://***</url>
<user>**</user>
<password>**</password>
</dataSourceDefinition>
</connectionPolicy>
</connectivity>
I have removed the url, user and password.
I hope you can help clarify the current problem.
I don't believe the SQL itself is the problem, since it's just a simple query.
I have also researched z/OS DB2, which has a known issue with the same error code (SQLCODE=-142): http://answers.splunk.com/answers/117024/splunk-db-connect-db2.html
While you say that this is a "simple query", the exception error code mentions the following:
-142
THE SQL STATEMENT IS NOT SUPPORTED
Explanation
An SQL statement was detected that is not supported by the database.
The statement might be valid for other IBM® relational database
products or it might be valid in another context. For example,
statements such as VALUES and SIGNAL or RESIGNAL SQLSTATE can be used
only in certain contexts, such as in a trigger body or in an SQL
Procedure.
System action
The statement cannot be processed.
Programmer response
Change the syntax of the SQL statement or remove the statement from
the program.
You should review the DB2 SQL guidelines for how to achieve what you want, and also explain that in the question if you'd like further assistance. For example, are you sure "WSDIWC.WBPTRR1" is actually available?
I encountered this same problem with JDBC connections to mainframe DB2 in MobileFirst 6.3. Connections to DB2 LUW worked fine. It appears that the default pool validationQuery is valid for DB2 LUW but not for DB2 z/OS.
You can work around the bug by doing the data source configuration in the Liberty profile server.xml. From the Eclipse Servers view, expand MobileFirst Development Server and edit the Server Configuration. Add the driver and data source there; for example:
<library id="db2jcc">
<fileset dir="whereever" includes="db2jcc4.jar db2jcc_license_cisuz.jar"/>
</library>
<dataSource id="db2" jndiName="jdbc/db2">
<jdbcDriver libraryRef="db2jcc"/>
<properties.db2.jcc databaseName="mydb" portNumber="5021"
serverName="myserver" user="myuser" password="mypw" />
</dataSource>
Then reference it in your adapter XML under connectionPolicy:
<dataSourceJNDIName>jdbc/db2</dataSourceJNDIName>
A benefit of configuring data sources in server.xml (vs. the adapter XML) is that you have access to all data source, JDBC, and JCC properties. So if the connection pool gives you other problems, you can customize it or switch to another data source type, such as type="javax.sql.DataSource".