I am using Liquibase and MongoDB to execute a rename migration like this:
<ext:runCommand>
    <ext:command><![CDATA[
    {
        renameCollection: "XXX.foo",
        to: "XXX.bar"
    }
    ]]></ext:command>
</ext:runCommand>
The renaming happens within the bounds of the existing DB, so cross-DB migrations are not relevant to my use case. My problem is that I do not know XXX in advance. My Liquibase migration is intended to run in multiple environments, and each one uses its own unique value of XXX.
Also, Liquibase limits me to runCommand/adminCommand semantics, and the spec for those commands clearly says that I should provide full namespaces, which I cannot know in advance.
Of course I could create multiple Liquibase changeSets, one for each environment, and hardcode the proper namespace in each one. But I would like to avoid that option since it does not scale very well.
Is there any way to rename a MongoDB collection (using runCommand/adminCommand semantics) in a namespace-agnostic way?
Pass the database name as a parameter when executing liquibase update, and then use Liquibase changelog property substitution in the changeset to reference that parameter. That should solve the problem.
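For illustration, a sketch of what the substituted changeset could look like (the parameter name databaseName is an assumption; any name works as long as it matches what you pass in):

<ext:runCommand>
    <ext:command><![CDATA[
    {
        renameCollection: "${databaseName}.foo",
        to: "${databaseName}.bar"
    }
    ]]></ext:command>
</ext:runCommand>

Depending on your Liquibase version and how you run it, the parameter can be supplied on the command line, e.g. liquibase update -DdatabaseName=myEnvDb, or programmatically as shown below.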
Adding the way Alkis achieved it, as mentioned in the comments:
Just for the record, I use standalone Liquibase, and I had to call:
liquibaseInstance.getChangeLogParameters().set("databaseName", myRuntimeDetectedName);
We have about 200 user-defined functions in DB2. These UDFs are generated by Data Studio into a single script file.
When we create a new DB, we need to run the script file several times because some UDFs depend on other UDFs and cannot be created until the functions they depend on exist.
Is there a way to generate the script file so that the deployment order takes this dependency into account? Or is there some other technique to arrange the order efficiently?
Many thanks in advance.
That problem should only happen if the auto_reval setting is not correct. See "Creating and maintaining database objects" for details.
Db2 allows objects to be created in an "unsorted" order. Only when an object is used (accessed) are the object and its dependent objects checked. This behavior was introduced a long time ago; only some old, migrated databases keep auto_reval=disabled, and some environments might set it through configuration scripts.
If you still run into issues, try setting auto_reval=DEFERRED_FORCE.
The db2look system command can generate DDL ordered by object creation time with the -ct option, so that can help if you don't want to use the auto_reval method.
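For reference, a rough command-line sketch of both approaches (the database name MYDB is a placeholder; check the options against your Db2 version):

# check the current setting
db2 connect to MYDB
db2 get db cfg for MYDB | grep -i auto_reval

# allow objects to be created with unresolved dependencies, revalidated on first use
db2 update db cfg for MYDB using auto_reval DEFERRED_FORCE

# alternatively, extract the DDL ordered by object creation time
db2look -d MYDB -e -ct -o udf_ddl.sql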
I'm developing a system with database version control in LiquiBase. The system is still in pre-alpha development and there are a lot of changes that were reverted or supplemented by other changes (tables removed, columns added and removed).
The current changelog reflects the whole development history, with many failed experiments, and this entire history is rolled out when initializing the database.
Because there is NO release version yet, I can start from scratch and capture the actual DB state in a single XML changeset.
Is there a way to tell LiquiBase to merge all change sets into one file, or the only way to do that is per hand?
Just use your existing database to generate the changelog that will be used from now on. For this you can use the generateChangeLog command from the command line; it will generate a changelog file with all the changeSets that represent the current state of the database. You can use this file in your project as the initial DB creation file, to be run on an empty database. See the Liquibase documentation on generateChangeLog for details.
There is a page in the Liquibase docs which discusses this scenario in detail:
http://www.liquibase.org/documentation/trimming_changelogs.html
To summarise, they recommend that you don't bother since consolidating your changelogs is both risky and low-reward.
If you do want to push ahead with this, then restarting the changelog using generateChangeLog, as suggested by @veljkost, is probably the easiest way. This is documented at http://www.liquibase.org/documentation/existing_project.html
Since I didn't find an automatic solution to this problem for the case where the changelog is already deployed on several databases in different states, I will describe my solution here:
Generate a changelog of the current development state of your database using Liquibase's generateChangeLog, like:
mvn liquibase:generateChangeLog -Dliquibase.outputChangeLogFile=current_state.yml
Audit the generated changelog and check whether it looks good (Liquibase is not perfect; it often generates clumsy statements). Also, if your schema contains static data, such as dictionaries, that was previously populated using Liquibase, you have to add it to the generated changelog as well; you can export data from your database using the generateChangeLog command mentioned above with the -Dliquibase.diffTypes=data property, as shown below.
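For example (an assumed invocation, mirroring the one above):

mvn liquibase:generateChangeLog -Dliquibase.outputChangeLogFile=static_data.yml -Dliquibase.diffTypes=data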
Now you need to prevent execution of the generated changelog on existing databases (it would obviously fail on prod, test, and other developers' local environments). You can do this using, for example, liquibase changelogSync (see the sketch below) or Liquibase contexts, but all of these options require manual work on every database. You can achieve an automatic result by adding preConditions statements to your changeSets.
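For completeness, the manual alternative would be something like this, run once against every existing database (a sketch; the exact goal and property names depend on your Liquibase/Maven plugin version):

mvn liquibase:changelogSync -Dliquibase.changeLogFile=current_state.yml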
For changeSets intended to run only on empty databases (the changelog you generated in step 1 above), you can add something like this:
preConditions:
  - onFail: MARK_RAN
  - not:
      - tableExists:
          tableName: t_project
Here t_project is a table that existed before (most likely a table added in the first changeSet, so every database that has run at least one changeSet will have it). This will mark the generated changelog as already run on environments with an existing schema, and will actually run it on every new database you want to migrate.
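To make the placement of this precondition concrete, a complete changeSet from the generated changelog might look roughly like this (the id, author, and columns are illustrative):

databaseChangeLog:
  - changeSet:
      id: 001-create-t_project
      author: generated
      preConditions:
        - onFail: MARK_RAN
        - not:
            - tableExists:
                tableName: t_project
      changes:
        - createTable:
            tableName: t_project
            columns:
              - column:
                  name: id
                  type: bigint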
Unfortunately you have to adjust all legacy changeSets as well (I haven't found a better solution yet; I made this change using regex and sed) by adding something like this:
preConditions:
  - onFail: MARK_RAN
  - tableExists:
      tableName: t_project
This is the opposite of the condition above. With it, all databases that have run at least one changeSet in the past will continue to migrate (EXECUTED status of changeSets) up to the changeSet generated in step 1 above, and will mark the generated changeSets as MARK_RAN. For new databases, all the legacy changeSets will be skipped, and the first one executed will be the one generated in step 1 above.
With this solution you can push your merged changelog at any time, and no environment or developer will have problems with manual syncing.
Scenario: A computed property needs to be available to RAW methods. The IsComputed property set in the model will not work, as its value is not available to RAW methods.
Attempted Solution: Create a computed column directly on the SQL table, as opposed to setting the IsComputed property in the model, and specify that CodeFluent Entities should not overwrite the computed column. I would then expect the BOM to read the computed SQL field no differently than if it were a normal database field.
Problem: I can't figure out how to prevent CodeFluent Entities from overwriting the computed column. I attempted to use the production flags as well as setting produce="false" for the property in the .cfp. Neither worked.
Question: Is it possible to prevent Codefluent Entities from overwriting my computed column and if so, how?
The solution you're looking for is as follows.
You can execute whatever custom T-SQL scripts you like; the only requirement is to give the script a specific name so the Producer knows when to execute it.
For example, if you want your custom script to execute after the tables are generated, name your script
after_[ProjectName]_tables.
Save your custom T-SQL file alongside the CodeFluent-generated files and build the project.
In my specific case, I had to enable a full-text index on one of my table columns. I wrote the SQL script for that functionality and saved it as
`after_[ProjectName]_relations_add`
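For context, the content of such a script could be something along these lines (catalog, table, column, and key index names here are made up; adapt them to your model):

CREATE FULLTEXT CATALOG MyCatalog AS DEFAULT;
GO
CREATE FULLTEXT INDEX ON dbo.MyTable (MyTextColumn)
    KEY INDEX PK_MyTable
    WITH STOPLIST = SYSTEM;
GO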
Here's how they look in my file directory (screenshot omitted).
Alternate Solution: An alternate approach is to execute the following T-SQL script after the SQL Producer finishes generating.
ALTER TABLE PunchCard DROP COLUMN PunchCard_CompanyCodeCalculated
GO
ALTER TABLE PunchCard
ADD PunchCard_CompanyCodeCalculated AS CASE
WHEN PunchCard_CompanyCodeAdjusted IS NOT NULL THEN PunchCard_CompanyCodeAdjusted
ELSE PunchCard_CompanyCode
END
GO
Additional Configuration Needed to Make the Solution Work: For this solution to work, one must also configure the BOM so that it does not attempt to save data to the computed column. This can be done in the Model using the advanced properties: in my case I selected the CompanyCodeCalculated property, went to the advanced settings, and set the Save setting to False.
Question: Somewhere in the Knowledge Center there is a passing reference on how to automate the execution of SQL scripts after the SQL Producer finishes, but I cannot find it. Anybody know how this is done?
Post-Usage Comments: Just wanted to let people know I implemented this approach and am so far happy with the results.
Has anyone worked around the problem that Activiti (5.18.0; I tried the 6 beta, too) won't use the database schema or table prefix on Postgres?
On startup, it will not find the tables if they aren't in the public schema (or another schema that is in the search_path). After that, I think it's OK.
There are two bug reports, but the issue doesn't seem to be solved:
https://activiti.atlassian.net/browse/ACT-1708
https://activiti.atlassian.net/browse/ACT-1968
I tried different solutions; one of them was setting the search_path for the database with activiti as its first entry, but it seems that parts of the Postgres library in use change the search_path dynamically, so sooner or later Activiti complains again.
I'm talking about the integration of the Activiti ProcessEngine in my own application.
After debugging the source code of Activiti (actually, I cloned https://github.com/Activiti/Activiti yesterday and used the activiti6 branch), I found that I was missing an attribute.
So if you set the schema and the prefix, then you have to set the "tablePrefixIsSchema" attribute to "true"!
processEngineConfiguration.setDatabaseSchema("activiti");
processEngineConfiguration.setDatabaseTablePrefix("activiti.");
/**
* NOTE!
*
* If we set the prefix (for whatever reasons) and it's the same as the
* schema, then the following attribute has to be set to "true"!!!
*/
processEngineConfiguration.setTablePrefixIsSchema(true);
This solved the whole issue.
I'll now try with Activiti 5.18 and update this solution. (I use PostgreSQL 9.3 and the 9.3-1103-jdbc41 driver.)
The original error was (if others run into this):
org.activiti.engine.ActivitiException: Activiti database problem: Tables missing for component(s) engine, history, identity
at org.activiti.engine.impl.db.DbSqlSession.dbSchemaCheckVersion(DbSqlSession.java:925)
I have several database names which exist on local, dev and live servers.
I want to ensure a potentially dangerous T-SQL script will always use the local db and not any other db by accident.
I can't seem to use the [USE] keyword with the local instance name followed by the db name.
It seems pretty trivial, but I can't get it to work.
I've tried this but no luck:
USE [MYMACHINE/SQLEXPRESS].[DBNAME]
The instance is determined by your connection/connection string. You connect to a specific instance, and then all subsequent T-SQL is executed against that instance and that instance alone.
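If you run the script from the command line, you can pin both the instance and the database there, for example with sqlcmd (the server and database names are taken from the question; the script file name is a placeholder):

sqlcmd -S MYMACHINE\SQLEXPRESS -d DBNAME -i dangerous_script.sql

From application code, a connection string such as Server=MYMACHINE\SQLEXPRESS;Database=DBNAME;Trusted_Connection=True; pins it the same way.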
The current answer is not correct for the question asked, as you can specify a specific LocalDB file via the USE command in T-SQL. You just have to specify the fully qualified path name, which is also what you will see in the dropdown for the database list.
USE [C:\MyPath\MyData.mdf]
GO