Activiti 5.18.0 on Postgres won't use the schema - postgresql

Has anyone worked around the problem that Activiti (5.18.0, and I tried the 6 beta, too) won't use the database schema or table prefix on Postgres?
On startup, it will not find the tables if they aren't in the public schema (or another schema that's in the search_path). After that, I think it's OK.
There seem to be two bug reports, but the issue doesn't appear to be solved:
https://activiti.atlassian.net/browse/ACT-1708
https://activiti.atlassian.net/browse/ACT-1968
I tried different solutions; one of them was setting the search_path for the database with activiti as its first entry, but it seems that parts of the Postgres library in use change the search_path dynamically, so sooner or later Activiti will complain again.
I'm talking about the integration of the Activiti ProcessEngine in my own application.

After debugging the source code of Activiti (actually, I cloned https://github.com/Activiti/Activiti yesterday and used the branch activiti6) I found that I was actually missing an attribute.
So if you set the schema and the prefix, then you have to set the "tablePrefixIsSchema" attribute to "true"!
processEngineConfiguration.setDatabaseSchema("activiti");
processEngineConfiguration.setDatabaseTablePrefix("activiti.");
/**
 * NOTE!
 *
 * If we set the prefix (for whatever reasons) and it's the same as the
 * schema, then the following attribute has to be set to "true"!!!
 */
processEngineConfiguration.setTablePrefixIsSchema(true);
This solved the whole issue.
I'll try now with Activiti 5.18 and update this solution. (I use PostgreSQL 9.3 and the driver 9.3-1103-jdbc41.)
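For completeness, here is a minimal sketch of how these calls can fit into a standalone engine setup; the JDBC settings are placeholders, not my real configuration:
import org.activiti.engine.ProcessEngine;
import org.activiti.engine.impl.cfg.StandaloneProcessEngineConfiguration;

public class EngineBootstrap {
    public static void main(String[] args) {
        // Minimal sketch; the JDBC settings below are placeholders.
        StandaloneProcessEngineConfiguration config = new StandaloneProcessEngineConfiguration();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb");
        config.setJdbcDriver("org.postgresql.Driver");
        config.setJdbcUsername("user");
        config.setJdbcPassword("secret");
        // The three settings discussed above:
        config.setDatabaseSchema("activiti");
        config.setDatabaseTablePrefix("activiti.");
        config.setTablePrefixIsSchema(true);
        ProcessEngine processEngine = config.buildProcessEngine();
        System.out.println(processEngine.getName());
    }
}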
The original error was (if others run into this):
org.activiti.engine.ActivitiException: Activiti database problem: Tables missing for component(s) engine, history, identity
at org.activiti.engine.impl.db.DbSqlSession.dbSchemaCheckVersion(DbSqlSession.java:925)

Related

Talend 8.0.3 - Component tDBSCD (MSSQL/MySQL)

I'm currently learning the SCD methodology and tried to apply it with Talend (TOS 8.0.3 for DI), and I noticed something with the tDBSCD component. I tried using tDBSCD with two different databases, one with MySQL and the other with SQL Server.
The issue I have is that when using the tDBSCD component with the MySQL database configuration, I have the option "Action on table", but not when I configure it for MSSQL.
Is the "Action on table" option (for the tDBSCD component) only available for MySQL (and possibly other databases, I didn't check them all), and if so, do you know why?
Thanks.
You can actually compare the 2 components:
MySQL:
https://github.com/Talend/tdi-studio-se/tree/master/main/plugins/org.talend.designer.components.localprovider/components/tMysqlSCD
MsSQL: https://github.com/Talend/tdi-studio-se/tree/master/main/plugins/org.talend.designer.components.localprovider/components/tMSSqlSCD
You can see that MySQL has a TABLE_ACTION entry: https://github.com/Talend/tdi-studio-se/blob/master/main/plugins/org.talend.designer.components.localprovider/components/tMysqlSCD/tMysqlSCD_java.xml#L154
And the code generation also includes the tableAction snippet: https://github.com/Talend/tdi-studio-se/blob/master/main/plugins/org.talend.designer.components.localprovider/components/tMysqlSCD/tMysqlSCD_begin.javajet#L113
SQL Server doesn't have the UI element defined, nor does it include the tableAction snippet; that's why the option isn't visible.
If you look at the snippet, you can see it only covers MySQL and Oracle:
https://github.com/Talend/tdi-studio-se/blob/master/main/plugins/org.talend.designer.components.localprovider/components/templates/_tableActionForSCD.javajet
That being said, there's a tCreateTable component that you could use with your schema to create tables.

Rename mongo collection without knowing its namespace

I am using liquibase and mongo to execute a rename migration like this:
<ext:runCommand>
    <ext:command><![CDATA[
        {
            renameCollection: "XXX.foo",
            to: "XXX.bar"
        }
    ]]></ext:command>
</ext:runCommand>
The renaming happens within the bounds of the existing DB, so cross-db migrations are not relevant to my use case. My problem is that I do not know XXX in advance. My liquibase migration is intended to run in multiple environments, and each one uses its own unique value of XXX.
Also, liquibase limits me to runCommand/adminCommand semantics, and the spec for them clearly says that I should provide full namespaces for that, which I cannot have.
Of course I could create multiple liquibase change sets, one for each environment, and hardcode the proper namespace for each one. But I would like to avoid that option since it does not scale very well.
Is there any way to rename a mongo collection (using runCommand/adminCommand semantics), in a namespace agnostic way?
Enter the database name as a parameter when executing liquibase update, and then use liquibase changelog property substitution in the changeset with the specified parameter. That should solve the problem.
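For illustration, here is a sketch of what that could look like; the parameter name databaseName is an assumption:
<ext:runCommand>
    <ext:command><![CDATA[
        {
            renameCollection: "${databaseName}.foo",
            to: "${databaseName}.bar"
        }
    ]]></ext:command>
</ext:runCommand>
The value can then be supplied at update time, e.g. liquibase update -DdatabaseName=myActualDb (changelog parameters can be passed as -Dkey=value properties on the command line).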
Adding the way Alkis achieved it, as mentioned in the comments:
Just for the record, I use standalone liquibase, and I had to call:
liquibaseInstance.getChangeLogParameters().set("databaseName", myRuntimeDetectedName);

DBeaver will not display certain schemas correctly in the Database Navigator

I'm using DBeaver 5.2.5.201811181655 with IBM DB2/400 v7r3.
I'm trying to see a schema called WRKCERTO, but Database Navigator will not show it. The schema is there and I have rights to it, and I'm able to run SQL scripts with its objects, such as SELECT * FROM WRKCERTO.DAILYT and it works.
To make matters stranger, when WRKCERTO is the only schema in the filters, the contents of a schema which I cannot identify are shown under the connection, as if the connection were their parent. No schema appears as a node in the tree between the connection and Tables, Views, etc. The tables look familiar, but I cannot determine their exact schema, and as such I also cannot query any of them, because DBeaver doesn't know which schema to use.
The behavior of the Projects window is the same.
If I connect with SquirrelSQL 3.8.1 everything looks ok. I can see WRKCERTO along with all my other schemas as if nothing is different.
The screenshot below shows the issue. The schema I use most is F_CERTOB, which is visible under the connection ASP7, which currently has two schema filters: F_CERTOB and WRKCERTO. But as shown, WRKCERTO...isn't.
The connection TEST is an exact copy of ASP7, but its only filter is WRKCERTO. And as mentioned above, the items under the connection name cannot be identified.
I've gone through the DBeaver settings, but I cannot find any way to change this behavior. AND...this is the first time I've tried to use WRKCERTO. I tried to access it for the first time only a couple days ago, so it seems unlikely there are bad bits of information about it floating around in my system, or in DBeaver.
What information can I provide to help diagnose this issue...?
Please check the URL below; a similar issue is mentioned there with a possible solution. You may want to try it and let me know whether it works:
https://dbeaver.io/forum/viewtopic.php?f=2&t=911

Merging LiquiBase changesets

I'm developing a system with database version control in LiquiBase. The system is still in pre-alpha development and there are a lot of changes that were reverted or supplemented by other changes (tables removed, columns added and removed).
The current change set reflects the whole development history, with many failed experiments, and this whole history is rolled out when initializing the database.
Because there is NO release version yet, I can start from scratch and capture the actual DB state in a single XML changeset.
Is there a way to tell LiquiBase to merge all change sets into one file, or is the only way to do that by hand?
Just use your existing database to generate the change log that will be used from now on. For this you can use the generateChangeLog command from the command line; it will generate a changelog file with all the changeSets that represent the current state of your database. You can use this file in your project as the initial db creation file, to be run against an empty database. Here's a link to the docs.
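With the standalone command line client that can look something like this (assuming the connection settings live in a liquibase.properties file; the output file name is just an example):
liquibase --changeLogFile=baseline.xml generateChangeLog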
There is a page in the Liquibase docs which discusses this scenario in detail:
http://www.liquibase.org/documentation/trimming_changelogs.html
To summarise, they recommend that you don't bother since consolidating your changelogs is both risky and low-reward.
If you do want to push ahead with this, then restarting the changelog using generateChangeLog, as suggested by @veljkost, is probably the easiest way. This is documented at http://www.liquibase.org/documentation/existing_project.html
Since I didn't find an automatic solution for this problem for the case where the changelog is already deployed on several databases in different states, I will describe my solution here:
Generate a changelog of the current development state of your database using liquibase generateChangeLog, like:
mvn liquibase:generateChangeLog -Dliquibase.outputChangeLogFile=current_state.yml
Audit the generated changelog and check whether it looks good (liquibase is not perfect; it often generates clumsy statements). Also, if your schema contains static data, like dictionaries, that was previously populated using liquibase, you have to add it to the generated changelog as well; you can export data from your database using the generateChangeLog command mentioned above with the -Dliquibase.diffTypes=data property.
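For example, combining the two properties mentioned above (the output file name is just an example):
mvn liquibase:generateChangeLog -Dliquibase.outputChangeLogFile=static_data.yml -Dliquibase.diffTypes=data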
Now you need to prevent execution of the generated changelog on existing databases (it would obviously fail on prod, test, and other developers' local envs). You could do this using, for example, liquibase changelogSync, or liquibase contexts, but all these options require some manual work on every database. You can achieve an automatic result by adding preConditions statements to your changeSets.
For changesets intended to run on empty databases (the changelog you generated in step 1 above) you can add something like this:
preConditions:
  - onFail: MARK_RAN
  - not:
      - tableExists:
          tableName: t_project
Where t_project is a table name that existed before (most likely this should be a table added in the first changeSet, so every database that ran at least one changeSet will have it). This will mark the generated changelog as run on environments with an existing schema, and will run the generated changelog on every new database you migrate.
Unfortunately you have to adjust all legacy changesets as well (I haven't found a better solution yet; I did this change using regex and sed). You have to add something like this:
preConditions:
  - onFail: MARK_RAN
  - tableExists:
      tableName: t_project
That is, the opposite of the condition above. With this, every database that ran at least one changeset in the past will continue to migrate (EXECUTED status of changesets) up to the changeset generated in step 1 above, and will mark the generated changesets as MARK_RAN. On new databases, all legacy changesets will be skipped, and the first to execute will be the one generated in step 1 above.
With this solution you can push your merged changelog at any time, and no environment or developer will need any manual syncing.

Managing database changes

I'm starting to move more logic into the database, using triggers, views, functions, CTEs, etc. When plv8/json comes out for postgres, I can see myself putting lots of logic in there.
I'm having problems with the "standard" way of doing database migrations in sequel and activerecord. Both sequel and activerecord let you put arbitrary sql code into timestamped files. When each file is run, a schema_versions table is updated with the filename (or the timestamp in the filename), which keeps a record of which migrations have been applied to the current database.
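Conceptually the bookkeeping looks something like this (table and column names vary by tool; this is an illustration, not either library's actual schema):
-- The migration runner records every applied migration:
create table schema_versions (
  version text primary key,  -- e.g. the timestamp from the migration filename
  applied_at timestamptz not null default now()
);
-- A migration file runs only if its version is not yet recorded:
-- insert into schema_versions (version) values ('20121001120000');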
If a lot of coding is being done at the database level, modifications to existing views, functions, etc. follow the pattern below:
Migration 1 defines a function and a view that uses that function.
-- Migration 1
create function calculate(x int) returns int as $$
  select x + 1;
$$ language sql;

create view foos as (
  select something, calculate(something) from a_table
);
Requirements change, and I need to change the function's argument type. In Migration 2 I have to drop all objects that depend on calculate, and recreate them by copying their entire body, even if most of the other code didn't change!
-- Migration 2
-- Have to drop all views and functions that depend on the
-- `calculate(int)` function.
drop view foos;
drop function calculate(int);
-- I could have done `drop function calculate(int) cascade` instead,
-- but I might accidentally drop some objects that wouldn't get recreated below.

create function calculate(x bigint) returns bigint as $$
  select x + 1;
$$ language sql;

-- Now I have to recreate foos.
create view foos as (
  select something, calculate(something) from a_table
);
If I'm building a system based on views and functions and triggers, my migrations would be filled with duplicated code, and it's difficult to find the latest version of the code. You might say "don't do that!", but for my purposes (e-commerce, shipping, transactions), I'm finding it's a lot easier and faster to have the database ensure the integrity of the data by doing the logic inside the database.
You can (of course) dump the current database schema (which includes all the code definitions), but I think you lose comments. And you wouldn't generally want to edit a giant file that contains the whole schema.
Any ideas on how to solve this problem?
My best idea is to have the sql code live in its own canonical files (app/sql/orders/shipping.sql, app/sql/orders/creation.sql, etc.). Everyone develops directly against these. Whenever it's time for a release, you'd make a new migration file, look at all the code changed since the previous release, figure out the dependency chain of the database objects that need to be dropped and recreated, and then copy the sql from the canonical files into a new sequel/activerecord migration file. But it's a pain. :/
Thoughts are very welcome. I hope I explained this well enough, I'm cutting back on my caffeine intake and I'm a little groggy atm.
Oh, I asked a similar question on Stack Overflow: "Changing the type of a column used in other views". The answer was a function that let me pass in:
sql code to run
database views to drop and recreate
The function would retrieve the view definition, drop the views, run the sql code, then recreate the view definition (in reverse order of dropping). Perhaps a system of functions like this would help solve the problem of having to copy/paste sql code into the migration files.
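As a sketch of what such a helper could look like (the name run_with_views and the details are my assumptions, not the actual code from that answer):
-- Hypothetical helper: saves the definitions of the given views
-- (dependents first), drops them, runs the supplied DDL, then
-- recreates the views in reverse order of dropping.
create or replace function run_with_views(ddl text, views text[])
returns void as $$
declare
  defs text[] := '{}';
  v text;
  i int;
begin
  foreach v in array views loop
    defs := defs || pg_get_viewdef(v::regclass, true);
    execute format('drop view %s', v);
  end loop;
  execute ddl;  -- e.g. a create-or-replace-function statement
  for i in reverse array_length(views, 1)..1 loop
    execute format('create view %s as %s', views[i], defs[i]);
  end loop;
end;
$$ language plpgsql;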
I'd recommend liquibase.
You create files which track the changes to your database, and these will be run against the database in the correct migration order.
You might find Dave Wheeler's blog-posts interesting starting from here:
http://justatheory.com/computers/databases/simple-sql-change-management.html
My rate of database change is fairly small but I tend to be careless and make small changes to the schema directly, so I've had to come up with a fair bit of infrastructure to catch when I've done so. The basic elements are:
1. A makefile that can rebuild a development database from scratch
2. A set of schema-files separated into "modules" (lookups_schema.sql, lookup_data.sql)
3. A set of update files that transition from one revision to the next
4. I don't usually have the corresponding downgrade scripts (some people do)
5. A script to populate my database with a plausible amount of test data
6. Crucially, a test suite via pgTAP that checks my various functions, views and also the upgrade scripts. The upgrade tests can be run against a live database too.
If you have a separate instance of PostgreSQL set up with fsync turned off, or on a ramdisk, etc., then rebuilding the whole DB and populating it can take seconds (if you don't have too much test data).
Start with #1, #2, then add #6 (pgTAP is very cool), then the rest. The crucial thing is a test suite that checks your in-database code.
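For a flavour of what such a test looks like, here is a tiny pgTAP file (it reuses the calculate/foos examples from the question above; runnable with pg_prove):
-- Run inside a transaction so the test leaves no trace.
begin;
select plan(3);

select has_function('calculate', array['integer'], 'calculate(int) exists');
select has_view('foos', 'foos view exists');
select is(calculate(1), 2, 'calculate adds one');

select * from finish();
rollback;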
There are tools that try to automate schema changes for you, but they are really only good at adding a new column to a table and that sort of thing. Once you have code in your db then they're not much help.