Activiti BPM Platform - Select a specific schema for creating tables? - postgresql

I've just started working with Activiti and integrated it into my project (Postgres based) in an embedded way (sample Spring configuration file snippet):
(...)
<!-- Activiti components -->
<bean id="processEngineConfiguration" class="org.activiti.spring.SpringProcessEngineConfiguration">
<property name="dataSource" ref="dataSource" />
<property name="transactionManager" ref="transactionManager" />
<property name="databaseSchemaUpdate" value="true" />
<property name="jobExecutorActivate" value="false" />
</bean>
<bean id="processEngine" class="org.activiti.spring.ProcessEngineFactoryBean">
<property name="processEngineConfiguration" ref="processEngineConfiguration" />
</bean>
<bean id="repositoryService" factory-bean="processEngine" factory-method="getRepositoryService" />
<bean id="runtimeService" factory-bean="processEngine" factory-method="getRuntimeService" />
<bean id="taskService" factory-bean="processEngine" factory-method="getTaskService" />
<bean id="historyService" factory-bean="processEngine" factory-method="getHistoryService" />
<bean id="managementService" factory-bean="processEngine" factory-method="getManagementService" />
(...)
It works well and creates a number of tables in my application schema at startup.
My problem is that the tables are created in the 'public' schema of my Postgres database. I would have preferred to put those tables in a separate schema, say 'activiti'.
Fact is that after browsing the documentation and the net for almost two hours, I haven't found any way to change the default target schema for table creation.
Any help... greatly appreciated! ;)

Since version 9.4 of the PostgreSQL JDBC driver, you can specify the default schema in the JDBC URL like this:
jdbc:postgresql://localhost:5432/mydatabase?currentSchema=myschema
With this URL, all Activiti tables are created in the myschema schema instead of the default one in the search path, usually public.
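For example (a sketch only; the pool class and credentials are illustrative, not taken from the question), the dataSource bean referenced by the configuration above could point Activiti at a dedicated schema like this:
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
<!-- currentSchema puts this schema first in the session's search path -->
<property name="driverClassName" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://localhost:5432/mydatabase?currentSchema=activiti" />
<property name="username" value="postgres" />
<property name="password" value="postgres" />
</bean>
Note that the schema must already exist: currentSchema only changes the search path, it does not create the schema.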
Sources: this response on Stack Overflow and the latest documentation.

Related

Connections remain in idle status and increase until the max connections limit is reached

I have a web app using Apache Camel to submit routes which execute some PostgreSQL selects and inserts.
I'm not using any DAO, so I have no code of my own that opens and closes connections; I believed the connection life cycle was managed by Spring, but it seems not to be working.
The problem is that every time my route executes, I see one more connection that remains IDLE; previous IDLE connections are not being reused, which leads to the "too many client connections" problem.
In my route I have:
<bean id="configLocation" class="org.springframework.core.io.FileSystemResource">
<constructor-arg type="java.lang.String" value="..../src/main/resources/config/test.xml" />
</bean>
<bean id="dataSourcePostgres" class="org.apache.ibatis.datasource.pooled.PooledDataSource">
<property name="driver" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://localhost:5432/postgres" />
<property name="username" value="postgres" />
<property name="password" value="postgres" />
</bean>
<bean id="postgresTrivenetaSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSourcePostgres" />
<property name="configLocation" ref="configLocation" />
</bean>
Here are some sample queries:
<select id="selectTest" resultType="java.util.LinkedHashMap">
select * from test;
</select>
<insert id="insertTest" parameterType="java.util.LinkedHashMap" useGeneratedKeys="true" keyProperty="id" keyColumn="id">
INSERT INTO test(note,regop_id)
VALUES (#{note},#{idKey});
</insert>
I tried even adding this:
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSourcePostgresTriveneta" />
</bean>
At last I found the problem: the DataSource is never closed automatically at the end of a Camel route.
So, each time the Camel route executed, it left an open DataSource behind, and all the IDLE connections it had created (their number obviously depends on the DataSource configuration and its usage) remained and accumulated over and over.
The final solution was to add an ad hoc bean, invoked at the end of the Camel route, that takes the DataSource as an argument and closes it; a sketch of such a helper is shown below.
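A minimal sketch of such a helper (the class name and wiring are hypothetical; it assumes the MyBatis PooledDataSource configured above, whose forceCloseAll() method closes all active and idle connections in the pool):

import org.apache.ibatis.datasource.pooled.PooledDataSource;

// Registered as a Spring bean and invoked as the last step of the Camel route.
public class DataSourceCloser {

    private final PooledDataSource dataSource;

    public DataSourceCloser(PooledDataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Releases every connection held by the pool so none are left IDLE.
    public void close() {
        dataSource.forceCloseAll();
    }
}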

How to disable automatic Bean Validation in JPA entities

I'm using Bean Validation to check constraints on my model, but I don't know how to configure it so that it only validates when I want it to. I found that I could put the element <validation-mode>NONE</validation-mode> in my persistence.xml, but it doesn't work.
I appreciate any kind of help.
I remember that I also had problems with that; here is my working example:
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitName" value="punit" />
<property name="jpaPropertyMap">
<map>
<entry key="javax.persistence.validation.mode" value="none"/>
</map>
</property>
</bean>
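For completeness (a sketch; whether this is honored can depend on the provider, though Hibernate and EclipseLink both support it), the same standard JPA property can also be set inside persistence.xml's <properties> element instead of the Spring jpaPropertyMap:
<persistence-unit name="punit">
<properties>
<!-- same JPA property as in the jpaPropertyMap above -->
<property name="javax.persistence.validation.mode" value="none" />
</properties>
</persistence-unit>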

Additional properties files for spring-batch-admin

I have a web application using spring-batch and I'm now integrating spring-batch-admin for basic administration.
The problem is that the job configuration files (which are shared with the existing application's configuration) use properties from files on my application's classpath, but spring-batch-admin's context is not able to load them.
The quick solution was to override the placeholderProperties bean in spring-batch-admin just to add my properties files:
<bean id="placeholderProperties" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<list>
<value>classpath:/org/springframework/batch/admin/bootstrap/batch.properties</value>
<value>classpath:batch-default.properties</value>
<value>classpath:batch-${ENVIRONMENT:hsql}.properties</value>
<value>classpath:/path/to/jobs-config.properties</value> <!-- adding my properties here -->
</list>
</property>
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" />
<property name="ignoreResourceNotFound" value="true" />
<property name="ignoreUnresolvablePlaceholders" value="false" />
<property name="order" value="1" />
</bean>
I don't want to move my properties to one of spring-batch-admin's default files. Is there a simpler way to do this?
Answering my own question here...
As described in the documentation, every job configuration file placed under META-INF/spring/batch/jobs/*.xml is loaded by spring-batch-admin as a child context and property placeholders from the parent (i.e. this default bean) are inherited, but the child context can always create its own placeholder bean.
Given that, in my case, the job configuration files are shared with an existing application and use properties from the application classpath, the solution is to create a new job file under META-INF/spring/batch/jobs/ specific to spring-batch-admin:
<!-- placeholder bean with additional properties for the child context -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<list>
<value>classpath:/path/to/job-config.properties</value>
</list>
</property>
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" />
<property name="ignoreResourceNotFound" value="true" />
<property name="ignoreUnresolvablePlaceholders" value="false" />
<property name="order" value="1" />
</bean>
<!-- external job configuration file is imported -->
<import resource="classpath*:/path/to/job.xml" />

Error with Spring LDAP pooling

I'm building async Jersey web services, and now I need to perform some operations with LDAP.
I have configured my Spring beans.xml like this:
<bean id="contextSourceTarget" class="org.springframework.ldap.core.support.LdapContextSource">
<property name="url" value="${ldap.url}" />
<property name="base" value="${ldap.base}" />
<property name="userDn" value="${ldap.userDn}" />
<property name="password" value="${ldap.password}" />
<property name="pooled" value="false" />
</bean>
<bean id="contextSource"
class="org.springframework.ldap.pool.factory.PoolingContextSource">
<property name="contextSource" ref="contextSourceTarget" />
</bean>
<bean id="ldapTemplate" class="org.springframework.ldap.core.LdapTemplate">
<constructor-arg ref="contextSource" />
</bean>
<bean id="ldapTreeBuilder" class="com.me.ldap.LdapTreeBuilder">
<constructor-arg ref="ldapTemplate" />
</bean>
<bean id="personDao" class="com.me.ldap.PersonDaoImpl">
<property name="ldapTemplate" ref="ldapTemplate" />
</bean>
But when I try to use LDAP I get this error:
Error creating bean with name 'contextSource' defined in class path resource [config/Beans.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: org/apache/commons/pool/KeyedPoolableObjectFactory
My project has the commons-pool2-2.2.jar library, but I still get this error. I tried adding commons-pool2-2.2.jar to TOMCAT_PATH/lib, but that doesn't work either.
UPDATE:
If I use commons-pool-1.6.jar it works... but if I want to use pool2, how can I do that? Do I just have to change a class in commons-pool2-2.2.jar?
Updated Answer:
Since at least Spring LDAP 2.3.2 you can now use commons-pool2. Spring LDAP now provides two classes:
For commons-pool 1.x:
org.springframework.ldap.pool.factory.PoolingContextSource
For commons-pool 2.x:
org.springframework.ldap.pool2.factory.PooledContextSource
Details can be found here:
https://github.com/spring-projects/spring-ldap/issues/351#issuecomment-586551591
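With commons-pool2 on the classpath, the pooling bean from the question would then become something like this (a sketch, assuming Spring LDAP 2.3.2+; PoolConfig carries the pool settings):
<bean id="poolConfig" class="org.springframework.ldap.pool2.factory.PoolConfig" />
<bean id="contextSource" class="org.springframework.ldap.pool2.factory.PooledContextSource">
<constructor-arg ref="poolConfig" />
<property name="contextSource" ref="contextSourceTarget" />
</bean>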
Original Answer:
Unfortunately Spring LDAP uses commons-pool and not commons-pool2. As you have found, the class org.apache.commons.pool.KeyedPoolableObjectFactory does not exist in commons-pool2 (which has a different package structure), hence the error.
There is a Jira issue for the Spring-ldap project asking them to upgrade/support commons-pool2:
https://jira.spring.io/browse/LDAP-316
Until that has been completed, you will have to use commons-pool 1.6.
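Until then, a Maven dependency along these lines brings in the older library:
<dependency>
<groupId>commons-pool</groupId>
<artifactId>commons-pool</artifactId>
<version>1.6</version>
</dependency>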

Usage of CustomEditor with BeanWrapperFieldExtractor just like with BeanWrapperFieldSetMapper

I have written a simple Spring Batch application that reads a CSV file, does some transforming and writes a modified CSV to the disk.
The reading of the file into domain objects works like a charm. I use DelimitedLineTokenizer to tokenize the lines and a BeanWrapperFieldSetMapper to feed the values into a bean:
<bean id="reader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="resource" value="#{jobParameters['inputResource']}" />
<property name="linesToSkip" value="1" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value=";" />
<property name="names"
value="ID,NAME,DESCRIPTION,PRICE,DATE" />
</bean>
</property>
<property name="fieldSetMapper">
<bean class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper">
<property name="targetType" value="myapp.MyDomainObject" />
<property name="customEditors">
<map>
<entry key="java.util.Date" value-ref="dateEditor" />
<entry key="java.math.BigDecimal" value-ref="numberEditor" />
</map>
</property>
</bean>
</property>
</bean>
</property>
</bean>
I especially like BeanWrapperFieldSetMapper's ability to "guess" the field names, and the possibility of defining custom editors, which I use to handle the special date and number formats of the input file.
Now I would like to write the modified file in the same format as the input file.
I use the following configuration:
<bean id="writer" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">
<property name="resource" value="#{jobParameters['outputResource']}" />
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value=";" />
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="id,name,description,price,date" />
</bean>
</property>
</bean>
</property>
</bean>
There are two things I miss with this configuration:
BeanWrapperFieldSetMapper allowed me to set custom editors, but BeanWrapperFieldExtractor has no such possibility. Is there a way to use them here?
Is there a way to define the headings in the first line of the file? I have not found any way to write an initial line that is not a bean... It would be great to use the same names here as in BeanWrapperFieldSetMapper, so that BeanWrapperFieldExtractor writes the initial line and guesses the bean property names just as BeanWrapperFieldSetMapper does.
The process of loading files is so comfortable in Spring Batch. Why is writing files so different? Am I missing something?
I have to use Spring Batch 2.1.x because we are on Spring 3.0.x, therefore an upgrade to 2.2.x is not an option.
What exactly is your need? To extract field properties as text? You can:
1. use a FormatterLineAggregator if your needs are not too complicated (see the sketch after this list);
2. write your own custom-editor-aware FieldExtractor (better);
3. generate a composite domain object made up of the original domain object and a text-formatted one, and pass the latter to the writer (but this breaks your current processor/writer).
For the header line, use FlatFileItemWriter.headerCallback: if set, it allows a custom header to be written.
Writing, in your case, seems a pain compared to the read process because Spring Batch's reading components happen to fit your needs exactly. The standard components cover the most common use cases and a lot of scenarios, but sometimes we have to write a custom FieldExtractor ourselves! :)
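A rough sketch of option 1 (untested; the format string is an assumption that mirrors the reader configuration above, printing the BigDecimal price with two decimals and the Date in ISO yyyy-MM-dd form):
<bean id="writer" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">
<property name="resource" value="#{jobParameters['outputResource']}" />
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.FormatterLineAggregator">
<!-- java.util.Formatter syntax: %.2f formats the price, %tF the date -->
<property name="format" value="%s;%s;%s;%.2f;%tF" />
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="id,name,description,price,date" />
</bean>
</property>
</bean>
</property>
</bean>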