Additional properties files for spring-batch-admin - spring-batch

I have a web application using spring-batch and I'm now integrating spring-batch-admin for basic administration.
The problem is that the job configuration files (which are shared with the existing application's configuration) use properties from files on my application's classpath, but spring-batch-admin's context is not able to load them.
The quick solution was to override the placeholderProperties bean in spring-batch-admin just to add my properties files:
<bean id="placeholderProperties" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<list>
<value>classpath:/org/springframework/batch/admin/bootstrap/batch.properties</value>
<value>classpath:batch-default.properties</value>
<value>classpath:batch-${ENVIRONMENT:hsql}.properties</value>
<value>classpath:/path/to/jobs-config.properties</value> <!-- adding my properties here -->
</list>
</property>
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" />
<property name="ignoreResourceNotFound" value="true" />
<property name="ignoreUnresolvablePlaceholders" value="false" />
<property name="order" value="1" />
</bean>
I don't want to move my properties to one of spring-batch-admin's default files. Is there a simpler way to do this?

Answering my own question here...
As described in the documentation, every job configuration file placed under META-INF/spring/batch/jobs/*.xml is loaded by spring-batch-admin as a child context, and property placeholders from the parent (i.e. the default placeholderProperties bean above) are inherited, but the child context can always create its own placeholder bean.
Given that, in my case, the job configuration files are shared with an existing application and use properties from the application classpath, the solution is to create a new job file under META-INF/spring/batch/jobs/ specific to spring-batch-admin:
<!-- placeholder bean with additional properties for the child context -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<list>
<value>classpath:/path/to/job-config.properties</value>
</list>
</property>
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" />
<property name="ignoreResourceNotFound" value="true" />
<property name="ignoreUnresolvablePlaceholders" value="false" />
<property name="order" value="1" />
</bean>
<!-- external job configuration file is imported -->
<import resource="classpath*:/path/to/job.xml" />

Related

AffinityKeyMapped not working with Ignite 2.4/2.5/2.6 and Scala

Using Scala 2.11.7/Java 1.8.0_161/RHEL7.
We have two caches whose elements share the same affinity key. The affinity key is defined as follows:
case class IgniteTradePayloadKey(
@(AffinityKeyMapped @field)
tradeKey: TradeKey,
... other fields
) extends Serializable
case class IgniteDealPayloadKey(
@(AffinityKeyMapped @field)
tradeKey: TradeKey,
child: Int,
... other fields
) extends Serializable
Those are used as keys for two Ignite caches (Trades and Deals). We want instances of Trades and Deals to be collocated, as we perform computations using both. Think of them as having a parent/child relationship: we would like to keep a parent and its children on the same node because our calculations require both. Parents are uniquely identified by TradeKey, so we use that in both caches to control affinity. Note that the TradeKey instances are part of the Ignite keys themselves; they are not part of the values.
This worked beautifully with Ignite 1.7. We then tried to upgrade to a more recent version of Ignite (we tried 2.4, 2.5 and 2.6), and without any code change whatsoever, some children are no longer collocated with their parents. We reverted back to 1.7 to be sure, and collocation works. We also tried to override the affinity function with something simple (just a hash on the TradeKey), and again, it works with 1.7 but not with any of the 2.x versions listed above.
What are we missing?
Configuration as requested (apologies for the massive file). We tried with and without defining our own affinity function, with the same results.
<beans xmlns="http://www.springframework.org/schema/beans">
<bean id="propertyConfigurer"
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
<property name="searchSystemEnvironment" value="true"/>
</bean>
<bean id="CLIENT_MODE" class="java.lang.String">
<constructor-arg value="${IGNITE_CLIENT_MODE:false}" />
</bean>
<!-- Ignite common configuration -->
<bean abstract="true" id="common.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="gridName" value="MTS Trades Cache Grid" />
<property name="failureDetectionTimeout" value="60000"/>
<property name="clientFailureDetectionTimeout" value="60000"/>
<property name="peerClassLoadingEnabled" value="true"/>
<property name="clientMode" ref="CLIENT_MODE"/>
<property name="rebalanceThreadPoolSize" value="4"/>
<property name="deploymentMode" value="CONTINUOUS"/>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="localPort" value="47700"/>
<property name="localPortRange" value="20"/>
<!-- Setting up IP finder for this cluster -->
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<value>127.0.0.1:47700..47720</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
<property name="communicationSpi">
<bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
<property name="localPort" value="49100"/>
<property name="sharedMemoryPort" value="-1" />
<property name="messageQueueLimit" value="1024"/>
</bean>
</property>
<!-- Cache configuration -->
<property name="cacheConfiguration">
<list>
<!-- deals -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="dealPayloads" />
<property name="cacheMode" value="PARTITIONED" />
<property name="backups" value="0" />
<property name="OnheapCacheEnabled" value="true"/>
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<!-- setting indexed type's key class -->
<property name="keyType" value="com.company.ignite.IgniteDealPayloadKey" />
<!-- setting indexed type's value class -->
<property name="valueType" value="com.company.ignite.IgniteDealPayload" />
</bean>
</list>
</property>
<property name="affinity">
<bean class="com.company.ignite.affinity.IgniteAffinityFunction">
<property name="partitions" value="1024"/>
</bean>
</property>
<property name="atomicityMode" value="ATOMIC" />
<property name="rebalanceMode" value="ASYNC" />
<property name="copyOnRead" value="false" />
<!-- Set rebalance batch size to 8 MB. -->
<property name="rebalanceBatchSize" value="#{8 * 1024 * 1024}"/>
<!-- Explicitly disable rebalance throttling. -->
<property name="rebalanceThrottle" value="0"/>
<!-- Set 4 threads for rebalancing. -->
<property name="rebalanceThreadPoolSize" value="4"/>
</bean>
<!-- trade versions -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="tradePayloads" />
<property name="cacheMode" value="PARTITIONED" />
<property name="backups" value="0" />
<property name="OnheapCacheEnabled" value="true"/>
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<!-- setting indexed type's key class -->
<property name="keyType" value="com.company.ignite.IgniteTradePayloadKey" />
<!-- setting indexed type's value class -->
<property name="valueType" value="com.company.ignite.IgniteTradePayload" />
</bean>
</list>
</property>
<property name="affinity">
<bean class="com.company.ignite.affinity.IgniteAffinityFunction">
<property name="partitions" value="1024"/>
</bean>
</property>
<property name="atomicityMode" value="ATOMIC" />
<property name="rebalanceMode" value="ASYNC" />
<property name="copyOnRead" value="false" />
<!-- Set rebalance batch size to 8 MB. -->
<property name="rebalanceBatchSize" value="#{8 * 1024 * 1024}"/>
<!-- Explicitly disable rebalance throttling. -->
<property name="rebalanceThrottle" value="0"/>
<!-- Set 4 threads for rebalancing. -->
<property name="rebalanceThreadPoolSize" value="4"/>
</bean>
</list>
</property>
</bean>
</beans>
In addition, this is the relevant exception:
[22:28:24,418][INFO][grid-timeout-worker-#23%MTS Trades Cache Grid%][IgniteKernal%MTS Trades Cache Grid] FreeList [name=MTS Trades Cache Grid, buckets=256, dataPages=9658, reusePages=0]
[22:28:57,335][INFO][pub-#314%MTS Trades Cache Grid%][GridDeploymentLocalStore] Class locally deployed: class com.company.pt.tradesrouter.routing.ComputeJob
[22:28:57,705][SEVERE][pub-#314%MTS Trades Cache Grid%][GridJobWorker] Failed to execute job [jobId=48df049c461-b3ba568d-6a39-4296-b03f-0c046e7cf3f7, ses=GridJobSessionImpl [ses=GridTaskSessionImpl [taskName=com.company.pt.tradesrouter.routing.ComputeJob, dep=GridDeployment [ts=1532382444003, depMode=CONTINUOUS, clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, clsLdrId=a717c19c461-e5241c47-40d4-4085-a7fa-4f1916275b2e, userVer=0, loc=true, sampleClsName=o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap, pendingUndeploy=false, undeployed=false, usage=2], taskClsName=com.company.pt.tradesrouter.routing.ComputeJob, sesId=38df049c461-b3ba568d-6a39-4296-b03f-0c046e7cf3f7, startTime=1532384937062, endTime=1532388537300, taskNodeId=b3ba568d-6a39-4296-b03f-0c046e7cf3f7, clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, closed=false, cpSpi=null, failSpi=null, loadSpi=null, usage=1, fullSup=false, internal=false, topPred=null, subjId=b3ba568d-6a39-4296-b03f-0c046e7cf3f7, mapFut=IgniteFuture [orig=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=1635946755]], execName=null], jobId=48df049c461-b3ba568d-6a39-4296-b03f-0c046e7cf3f7]]
class org.apache.ignite.IgniteException: null
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1858)
at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6623)
at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1189)
at org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1921)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
at com.company.pt.tradesrouter.routing.ComputeJob$$anonfun$1$$anonfun$2.apply(ComputeJob.scala:49)
at com.company.pt.tradesrouter.routing.ComputeJob$$anonfun$1$$anonfun$2.apply(ComputeJob.scala:47)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:47)
at scala.collection.SetLike$class.map(SetLike.scala:92)
at scala.collection.AbstractSet.map(Set.scala:47)
at com.company.pt.tradesrouter.routing.ComputeJob$$anonfun$1.apply(ComputeJob.scala:47)
at com.company.pt.tradesrouter.routing.ComputeJob$$anonfun$1.apply(ComputeJob.scala:44)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at com.company.pt.tradesrouter.routing.ComputeJob.call(ComputeJob.scala:44)
at com.company.pt.tradesrouter.routing.ComputeJob.call(ComputeJob.scala:21)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1855)
... 14 more
Most probably you hit a bug that was introduced in Ignite 2.0: https://issues.apache.org/jira/browse/IGNITE-5795
Because of this bug, the @AffinityKeyMapped annotation is ignored in classes that are used in a query entity configuration. As far as I can see, this is exactly your case.
It's going to be fixed in Ignite 2.7.
There is a workaround for this problem: you can list the problematic classes in the BinaryConfiguration#classNames configuration property. The binary configuration should be specified as IgniteConfiguration#binaryConfiguration and must be the same on all nodes. You may also need to configure CacheConfiguration#keyConfiguration for your cache.
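A minimal sketch of what that workaround could look like with the configuration above, using your key classes and the tradeKey affinity field from the case classes (adjust values as needed):
<!-- in common.cfg: register the key classes so their binary metadata is known cluster-wide -->
<property name="binaryConfiguration">
<bean class="org.apache.ignite.configuration.BinaryConfiguration">
<property name="classNames">
<list>
<value>com.company.ignite.IgniteTradePayloadKey</value>
<value>com.company.ignite.IgniteDealPayloadKey</value>
</list>
</property>
</bean>
</property>
<!-- in each CacheConfiguration: declare the affinity key field explicitly -->
<property name="keyConfiguration">
<list>
<bean class="org.apache.ignite.cache.CacheKeyConfiguration">
<constructor-arg value="com.company.ignite.IgniteDealPayloadKey" />
<constructor-arg value="tradeKey" />
</bean>
</list>
</property>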

Error with Spring LDAP pooling

I build async Jersey web services, and now I need to perform some operations with LDAP.
I have configured my Spring beans.xml like this:
<bean id="contextSourceTarget" class="org.springframework.ldap.core.support.LdapContextSource">
<property name="url" value="${ldap.url}" />
<property name="base" value="${ldap.base}" />
<property name="userDn" value="${ldap.userDn}" />
<property name="password" value="${ldap.password}" />
<property name="pooled" value="false" />
</bean>
<bean id="contextSource"
class="org.springframework.ldap.pool.factory.PoolingContextSource">
<property name="contextSource" ref="contextSourceTarget" />
</bean>
<bean id="ldapTemplate" class="org.springframework.ldap.core.LdapTemplate">
<constructor-arg ref="contextSource" />
</bean>
<bean id="ldapTreeBuilder" class="com.me.ldap.LdapTreeBuilder">
<constructor-arg ref="ldapTemplate" />
</bean>
<bean id="personDao" class="com.me.ldap.PersonDaoImpl">
<property name="ldapTemplate" ref="ldapTemplate" />
</bean>
But when I try to use LDAP, I get this error:
Error creating bean with name 'contextSource' defined in class path resource [config/Beans.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: org/apache/commons/pool/KeyedPoolableObjectFactory
In my project I have the commons-pool2-2.2.jar library, but I still get this error. I tried adding commons-pool2-2.2.jar to TOMCAT_PATH/lib, but that doesn't work.
UPDATE:
If I use commons-pool-1.6.jar it works, but if I want to use pool2, how can I do that? Do I just have to change the class in commons-pool2-2.2.jar?
Updated Answer:
As of Spring LDAP 2.3.2 (at least), you can use commons-pool2. Spring LDAP now provides two classes:
For commons-pool 1.x:
org.springframework.ldap.pool.factory.PoolingContextSource
For commons-pool 2.x:
org.springframework.ldap.pool2.factory.PooledContextSource
Details can be found here:
https://github.com/spring-projects/spring-ldap/issues/351#issuecomment-586551591
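A minimal sketch of the pool2 variant of your configuration (the pool sizing value is illustrative):
<bean id="poolConfig" class="org.springframework.ldap.pool2.factory.PoolConfig">
<property name="maxTotal" value="10" />
</bean>
<bean id="contextSource" class="org.springframework.ldap.pool2.factory.PooledContextSource">
<constructor-arg ref="poolConfig" />
<property name="contextSource" ref="contextSourceTarget" />
</bean>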
Original Answer:
Unfortunately Spring LDAP uses commons-pool, not commons-pool2. As you have found, the class org.apache.commons.pool.KeyedPoolableObjectFactory does not exist in commons-pool2 (it has a different package structure), hence the error.
There is a Jira issue for the Spring LDAP project asking them to upgrade to/support commons-pool2:
https://jira.spring.io/browse/LDAP-316
Until that has been completed, you will have to use commons-pool 1.6.
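For reference, the commons-pool 1.6 artifact, assuming a Maven build:
<dependency>
<groupId>commons-pool</groupId>
<artifactId>commons-pool</artifactId>
<version>1.6</version>
</dependency>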

Naming output file given specific field in input file header

I am relatively new to Spring Batch.
I have an input file with a header. This header contains several fields, one of which I am interested in (YYYYMM data).
Here is my config for this:
<bean id="detaillesHeaderReaderCallback" class="fr.generali.ede.daemon.batch.dstaff.detailles.DetaillesHeaderReaderCallback" >
<property name="headerTokenizer" ref="headerTokenizer" />
<property name="fieldSetMapper" ref="fieldSetMapperHeaderLog07" />
<!-- need to write moisComptable to ChunkContext -->
<property name="chunkContext" value="#{chunkExecutionContext}" />
</bean>
<bean id="headerTokenizer"
class="org.springframework.batch.item.file.transform.FixedLengthTokenizer">
<property name="names" value="dummy1,moisComptable,dummy2" />
<property name="columns" value="1-22,23-28,29-146" />
</bean>
After which, in the next step of the job, I want to generate an output file whose name is composed of a static part and that header field :
<bean id="fileItemWriterLog07" class="org.springframework.batch.item.file.FlatFileItemWriter">
<property name="resource"
value="file:${batch.coherence.out.path}/DSTAF007_LOG_#{jobExecutionContext['moisComptable']}.txt" />
<property name="shouldDeleteIfExists" value="true" />
<property name="headerCallback" ref="DetaillesHeaderWriterCallbackLog07" />
...
</bean>
(I have two jobs because I first write to a database, and then read from it.)
As one would guess, this doesn't work: the config file is flawed, so I get BeanCreationExceptions. But it gives an idea of what I want to achieve.
I have no exception on the ChunkContext (yet?), but I get one on the writer resource. Here is the exception:
Field or property 'jobExecutionContext' cannot be found on object of type 'org.springframework.beans.factory.config.BeanExpressionContext'
Does anyone have an idea about how to proceed?
Thanks in advance.
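Note: this particular exception usually means the writer bean is not step-scoped, so the #{jobExecutionContext[...]} expression is evaluated when the application context starts rather than when the step runs. A minimal sketch of the conventional fix, assuming moisComptable is first stored in the step's ExecutionContext and then promoted via a listener registered on the first step:
<!-- copies 'moisComptable' from the step ExecutionContext to the job ExecutionContext -->
<bean id="promotionListener" class="org.springframework.batch.core.listener.ExecutionContextPromotionListener">
<property name="keys" value="moisComptable" />
</bean>
<!-- the writer must be step-scoped so the late-binding expression resolves at runtime -->
<bean id="fileItemWriterLog07" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">
<property name="resource" value="file:${batch.coherence.out.path}/DSTAF007_LOG_#{jobExecutionContext['moisComptable']}.txt" />
<property name="shouldDeleteIfExists" value="true" />
</bean>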

In-memory Job-Explorer definition in Spring batch

I was trying to share my in-memory jobRepository with the jobExplorer, but it throws an error. The nested exception is:
org.springframework.beans.ConversionNotSupportedException: Failed to convert property value of type '$Proxy1 implementing org.springframework.batch.core.repository.JobRepository,org.springframework.aop.SpringProxy,org.springframework.aop.framework.Advised' to required type
I even tried putting an '&' before jobRepository when passing it to the jobExplorer, but that attempt ended in vain.
I am using Spring Batch 2.2.1.
Does the jobExplorer only support a database-backed repository, not an in-memory one?
The definition is:
<bean id="jobRepository"
class="com.test.repository.BatchRepositoryFactoryBean">
<property name="cache" ref="cache" />
<property name="transactionManager" ref="transactionManager" />
</bean>
<bean id="jobOperator" class="test.batch.LauncherTest.TestBatchOperator">
<property name="jobExplorer" ref="jobExplorer" />
<property name="jobRepository" ref="jobRepository" />
<property name="jobRegistry" ref="jobRegistry" />
<property name="jobLauncher" ref="jobLauncher" />
</bean>
<bean id="jobExplorer" class="test.batch.LauncherTest.TestBatchExplorerFactoryBean">
<property name="repositoryFactory" ref="&jobRepository" />
</bean>
<bean id="transactionManager"
class="org.springframework.batch.support.transaction.ResourcelessTransactionManager" />
<bean id="jobLauncher" class="com.scb.smartbatch.core.BatchLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
<!-- To store Batch details -->
<bean id="jobRegistry" class="com.scb.smartbatch.repository.SmartBatchRegistry" />
<bean id="jobRegistryBeanPostProcessor"
class="org.springframework.batch.core.configuration.support.JobRegistryBeanPostProcessor">
<property name="jobRegistry" ref="jobRegistry" />
</bean>
<!--Runtime cache of batch executions -->
<bean id="cache" class="com.scb.cache.TCRuntimeCache" />
Thanks for your valuable inputs.
I used '&' before the jobRepository reference (escaped as &amp; in the XML), which allowed me to use it as a shared resource for my jobExplorer.
Problem solved. Kudos.
Usually you have to wire the interface instead of the implementation.
Otherwise, you probably have to add <aop:config proxy-target-class="true"/> to create a CGLIB-based proxy instead of a standard Java dynamic proxy (see the sketch below).
Read the official Spring documentation about proxying mechanisms.
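A minimal sketch of the namespace declaration that <aop:config> requires:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:aop="http://www.springframework.org/schema/aop"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd">
<!-- force CGLIB proxies so concrete classes, not just interfaces, can be wired -->
<aop:config proxy-target-class="true" />
</beans>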

Usage of CustomEditor with BeanWrapperFieldExtractor just like with BeanWrapperFieldSetMapper

I have written a simple Spring Batch application that reads a CSV file, does some transforming and writes a modified CSV to the disk.
The reading of the file into domain objects works like a charm. I use DelimitedLineTokenizer to tokenize the lines and a BeanWrapperFieldSetMapper to feed the values into a bean:
<bean id="reader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="resource" value="#{jobParameters['inputResource']}" />
<property name="linesToSkip" value="1" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value=";" />
<property name="names"
value="ID,NAME,DESCRIPTION,PRICE,DATE" />
</bean>
</property>
<property name="fieldSetMapper">
<bean class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper">
<property name="targetType" value="myapp.MyDomainObject" />
<property name="customEditors">
<map>
<entry key="java.util.Date" value-ref="dateEditor" />
<entry key="java.math.BigDecimal" value-ref="numberEditor" />
</map>
</property>
</bean>
</property>
</bean>
</property>
</bean>
I especially like the ability of BeanWrapperFieldSetMapper to "guess" the field names, and the possibility to define custom editors, which I use to handle the special date and number formats used in the input file.
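For reference, the dateEditor and numberEditor beans referenced above look something like the following (the dd.MM.yyyy and #,##0.00 patterns are illustrative; substitute your actual formats):
<bean id="dateEditor" class="org.springframework.beans.propertyeditors.CustomDateEditor">
<constructor-arg>
<bean class="java.text.SimpleDateFormat">
<constructor-arg value="dd.MM.yyyy" />
</bean>
</constructor-arg>
<!-- allowEmpty = false: empty fields are rejected -->
<constructor-arg value="false" />
</bean>
<bean id="numberEditor" class="org.springframework.beans.propertyeditors.CustomNumberEditor">
<constructor-arg value="java.math.BigDecimal" />
<constructor-arg>
<bean class="java.text.DecimalFormat">
<constructor-arg value="#,##0.00" />
</bean>
</constructor-arg>
<constructor-arg value="false" />
</bean>
Note that SimpleDateFormat and PropertyEditor instances are stateful and not thread-safe, which matters for multi-threaded steps.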
Now I would like to write the modified file in the same format like the input file.
I use the following configuration:
<bean id="writer" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">
<property name="resource" value="#{jobParameters['outputResource']}" />
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value=";" />
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="id,name,description,price,date" />
</bean>
</property>
</bean>
</property>
</bean>
There are two things I miss with this configuration:
BeanWrapperFieldSetMapper allowed me to set CustomEditors, but BeanWrapperFieldExtractor has no such possibility. Is there a way to use these?
Is there a way to define the headings in the first line of the file? I have not found any way to write an initial line that is not a bean... It would be great to use the same names here as in BeanWrapperFieldSetMapper, so that BeanWrapperFieldExtractor writes the initial line and guesses the bean property names the way BeanWrapperFieldSetMapper does.
The process of loading files is so comfortable in Spring Batch. Why is writing files so different? Am I missing something?
I have to use Spring Batch 2.1.x because we are using Spring 3.0.x. Therefore an upgrade to 2.2.x is not an option.
What is your need? To extract field properties as formatted text? You can:
- use a FormatterLineAggregator if your needs are not too complicated
- write your own CustomEditorsFieldExtractor (better; see the sketch below)
- generate a composite domain object made of the original domain object plus a text-formatted counterpart, and use the latter as the writer's input (but this breaks your current processor/writer)
To write the headings, use FlatFileItemWriter.headerCallback: if it is set, it allows you to write a custom header.
Writing, in your case, seems a pain compared to the read process because Spring Batch's reading components happen to fit your needs. The standard components fit the most common use cases and cover a lot of scenarios, but sometimes we just have to write a custom FieldExtractor! :)
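A minimal sketch of such a CustomEditorsFieldExtractor (the class itself and its property names are hypothetical, not part of Spring Batch; it mirrors on the write side what BeanWrapperFieldSetMapper does on the read side):
import java.beans.PropertyEditor;
import java.util.HashMap;
import java.util.Map;

import org.springframework.batch.item.file.transform.FieldExtractor;
import org.springframework.beans.BeanWrapper;
import org.springframework.beans.BeanWrapperImpl;

public class CustomEditorsFieldExtractor<T> implements FieldExtractor<T> {

    private String[] names;
    private Map<Class<?>, PropertyEditor> customEditors =
            new HashMap<Class<?>, PropertyEditor>();

    public Object[] extract(T item) {
        BeanWrapper wrapper = new BeanWrapperImpl(item);
        Object[] values = new Object[names.length];
        for (int i = 0; i < names.length; i++) {
            Object value = wrapper.getPropertyValue(names[i]);
            PropertyEditor editor =
                    (value == null) ? null : customEditors.get(value.getClass());
            if (editor != null) {
                // Format the value through the editor: the reverse of what
                // BeanWrapperFieldSetMapper does when parsing the input file.
                editor.setValue(value);
                values[i] = editor.getAsText();
            } else {
                values[i] = value;
            }
        }
        return values;
    }

    public void setNames(String[] names) {
        this.names = names;
    }

    public void setCustomEditors(Map<Class<?>, PropertyEditor> customEditors) {
        this.customEditors = customEditors;
    }
}
It can then be wired into the DelimitedLineAggregator in place of BeanWrapperFieldExtractor. Note that, unlike BeanWrapperFieldSetMapper, this sketch keys the map by Class rather than by class name, and since PropertyEditor instances are stateful and not thread-safe it is only suitable for single-threaded steps.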