Invoking Stored Procedure using Spring JdbcBatchItemWriter - spring-batch

I would like to execute a stored procedure using Spring's JdbcBatchItemWriter. My current code looks like:
<bean id="xyzWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
......
<property name="sql" value="update abc where x=:paramX" />
......
</bean>
I would like to replace this update SQL query with a stored procedure call, and I would like to handle it in the XML file itself. Any help is really appreciated.
Thanks

Have you tried running the stored procedure through JdbcBatchItemWriter? I had the same requirement, tried it, and it worked for me:
<bean id="trackItemWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
<property name="dataSource" ref="mySQLDatasource"/>
<property name="itemPreparedStatementSetter">
<bean class="com.MyDataPreparedStatmentSetter"/>
</property>
<property name="assertUpdates" value="false" />
<property name="sql" value="Call my_Stored_Proc (?,?,?,?)"/>
</bean>
Hope it helps.
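For reference, the MyDataPreparedStatmentSetter wired above is a custom class. A minimal sketch of such an ItemPreparedStatementSetter, assuming (purely for illustration) that the step's item type is a plain Object[] holding the four procedure arguments in order, could look like this:

import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.springframework.batch.item.database.ItemPreparedStatementSetter;

// Binds each field of the item to the corresponding '?' of "Call my_Stored_Proc (?,?,?,?)".
public class MyDataPreparedStatmentSetter implements ItemPreparedStatementSetter<Object[]> {

    @Override
    public void setValues(Object[] item, PreparedStatement ps) throws SQLException {
        for (int i = 0; i < item.length; i++) {
            ps.setObject(i + 1, item[i]);
        }
    }
}

With assertUpdates set to false, as above, the writer will not complain that the stored procedure call reports no updated rows.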

Related

Spring Batch SQL Command with JobParameters

I am new to Spring Batch. I am getting some data from the DB using the following reader configuration, and I need to pass a value dynamically (through arguments).
<bean id="ItemReader"
class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="dataSource" />
<property name="sql">
<value>
<![CDATA[
select * from table where section = #{jobParameters['section']}
]]>
</value>
</property>
<property name="rowMapper">
<bean class="xyzRowMapper" />
</property>
</bean>
JUnit Code:
JobParameters jobParameters = new JobParametersBuilder()
    .addString("section", section)
    .toJobParameters();
Can anybody help with this?
As explained in §5.4 Late Binding of Job and Step Attributes of the official Spring Batch documentation, you need to add scope="step" to your reader bean:
Using a scope of Step is required in order to use late binding since
the bean cannot actually be instantiated until the Step starts, which
allows the attributes to be found. Because it is not part of the
Spring container by default, the scope must be added explicitly,
either by using the batch namespace or by including a bean definition
explicitly for the StepScope (but not both)
Giving this:
<bean id="ItemReader" scope="step" class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="dataSource" />
<property name="sql">
<value>
<![CDATA[
select * from table where section = #{jobParameters['section']}
]]>
</value>
</property>
<property name="rowMapper">
<bean class="xyzRowMapper" />
</property>
</bean>
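For completeness, the JobParameters built in the test are what #{jobParameters['section']} resolves against, but only if they are actually passed at launch time. A fragment of how that might look (assuming the job and jobLauncher beans come from the test's application context):

JobParameters jobParameters = new JobParametersBuilder()
        .addString("section", section)
        .toJobParameters();
// The late-bound SQL of the step-scoped reader now runs with the given section value.
JobExecution execution = jobLauncher.run(job, jobParameters);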

Spring Batch - FlatFileItemReader \001 delimiter issue

I am working on a Spring Batch application where I am using FlatFileItemReader to read a file with delimiter ~ or |, and it works fine: the processor is called once the read is completed.
But when I try to use \001 as the delimiter, the processor is not called and I also do not get any error in the console (Linux environment).
Example file format:
0002~000000000000000470~000006206210008078~PR~7044656907~7044641561~~~~240082202~~~ENG~CH~~19940926~D~~~AL~~~P~USA
This is my reader configuration.
<property name="resource" value="#{stepExecutionContext['fileResource']}" />
<!-- <property name="linesToSkip" value="1"></property> -->
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean
class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value="${file.delimiter}"/>
<property name="names" value="sor_id,sor_cust_id,acct_id,cust_role_type_cd,cust_full_nm,mailg_adr_line_1,mailg_adr_line_2,mailg_city_nm,mailg_geo_st_cd,mailg_full_pstl_cd,mailg_cntry_cd,mailg_adr_desc,phy_adr_line_1,phy_adr_line_2,phy_city_nm,phy_geo_st_cd,phy_full_pstl_cd,phy_cntry_cd,phy_adr_desc,home_phn_num,work_phn_num,mobile_phn_num,email_adr_txt,ssn,cust_tax_idn_num,gndr_cd,martl_cd,lang_cd,acct_stat_cd,cust_brth_dt,acct_open_dt,sor_acct_stat_cd,sor_acct_stat_desc,vld_phn_num_ind,prod_cd,prft_ctr_cd,bus_legl_strc_cd,acct_use_cd,cntry_of_origin_cd" />
</bean>
</property>
<property name="fieldSetMapper">
<bean class="com.cap1.cdi.batch.SrcMasterFieldSetMapper" />
</property>
</bean>
</property>
</bean>
Has anyone else faced the same kind of issue?
Regards,
Shankar
I am going to answer my own question.
The actual issue was that a control character (^A) was being used as the delimiter in Linux.
In Java, when I use string.split("\u0001") it works. Passing the same value to the Spring Batch FlatFileItemReader as the delimiter also works like a charm.
Thanks
Shankar.
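As a small illustration (a sketch, assuming the ${file.delimiter} placeholder is backed by a standard Java properties file): \001, ^A and \u0001 are the same control character, and the escape can be used wherever the delimiter value is defined.

// "\u0001" is the ^A control character used as the field separator in the file.
String line = "0002\u0001000000000000000470\u0001000006206210008078";
String[] fields = line.split("\u0001");
// fields -> { "0002", "000000000000000470", "000006206210008078" }

// If ${file.delimiter} above is resolved from a standard Java .properties file,
// the same escape works there too, because java.util.Properties resolves it to
// the real control character:
//   file.delimiter=\u0001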

Spring batch StoredProcedureItemReader

I get the following error when I run a stored procedure using Spring Batch:
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: L'index 0 du paramètre de sortie n'est pas valide. (Translation: output parameter index 0 is not valid.)
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:190)
at com.microsoft.sqlserver.jdbc.SQLServerCallableStatement.getterGetParam(SQLServerCallableStatement.java:354)
at com.microsoft.sqlserver.jdbc.SQLServerCallableStatement.getObject(SQLServerCallableStatement.java:659)
at org.springframework.batch.item.database.StoredProcedureItemReader.openCursor(StoredProcedureItemReader.java:214)
... 18 more
The stored procedure contains a CREATE TABLE command, which is responsible for the error.
The stored procedure:
ALTER PROCEDURE [dbo].[TEST]
@orgKeyParam bigint
AS
BEGIN
CREATE TABLE #tmpPatients (
programID bigint NOT NULL)
drop table #tmpPatients;
SELECT last_name from patient;
END
The StoredProcedureItemReader configuration:
<bean id="DBReader"
class="org.springframework.batch.item.database.StoredProcedureItemReader">
<property name="dataSource" ref="dataSource2" />
<property name="procedureName" value="[${sql.RPMDBName}].dbo.Test" />
<property name="fetchSize" value="50" />
<property name="parameters">
<list>
<bean class="org.springframework.jdbc.core.SqlParameter">
<constructor-arg index="0" value="orgKeyParam" />
<constructor-arg index="1">
<util:constant static-field="java.sql.Types.INTEGER" />
</constructor-arg>
</bean>
</list>
</property>
<property name="rowMapper" ref="dataRowMapper" />
<property name="preparedStatementSetter" ref="preparedStatementSetter" />
</bean>
<bean id="preparedStatementSetter"
class="org.springframework.batch.core.resource.ListPreparedStatementSetter" scope="step">
<property name="parameters">
<list>
<value>1</value>
</list>
</property>
</bean>
Use SET NOCOUNT ON in the SQL Server stored procedure.
It should be placed before the BEGIN statement.
Without seeing your stored procedure or the way you've configured your StoredProcedureItemReader, I can't give you an exact fix. However, the error says that your parameter at index 0 is invalid. By default, the StoredProcedureItemReader looks for the cursor that is returned to be the first out parameter. I'm going to assume that either you've configured your reader to look elsewhere or your stored procedure isn't returning a cursor for its first out parameter.
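For reference, a programmatic sketch of the reader configuration from the question, shown only to make the parameter and cursor wiring explicit (ColumnMapRowMapper stands in for the dataRowMapper bean):

import java.sql.Types;
import java.util.Map;

import javax.sql.DataSource;

import org.springframework.batch.item.database.StoredProcedureItemReader;
import org.springframework.jdbc.core.ColumnMapRowMapper;
import org.springframework.jdbc.core.SqlParameter;

public class TestReaderConfig {

    public StoredProcedureItemReader<Map<String, Object>> reader(DataSource dataSource) {
        StoredProcedureItemReader<Map<String, Object>> reader = new StoredProcedureItemReader<>();
        reader.setDataSource(dataSource);
        reader.setProcedureName("dbo.TEST");
        reader.setFetchSize(50);
        // One declared parameter, matching the procedure's input parameter
        reader.setParameters(new SqlParameter[] { new SqlParameter("orgKeyParam", Types.BIGINT) });
        // Bind its value (1, as the ListPreparedStatementSetter in the question does)
        reader.setPreparedStatementSetter(ps -> ps.setLong(1, 1L));
        reader.setRowMapper(new ColumnMapRowMapper());
        // setRefCursorPosition(...) would only be needed if the cursor came back
        // through a REF CURSOR out parameter rather than the procedure's result set.
        return reader;
    }
}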

In-memory Job-Explorer definition in Spring batch

I am trying to share my in-memory jobRepository with the jobExplorer, but it throws an error:
Nested exception is org.springframework.beans.ConversionNotSupportedException:
Failed to convert property value of type '$Proxy1 implementing
org.springframework.batch.core.repository.JobRepository,
org.springframework.aop.SpringProxy, org.springframework.aop.framework.Advised'
to required type
I even tried putting an '&' sign before jobRepository when passing it to the jobExplorer for sharing, but that attempt was in vain.
I am using Spring Batch 2.2.1
Does the jobExplorer dependency only work with a database-backed repository, and not an in-memory one?
The definition is:
<bean id="jobRepository"
class="com.test.repository.BatchRepositoryFactoryBean">
<property name="cache" ref="cache" />
<property name="transactionManager" ref="transactionManager" />
</bean>
<bean id="jobOperator" class="test.batch.LauncherTest.TestBatchOperator">
<property name="jobExplorer" ref="jobExplorer" />
<property name="jobRepository" ref="jobRepository" />
<property name="jobRegistry" ref="jobRegistry" />
<property name="jobLauncher" ref="jobLauncher" />
</bean>
<bean id="jobExplorer" class="test.batch.LauncherTest.TestBatchExplorerFactoryBean">
<property name="repositoryFactory" ref="&jobRepository" />
</bean>
<bean id="transactionManager"
class="org.springframework.batch.support.transaction.ResourcelessTransactionManager" />
<bean id="jobLauncher" class="com.scb.smartbatch.core.BatchLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
<!-- To store Batch details -->
<bean id="jobRegistry" class="com.scb.smartbatch.repository.SmartBatchRegistry" />
<bean id="jobRegistryBeanPostProcessor"
class="org.springframework.batch.core.configuration.support.JobRegistryBeanPostProcessor">
<property name="jobRegistry" ref="jobRegistry" />
</bean>
<!--Runtime cache of batch executions -->
<bean id="cache" class="com.scb.cache.TCRuntimeCache" />
Thanks for your valuable inputs.
Using '&' (written as &amp; in the XML) before the jobRepository reference allowed me to share it with my jobExplorer.
Problem solved. Kudos.
Usually you have to wire the interface instead of the implementation.
Otherwise, you probably have to add <aop:config proxy-target-class="true" /> to create a CGLIB-based proxy instead of a standard JDK dynamic proxy.
Read the official Spring documentation about that.
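A minimal sketch of the first suggestion (the class name here is hypothetical, not the one from the question):

import org.springframework.batch.core.repository.JobRepository;

// Declaring the dependency by its interface lets Spring inject the JDK dynamic proxy
// ($Proxy1 in the error), which implements JobRepository but is not the concrete
// repository/factory class that the failing property apparently expected.
public class CustomExplorerFactory {

    private JobRepository jobRepository;   // interface type, not the implementation class

    public void setJobRepository(JobRepository jobRepository) {
        this.jobRepository = jobRepository;
    }
}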

Usage of CustomEditor with BeanWrapperFieldExtractor just like with BeanWrapperFieldSetMapper

I have written a simple Spring Batch application that reads a CSV file, does some transforming and writes a modified CSV to the disk.
The reading of the file into domain objects works like a charm. I use DelimitedLineTokenizer to tokenize the lines and a BeanWrapperFieldSetMapper to feed the values into a bean:
<bean id="reader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="resource" value="#{jobParameters['inputResource']}" />
<property name="linesToSkip" value="1" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value=";" />
<property name="names"
value="ID,NAME,DESCRIPTION,PRICE,DATE" />
</bean>
</property>
<property name="fieldSetMapper">
<bean class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper">
<property name="targetType" value="myapp.MyDomainObject" />
<property name="customEditors">
<map>
<entry key="java.util.Date" value-ref="dateEditor" />
<entry key="java.math.BigDecimal" value-ref="numberEditor" />
</map>
</property>
</bean>
</property>
</bean>
</property>
</bean>
I especially like the way BeanWrapperFieldSetMapper can "guess" the field names, and the possibility to define custom editors, which I use to handle the special date and number formats in the input file.
Now I would like to write the modified file in the same format as the input file.
I use the following configuration:
<bean id="writer" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">
<property name="resource" value="#{jobParameters['outputResource']}" />
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value=";" />
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="id,name,description,price,date" />
</bean>
</property>
</bean>
</property>
</bean>
There are two things I miss with this configuration:
BeanWrapperFieldSetMapper allowed me to set CustomEditors, but BeanWrapperFieldExtractor has no such possibility. Is there a way to use these?
Is there a way to define the headings in the first line of the file? I have not found any way to write an initial line that is not a bean... It would be great to use the same names here as in BeanWrapperFieldSetMapper, such that BeanWrapperFieldExtractor writes the initial line and guesses the bean property names as BeanWrapperFieldSetMapper does.
The process to load files is so comfortable in Spring Batch. Why is the writing of files so different? Am I missing something?
I have to use Spring Batch 2.1.x because we are using Spring 3.0.x. Therefore an upgrade to 2.2.x is not an option.
What exactly do you need? To extract field properties as text? You can:
use a FormatterLineAggregator if your needs are not too complicated
write your own custom FieldExtractor that applies your editors (better)
generate a composite domain object made of the original domain object plus a text-formatted one, and pass the latter to the writer (but this breaks your current processor/writer)
For the header line, use FlatFileItemWriter.headerCallback: if set, it allows you to write a custom header.
Writing, in your case, seems painful compared to reading because Spring Batch's reading components happen to fit your needs. The standard components cover the most common use cases and a lot of scenarios, but sometimes we have to write a custom FieldExtractor ourselves! :)
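To illustrate the last two suggestions, here is a rough sketch; MyDomainObject's getters and the date/number formats are assumptions based on the reader configuration above:

import java.io.IOException;
import java.io.Writer;
import java.text.DecimalFormat;
import java.text.SimpleDateFormat;

import org.springframework.batch.item.file.FlatFileHeaderCallback;
import org.springframework.batch.item.file.transform.FieldExtractor;

// A custom extractor formats date/number fields itself, since BeanWrapperFieldExtractor
// offers no custom-editor support.
public class MyDomainObjectFieldExtractor implements FieldExtractor<MyDomainObject> {

    private final SimpleDateFormat dateFormat = new SimpleDateFormat("dd.MM.yyyy");
    private final DecimalFormat priceFormat = new DecimalFormat("#0.00");

    @Override
    public Object[] extract(MyDomainObject item) {
        return new Object[] {
            item.getId(),
            item.getName(),
            item.getDescription(),
            priceFormat.format(item.getPrice()),
            dateFormat.format(item.getDate())
        };
    }
}

// Writes the column names as the first line of the output file.
class HeaderCallback implements FlatFileHeaderCallback {

    @Override
    public void writeHeader(Writer writer) throws IOException {
        writer.write("ID;NAME;DESCRIPTION;PRICE;DATE");
    }
}

The extractor replaces BeanWrapperFieldExtractor inside the lineAggregator, and the header callback is wired on the FlatFileItemWriter through its headerCallback property.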