Passing fragmentRootElementName as a parameter to the XML - spring-batch

Is it possible to send fragmentRootElementName as a parameter to the job XML file? I have two processes: one is plan and the other is contract. I divided my job into reading the file from the database, converting it to an object, and then publishing it to web services. The reading part first reads a property file, which tells us whether the process is a plan or a contract, and accordingly we need to call the corresponding process. I implemented the flow for plan, but is it possible to pass the fragmentRootElementName as a parameter, since it would be different for plan and contract?
Thanks

Yes, you can, using late binding via scope="step" like this:
<bean id="myReader" class="org.springframework.batch.item.xml.StaxEventItemReader" scope="step">
<property name="fragmentRootElementName" value="#{jobParameters['rootFragmentName']}" />
<!-- Other properties -->
</bean>
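For completeness, a minimal sketch of how the parameter could be supplied at launch time; the jobLauncher and job references are assumed to come from your application context:
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;

// "rootFragmentName" must match the key used in #{jobParameters['rootFragmentName']} above;
// jobLauncher and job are assumed to be fetched from the application context.
JobParameters params = new JobParametersBuilder()
        .addString("rootFragmentName", "plan") // or "contract"
        .toJobParameters();
jobLauncher.run(job, params);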

Related

I need some suggestions to develop a Spring Batch application

I need to send a template to greet a person on their birthday.
So I want to keep the whole HTML template in the database, and that template should be sent automatically to that person by a Spring Batch email scheduler.
I need some ideas.
First of all, you should store this file in the database (as a BLOB). It doesn't need to be part of the batch process, because you only need to do this once. Alternatively, you can add it as your first step: verify whether the template exists in the table, and insert it if it does not.
<batch:job id="greetJob">
<batch:step id="insertTemplateStep" next="sendEmailStep" >
<batch:tasklet ref="insertTemplate" />
</batch:step>
<batch:step id="sendEmailStep">
<batch:tasklet ref="sendEmail" />
</batch:step>
</batch:job>
The second step is where you read from the database and send the email. A tasklet is probably enough, because you need to read from the database but you don't need to write anything; you just send an e-mail.
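For illustration, a minimal sketch of what the sendEmail tasklet might look like, assuming Spring's JavaMailSender and a hypothetical EMAIL_TEMPLATE table (recipient lookup omitted); this class would be declared as the sendEmail bean referenced above:
import javax.mail.internet.MimeMessage;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.mail.javamail.MimeMessageHelper;

public class SendEmailTasklet implements Tasklet {

    private JdbcTemplate jdbcTemplate; // inject via setter
    private JavaMailSender mailSender; // inject via setter

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        // Hypothetical table/column names: load the stored HTML template
        String html = jdbcTemplate.queryForObject(
                "SELECT TEMPLATE_HTML FROM EMAIL_TEMPLATE WHERE NAME = ?",
                new Object[] { "birthday" }, String.class);

        MimeMessage message = mailSender.createMimeMessage();
        MimeMessageHelper helper = new MimeMessageHelper(message, "UTF-8");
        helper.setTo("person@example.com"); // would come from your birthday query
        helper.setSubject("Happy Birthday!");
        helper.setText(html, true);         // true = send as HTML
        mailSender.send(message);
        return RepeatStatus.FINISHED;
    }

    public void setJdbcTemplate(JdbcTemplate jdbcTemplate) { this.jdbcTemplate = jdbcTemplate; }
    public void setMailSender(JavaMailSender mailSender) { this.mailSender = mailSender; }
}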

How can I set the data adapter when running a report via the REST v2 API

I am using Jasper Server 6.3 Community Edition. I have a set of reports that I want to be able to execute against different databases. I have a data adapter defined for each database. I want to be able to execute any of these reports via the REST v2 API and just tell it to use a different data adapter.
The piece of XML that is pertinent to this setting in the jrxml is:
<property name="net.sf.jasperreports.data.adapter" value="common\test.xml"/>
The ideal situation would be to be able to pass any value I want to replace common\test.xml.
From what I can tell, though, there is no built-in parameter to set the data adapter, and one cannot use a parameter to set this either. In other words, this won't work:
<property name="net.sf.jasperreports.data.adapter" value="$P!{data_adapter_path}"/>
So how can the net.sf.jasperreports.data.adapter value be set dynamically using the REST v2 API?
The easiest solution would be to upgrade to at least JasperReports Server 6.4.0 where you could use a propertyExpression instead of property, like so (I am posting only the relevant fragment):
<propertyExpression name="net.sf.jasperreports.data.adapter"><![CDATA[$P{DataAdapterLocation}]]></propertyExpression>
<parameter name="DataAdapterLocation" class="java.lang.String" evaluationTime="Early">
<defaultValueExpression><![CDATA["default/path/to/DataAdapterFile"]]></defaultValueExpression>
</parameter>
The evaluationTime="Early" on the parameter is essential for this to work.
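Once the report exposes DataAdapterLocation as a parameter, it can be passed in the body of a reportExecutions call to the REST v2 service. A sketch, where the host, report URI, and adapter path are placeholders (element names follow the JasperReports Server REST v2 reportExecutions service):
POST http://localhost:8080/jasperserver/rest_v2/reportExecutions
Content-Type: application/xml

<reportExecutionRequest>
    <reportUnitUri>/reports/myReport</reportUnitUri>
    <outputFormat>pdf</outputFormat>
    <parameters>
        <reportParameter name="DataAdapterLocation">
            <value>common/other.xml</value>
        </reportParameter>
    </parameters>
</reportExecutionRequest>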
Another, more complex solution that would not require an upgrade would involve creating JDBC data sources (via the interface or the REST API) and assigning them to the appropriate Report Unit. This post shows how you could achieve that using the jrs-rest-java-client.

ETL implementation using Spring Batch

I need to implement an ETL application for one of the projects I am working on.
It has the following steps:
1. Read from a table to retrieve some values that will be passed in as job parameters.
2. The returned object of step 1 will be used to retrieve some data from a second table.
3. Read from a flat file; this data will be used along with the values from step 2. Apply the business logic, then write to a table.
We are using Spring Data JPA and Spring Integration.
The challenge I am facing is reading the values from a table to retrieve the parameters for the job and then launching the job.
Then the output of step 2 has to be sent along with the file information for further processing.
I know how to implement the above steps independently, but I am struggling to tie them together end to end.
Any ideas on designing the above would be great. Thanks in advance.
I'll try to give you some ideas for your different points.
1 - Read table values and pass them as Job Parameters
I see two solutions here:
You could do a "manual" query (i.e. without Spring Batch), and then apply your business logic to pass the results as JobParameters (you just need a JobLauncher or a CommandLineJobRunner; see the Spring Batch documentation §4.4):
JobLauncher jobLauncher = (JobLauncher) context.getBean("jobLauncher");
Job job = (Job) context.getBean(jobName);
// Do your business logic and your database query here.
// Create your parameters (resultOfQuery must be a String, Long, Double or Date)
JobParameter parameter = new JobParameter(resultOfQuery);
// Add them to a map
Map<String, JobParameter> parameters = new HashMap<String, JobParameter>();
parameters.put("yourParameter", parameter);
// Pass them to the job
JobParameters jobParameters = new JobParameters(parameters);
JobExecution execution = jobLauncher.run(job, jobParameters);
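For reference, with the CommandLineJobRunner mentioned above the same parameter would be passed as a key=value pair on the command line (the context file and job name here are placeholders):
java org.springframework.batch.core.launch.support.CommandLineJobRunner classpath:/launch-context.xml yourJob yourParameter=resultOfQuery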
The other solution would be to add a JobExecutionListener and override the method beforeJob to do your query and then save the results in the executionContext (which you can then access with #{jobExecutionContext['name']}).
@Override
public void beforeJob(JobExecution jobExecution) {
// Do your business logic and your database query here.
jobExecution.getExecutionContext().put(key, value);
}
In each case, you can use a Spring Batch ItemReader to do your query. You can, for example, declare an item reader as a field on your listener (don't forget the setter) and configure it like this:
<batch:listener>
    <bean class="xx.xx.xx.YourListener">
        <property name="reader">
            <bean class="org.springframework.batch.item.database.JdbcCursorItemReader">
                <property name="dataSource" ref="dataSource" />
                <property name="sql" value="${yourSQL}" />
                <property name="rowMapper">
                    <bean class="xx.xx.xx.YourRowMapper" />
                </property>
            </bean>
        </property>
    </bean>
</batch:listener>
2 - Read a table depending on results from the previous step
Once more, you can use the JobExecutionContext to store and retrieve data between steps. You can implement a StepExecutionListener and override the method beforeStep to access the StepExecution, which will lead you to the JobExecution.
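A minimal sketch of such a listener; the context key is a hypothetical name:
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;

public class PreviousStepResultListener implements StepExecutionListener {

    private Object previousResult;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // Walk from the StepExecution up to the JobExecution and read what
        // an earlier step stored there ("resultFromStep2" is a hypothetical key).
        previousResult = stepExecution.getJobExecution()
                .getExecutionContext().get("resultFromStep2");
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        return null; // returning null keeps the step's own exit status
    }
}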
3 - Send results from the table read along with results of the file read
There is no "default" CompositeItemReader that would let you read from two sources at the same time, but I don't think that's what you actually want to do anyway.
For your case, I would declare the "table reader" as the reader in a <batch:chunk> and then declare a custom ItemProcessor which would have another ItemReader field. This reader would be your FlatFileItemReader. You can then manually start the read and apply your business logic in the process method, as sketched below.
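A minimal sketch of that processor, assuming a hypothetical TableRow item type and a flat file read as plain String lines:
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.file.FlatFileItemReader;

// TableRow is a hypothetical type: whatever the chunk's "table reader" returns.
public class TableAndFileItemProcessor implements ItemProcessor<TableRow, TableRow> {

    private FlatFileItemReader<String> fileReader; // your configured FlatFileItemReader

    @Override
    public TableRow process(TableRow item) throws Exception {
        fileReader.open(new ExecutionContext()); // we drive the read manually
        try {
            String line;
            while ((line = fileReader.read()) != null) {
                // Apply your business logic combining 'item' (from the table)
                // and 'line' (from the flat file) here.
            }
        } finally {
            fileReader.close();
        }
        return item;
    }

    public void setFileReader(FlatFileItemReader<String> fileReader) {
        this.fileReader = fileReader;
    }
}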

Using Spring Batch Admin

I have looked around quite a bit at Spring Batch and Spring Batch Admin. My question is as follows. I understand that the Spring Batch meta-tables do not store an attribute 'jobId' as such, but rather the 'job name', which is the value passed as the 'id' in the <job/> bean. I want something of the following sort. For example:
<job id="myJob">
<property name="jobId" value="123"/>
</job>
That is, for my specific requirement I want to display the 'jobId' against the respective 'jobName'. So I have created another table that holds the 'jobName' and the 'jobId'. But I am unable to make any progress on how to go about making the Spring Batch Admin UI pick up the 'jobId' given the 'jobName' from my table and display it on the Admin screen. Or is there any other way through which Spring Batch Admin could pick up the jobId? For instance, would it make sense to have a class extend 'SimpleJob' and then make the job a child of this class? Say, something like this:
class MyJob extends SimpleJob {
    private int jobId;
}
// And then in the config file
<bean id="baseJob" class="...MyJob" />
<job id="myJob" parent="baseJob">
    <property name="jobId" value="123"/>
</job>
By the way, I am using spring-batch-admin-manager and spring-batch-admin-resources version '1.3.1.RELEASE', and the Spring Batch version is '2.1.8.RELEASE'.
Would someone please share some pointers?
Thanks
What is the Spring Batch version that you are using?
A while ago, when I was using Spring Batch 2.1.8, it used to insert the jobId, jobName, jobStatus and time too.

Spring Batch usage, or how to launch Jobs within a Job

TL;DR: How should one create Spring Batch Jobs from within a Spring Batch Job? Transaction boundaries seem to be the problem. This seems to be a classic question, but here it goes again:
I have the following use case: I need to poll an FTP server and store the XML files found there as blobs in the database. Each XML file has 0...N entries of interest that I need to send to the external web service, and I need to store the response. Responses can be non-retryable or retryable, and I need to store each request and its responses for auditing purposes.
The domain/JPA model is as follows: Batch (contains XML blob) contains
0-N BatchRow objects. BatchRow contains data to be sent to the web
service and it also contains 1...N BatchRowHistory objects holding status
information about web service calls.
I was asked to implement this using Spring Batch (Spring Integration could have been another possibility, since this is a case of integration). I've struggled with different approaches, and I find this task much more complex, and therefore more difficult, than it IMHO should be.
I've split the tasks into the following jobs:
Job1:
Step11: Fetch the file and store it in the database as a blob.
Step12: Split the XML into entries and store those entries in the db.
Step13: Create a Job2 and launch it for each entry stored in Step12. Set the 'Job2 created' flag in the domain model database for those entries.
Job2:
Step21: Call the web service for each entry and store the result in the db. Retry and skip logic lives here. Job2 instances possibly need manual restarting, etc.
The logic behind this structure is that Job1 runs on a periodic schedule (once a minute or so). Job2 instances run whenever they exist, until they have either succeeded or their retry limit is used up and they have failed. The domain model basically stores only results, and Spring Batch is responsible for running the show. Manual relaunches etc. can be handled via Spring Batch Admin (at least I hope so). Also, each Job2 has the BatchRow's id in its JobParameters map so it can be viewed in Spring Batch Admin.
Question 1: Does this job structure make sense? That is, creating a new Spring Batch Job for each row in the db kind of seems to defeat the purpose and reinvent the wheel at some level.
Question 2: How do I create those Job2 entries in Step13?
I first ran into problems with transactions and the JobRepository, but succeeded in launching a few jobs with the following setup:
<batch:step id="Step13" parent="stepParent">
<batch:tasklet>
<batch:transaction-attributes propagation="NEVER"/>
<batch:chunk reader="rowsWithoutJobReader" processor="batchJobCreator" writer="itemWriter"
commit-interval="10" />
</batch:tasklet>
</batch:step>
<bean id="stepParent" class="org.springframework.batch.core.step.item.FaultTolerantStepFactoryBean" abstract="true"/>
Please note that commit-interval="10" means this can currently create up to 10 jobs and that's it... because batchJobCreator calls the JobLauncher.run method and that goes swimmingly, BUT the itemWriter cannot write the BatchRows back to the database with updated information (the boolean jobCreated flag toggled on). The obvious reason for that is the propagation="NEVER" in transaction-attributes, but without it I can't create jobs with the jobLauncher.
Because the updates are not passed to the database, I get the same BatchRows again and they clutter the log with:
org.springframework.batch.retry.RetryException: Non-skippable exception in recoverer while processing; nested exception is org.springframework.batch.core.repository.JobExecutionAlreadyRunningException: A job execution for this job is already running: JobInstance: id=1, version=0, JobParameters=[{batchRowId=71}], Job=[foo.bar]
at org.springframework.batch.core.step.item.FaultTolerantChunkProcessor$2.recover(FaultTolerantChunkProcessor.java:278)
at org.springframework.batch.retry.support.RetryTemplate.handleRetryExhausted(RetryTemplate.java:420)
at org.springframework.batch.retry.support.RetryTemplate.doExecute(RetryTemplate.java:289)
at org.springframework.batch.retry.support.RetryTemplate.execute(RetryTemplate.java:187)
at org.springframework.batch.core.step.item.BatchRetryTemplate.execute(BatchRetryTemplate.java:215)
at org.springframework.batch.core.step.item.FaultTolerantChunkProcessor.transform(FaultTolerantChunkProcessor.java:287)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.process(SimpleChunkProcessor.java:190)
at org.springframework.batch.core.step.item.ChunkOrientedTasklet.execute(ChunkOrientedTasklet.java:74)
at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:386)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:130)
at org.springframework.batch.core.step.tasklet.TaskletStep$2.doInChunkContext(TaskletStep.java:264)
at org.springframework.batch.core.scope.context.StepContextRepeatCallback.doInIteration(StepContextRepeatCallback.java:76)
at org.springframework.batch.repeat.support.RepeatTemplate.getNextResult(RepeatTemplate.java:367)
at org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:214)
at org.springframework.batch.repeat.support.RepeatTemplate.iterate(RepeatTemplate.java:143)
at org.springframework.batch.core.step.tasklet.TaskletStep.doExecute(TaskletStep.java:250)
at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:195)
at org.springframework.batch.core.job.SimpleStepHandler.handleStep(SimpleStepHandler.java:135)
at org.springframework.batch.core.job.flow.JobFlowExecutor.executeStep(JobFlowExecutor.java:61)
at org.springframework.batch.core.job.flow.support.state.StepState.handle(StepState.java:60)
at org.springframework.batch.core.job.flow.support.SimpleFlow.resume(SimpleFlow.java:144)
at org.springframework.batch.core.job.flow.support.SimpleFlow.start(SimpleFlow.java:124)
at org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:135)
at org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:293)
at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:120)
at java.lang.Thread.run(Thread.java:680)
That means the job has already been created in Spring Batch, and it tries to create those jobs again on later executions of Step13. I could circumvent this by setting the jobCreated flag to true in Job2/Step21, but that feels kind of kludgy and wrong to me.
Question 3: I had a more domain-object-driven approach: Spring Batch Jobs scanning the domain tables using pretty elaborate JPQL queries and JPA item readers. The problem with this approach is that it does not use Spring Batch's finer features. The history and retry logic are the problem. I would need to code the retry logic into the JPQL queries directly (for example, if a BatchRow has more than 3 BatchRowHistory elements, it has failed and needs to be manually re-examined). Should I bite the bullet and continue with this approach instead of trying to create an individual Spring Batch Job for each web service call?
Software info if needed: Spring Batch 2.1.9, Hibernate 4.1.2, Spring
3.1.2, Java 6.
Thank you in advance and sorry for the long story, Timo
Edit 1:
The reason why I think I need to spawn new jobs is this:
Loop until the reader returns null or an exception is thrown:
    Transaction start
    reader - processor - writer loop for the whole N rows
    Transaction end for batch size N
Each failed entry is the problem; I want manually restartable executions (Jobs are the only things that are restartable in Spring Batch Admin, right?) for each row in the batch, so that I could use Spring Batch Admin to view failed jobs (with their job parameters, which contain row ids from the domain db) and restart those, etc. How do I accomplish this kind of behaviour without spawning jobs and storing the history in the domain db?
OK, I hate responding with questions... but I need to know something:
1) If your input files are XML, why don't you use the StaxEventItemReader on them and simply persist your entries in step 1?
2) Starting a second job from a step!!! I don't even know if it would work... but IMO... it smells ;-)
Why don't you just define another step that uses a JdbcCursorItemReader to read your entries, calls the web service in an ItemProcessor, and then writes the result to the database?
Maybe I don't understand your requirement to create a different job for every call to the web service!
I did something similar to your use case, and it was done using this scenario:
Job 1:
step 1: read XML, process POJO -> domain obj, write domain obj to DB
Job 2:
step 1: read obj from DB, process = call WS, write response to DB
This was simple and worked very well (including the restart and skip features); a sketch of the second job follows.
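For illustration, a minimal sketch of what the second job could look like in XML; the reader/processor/writer bean names are assumptions:
<!-- Job 2: read entries from the DB, call the web service in the processor,
     write the response back. Skip logic is declared on the chunk. -->
<batch:job id="callWebServiceJob">
    <batch:step id="callWebServiceStep">
        <batch:tasklet>
            <batch:chunk reader="entryReader" processor="webServiceProcessor"
                         writer="responseWriter" commit-interval="10" skip-limit="5">
                <batch:skippable-exception-classes>
                    <batch:include class="java.lang.Exception"/>
                </batch:skippable-exception-classes>
            </batch:chunk>
        </batch:tasklet>
    </batch:step>
</batch:job>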
Hope it helps.
Regards