I am using CronTriggerBean and SimpleTriggerBean with the Quartz scheduler to execute triggers. After the triggers execute, the details about them are not saved permanently: before execution the data is stored, but after execution it is deleted. What is the problem? I am using the following configuration:
<prop key="org.quartz.plugin.triggHistory.class">org.quartz.plugins.history.LoggingTriggerHistoryPlugin</prop>
<prop key="org.quartz.plugin.triggHistory.triggerFiredMessage">Trigger {1}.{0} fired job {6}.{5} at: {4, date, HH:mm:ss dd/MM/yyyy}</prop>
<prop key="org.quartz.plugin.triggHistory.triggerCompleteMessage">Trigger {1}.{0} completed firing job {6}.{5} at {4, date, HH:mm:ss dd/MM/yyyy} with resulting trigger instruction code: {9}</prop>
<prop key="org.quartz.plugin.triggHistory.triggerMisfiredMessage">Trigger [{1}.{0}] misfired job [{6}.{5}]. Should have fired at: {3, date, dd-MM-yyyy HH:mm:ss.SSS}</prop>
<prop key="org.quartz.plugin.jobHistory.class">org.quartz.plugins.history.LoggingJobHistoryPlugin</prop>
<prop key="org.quartz.plugin.jobHistory.jobToBeFiredMessage">Job [{1}.{0}] to be fired by trigger [{4}.{3}], re-fire: {7}</prop>
<prop key="org.quartz.plugin.jobHistory.jobSuccessMessage">Job {1}.{0} fired at: {2, date, dd/MM/yyyy HH:mm:ss} result=OK</prop>
<prop key="org.quartz.plugin.jobHistory.jobFailedMessage">Job {1}.{0} fired at: {2, date, dd/MM/yyyy HH:mm:ss} result=ERROR</prop>
<prop key="org.quartz.plugin.jobHistory.jobWasVetoedMessage">Job [{1}.{0}] was vetoed. It was to be fired by trigger [{4}.{3}] at: {2, date, dd-MM-yyyy HH:mm:ss.SSS}</prop>
You need to create a new history plugin which saves to the database, and use it instead of the org.quartz.plugins.history ones:
public class LoggingInDbTriggerHistoryPlugin : ISchedulerPlugin, ITriggerListener
{
    ...

    public virtual void TriggerComplete(..)
    {
        // save in db
    }
}
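On the Java side, a minimal sketch of the same idea (assuming the Quartz 2.x listener API; TriggerHistoryDao is a hypothetical DAO that performs the INSERT, not part of Quartz):

import org.quartz.JobExecutionContext;
import org.quartz.Trigger;
import org.quartz.Trigger.CompletedExecutionInstruction;
import org.quartz.listeners.TriggerListenerSupport;

public class LoggingInDbTriggerHistoryListener extends TriggerListenerSupport {

    // hypothetical DAO that writes one history row per firing
    private final TriggerHistoryDao triggerHistoryDao;

    public LoggingInDbTriggerHistoryListener(TriggerHistoryDao triggerHistoryDao) {
        this.triggerHistoryDao = triggerHistoryDao;
    }

    @Override
    public String getName() {
        return "loggingInDbTriggerHistoryListener";
    }

    @Override
    public void triggerComplete(Trigger trigger, JobExecutionContext context,
                                CompletedExecutionInstruction triggerInstructionCode) {
        // persist the firing details instead of only logging them
        triggerHistoryDao.save(trigger.getKey().toString(),
                               context.getJobDetail().getKey().toString(),
                               context.getFireTime(),
                               triggerInstructionCode.name());
    }
}

The listener would then be registered once at startup, e.g. scheduler.getListenerManager().addTriggerListener(new LoggingInDbTriggerHistoryListener(dao)). With the Quartz 1.x API used by CronTriggerBean/SimpleTriggerBean, the triggerComplete signature takes an int instruction code rather than the enum.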
I am running jobs in parallel. My job execution is always null when I use JobRepositoryFactoryBean. I need to use it, because without it I cannot use the metadata tables, and I want to restart my job when it does not complete because of some failure; for that I need the previous record, which I will be fetching from the metadata tables. If I use MapJobRepositoryFactoryBean instead, the job execution is not null, but then there is no insertion into the metadata tables.
I referred to this link:
My job is always null. Can't inject a batch job with Spring Batch. Why?
But the solution in that link is not working for me.
My configuration is:
<bean id="batchScheduler" class="com.abc.BatchScheduler">
<property name="jobLauncher" ref="jobLauncher" />
<property name="jobtwo" ref="JobTwo" />
</bean>
I searched a lot. Please help me out. I am not able to proceed.
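For reference, a database-backed job repository (so job executions are stored in the metadata tables) is usually wired roughly like the sketch below; the bean ids dataSource and transactionManager are assumptions, not taken from your configuration:

<bean id="jobRepository"
      class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="transactionManager" ref="transactionManager" />
</bean>

<bean id="jobLauncher"
      class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
    <property name="jobRepository" ref="jobRepository" />
</bean>

Restart data for a failed run is then read from those metadata tables on the next launch with the same identifying job parameters.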
Our Spring Batch application is, upon restart of a failed job, processing the same records again, resulting in duplicate rows, and we want to understand how to avoid this.
The Spring Integration poller which starts the batch job is configured to run every couple of hours. When it runs a second time, the job parameters will be the same, but if the previous run failed (for example, because of a DataTruncation exception), Spring Batch will not complain that the job has already completed.
At the point of failure, several hundred thousand records will already have been processed and copied from the source table to the destination table. When the job is run a subsequent time, the same rows will be copied to the destination table, resulting in duplicates. Therefore, it appears that the job is not being resumed, but restarted from the beginning.
The Spring Batch database is Derby (file based); this is set up when the application starts, and it appears state is not maintained between restarts of the actual application (because a job can be run again with the same parameters). However, within one application run, state is maintained. For instance, if the job completes successfully, the next time the poller runs an exception will be thrown because a job (with those parameters) has already completed.
Our job definition is as follows:
<batch:job id="publisherJob" >
<batch:step id="step1">
<batch:tasklet >
<batch:chunk reader="itemReader" processor="itemProcessor"
writer="itemWriter" commit-interval="${...}" />
</batch:tasklet>
<batch:listeners>
...
</batch:listeners>
</batch:step>
</batch:job>
<bean id="itemReader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="dataSource" />
<property name="sql" value="select ${...} from ${...} where ${...}" />
<property name="rowMapper" ref="rowMapper" />
</bean>
The WHERE clause includes ORDER BY.
Our understanding was that Spring Batch would retain the state at which processing failed and proceed from that point (if the error in the source table has been fixed), therefore preventing duplicate rows. What has to be configured for this to happen?
Thanks
Spring Batch maintains state in that it remembers how many records were processed, not specifically which ones. Because of that, it's up to you to guarantee the order of the items is reproducible from run to run so that if we process 100 records in run 1 and fail, when we skip the first 100 records in run 2, those are the right 100 records to skip. You didn't provide the configuration for your JdbcCursorItemReader but my assumption is that you are not using an order by in your SQL. If you want restartability, you need some way to guarantee the order of the items. Using an order by in your SQL is the easiest way to accomplish this (there are others like using the process indicator pattern if that's needed).
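For illustration, a hedged sketch of the reader with an explicit ORDER BY (the column and table names here are placeholders, not the actual ones from the job):

<bean id="itemReader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
    <property name="dataSource" ref="dataSource" />
    <!-- a stable ORDER BY makes "skip the first N rows" land on the same rows after a restart -->
    <property name="sql" value="select id, payload from source_table where status = 'NEW' order by id" />
    <property name="rowMapper" ref="rowMapper" />
    <!-- saveState is true by default; the reader records its item count in the step's ExecutionContext -->
    <property name="saveState" value="true" />
</bean>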
We have a requirement to pause a job before application maintenance. We are using Quartz 2.2.1 in cluster. Database is oracle.
I have developed a screen with "Pause" functionality. I observed that "pause" works fine until I start the server again. The moment I start server, TRIGGER_STATE of QRTZ_TRIGGERS table resets to "WAITING".
Can anyone please provide a hint.
Thanks a lot in advance.
Rgds - Roy
If you have set overwriteExistingJobs=true (note that the default value is false), then each time the server starts it loads the jobs/triggers from the configuration file and replaces the existing ones (those with the same job/trigger names), thereby overwriting the triggers and their states too, as in your case.
You could try setting overwriteExistingJobs=false in the SchedulerFactoryBean. This may not be convenient for you, however, since if you ever change the job configuration on the server, the existing jobs with the old configuration will remain in the database.
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
....
<property name="overwriteExistingJobs" value="false"/>
<property name="triggers">
<list>
....
</list>
</property>
....
</bean>
I am using Jasperserver 4.5.0 Pro. I have developed a custom data-source for some additional feature. All reports that use this custom DS get executed properly and show the correct output when executed manually. But when the same reports are scheduled using Jasper's report job scheduler, there is some problem with session initiation, and hence the reports do not get executed.
Let me explain this a bit.
For manual execution of reports -
As part of the custom DS, I had to update the following two XMLs:
viewReportFlow.xml :
I updated the action state 'runReport' to use our custom DS executer action bean method 'xmlHttpDsExecuterAction.setUpSession' to start the session. Please see the runReport tag below:
<action-state id="runReport" xmlns:b="http://www.springframework.org/schema/webflow" xmlns:xi="http://www.w3.org/2001/XInclude">
<on-entry>
<evaluate expression="xmlHttpDsExecuterAction.setUpSession"/>
</on-entry>
<evaluate expression="viewReportActionBean"/>
<transition on="success" to="reportOutput"/>
<on-exit>
<evaluate expression="xmlHttpDsExecuterPageAction.setIndex"/>
</on-exit>
</action-state>
viewReportBeans.xml :
I defined the executer action beans used in above flow xml here -
<bean id="xmlHttpDsExecuterAction" class="com.sigma.reporting.xmlhttpds.XmlHttpDsExecuterAction" xmlns:xsi="http://www.w 3.org/2001/XMLSchema-instance"/> <bean id="xmlHttpDsExecuterPageAction" class="com.sigma.reporting.xmlhttpds.XmlHttpDsExecuterPageAction" xmlns:xsi="http://www.w 3.org/2001/XMLSchema-instance">
<property name="requestParameterPageIndex" value="pageIndex"/>
<property name="flowAttributePageIndex" value="pageIndex"/>
<property name="xmlHttpDataSourceName" value="com.sigma.reporting.xmlhttpds.XmlHttpDsExecuterDataSourceService"/>
<property name="repository">
<ref bean="repositoryService"/>
</property>
<property name="jasperPrintName" value="jasperPrintName"/>
<property name="reportUnitObject" value="reportUnitObject"/> </bean>
For job scheduling of reports:
I want to implement something similar for the scheduler. During my investigation I have tried to analyze the scheduler flow and to put in our changes, but with no luck so far. Can anyone please let me know which flows are used for running reports via the scheduler, and also recommend where to configure the custom DS as above?
Finally, after understanding the flow of the JasperServer scheduler, I have found the solution.
To set up your custom data source beans and call the function, we need to define the bean in $JASPER_HOME/apache-tomcat/webapps/jasperserver-pro/WEB-INF/flows/reportJobBeans.xml, and we can then use this bean in reportJobFlow.xml in the jobOutput view-state, like this:
<view-state id="jobOutput" view="modules/reportScheduling/jobOutput">
<on-entry>
<set name="flowScope.prevForm" value="'jobOutput'"/>
<evaluate expression="reportOptionsJobEditAction.setOutputReferenceData"/>
<evaluate expression="xmlHttpDsExecuterAction.setUpSession"/>
</on-entry>
</view-state>
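For completeness, the matching bean definition in reportJobBeans.xml would look roughly like this (a sketch reusing the executer action class from the interactive flow above):

<bean id="xmlHttpDsExecuterAction"
      class="com.sigma.reporting.xmlhttpds.XmlHttpDsExecuterAction"/>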
I am having problems connecting to two persistence units from within the same transaction, using the following tech stack:
WLS 10.3.x, Eclipselink 2.1, Oracle 11g JDBC driver, Informix 10 JDBC driver
Using inputs from this SO post, I made the Oracle data source XA compliant and set the Informix DS to "Emulate Two-Phase Commit", and things started to work. However, now I am getting a strange problem.
I am using a standalone Java client to invoke my EJB 3 SLSB, which in turn invokes the JPA entities. The problem I am facing is that it works the first time; the second time it does not throw any exception but does not update the data in either database; and the third time it throws an exception stating "Transaction has already been committed", as if the application server's JTA transaction manager is holding on to the original transaction context. Please note that these three invocations are separate and sequential; each invocation completes with the client exiting the client process. The problem is very consistent and happens in exactly the same sequence every time I restart the app server.
Appreciate any input!
<persistence-unit name="TopLinkDB" transaction-type="JTA">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>jdbc/oracleDS</jta-data-source>
<class>com.home.domain.Property</class>
<properties>
<property name="eclipselink.target-server" value="WebLogic_10" />
</properties>
</persistence-unit>
<persistence-unit name="TopLinkINFO" transaction-type="JTA">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>jdbc/infoDS</jta-data-source>
<class>com.home.domain.GlobalNumber</class>
<properties>
<property name="eclipselink.target-server" value="WebLogic_10" />
</properties>
</persistence-unit>
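For context, the SLSB touches both persistence units inside one container-managed JTA transaction, roughly like the sketch below; the interface, class, and method names are assumptions, only the unit names and entity classes come from the configuration above:

import javax.ejb.Remote;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import com.home.domain.GlobalNumber;
import com.home.domain.Property;

@Remote
interface HomeFacade {
    void saveAcrossDatabases(Property property, GlobalNumber number);
}

@Stateless
public class HomeFacadeBean implements HomeFacade {

    // Oracle-backed persistence unit (XA data source)
    @PersistenceContext(unitName = "TopLinkDB")
    private EntityManager oracleEm;

    // Informix-backed persistence unit (emulated two-phase commit)
    @PersistenceContext(unitName = "TopLinkINFO")
    private EntityManager informixEm;

    // both writes enlist in the same container-managed JTA transaction
    public void saveAcrossDatabases(Property property, GlobalNumber number) {
        oracleEm.persist(property);
        informixEm.persist(number);
    }
}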