I need to publish notification events to external systems over JMS when data is updated. I'd like this to happen within the same transaction in which the objects are committed to the database, to ensure integrity.
The ApplicationLifecycle events that spring-data-rest emits seemed like the logical place to implement this logic.
@org.springframework.transaction.annotation.Transactional
public class TestEventListener extends AbstractRepositoryEventListener<Object> {

    private static final Logger LOG = LoggerFactory.getLogger(TestEventListener.class);

    @Override
    protected void onBeforeCreate(Object entity) {
        LOG.info("XXX before create");
    }

    @Override
    protected void onBeforeSave(Object entity) {
        LOG.info("XXX before save");
    }

    @Override
    protected void onAfterCreate(Object entity) {
        LOG.info("XXX after create");
    }

    @Override
    protected void onAfterSave(Object entity) {
        LOG.info("XXX after save");
    }
}
However, these events are published before the transaction starts and after it completes:
08 15:32:37.119 [http-nio-9000-exec-1] INFO n.c.v.vcidb.TestEventListener - XXX before create
08 15:32:37.135 [http-nio-9000-exec-1] TRACE o.s.t.i.TransactionInterceptor - Getting transaction for [org.springframework.data.jpa.repository.support.SimpleJpaRepository.save]
08 15:32:37.432 [http-nio-9000-exec-1] TRACE o.s.t.i.TransactionInterceptor - Completing transaction for [org.springframework.data.jpa.repository.support.SimpleJpaRepository.save]
08 15:32:37.479 [http-nio-9000-exec-1] INFO n.c.v.vcidb.TestEventListener - XXX after create
What extension point does spring-data-rest provide for adding behaviour that will execute within the Spring-managed transaction?
I use AOP (a pointcut and transaction advice) to solve this problem:
@Configuration
@ImportResource("classpath:/aop-config.xml")
public class AopConfig { ...
and aop-config.xml:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd"
default-autowire="byName">
<aop:config>
<aop:pointcut id="restRepositoryTx"
expression="execution(* org.springframework.data.rest.webmvc.RepositoryEntityController.*(..))" />
<aop:advisor id="managerTx" advice-ref="txAdvice" pointcut-ref="restRepositoryTx" order="20" />
</aop:config>
<tx:advice id="txAdvice" transaction-manager="transactionManager">
<tx:attributes>
<tx:method name="postCollectionResource*" propagation="REQUIRES_NEW" rollback-for="Exception" />
<tx:method name="putItemResource*" propagation="REQUIRES_NEW" rollback-for="Exception" />
<tx:method name="patchItemResource*" propagation="REQUIRES_NEW" rollback-for="Exception" />
<tx:method name="deleteItemResource*" propagation="REQUIRES_NEW" rollback-for="Exception" />
<!-- <tx:method name="*" rollback-for="Exception" /> -->
</tx:attributes>
</tx:advice>
</beans>
This is the same as having the controller methods annotated with @Transactional.
The solution described by phlebas works. I also think "run event handlers within the same transaction" should be a feature provided by Spring Data REST. There are many common use cases that need logic split out into separate event handlers, much like triggers in a database. The version shown below is the same as phlebas' solution.
@Aspect
@Component
public class SpringDataRestTransactionAspect {

    private TransactionTemplate transactionTemplate;

    public SpringDataRestTransactionAspect(PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
        this.transactionTemplate.setName("around-data-rest-transaction");
    }

    @Pointcut("execution(* org.springframework.data.rest.webmvc.*Controller.*(..))")
    public void aroundDataRestCall() {}

    @Around("aroundDataRestCall()")
    public Object aroundDataRestCall(ProceedingJoinPoint joinPoint) throws Throwable {
        return transactionTemplate.execute(transactionStatus -> {
            try {
                return joinPoint.proceed();
            } catch (Throwable e) {
                transactionStatus.setRollbackOnly();
                if (e instanceof RuntimeException) {
                    throw (RuntimeException) e;
                } else {
                    throw new RuntimeException(e);
                }
            }
        });
    }
}
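For completeness, a minimal configuration sketch for registering such an aspect with Java config (assuming the aspect is not already picked up by component scanning; the class name TransactionAspectConfig is just an example):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
@EnableAspectJAutoProxy
public class TransactionAspectConfig {

    // Explicit registration of the aspect; with @Component and component
    // scanning enabled this bean definition would not be needed.
    @Bean
    public SpringDataRestTransactionAspect springDataRestTransactionAspect(
            PlatformTransactionManager transactionManager) {
        return new SpringDataRestTransactionAspect(transactionManager);
    }
}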
I have not worked with spring-data-rest, but with Spring this can be handled the following way.
1) Define a custom TransactionSynchronizationAdapter and register it with the TransactionSynchronizationManager.
Usually, I have a registerSynchronization method with a @Before pointcut for this.
@SuppressWarnings("rawtypes")
@Before("@annotation(org.springframework.transaction.annotation.Transactional)")
public void registerSynchronization() {
    // TransactionStatus transStatus = TransactionAspectSupport.currentTransactionStatus();
    TransactionSynchronizationManager.registerSynchronization(this);
    final String transId = UUID.randomUUID().toString();
    TransactionSynchronizationManager.setCurrentTransactionName(transId);
    transactionIds.get().push(transId);
    if (TransactionSynchronizationManager.isActualTransactionActive() && TransactionSynchronizationManager
            .isSynchronizationActive() && !TransactionSynchronizationManager.isCurrentTransactionReadOnly()) {
        if (!TransactionSynchronizationManager.hasResource(KEY)) {
            final List<NotificationPayload> notifications = new ArrayList<NotificationPayload>();
            TransactionSynchronizationManager.bindResource(KEY, notifications);
        }
    }
}
2) Then implement the afterCompletion override as follows:
@Override
public void afterCompletion(final int status) {
    CurrentContext context = null;
    try {
        context = ExecutionContext.get().getContext();
    } catch (final ContextNotFoundException ex) {
        logger.debug("Current Context is not available");
        return;
    }
    if (status == STATUS_COMMITTED) {
        transactionIds.get().removeAllElements();
        publishedEventStorage.sendAllStoredNotifications();
        // customize here for commit actions
    } else if ((status == STATUS_ROLLED_BACK) || (status == STATUS_UNKNOWN)) {
        // you can write your code for rollback actions
    }
}
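To tie this back to the original JMS use case, here is a minimal sketch of a synchronization that publishes the collected notifications only after the database transaction has committed. NotificationPayload and the "notifications" destination name are assumptions carried over from the snippet above, and the class name is hypothetical:

import java.util.ArrayList;
import java.util.List;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.transaction.support.TransactionSynchronizationAdapter;

// Collects payloads during the transaction and publishes them over JMS
// once the commit has succeeded.
public class JmsPublishingSynchronization extends TransactionSynchronizationAdapter {

    private final JmsTemplate jmsTemplate;
    private final List<NotificationPayload> notifications = new ArrayList<>();

    public JmsPublishingSynchronization(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void add(NotificationPayload payload) {
        notifications.add(payload);
    }

    @Override
    public void afterCommit() {
        // "notifications" is an assumed JMS destination name.
        notifications.forEach(n -> jmsTemplate.convertAndSend("notifications", n));
    }
}

It would be registered from within an active transaction via TransactionSynchronizationManager.registerSynchronization(...), exactly as in step 1) above.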
Related
I'm trying to set up a simple Java <-> C#/.NET proof of concept using Apache Geode, specifically testing the continuous query functionality with the .NET native client. Using a regular Query works fine from .NET; only the continuous query has an issue. I run into my problem when I call the Execute() method on the continuous query object. The specific error I get is
Got unhandled message type 26 while processing response, possible serialization mismatch
I'm only storing simple strings in the cache region, so I'm a bit surprised that I'm having serialization issues. I've tried enabling PDX serialization on both sides (and running without it); it doesn't seem to make a difference. Any ideas?
Here is my code for both sides:
Java
Starts a server, puts some data, and then keeps updating a given cache entry.
public class GeodePoc {
public static void main(String[] args) throws Exception {
ServerLauncher serverLauncher = new ServerLauncher.Builder().setMemberName("server1")
.setServerBindAddress("localhost").setServerPort(10334).set("start-locator", "localhost[20341]")
.set(ConfigurationProperties.LOG_LEVEL, "trace")
.setPdxReadSerialized(true)
.set(ConfigurationProperties.CACHE_XML_FILE, "cache.xml").build();
serverLauncher.start();
Cache c = CacheFactory.getAnyInstance();
Region<String, String> r = c.getRegion("example_region");
r.put("test1", "value1");
r.put("test2", "value2");
System.out.println("Cache server successfully started");
int i = 0;
while (true) {
r.put("test1", "value" + i);
System.out.println(r.get("test1"));
Thread.sleep(3000);
i++;
}
}
}
Server cache.xml
<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns="http://geode.apache.org/schema/cache" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://geode.apache.org/schema/cache http://geode.apache.org/schema/cache/cache-1.0.xsd"
version="1.0">
<cache-server bind-address="localhost" port="40404"
max-connections="100" />
<pdx>
<pdx-serializer>
<class-name>org.apache.geode.pdx.ReflectionBasedAutoSerializer</class-name>
<parameter name="classes">
<string>java.lang.String</string>
</parameter>
</pdx-serializer>
</pdx>
<region name="example_region">
<region-attributes refid="REPLICATE" />
</region>
</cache>
.NET Client
public static void GeodeTest()
{
Properties<string, string> props = Properties<string, string>.Create();
props.Insert("cache-xml-file", "<path-to-cache.xml>");
CacheFactory cacheFactory = new CacheFactory(props)
.SetPdxReadSerialized(true).SetPdxIgnoreUnreadFields(true)
.Set("log-level", "info");
Cache cache = cacheFactory.Create();
cache.TypeRegistry.PdxSerializer = new ReflectionBasedAutoSerializer();
IRegion<string, string> region = cache.GetRegion<string, string>("example_region");
Console.WriteLine(region.Get("test2", null));
PoolManager pManager = cache.GetPoolManager();
Pool pool = pManager.Find("serverPool");
QueryService qs = pool.GetQueryService();
// Regular query example (works)
Query<string> q = qs.NewQuery<string>("select * from /example_region");
ISelectResults<string> results = q.Execute();
Console.WriteLine("Finished query");
foreach (string result in results)
{
Console.WriteLine(result);
}
// Continuous Query (does not work)
CqAttributesFactory<string, object> cqAttribsFactory = new CqAttributesFactory<string, object>();
ICqListener<string, object> listener = new CacheListener<string, object>();
cqAttribsFactory.InitCqListeners(new ICqListener<string, object>[] { listener });
cqAttribsFactory.AddCqListener(listener);
CqAttributes<string, object> cqAttribs = cqAttribsFactory.Create();
CqQuery<string, object> cquery = qs.NewCq<string, object>("select * from /example_region", cqAttribs, false);
Console.WriteLine(cquery.GetState());
Console.WriteLine(cquery.QueryString);
Console.WriteLine(">>> Cache query example started.");
cquery.Execute();
Console.WriteLine();
Console.WriteLine(">>> Example finished, press any key to exit ...");
Console.ReadKey();
}
.NET Cache Listener
public class CacheListener<TKey, TResult> : ICqListener<TKey, TResult>
{
public virtual void OnEvent(CqEvent<TKey, TResult> ev)
{
object val = ev.getNewValue() as object;
TKey key = ev.getKey();
CqOperation opType = ev.getQueryOperation();
string opStr = "DESTROY";
if (opType == CqOperation.OP_TYPE_CREATE)
opStr = "CREATE";
else if (opType == CqOperation.OP_TYPE_UPDATE)
opStr = "UPDATE";
Console.WriteLine("MyCqListener::OnEvent called with key {0}, op {1}.", key, opStr);
}
public virtual void OnError(CqEvent<TKey, TResult> ev)
{
Console.WriteLine("MyCqListener::OnError called");
}
public virtual void Close()
{
Console.WriteLine("MyCqListener::close called");
}
}
.NET Client cache.xml
<client-cache
xmlns="http://geode.apache.org/schema/cache"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://geode.apache.org/schema/cache http://geode.apache.org/schema/cache/cache-1.0.xsd"
version="1.0">
<pool name="serverPool" subscription-enabled="true">
<locator host="localhost" port="20341"/>
</pool>
<region name="example_region">
<region-attributes refid="CACHING_PROXY" pool-name="serverPool" />
</region>
</client-cache>
This ended up being a simple oversight on my part: for continuous queries to work, you must include the geode-cq dependency on the Java side. I hadn't done this, and that caused the exception.
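For reference, the Maven coordinates look roughly like this (the version property is a placeholder for whichever Geode release you are using):

<dependency>
    <groupId>org.apache.geode</groupId>
    <artifactId>geode-cq</artifactId>
    <version>${geode.version}</version>
</dependency>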
I have an RCP project and currently want to modify the Project Explorer. I wrote an additional ContentProvider (implementing ICommonContentProvider) and an additional LabelProvider (implementing ICommonLabelProvider). In my plugin.xml I added the following:
<extension
id="navigator-viewbinding"
point="org.eclipse.ui.navigator.viewer">
<viewerContentBinding
viewerId="org.eclipse.ui.navigator.ProjectExplorer">
<includes>
<contentExtension
isRoot="true"
pattern="de.myapp.application.EditorResourceContent">
</contentExtension>
</includes>
</viewerContentBinding>
</extension>
And:
<extension
id="navigator-content"
point="org.eclipse.ui.navigator.navigatorContent">
<navigatorContent
activeByDefault="true"
contentProvider="de.myapp.application.ProjectExplorerContentProvider"
icon="icon.gif"
id="de.myapp.application.EditorResourceContent"
labelProvider="de.myapp.application.ProjectExplorerLabelProvider"
name="Editor Decoration"
priority="highest">
<triggerPoints>
<or>
<instanceof
value="org.eclipse.core.resources.IFile">
</instanceof>
<instanceof
value="org.eclipse.core.resources.IFolder">
</instanceof>
<instanceof
value="org.eclipse.jdt.internal.core.PackageFragment">
</instanceof>
<instanceof
value="org.eclipse.core.resources.IWorkspaceRoot" />
<instanceof
value="org.eclipse.core.resources.IProject" />
</or>
</triggerPoints>
<possibleChildren>
<or>
<instanceof
value="org.eclipse.core.resources.IWorkspaceRoot" />
<instanceof
value="org.eclipse.core.resources.IProject" />
<instanceof
value="org.eclipse.core.resources.IResource" />
<instanceof
value="org.eclipse.core.resources.IFolder" />
<instanceof
value="org.eclipse.core.resources.IFile" />
<instanceof
value="org.eclipse.jdt.internal.core.PackageFragment" />
</or>
</possibleChildren>
</navigatorContent>
</extension>
When I start the editor, the Project Explorer won't show any projects until I make a right mouse click; then all projects are loaded. When I open the tree, I see no error markers on my modified icons, although the icons themselves are shown. I do see the error markers on the (unmodified) packages, and I even see the EGit decorators, but not the red markers for the errors.
I am also including both providers below; maybe that will help you give me some hints for both of my problems.
ContentProvider:
public class ProjectExplorerContentProvider implements
ICommonContentProvider
{
private static final Object[] NO_CHILDREN = {};
@Override
public Object[] getElements(Object inputElement) {
return getChildren(inputElement);
}
@Override
public Object[] getChildren(Object parentElement) {
Object[] children = null;
if(IWorkspaceRoot.class.isInstance(parentElement))
{
IProject[] projects = ((IWorkspaceRoot)parentElement).getProjects();
children = createParents(projects);
}
else
{
children = NO_CHILDREN;
}
return children;
}
private Object[] createParents(IProject[] projects)
{
Object[] result;
List<Object> list = new ArrayList<Object>();
for (int i = 0; i < projects.length; i++) {
try {
if(projects[i].hasNature("org.eclipse.xtext.ui.shared.xtextNature"))
list.add(projects[i]);
} catch (CoreException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
result = new Object[list.size()];
list.toArray(result);
return result;
}
@Override
public Object getParent(Object element) {
Object parent = null;
if(IProject.class.isInstance(element))
{
parent = ((IProject)element).getWorkspace().getRoot();
}
return parent;
}
@Override
public boolean hasChildren(Object element) {
boolean hasChildren = false;
if(IWorkspaceRoot.class.isInstance(element))
{
hasChildren = ((IWorkspaceRoot)element).getProjects().length > 0;
}
return hasChildren;
}
@Override
public void restoreState(IMemento aMemento) {
}
@Override
public void saveState(IMemento aMemento) {
}
@Override
public void init(ICommonContentExtensionSite aConfig) {
}
}
And here the LabelProvider:
public class ProjectExplorerLabelProvider implements ICommonLabelProvider{
/****/
@Override
public void addListener(ILabelProviderListener listener) {
// TODO Auto-generated method stub
}
@Override
public void dispose() {
// TODO Auto-generated method stub
}
@Override
public boolean isLabelProperty(Object element, String property) {
// TODO Auto-generated method stub
return false;
}
@Override
public void removeListener(ILabelProviderListener listener) {
// TODO Auto-generated method stub
}
@Override
public Image getImage(Object anElement) {
if (anElement instanceof File) {
File fi = (File) anElement;
if (fi.getFileExtension().equalsIgnoreCase("mydsl")) {
return Activator.getImage("icons/img1.png");
} else {
try {
InputStream inputStream = fi.getContents();
String content =
ContentFactory.getInstance().toStringInputStream(inputStream);
inputStream.close();
if (content.contains("some string")) {
return Activator.getImage("icons/img2.png");
} else if (content.contains("some other string")) {
return Activator.getImage("icons/img3.png");
} else if (content.contains("some other string")) {
return Activator.getImage("icons/img4.png");
}
} catch (CoreException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
} else if (anElement instanceof Folder) {
Folder fo = (Folder) anElement;
} else if (anElement instanceof PackageFragment) {
PackageFragment pf = (PackageFragment) anElement;
}
return null;
}
@Override
public String getText(Object element) {
// TODO Auto-generated method stub
return null;
}
@Override
public void restoreState(IMemento aMemento) {
// TODO Auto-generated method stub
}
@Override
public void saveState(IMemento aMemento) {
// TODO Auto-generated method stub
}
@Override
public String getDescription(Object anElement) {
// TODO Auto-generated method stub
return null;
}
@Override
public void init(ICommonContentExtensionSite aConfig) {
}
}
When I start the Editor, the Project Explorer won't show any projects,
until I make a right mouse click. Then all projects are loaded.
For this problem you can try this:
Update 2015-08-24: if in the final application the ProjectExplorer
content is not visible by its own (but only after you forced an
updated, e.g., via opening the context menu with a right-click), try
this: override the getDefaultPageInput() method of the
ApplicationWorkbenchAdvisor and add the following line:
return ResourcesPlugin.getWorkspace().getRoot();
Source:
https://dirksmetric.wordpress.com/2012/08/01/tutorial-eclipse-rcp-e4-with-3-x-views-like-project-explorer-properties-etc/
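A minimal sketch of that override, assuming an advisor class along the lines of the linked tutorial (the perspective ID is a placeholder):

import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.IAdaptable;
import org.eclipse.ui.application.WorkbenchAdvisor;

public class ApplicationWorkbenchAdvisor extends WorkbenchAdvisor {

    @Override
    public IAdaptable getDefaultPageInput() {
        // Give the Project Explorer the workspace root as its initial input
        // so it has content to show on startup.
        return ResourcesPlugin.getWorkspace().getRoot();
    }

    @Override
    public String getInitialWindowPerspectiveId() {
        return "de.myapp.application.perspective"; // placeholder perspective ID
    }
}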
When I open the tree, I see no error markers on my modified icons, but
the icons themself are shown.
I think you are overriding the icons together with their decorations, so it is clear why no decorations appear on them.
Running only one job instance at a time works, as described in: Spring batch restrict single instance of job only
public class jobMailListener implements JobExecutionListener {
// active JobExecution, used as a lock.
private JobExecution _active;
public void beforeJob(JobExecution jobExecution) {
// create a lock
synchronized (jobExecution) {
if (_active != null && _active.isRunning()) {
//***************************//
// Should the execution be created/stored in a queue here?
//****************************//
jobExecution.stop();
} else {
_active = jobExecution;
}
}
}
public void afterJob(JobExecution jobExecution) {
// release the lock
synchronized (jobExecution) {
if (jobExecution == _active) {
_active = null;
}
}
}
}
<batch:job id="envoiMail" restartable="true">
<batch:listeners><batch:listener ref="jobMailListener"/>
<batch:step id="prepareData">...
I would not stop the jobs but queue them instead. Could Spring Integration be used for this?
I looked at http://incomplete-code.blogspot.fr/2013/03/spring-batch-running-only-one-job.html#comment-form but it is not functional.
I'm looking for a standard pattern for automatically retrying failed jobs within Spring XD for a configured number of times and after a specified delay. Specifically, I have an HTTP item reader job that is triggered periodically from a cron stream. Occasionally we see the HTTP item reader fail due to network blips so we want the job to automatically try again.
I've tried a JobExecutionListener which picks up when a job has failed, but the tricky bit is actually retrying the failed job. I can do it by triggering an HTTP PUT to the XD admin controller (e.g. http://xd-server:9393/jobs/executions/2?restart=true),
which successfully retries the job. However, I want to be able to:
Specify a delay before retrying
Have some sort of audit within XD to indicate the job will be retried in X seconds.
Adding the delay can be done within the JobExecutionListener, but it involves spinning off a thread with a delay, which isn't really traceable from the XD container, so it's difficult to see whether a job is about to be retried or not.
It appears that you need a specific job definition that performs delayed job retries in order to get any trace of them from the XD container.
Can anyone suggest a pattern for this?
So here's the solution I went for in the end:
Created a job execution listener
public class RestartableBatchJobExecutionListener extends JobExecutionListenerSupport {
private Logger logger = LoggerFactory.getLogger(this.getClass());
public final static String JOB_RESTARTER_NAME = "jobRestarter";
/**
* A list of valid exceptions that are permissible to restart the job on
*/
private List<Class<Throwable>> exceptionsToRestartOn = new ArrayList<Class<Throwable>>();
/**
* The maximum number of times the job can be re-launched before failing
*/
private int maxRestartAttempts = 0;
/**
* The amount of time to wait in milliseconds before restarting a job
*/
private long restartDelayMs = 0;
/**
* Map of all the jobs against how many times they have been attempted to restart
*/
private HashMap<Long,Integer> jobInstanceRestartCount = new HashMap<Long,Integer>();
@Autowired(required=false)
@Qualifier("aynchJobLauncher")
JobLauncher aynchJobLauncher;
@Autowired(required=false)
@Qualifier("jobRegistry")
JobLocator jobLocator;
/*
* (non-Javadoc)
* @see org.springframework.batch.core.JobExecutionListener#afterJob(org.springframework.batch.core.JobExecution)
*/
@Override
public void afterJob(JobExecution jobExecution) {
super.afterJob(jobExecution);
// Check if we can restart if the job has failed
if( jobExecution.getExitStatus().equals(ExitStatus.FAILED) )
{
applyRetryPolicy(jobExecution);
}
}
/**
* Executes the restart policy if one has been specified
*/
private void applyRetryPolicy(JobExecution jobExecution)
{
String jobName = jobExecution.getJobInstance().getJobName();
Long instanceId = jobExecution.getJobInstance().getInstanceId();
if( exceptionsToRestartOn.size() > 0 && maxRestartAttempts > 0 )
{
// Check if the job has failed for a restartable exception
List<Throwable> failedOnExceptions = jobExecution.getAllFailureExceptions();
for( Throwable reason : failedOnExceptions )
{
if( exceptionsToRestartOn.contains(reason.getClass()) ||
exceptionsToRestartOn.contains(reason.getCause().getClass()) )
{
// Get our restart count for this job instance
Integer restartCount = jobInstanceRestartCount.get(instanceId);
if( restartCount == null )
{
restartCount = 0;
}
// Only restart if we haven't reached our limit
if( ++restartCount < maxRestartAttempts )
{
try
{
reLaunchJob(jobExecution, reason, restartCount);
jobInstanceRestartCount.put(instanceId, restartCount);
}
catch (Exception e)
{
String message = "The following error occurred while attempting to re-run job " + jobName + ":" + e.getMessage();
logger.error(message,e);
throw new RuntimeException( message,e);
}
}
else
{
logger.error("Failed to successfully execute jobInstanceId {} of job {} after reaching the maximum restart limit of {}. Abandoning job",instanceId,jobName,maxRestartAttempts );
try
{
jobExecution.setStatus(BatchStatus.ABANDONED);
}
catch (Exception e)
{
throw new RuntimeException( "The following error occurred while attempting to abandon job " + jobName + ":" + e.getMessage(),e);
}
}
break;
}
}
}
}
/**
* Re-launches the configured job with the current job execution details
* @param jobExecution
* @param reason
* @throws JobParametersInvalidException
* @throws JobInstanceAlreadyCompleteException
* @throws JobRestartException
* @throws JobExecutionAlreadyRunningException
*/
private void reLaunchJob( JobExecution jobExecution, Throwable reason, int restartCount ) throws JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException, JobParametersInvalidException
{
try
{
Job jobRestarter = jobLocator.getJob(JOB_RESTARTER_NAME);
JobParameters jobParameters =new JobParametersBuilder().
addLong("delay",(long)restartDelayMs).
addLong("jobExecutionId", jobExecution.getId()).
addString("jobName", jobExecution.getJobInstance().getJobName())
.toJobParameters();
logger.info("Re-launching job with name {} due to exception {}. Attempt {} of {}", jobExecution.getJobInstance().getJobName(), reason, restartCount, maxRestartAttempts);
aynchJobLauncher.run(jobRestarter, jobParameters);
}
catch (NoSuchJobException e)
{
throw new RuntimeException("Failed to find the job restarter with name=" + JOB_RESTARTER_NAME + " in container context",e);
}
}
}
Then in the module definition, I add this job listener to the job:
<batch:job id="job">
<batch:listeners>
<batch:listener ref="jobExecutionListener" />
</batch:listeners>
<batch:step id="doReadWriteStuff" >
<batch:tasklet>
<batch:chunk reader="itemReader" writer="itemWriter"
commit-interval="3">
</batch:chunk>
</batch:tasklet>
</batch:step>
</batch:job>
<!-- Specific job execution listener that attempts to restart failed jobs -->
<bean id="jobExecutionListener"
class="com.mycorp.RestartableBatchJobExecutionListener">
<property name="maxRestartAttempts" value="3"></property>
<property name="restartDelayMs" value="60000"></property>
<property name="exceptionsToRestartOn">
<list>
<value>com.mycorp.ExceptionIWantToRestartOn</value>
</list>
</property>
</bean>
<!--
Specific job launcher that restarts jobs in a separate thread. This is important as the delayedRestartJob
fails on the HTTP call otherwise!
-->
<bean id="executor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="maxPoolSize" value="10"></property>
</bean>
<bean id="aynchJobLauncher"
class="com.mycorp.AsyncJobLauncher">
<property name="jobRepository" ref="jobRepository" />
<property name="taskExecutor" ref="executor" />
</bean>
AysncJobLauncher:
public class AsyncJobLauncher extends SimpleJobLauncher
{
@Override
@Async
public JobExecution run(final Job job, final JobParameters jobParameters)
throws JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException,
JobParametersInvalidException
{
return super.run(job, jobParameters);
}
}
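One note on the @Async annotation above: in an XML-configured module it is only processed if annotation-driven task execution is enabled. A minimal sketch, assuming the Spring task namespace is declared in the module XML:

<!-- Enables processing of @Async so AsyncJobLauncher.run() is dispatched
     to the "executor" thread pool rather than the calling thread. -->
<task:annotation-driven executor="executor" />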
I then have a separate processor module purely for restarting jobs after a delay (this allows us audit from the spring XD ui or db):
delayedJobRestart.xml:
<batch:job id="delayedRestartJob">
<batch:step id="sleep" next="restartJob">
<batch:tasklet ref="sleepTasklet" />
</batch:step>
<batch:step id="restartJob">
<batch:tasklet ref="jobRestarter" />
</batch:step>
</batch:job>
<bean id="sleepTasklet" class="com.mycorp.SleepTasklet" scope="step">
<property name="delayMs" value="#{jobParameters['delay'] != null ? jobParameters['delay'] : '${delay}'}" />
</bean>
<bean id="jobRestarter" class="com.mycorp.HttpRequestTasklet" init-method="init" scope="step">
<property name="uri" value="http://${xd.admin.ui.host}:${xd.admin.ui.port}/jobs/executions/#{jobParameters['jobExecutionId'] != null ? jobParameters['jobExecutionId'] : '${jobExecutionId}'}?restart=true" />
<property name="method" value="PUT" />
</bean>
delayedJobProperties:
# Job execution ID
options.jobExecutionId.type=Long
options.jobExecutionId.description=The job execution ID of the job to be restarted
# Job execution name
options.jobName.type=String
options.jobName.description=The name of the job to be restarted. This is more for monitoring purposes
# Delay
options.delay.type=Long
options.delay.description=The delay in milliseconds this job will wait until triggering the restart
options.delay.default=10000
and accompanying helper beans:
SleepTasklet:
public class SleepTasklet implements Tasklet
{
private static Logger logger = LoggerFactory.getLogger(SleepTasklet.class);
@Override
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception
{
logger.debug("Pausing current job for {}ms",delayMs);
Thread.sleep( delayMs );
return RepeatStatus.FINISHED;
}
private long delayMs;
public long getDelayMs()
{
return delayMs;
}
public void setDelayMs(long delayMs)
{
this.delayMs = delayMs;
}
}
HttpRequestTasklet:
public class HttpRequestTasklet implements Tasklet
{
private HttpClient httpClient = null;
private static final Logger LOGGER = LoggerFactory.getLogger(HttpRequestTasklet.class);
private String uri;
private String method;
/**
* Initialise HTTP connection.
* #throws Exception
*/
public void init() throws Exception
{
// Create client
RequestConfig config = RequestConfig.custom()
.setCircularRedirectsAllowed(true)
.setRedirectsEnabled(true)
.setExpectContinueEnabled(true)
.setRelativeRedirectsAllowed(true)
.build();
httpClient = HttpClientBuilder.create()
.setRedirectStrategy(new LaxRedirectStrategy())
.setDefaultRequestConfig(config)
.setMaxConnTotal(1)
.build();
}
@Override
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception
{
if (LOGGER.isDebugEnabled()) LOGGER.debug("Attempt HTTP {} from '" + uri + "'...",method);
HttpUriRequest request = null;
switch( method.toUpperCase() )
{
case "GET":
request = new HttpGet(uri);
break;
case "POST":
request = new HttpPost(uri);
break;
case "PUT":
request = new HttpPut(uri);
break;
default:
throw new RuntimeException("Http request method " + method + " not supported");
}
HttpResponse response = httpClient.execute(request);
// Check response status and, if valid wrap with InputStreamReader
StatusLine status = response.getStatusLine();
if (status.getStatusCode() != HttpStatus.SC_OK)
{
throw new Exception("Failed to get data from '" + uri + "': " + status.getReasonPhrase());
}
if (LOGGER.isDebugEnabled()) LOGGER.debug("Successfully issued request");
return RepeatStatus.FINISHED;
}
public String getUri()
{
return uri;
}
public void setUri(String uri)
{
this.uri = uri;
}
public String getMethod()
{
return method;
}
public void setMethod(String method)
{
this.method = method;
}
public HttpClient getHttpClient()
{
return httpClient;
}
public void setHttpClient(HttpClient httpClient)
{
this.httpClient = httpClient;
}
}
And finally when all is built and deployed, create your jobs as a pair (note, the restarter should be defined as "jobRestarter"):
job create --name myJob --definition "MyJobModule " --deploy true
job create --name jobRestarter --definition "delayedRestartJob" --deploy true
A little convoluted, but it seems to work.
I'm using Quartz and want to change its thread pool size via a remote JMX call, but unfortunately couldn't find any proper solution. Is it possible to change the configuration of the running job programmatically?
I used Quartz with Spring. In my web.xml I created a Spring ContextListener. My app starts the Quartz job and exposes two JMX methods to start and stop it on demand.
<listener>
<listener-class>za.co.lance.admin.infrastructure.ui.util.MBeanContextListener</listener-class>
</listener>
The MBeanContextListener class looks like this:
public class MBeanContextListener extends ContextLoaderListener {
private ObjectName objectName;
private static Logger logger = LoggerFactory.getLogger(MBeanContextListener.class);
@Override
public void contextDestroyed(final ServletContextEvent sce) {
super.contextDestroyed(sce);
logger.debug("=============> bean context listener destroy");
final MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
try {
mbeanServer.unregisterMBean(objectName);
logger.info("=============> QuartzJmx unregisterMBean ok");
} catch (final Exception e) {
e.printStackTrace();
}
}
@Override
public void contextInitialized(final ServletContextEvent sce) {
super.contextInitialized(sce);
logger.debug("=============> bean context listener started");
final MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
try {
final QuartzJmx processLatestFailedDocumentsMbean = new QuartzJmx();
Scheduler scheduler = (Scheduler) ContextLoader.getCurrentWebApplicationContext().getBean("runProcessLatestFailedDocumentsScheduler");
processLatestFailedDocumentsMbean.setScheduler(scheduler);
objectName = new ObjectName("za.co.lance.admin.infrastructure.jmx.mbeans:type=QuartzJmxMBean");
mbeanServer.registerMBean(processLatestFailedDocumentsMbean, objectName);
logger.info("=============> QuartzJmx registerMBean ok");
} catch (final Exception e) {
e.printStackTrace();
}
}
}
The QuartzJmx class. PLEASE NOTE: any MBean class (QuartzJmx) must have an interface whose name ends with MBean (QuartzJmxMBean).
#Component
public class QuartzJmx implements QuartzJmxMBean {
private Scheduler scheduler;
private static Logger LOG = LoggerFactory.getLogger(QuartzJmx.class);
@Override
public synchronized void suspendRunProcessLatestFailedDocumentsJob() {
LOG.info("Suspending RunProcessLatestFailedDocumentsJob");
if (scheduler != null) {
try {
if (scheduler.isStarted()) {
scheduler.standby();
LOG.info("RunProcessLatestFailedDocumentsJob suspended");
} else {
LOG.info("RunProcessLatestFailedDocumentsJob already suspended");
throw new SchedulerException("RunProcessLatestFailedDocumentsJob already suspended");
}
} catch (SchedulerException e) {
LOG.error(e.getMessage());
}
} else {
LOG.error("Cannot suspend RunProcessLatestFailedDocumentsJob. Scheduler = null");
throw new IllegalArgumentException("Cannot suspend RunProcessLatestFailedDocumentsJob. Scheduler = null");
}
}
@Override
public synchronized void startRunProcessLatestFailedDocumentsJob() {
LOG.info("Starting RunProcessLatestFailedDocumentsJob");
if (scheduler != null) {
try {
if (scheduler.isInStandbyMode()) {
scheduler.start();
LOG.info("RunProcessLatestFailedDocumentsJob started");
} else {
LOG.info("RunProcessLatestFailedDocumentsJob already started");
throw new SchedulerException("scheduler already started");
}
} catch (SchedulerException e) {
LOG.error(e.getMessage());
}
} else {
LOG.error("Cannot start RunProcessLatestFailedDocumentsJob. Scheduler = null");
throw new IllegalArgumentException("Cannot start RunProcessLatestFailedDocumentsJob. Scheduler = null");
}
}
@Override
public void setScheduler(Scheduler scheduler) {
this.scheduler = scheduler;
}
}
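The QuartzJmxMBean interface itself is not shown above; a minimal sketch of what it could look like, with the method signatures taken from the implementation:

import org.quartz.Scheduler;

public interface QuartzJmxMBean {

    void suspendRunProcessLatestFailedDocumentsJob();

    void startRunProcessLatestFailedDocumentsJob();

    void setScheduler(Scheduler scheduler);
}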
And finally, the Spring context:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
<bean id="runProcessLatestFailedDocumentsTask"
class="za.co.lance.admin.infrastructure.service.vbs.process.ProcessDocumentServiceImpl" />
<!-- Spring Quartz -->
<bean name="runProcessLatestFailedDocumentsJob" class="org.springframework.scheduling.quartz.JobDetailBean">
<property name="jobClass"
value="za.co.lance.admin.infrastructure.service.quartz.RunProcessLatestFailedDocuments" />
<property name="jobDataAsMap">
<map>
<entry key="processDocumentService" value-ref="runProcessLatestFailedDocumentsTask" />
</map>
</property>
</bean>
<!-- Cron Trigger -->
<bean id="processLatestFailedDocumentsTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="runProcessLatestFailedDocumentsJob" />
<!-- Cron-Expressions (separated with a space) fields are -->
<!-- Seconds Minutes Hours Day-of-Month Month Day-of-Week Year(optional) -->
<!-- Run every hour from 9am to 6pm from Monday to Saturday -->
<property name="cronExpression" value="0 0 9-18 ? * MON-SAT" />
</bean>
<!-- Scheduler -->
<bean id="runProcessLatestFailedDocumentsScheduler"
class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="jobDetails">
<list>
<ref bean="runProcessLatestFailedDocumentsJob" />
</list>
</property>
<property name="triggers">
<list>
<ref bean="processLatestFailedDocumentsTrigger" />
</list>
</property>
</bean>
</beans>