I have developed a Mule (3.3) application consisting of multiple scheduled events using a Quartz endpoint.
I have created a mule-config.xml and deployed it on the Mule server, and I can see the events triggered at the specified times.
Now, I need to implement a feature where I have a setting (configured in a properties file or XML file) to turn the Quartz scheduler ON/OFF.
ON means the events should be triggered and OFF means they should not. But I cannot see any such setting in Mule.
Any pointers on how to solve this would be helpful.
Thanks.
Personally, instead of trying to pause/restart the endpoint, I would let the Quartz events fire but switch off downstream processing with something similar to what's discussed here: Mule 3: Controlling whether a flow is allowed to be executed
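For example, here is a minimal sketch of that kind of switch as a custom Mule 3 filter. The class name, package, and the scheduler.enabled property are assumptions, not something Mule ships with:

```java
// Hypothetical on/off switch: a Mule 3 custom filter that checks a system property
// (e.g. -Dscheduler.enabled=false) before letting the Quartz-triggered message
// continue downstream. Property name and class are placeholders.
package com.example.mule;

import org.mule.api.MuleMessage;
import org.mule.api.routing.filter.Filter;

public class SchedulerEnabledFilter implements Filter {

    @Override
    public boolean accept(MuleMessage message) {
        // Only accept (i.e. let the flow continue) when the switch is ON.
        return Boolean.parseBoolean(System.getProperty("scheduler.enabled", "true"));
    }
}
```

You would then reference it from the flow with a custom-filter element (class="com.example.mule.SchedulerEnabledFilter") placed right after the quartz inbound endpoint, so the events still fire on schedule but are simply dropped when the flag is OFF.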
The design brief for my project includes the following:
Be able to spin up new instances of Windows Services at will and attach random message handlers to each for the purpose of logically grouping the handlers along service boundary lines (or any other arbitrary reason to group handlers).
The design we've chosen is to encapsulate a given message TYPE and all of its handlers into a single assembly (DLL). I am trying to generate subscriptions when the host starts and remove them from Raven when the host stops.
I have had success with their creation by implementing the IWantToRunWhenBusStartsAndStops interface within a particular message TYPE assembly. IWantToRunWhenBusStartsAndStops.Start() fires and I add the proper subscription there.
The removal is the issue I am trying to solve. The IWantToRunWhenBusStartsAndStops.Stop() method is only invoked when a control-break is issued manually.
Is there a different interface I should be implementing, or a different approach to the issue I should take?
Thanks in advance for your help!
The standard timeout of the Apache Felix Event Admin implementation (in the Felix configuration) is 5000 ms. How can I allow one or more event handlers to take longer, ideally programmatically?
If you don't want your event handler to be subject to blacklisting, you can execute the event as a job. Jobs are not subject to blacklisting and are guaranteed to run. See http://experiencedelivers.adobe.com/cemblog/en/experiencedelivers/2012/04/event_handling_incq.html for more info on processing a job from an event handler and http://sling.apache.org/apidocs/sling6/org/apache/sling/event/jobs/JobUtil.html#processJob(org.osgi.service.event.Event,%20org.apache.sling.event.jobs.JobProcessor) for executing your JobProcessor.
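As a rough, untested sketch of that pattern (the class name and the slow-work method are made up; the handler is assumed to be registered as an OSGi EventHandler for the relevant job topic):

```java
// The handler returns from handleEvent() almost immediately, so it is never
// blacklisted by Event Admin; JobUtil.processJob() invokes process() later on a
// background thread where the slow work can take as long as it needs.
import org.apache.sling.event.jobs.JobProcessor;
import org.apache.sling.event.jobs.JobUtil;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventHandler;

public class SlowWorkHandler implements EventHandler, JobProcessor {

    @Override
    public void handleEvent(Event event) {
        // Hand the event over as a job and return right away.
        JobUtil.processJob(event, this);
    }

    @Override
    public boolean process(Event job) {
        doSomethingSlow(job);   // long-running work, no 5000 ms limit here
        return true;            // true tells the job framework the job finished OK
    }

    private void doSomethingSlow(Event job) {
        // ... expensive processing (placeholder) ...
    }
}
```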
You can change any OSGi configuration programmatically via the ConfigurationAdmin service described at http://www.osgi.org/javadoc/r4v42/org/osgi/service/cm/ConfigurationAdmin.html
You'll need the PID of the configuration that you want to change (the OSGi admin console or shell will provide that). Use ConfigurationAdmin.getConfiguration(..) to retrieve the corresponding Configuration object, and call Configuration.update(...) with the changed properties.
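A quick sketch of what that looks like. The PID and property key shown here are assumptions; check your Felix console for the exact values used by your Event Admin version:

```java
// Sketch of raising the Event Admin timeout via ConfigurationAdmin.
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class TimeoutConfigurer {

    public void raiseTimeout(ConfigurationAdmin configAdmin) throws Exception {
        // PID as shown in the web console; verify it for your Felix version (assumption).
        Configuration config = configAdmin.getConfiguration(
                "org.apache.felix.eventadmin.impl.EventAdmin", null);

        Dictionary<String, Object> props = config.getProperties();
        if (props == null) {
            props = new Hashtable<String, Object>();
        }
        // Property key is an assumption; check the console for the exact name.
        props.put("org.apache.felix.eventadmin.Timeout", 30000);

        config.update(props);   // pushes the changed configuration to the service
    }
}
```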
That being said, raising the event blacklisting timeout is usually a bad idea - event handlers that take a long time to run will block things. Use separate threads or jobs (as suggested by Chris Leggett) to do the slow work.
I'd like to move my scheduled tasks into workflows so I can better monitor their execution. Currently I'm using a Windows scheduled task to call a web service that starts the process. Is there a facility that you use to schedule execution of a sequence so that it occurs every N minutes?
My optimal solution would:
Be easy to configure
Provide useful feedback on errors
Be 'fire and forget'
PS - I'm trying out AppFabric for Windows Server, in case that adds any options.
The most straightforward way I know of would be to make an executable for each workflow (it could be a console or Windows app), and have it host the workflow through code.
This way you can continue to use scheduled tasks to manage the tasks; the main issue is feedback/monitoring the process. For this you could output to the console, write to the event log, or even build a more advanced visualisation with a Windows app - although you'd have to write this yourself (or google for something!). This MS Workflow Monitoring sample might be of interest; I haven't used it myself.
Similar deal with errors, although writing to the event log would be the normal course of action in this case.
I'm not aware of any other hosts for WF, aside from things like Dynamics CRM, but that won't help you with what you're trying to do.
You need to use a scheduler. Either roll your own, use AppFabric as mentioned, or use Quartz.NET:
http://quartznet.sourceforge.net/
If you use Quartz, it's either roll your own service host or use the ready-made one and configure it using XML. I rolled my own and it worked fine.
Autorun is another free option... http://autorun.codeplex.com/
I have several web-servers and need them to use Quartz. The clustering feature of Quartz would be ideal, but it requires that the servers clocks are in complete sync. They have a very scary warning about this:
Never run clustering on separate machines, unless their clocks are synchronized using some form of time-sync service (daemon) that runs very regularly (the clocks must be within a second of each other).
I cannot guarantee complete clock synchronization, so instead of using the clustering feature I was thinking of having a single Quartz instance (with a standby for failover). Having a single instance executing jobs is not a problem, but I still need all of the web servers to be able to schedule jobs.
Can I directly add jobs into the JDBCJobStore from the web servers, and will they be picked up by the (non-clustered) Quartz server? I would do this by creating scheduler instances in the web servers to add jobs. These instances would never be started, just used to access the JobStore.
I wrote a test program that creates a "non-clustered" Quartz scheduler using the same JobStore as the "real" scheduler (also non-clustered), and schedules jobs. After a few seconds, these jobs do get executed, so it seems to work.
Update: I cross-posted this question to the Quartz forum, and got the answer that this should work. In a related question they state that
The jobs can be inserted into that database by another process by:
1- using the RMI features of Quartz from another process, and using the Quartz API
2- instantiating a scheduler within another process (e.g. a webapp), also pointing it to the same database, but NOT start()ing that scheduler instance, and then using the Quartz API to schedule jobs.
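For what it's worth, here is a minimal sketch of option 2 using the Quartz 2.x API. All connection details, names, and the job class below are placeholders, and with a Quartz 2.x JDBC store the scheduler instance name has to match the one used by the executing scheduler:

```java
import java.util.Properties;

import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class JobSubmitter {

    // Placeholder job; the class must also be on the classpath of the scheduler
    // that actually executes it.
    public static class MyJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // real work happens in the executing process
        }
    }

    public void submit() throws Exception {
        Properties props = new Properties();
        // Must match the instance name of the executing scheduler (Quartz 2.x stores it per job).
        props.put("org.quartz.scheduler.instanceName", "MyScheduler");
        props.put("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        props.put("org.quartz.threadPool.threadCount", "1");
        props.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.put("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.put("org.quartz.jobStore.dataSource", "myDS");
        props.put("org.quartz.dataSource.myDS.driver", "org.postgresql.Driver");        // placeholder
        props.put("org.quartz.dataSource.myDS.URL", "jdbc:postgresql://dbhost/quartz"); // placeholder
        props.put("org.quartz.dataSource.myDS.user", "quartz");                         // placeholder
        props.put("org.quartz.dataSource.myDS.password", "secret");                     // placeholder

        Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
        // Deliberately NOT calling scheduler.start(): this instance only writes to the store.

        JobDetail job = JobBuilder.newJob(MyJob.class)
                .withIdentity("reportJob", "web")
                .build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("reportTrigger", "web")
                .startNow()
                .build();

        scheduler.scheduleJob(job, trigger); // picked up by the running, non-clustered scheduler
    }
}
```

The key point is simply that start() is never called on the web-server side, exactly as the forum answer describes; the store-and-execute work is left entirely to the single running scheduler.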
I have a web application that I am adding workflow functionality to using Windows Workflow Foundation. I have based my solution on K. Scott Allen's Orders Workflow example on OdeToCode. At the start I didn't realise the significance of the caveat "if you use Delay activities and configure active timers for the manual scheduling service, these events will happen on a background thread that is not associated with an HTTP request". I now need to use Delay activities, and it doesn't work as-is with his solution architecture. Has anyone come across this and found a good solution? The example is linked to from a lot of places, but I haven't seen anyone else come across this issue, and it seems like a bit of a show-stopper to me.
Edit: The problem is that the results from the workflow are returned to the web application via HttpContext. I am using the ManualWorkflowSchedulerService with useActiveTimers enabled, and this works fine for most situations because workflow events are fired from the web app and HttpContext still exists when the workflow results are returned, so the web app can continue processing. When a Delay activity is used, processing happens on a background thread, and when it tries to return results to the web app there is no valid HttpContext (because there has been no HTTP request), so further processing fails. That is, the web app is trying to process the workflow results but there has been no HTTP request.
I think I need to do all post Delay activity processing within the workflow rather than handing off to the web app.
Cheers.
You didn't describe the problem you are having. But maybe this is of some help.
You can use the ManualWorkflowSchedulerService with useActiveTimers enabled and the workflow will continue on another thread. Normally this is fine because your HTTP request has already finished and it doesn't really matter.
If, however, you need full control, the workflow runtime will let you get a handle on all loaded workflows using the GetLoadedWorkflows() function. This returns a collection of WorkflowInstance objects. Using these you can call GetWorkflowNextTimerExpiration() to check which ones have expired timers. If one has, you can manually resume it. In that case you want to use the ManualWorkflowSchedulerService with useActiveTimers=false so you control that thread as well. However, in most cases using useActiveTimers=true works perfectly well.