I have two web applications living on the same Tomcat installation, and I would like to implement Quartz 1.x such that each web app only serves a single job group in a shared Quartz data store.
Is it possible to configure a Quartz instance to serve (or ignore) a specific set of job groups?
No, but you can create multiple Quartz instances and only put certain job groups in each (each instance will then of course only fire its own job groups).
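As a rough sketch of that approach (using the Quartz 1.x API; the scheduler name, job class, group name and schedule below are made-up placeholders), each web app would configure its own scheduler instance and register only the jobs of its own group:

    import java.util.Date;
    import java.util.Properties;

    import org.quartz.JobDetail;
    import org.quartz.Scheduler;
    import org.quartz.SimpleTrigger;
    import org.quartz.impl.StdSchedulerFactory;

    public class WebApp1SchedulerSetup {

        public static Scheduler start() throws Exception {
            // Give each web app its own scheduler instance name so the two
            // instances do not pick up each other's jobs from the shared store.
            Properties props = new Properties();
            props.put("org.quartz.scheduler.instanceName", "WebApp1Scheduler");
            props.put("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
            props.put("org.quartz.threadPool.threadCount", "3");
            // ... plus the JobStoreTX / data source settings for the shared database

            Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();

            // Register only this web app's job group.
            JobDetail job = new JobDetail("reportJob", "webapp1-group", ReportJob.class);
            SimpleTrigger trigger = new SimpleTrigger("reportTrigger", "webapp1-group",
                    new Date(), null, SimpleTrigger.REPEAT_INDEFINITELY, 60000L);
            scheduler.scheduleJob(job, trigger);

            scheduler.start();
            return scheduler;
        }
    }

The other web app would do the same with its own instance name and group (ReportJob here is just a stand-in for one of your own Job implementations).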
Exploring (Ado)JobStore (database job stores in general) I came across topics like clustering, load balancing, and sharing jobs' working data/state across multiple applications.
But I don't think I found a JobStore topic that covers my scenario.
I need to run Quartz jobs in a Windows service, and I need to be able to change the configuration of triggers from another application (an admin panel in a web application) and have the triggers applied automatically by Quartz in my Windows service (Quartz tracks the changes and applies them).
Is it possible to do this by using the AdoJobStore/clustering mechanism? I mean in terms of the JobStore's features, i.e. by using the Quartz scheduler API, not by using SQL to change data in the Quartz tables directly or any other workaround (as advised against by Quartz's Best Practices doc).
The Quartz.NET scheduler can be accessed remotely, independently of job stores. Since you already have a web app, you can add a reference to the remote scheduler and use the API to administer jobs, triggers, etc.
I'd like to develop a bunch of SaaS applications in Java and I'm not sure what the best way to go is.
Each application will have a WAR containing the web service and at least one worker WAR, which is a thread waiting for new tasks to appear in the DB and then working those tasks off. This worker contains the intelligence of the application and uses a lot of CPU. The web service gives users the possibility to add new tasks and other things...
Resource Limitations
The infrastructure must ensure the following:
The web service must always get a certain amount of CPU time so it can respond to the user, so the hungry worker must not grab all of the CPU time for its work.
Each tenant has its own worker, and the workers must not interfere with each other: it must not be possible to block the whole system (and all tenants) with a single task.
Resource Sharing
It would be nice to be able to share resources, but always ensure that in extreme situations every worker and web service gets the required minimum.
Versioning
As new versions of an application are released, each tenant must be able to initiate an update on its own once it has adapted to the API changes. Furthermore, a tenant must be able to keep more than one application endpoint (let's call them channels), e.g. a production channel and a beta channel. In the beta channel the tenant can test against new versions, and when it feels comfortable with a new version it can update its production channel.
User-Management
All applications of a tenant must share a user database and use the same way of authenticating.
Environment
I want to use Java EE 7. I would enjoy using Wildfly.
Question
What is the best infrastructure to achieve these aims? I want to host this on my own servers.
What I already found
I understand that you cannot limit CPU usage within a JVM, so the workers must have their own JVMs.
I looked at PaaS providers like OpenShift Origin, but it seems they encourage you to run an application server per tenant, per application, which sounds like a resource eater to me.
Is there no way to have one Wildfly instance running and limit the amount of CPU usage per tenant and app?
Thank You
Lukas
I am developing a library that is distributed internally within my company and consumed by a variety of applications. This library must be platform agnostic in that it may be deployed in a web context or even within a console app. I would like to register objects to be per-HTTP-request or per-thread, depending on the context of the application consuming this framework. In StructureMap, I can do this using the Hybrid lifetime. Essentially, if an HttpContext exists then the object will be scoped to that; otherwise ThreadLocalStorage will be used on a per-thread basis. No additional configuration is required for the distributed library or the consuming application. Is this possible using Autofac? Given our wide variability of developer skill levels, our goal is to minimize/eliminate any specialized configuration for consumers.
I understand that registrations can be context agnostic using the InstancePerLifetimeScope lifetime, but then consuming applications are required to reference the ASP.NET/WCF/MVC integration binaries in order to bind InstancePerLifetimeScope registrations to an HTTP request. Or, for per-thread scopes, the consuming code has to take on the responsibility of creating a lifetime scope per thread.
Any suggestions?
It's easy to implement your own lifetime manager that checks whether 'HttpContext.Current != null' and then delegates to one of the existing managers.
I would, however, suggest that each application wire up the appropriate manager itself. An example would be a unit test scenario where 'HttpContext' might not exist since it's mocked, and you might want to control the lifetime manually for test-specific purposes.
Quartz can store jobs in a database, so the store is not volatile.
But if I have two applications (a web application and a web service),
how can I share this store between the applications?
That is, if one application selects a job to run, the other application should be informed. And when one application fails, the other should continue to run the jobs.
I realise this is a late reply, but for anyone else who might find this useful...
Quartz is designed with clustered environments in mind, specifically for what you're asking. You can point both of your applications (web service and web application) to the same Quartz job database, and Quartz itself will manage locking the jobs so that they still only run according to their schedule.
In your Quartz config make sure you're using:
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
... And then duplicate the Quartz setup across both your applications, ensuring they both point to the same database.
I think it should take care of itself! Search for "Quartz clustering" if you need more info.
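For illustration, a clustered setup built programmatically might look roughly like this (the scheduler name, data source name and JDBC details are placeholders; both applications would use the same values and point at the same database):

    import java.util.Properties;

    import org.quartz.Scheduler;
    import org.quartz.impl.StdSchedulerFactory;

    public class ClusteredSchedulerFactory {

        public static Scheduler create() throws Exception {
            Properties props = new Properties();
            // Same scheduler name in both applications so they join one cluster.
            props.put("org.quartz.scheduler.instanceName", "SharedScheduler");
            // AUTO gives each node a unique instance id.
            props.put("org.quartz.scheduler.instanceId", "AUTO");
            props.put("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
            props.put("org.quartz.threadPool.threadCount", "5");
            // JDBC job store with clustering enabled.
            props.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
            props.put("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
            props.put("org.quartz.jobStore.tablePrefix", "QRTZ_");
            props.put("org.quartz.jobStore.isClustered", "true");
            props.put("org.quartz.jobStore.dataSource", "quartzDS");
            // Placeholder JDBC settings for the shared database.
            props.put("org.quartz.dataSource.quartzDS.driver", "com.mysql.jdbc.Driver");
            props.put("org.quartz.dataSource.quartzDS.URL", "jdbc:mysql://dbhost:3306/quartz");
            props.put("org.quartz.dataSource.quartzDS.user", "quartz");
            props.put("org.quartz.dataSource.quartzDS.password", "secret");

            Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
            scheduler.start();
            return scheduler;
        }
    }

The same properties can of course live in a quartz.properties file instead of being set in code.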
I am writing a manager program as an Eclipse RCP application, and I want to create a "command center" job which will run until the game is over. It will get input from views and editors, or via a socket channel (another job that handles remote servers'/clients' requests), and vice versa. But I do not know how to do this. So, in summary, I have two problems:
How does a job communicate with a UI part of Eclipse?
How does a job communicate with another job?
I do not think that an Eclipse Job is well suited for this purpose, because jobs are basically meant for elementary but long-running tasks.
I would implement what you require as a controller/"command center" view that the user can use to control the game. In that case, the view can communicate with the internal model, e.g. using the Data Binding API, and with other views using the Selection service.
Or, if you would like to control your application automatically in the background, you could create different event listeners that create small jobs which read/write the data model of the application.
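As a small sketch of the second option (the TaskProcessor class, the doWork() helper and the label being updated are made-up placeholders; Job and Display are standard Eclipse platform APIs), an event listener could hand work off to a short-lived Job that updates the model in the background and then pushes the result back to the UI thread:

    import org.eclipse.core.runtime.IProgressMonitor;
    import org.eclipse.core.runtime.IStatus;
    import org.eclipse.core.runtime.Status;
    import org.eclipse.core.runtime.jobs.Job;
    import org.eclipse.swt.widgets.Display;
    import org.eclipse.swt.widgets.Label;

    public class TaskProcessor {

        // Called from an event listener (view action, socket handler, ...).
        public void process(final String input, final Label statusLabel) {
            Job job = new Job("Process " + input) {
                @Override
                protected IStatus run(IProgressMonitor monitor) {
                    // Background work: read/write the application's data model.
                    final String result = doWork(input);
                    // UI widgets may only be touched on the SWT UI thread.
                    Display.getDefault().asyncExec(new Runnable() {
                        @Override
                        public void run() {
                            if (!statusLabel.isDisposed()) {
                                statusLabel.setText(result);
                            }
                        }
                    });
                    return Status.OK_STATUS;
                }
            };
            job.schedule();
        }

        private String doWork(String input) {
            return "done: " + input; // placeholder for the real model update
        }
    }

One option for job-to-job communication is then to go through that shared data model (or a listener interface) rather than having the jobs talk to each other directly.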