Quartz .NET 2.x: can I configure the scheduler with different thread pools? - quartz-scheduler

I am new to Quartz .NET; I'm using version 2.6.
I have two kinds of jobs, low and high priority, and I'd like a thread pool dedicated to the high-priority jobs.
Is there a way to configure the scheduler to handle this?
Thanks

I'd like a thread pool dedicated to the high-priority jobs.
The thread pool provides the set of threads Quartz uses when executing jobs. When a job fires, it runs on a thread taken from the Quartz thread pool, and only from that pool.
How many thread pools can you have? Each Quartz scheduler instance allows exactly one thread pool, and every job on that scheduler instance runs in that pool.
So, to separate your jobs, create several Quartz scheduler instances, one per pool, as in the sketch below.
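A minimal sketch of that approach in Quartz.NET 2.x (the scheduler names and thread counts are just examples):

using System.Collections.Specialized;
using Quartz;
using Quartz.Impl;

// Scheduler whose SimpleThreadPool is reserved for high-priority jobs.
var highPriorityProps = new NameValueCollection
{
    { "quartz.scheduler.instanceName", "HighPriorityScheduler" },
    { "quartz.threadPool.type", "Quartz.Simpl.SimpleThreadPool, Quartz" },
    { "quartz.threadPool.threadCount", "10" }
};

// Second, smaller scheduler for everything else.
var lowPriorityProps = new NameValueCollection
{
    { "quartz.scheduler.instanceName", "LowPriorityScheduler" },
    { "quartz.threadPool.type", "Quartz.Simpl.SimpleThreadPool, Quartz" },
    { "quartz.threadPool.threadCount", "2" }
};

IScheduler highPriorityScheduler = new StdSchedulerFactory(highPriorityProps).GetScheduler();
IScheduler lowPriorityScheduler = new StdSchedulerFactory(lowPriorityProps).GetScheduler();

highPriorityScheduler.Start();
lowPriorityScheduler.Start();

// Schedule high-priority jobs on the first scheduler and everything else on
// the second; each scheduler executes jobs only on its own thread pool.

Note that each scheduler needs its own instanceName, since both instances live in the same process.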
Note: the thread pool size, the threads' system priority, and the pool implementation itself can all be modified or replaced. To provide your own implementation, you need a class that implements the IThreadPool interface:
/// Execute the given <see cref="Task" /> in the next
/// available <see cref="Thread" />.
bool RunInThread(Func<Task> runnable);
From docs: Quartz ships with a simple (but very satisfactory) thread pool named Quartz.Simpl.SimpleThreadPool. This IThreadPool implementation simply maintains a fixed set of threads in its pool - never grows, never shrinks. But it is otherwise quite robust and is very well tested - as nearly everyone using Quartz uses this pool.
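If you do write your own pool, you point the scheduler at it through the same configuration mechanism; a sketch, where MyCompany.PriorityThreadPool and its assembly name are placeholders for your own type:

// The quartz.threadPool.type value uses the "Namespace.Type, Assembly" form.
var props = new NameValueCollection
{
    { "quartz.threadPool.type", "MyCompany.PriorityThreadPool, MyCompany.Scheduling" }
};
IScheduler scheduler = new StdSchedulerFactory(props).GetScheduler();

The remaining quartz.threadPool.* keys are set as properties on the pool instance, which is how SimpleThreadPool receives its ThreadCount.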

Related

Should we use a thread pool for long-running threads?

Should we use a thread pool for long-running threads or start our own threads? Is there some design pattern?
Unfortunately, it depends. There is no hard and fast rule saying that you should always employ thread pools.
Thread pools offer two main things:
Delegated creation/reuse of threads.
Back-pressure
IMO, it's the back-pressure property that's interesting, but often the most poorly understood. Your machine runs on a limited set of resources. If you have (say) 8 CPU cores and they are all busy working, you would like to signal in some way that adding more work (submitting more tasks) isn't going to help, at least not in terms of latency.
This is the reason java.util.concurrent.ExecutorService implementations allow you to specify a java.util.concurrent.BlockingQueue of your choice. When this queue grows full, invoking threads will block until the thread pool has managed to complete tasks in progress.
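The same idea can be sketched outside Java (this is an illustrative .NET analogue, not the java.util.concurrent API): a bounded BlockingCollection acts as the work queue, and producers block as soon as it fills up.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Bounded work queue in front of a fixed set of workers. When 100 items are
// already queued, Add() blocks the producer - that blocking is the back-pressure.
var workQueue = new BlockingCollection<Action>(boundedCapacity: 100);

var workers = new Task[Environment.ProcessorCount];
for (int i = 0; i < workers.Length; i++)
{
    workers[i] = Task.Factory.StartNew(() =>
    {
        foreach (var work in workQueue.GetConsumingEnumerable())
        {
            work();
        }
    }, TaskCreationOptions.LongRunning);
}

// Producer side: blocks when the queue is full.
workQueue.Add(() => Console.WriteLine("task executed"));

// When no more work will be produced:
workQueue.CompleteAdding();
Task.WaitAll(workers);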
Whether or not to have long-running threads inside the thread pool depends on what it's doing. If the thread is constantly busy (meaning it will never complete) then it will always occupy a slot in the thread pool, which is kind of pointless.
Regarding delegated creation/reuse of threads; maybe you could have two pools, one for long-running tasks and one for other tasks. Or perhaps a long-running thread pool with one single slot, this will prevent two long-running tasks from running at the same time, provided that is what you want.
As you can see, there is no single good answer. It really boils down to what you are trying to achieve and how you want to use the resources at hand.

Scala parallel collections, threads termination, and sbt

I am using parallel collections, and when my application terminates, sbt issues:
Not interrupting system thread Thread[process reaper,10,system]
It issues this message once per core (minus one, to be precise).
I have seen in the sbt code that this is by design, but I am not sure why the threads don't terminate along with my application. Any insight would be appreciated if you were unlucky enough to come across the same...
Parallel collections by default are backed by ForkJoinTasks.defaultForkJoinPool, which is a lazy val, so it's created the first time it's used.
Like any ForkJoinPool, it runs until explicitly shut down. The pool has no way of knowing whether it's going to receive any new tasks, and thread creation is relatively expensive, so it would be wasteful for the pool to shut down when it was empty only to start up again as soon as new tasks are added. So its threads hang around unless and until the pool is explicitly shut down.
As a design decision the JVM doesn't kill other threads just because the main thread terminates; in some programming styles the main thread terminates relatively early (e.g. think about web servers where the main thread sets up everything, starts a pool of dispatcher threads, and then exits, but the web server continues to run indefinitely).
You could call ForkJoinTasks.defaultForkJoinPool.shutdown() once you know you're not going to do any more parallel operations, or you could create parallel collections using a custom pool that's explicitly controlled from your code.

Why do we need to set Max and Min thread in .Net?

I am struggling with a threading project. I came across SetMaxThreads, SetMinThreads, GetMaxThreads and GetAvailableThreads, and I didn't find any good reason to use those methods on the thread pool.
Help me out here:
why do we need them, and when do we use them?
According to MSDN:
There is one thread pool per process. Beginning with the .NET Framework 4, the default size of the thread pool for a process depends on several factors, such as the size of the virtual address space. A process can call the GetMaxThreads method to determine the number of threads. The number of threads in the thread pool can be changed by using the SetMaxThreads method.
If you don't want to use the default values, use the setter methods to change your process's thread pool limits, as in the sketch below.
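A short sketch of those calls (the numbers below are arbitrary examples, not recommendations):

using System;
using System.Threading;

// Inspect the process-wide pool: worker threads and I/O completion threads
// are tracked separately.
ThreadPool.GetMaxThreads(out int maxWorkers, out int maxIo);
ThreadPool.GetMinThreads(out int minWorkers, out int minIo);
ThreadPool.GetAvailableThreads(out int freeWorkers, out int freeIo);
Console.WriteLine($"max {maxWorkers}/{maxIo}, min {minWorkers}/{minIo}, available {freeWorkers}/{freeIo}");

// Override the defaults only if you have measured a need to.
ThreadPool.SetMinThreads(8, 8);     // keep at least 8 worker and 8 I/O threads ready
ThreadPool.SetMaxThreads(64, 64);   // cap the pool at 64 worker and 64 I/O threads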

how is HawtDispatch different from Java's Executors? (and netty)

Frustratingly, HawtDispatch's website describes it as "thread pooling and NIO event notification framework API."
Let's take the 'thread pooling' part first. Most of the Executors provided by Java are also basically thread pools. How is HawtDispatch different?
It is also apparently an "NIO event notification framework API." I'm assuming it is a thin layer on top of NIO which takes incoming data, hands it to its notion of a 'thread pool', and passes it on to a consumer when the thread pool scheduler finds the time. Correct? (Any improvement over NIO is welcomed.) Has anyone done any performance analysis of Netty vs HawtDispatch?
HawtDispatch is designed to be a single, system-wide, fixed-size thread pool. It provides two flavors of Java Executor:
Global Dispatch Queue: submitted Runnable objects are executed concurrently (you get the same effect using an Executors.newFixedThreadPool(n) executor)
Serial Dispatch Queue: submitted Runnable objects are executed serially (you get the same effect using an Executors.newSingleThreadExecutor() executor)
Unlike the Java executor model, all global and serial dispatch queues share a single fixed-size thread pool. You can use thousands of serial dispatch queues without increasing your thread count. Serial dispatch queues can be used like Erlang mailboxes to drive reactive, actor-style applications.
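A rough .NET-flavoured analogue of a serial dispatch queue (illustrative only, not HawtDispatch's API): work items are chained so they run one after another on the shared pool, without holding a dedicated thread.

using System;
using System.Threading.Tasks;

// Each enqueued action runs after the previous one completes, but between
// actions no thread is reserved - the shared pool is reused.
class SerialQueue
{
    private readonly object gate = new object();
    private Task tail = Task.CompletedTask;

    public Task Enqueue(Action action)
    {
        lock (gate)
        {
            tail = tail.ContinueWith(_ => action(), TaskScheduler.Default);
            return tail;
        }
    }
}

You could create thousands of such queues and the thread count would stay flat, which is the property described above.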
Since HawtDispatch uses a fixed-size thread pool for processing all global and serial queue executions, all Runnable tasks it executes must be non-blocking. In a way this is similar to the NodeJS architecture, except that it uses multiple threads instead of just one.
In comparison to Netty, HawtDispatch is not a framework for actually processing socket data. It does not provide a framework for how to encode/decode, buffer, and process the socket data. All it does is execute a user-configured Runnable when data can be read or written on the non-blocking socket. It's then up to your application to actually read/write the socket data.

Thread monitoring for scala actors

Is there a way to monitor how many threads are actually alive and running my Scala actors?
The only way to properly do this is to inject your own executor for the actors subsystem, as by default the actor threads do not have actor- or Scala-specific names (they may just be called Thread-N or pool-N-thread-M, depending on which version of Scala you are using).
Philipp Haller has given instructions on using your own executor, where you can monitor thread usage if you wish, or at the very least name the threads so created. If you override thread naming you can then use the standard Java platform MBeans (e.g. ThreadMXBean) to monitor the threads programmatically (or via JConsole/JVisualVM).
Note that you can control the default mechanism using the system properties:
actors.minPoolSize
actors.maxPoolSize
actors.corePoolSize
You might try the VisualVM tool (available free from Sun). Among other things, it can monitor threads in running JVMs.