Why do we need to set max and min threads in .NET? - ThreadPool

I am struggling with a threading project. I came across SetMaxThreads, SetMinThreads, GetMaxThreads and GetAvailableThreads, but I couldn't find a good reason to use these ThreadPool methods.
Help me out here:
why do we need them, and when do we use them?

According to MSDN
There is one thread pool per process. Beginning with the .NET
Framework 4, the default size of the thread pool for a process depends
on several factors, such as the size of the virtual address space. A
process can call the GetMaxThreads method to determine the number of
threads. The number of threads in the thread pool can be changed by
using the SetMaxThreads method.
If you don't want to use the default values, use the setter methods (SetMinThreads and SetMaxThreads) to change your process's thread pool limits.

Related

Should we use a thread pool for long-running threads?

Should we use a thread pool for long-running threads, or start our own threads? Is there some design pattern?
Unfortunately, it depends. There is no hard and fast rule saying that you should always employ thread pools.
Thread pools offer two main things:
Delegated creation/reuse of threads.
Back-pressure
IMO, it's the back-pressure property that's interesting, but often the most poorly understood. Your machine runs on a limited set of resources. If you have (say) 8 CPU cores and they are all busy working, you would like to signal in some way that adding more work (submitting more tasks) isn't going to help, at least not in terms of latency.
This is the reason java.util.concurrent.ExecutorService implementations allow you to specify a java.util.concurrent.BlockingQueue of your choice. When this queue grows full, invoking threads will block until the thread pool has managed to complete tasks in progress.
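As an illustration of that back-pressure idea, here is a minimal JVM sketch (written in Scala against java.util.concurrent). The pool size, queue capacity, and the choice of CallerRunsPolicy are arbitrary for the example; CallerRunsPolicy makes the submitting thread run the task itself when the queue is full, which is one simple way to slow down submitters as described above:

    import java.util.concurrent.{ArrayBlockingQueue, ThreadPoolExecutor, TimeUnit}

    // A fixed-size pool fed by a bounded queue.
    val pool = new ThreadPoolExecutor(
      8, 8,                                     // core and max pool size
      0L, TimeUnit.MILLISECONDS,                // keep-alive for idle threads
      new ArrayBlockingQueue[Runnable](100),    // bounded work queue
      new ThreadPoolExecutor.CallerRunsPolicy() // run on the caller when full (back-pressure)
    )

    pool.execute(new Runnable {
      def run(): Unit = println("working on " + Thread.currentThread.getName)
    })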
Whether or not to have long-running threads inside the thread pool depends on what it's doing. If the thread is constantly busy (meaning it will never complete) then it will always occupy a slot in the thread pool, which is kind of pointless.
Regarding delegated creation/reuse of threads: maybe you could have two pools, one for long-running tasks and one for other tasks. Or perhaps a long-running thread pool with a single slot; this will prevent two long-running tasks from running at the same time, provided that is what you want.
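A minimal sketch of that split (the names and sizes are just placeholders):

    import java.util.concurrent.Executors

    // One pool per kind of work; the single slot keeps long-running tasks
    // from ever running concurrently with each other.
    val longRunningPool = Executors.newSingleThreadExecutor()
    val generalPool     = Executors.newFixedThreadPool(8)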
As you can see, there is no single good answer. It really boils down to what you are trying to achieve and how you want to use the resources at hand.

CoreData: would it be safe to use NSConfinementConcurrencyType and dispatch all operations using this context on a serial dispatch queue?

Strictly speaking, this concurrency type requires a specific thread, but using a serial queue would be easier. Is it safe to use the context with the NSConfinementConcurrencyType concurrency type on a serial dispatch queue?
As long as you're sure you only use that queue with the context, yes, that's completely fine.
Core Data doesn't care about the thread so much as it cares about concurrent access. If you serialize access, you're safe, however you choose to do it. You could use NSRecursiveLock or semaphores or whatever works for you.
Note that the newer concurrency models are queue based. NSPrivateQueueConcurrencyType does not guarantee that operations are always performed on the same thread, even when you use performBlock:. They happen on a private queue and might run on different threads at different times. If you can manage your queue and your access well enough to do this yourself, it's reasonable to do so.
No, having a serial queue does not guarantee the operations will execute on the same thread:
The Concurrency Programming Guide specifies
Serial queues (also known as private dispatch queues) execute one task
at a time in the order in which they are added to the queue. The
currently executing task runs on a distinct thread (which can vary
from task to task) that is managed by the dispatch queue. Serial
queues are often used to synchronize access to a specific resource.
Why don't you just use NSPrivateQueueConcurrencyType? It will make your code cleaner and thread-safe. You just need to call -performBlock: or -performBlockAndWait: when accessing the context from somewhere other than the block that initialized the context.

NSManagedObjectContext deadlocking from 2 serial queues

I've created a system where I can request an NSManagedObjectContext from a singleton object, depending on the queue it's running on. Every serial GCD dispatch queue is associated with a certain task, and thus gets its own context, though all of them share the same persistent store coordinator.
I was under the assumption that this would solve my thread-related problems, which it so far seems to have done, but now I have a different problem: if two serial queues, with different MOCs, both try to make their contexts execute, they both lock and the app freezes. So what did I miss?
"...[I]f you create one context per thread, but all pointing to the same persistent store coordinator, Core Data takes care of accessing the coordinator in a thread-safe way (the lock and unlock methods of NSManagedObjectContext handle recursion)." (source)
What I read there is that Core Data should handle locking and unlocking correctly with my setup. Or am I misunderstanding 'in a thread-safe way' in this case?
Edit: I basically have a dictionary that maps a queue to a context. At first I wanted to work with threads instead of queues, until I read this part:
"Note: You can use threads, serial operation queues, or dispatch queues for concurrency. For the sake of conciseness, this article uses “thread” throughout to refer to any of these." (source)
If by "serial queue" you mean a GCD dispatch queue or an NSOperationQueue, you are making the incorrect assumption that each queue has a dedicated thread or that the tasks for each queue always run on the same thread.
You need to figure out a way of mapping a thread to a managed object context, perhaps by way of an NSDictionary and when you run a task on your queue, get the MOC associated with the current thread.
JeremyP is right: queues do not equal threads. A queue may create a new thread for each operation, and Core Data (in the default mode) requires thread confinement (that is, the thread that created the NSManagedObjectContext must be the thread used for all access to any objects from that context).
You may want to check how the confinement options are used; if you're targeting iOS 5 alone, you might be able to change it without too much difficulty and still use the queues.

how is HawtDispatch different from Java's Executors? (and netty)

Frustratingly, HawtDispatch's website describes it as "thread pooling and NIO event notification framework API."
Let's take the 'thread pooling' part first. Most of the Executors provided by Java are also basically thread pools. How is HawtDispatch different?
It is also apparently an "NIO event notification framework API." I'm assuming it is a thin layer on top of NIO which takes incoming data, hands it to its notion of a 'thread pool', and passes it on to a consumer when the thread pool scheduler finds the time. Correct? (Any improvement over NIO is welcome.) Has anyone done any performance analysis of Netty vs HawtDispatch?
HawtDispatch is designed to be a single, system-wide, fixed-size thread pool. It provides two flavors of Java Executor:
Global Dispatch Queue: submitted Runnable objects are executed concurrently (you get the same effect using an Executors.newFixedThreadPool(n) executor)
Serial Dispatch Queue: submitted Runnable objects are executed serially (you get the same effect using an Executors.newSingleThreadExecutor() executor)
Unlike the Java executor model, all global and serial dispatch queues share a single fixed-size thread pool. You can use thousands of serial dispatch queues without increasing your thread count. Serial dispatch queues can be used like Erlang mailboxes to drive reactive, actor-style applications.
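For comparison, these are the plain java.util.concurrent equivalents the answer refers to (shown in Scala; the pool size is arbitrary). The key difference is that each of these executors owns its own threads, whereas all HawtDispatch queues would share one fixed-size pool:

    import java.util.concurrent.Executors

    val concurrentLike = Executors.newFixedThreadPool(8)     // roughly a global dispatch queue
    val serialLike     = Executors.newSingleThreadExecutor() // roughly a serial dispatch queue

    // Both accept plain Runnables:
    serialLike.execute(new Runnable {
      def run(): Unit = println("runs after everything submitted before it")
    })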
Since HawtDispatch uses a fixed-size thread pool for processing all global and serial queue executions, all Runnable tasks it executes must be non-blocking. In a way this is similar to the NodeJS architecture, except that it uses multiple threads instead of just one.
In comparison to Netty, HawtDispatch is not a framework for actually processing socket data. It does not provide a framework for how to encode/decode, buffer, and process the socket data. All it does is execute a user-configured Runnable when data can be read or written on the non-blocking socket. It's then up to your application to actually read or write the socket data.

How to limit concurrency when using actors in Scala?

I'm coming from Java, where I'd submit Runnables to an ExecutorService backed by a thread pool. It's very clear in Java how to set limits to the size of the thread pool.
I'm interested in using Scala actors, but I'm unclear on how to limit concurrency.
Let's just say, hypothetically, that I'm creating a web service which accepts "jobs". A job is submitted with POST requests, and I want my service to enqueue the job then immediately return 202 Accepted — i.e. the jobs are handled asynchronously.
If I'm using actors to process the jobs in the queue, how can I limit the number of simultaneous jobs that are processed?
I can think of a few different ways to approach this; I'm wondering if there's a community best practice, or at least, some clearly established approaches that are somewhat standard in the Scala world.
One approach I've thought of is having a single coordinator actor that would manage the job queue and the job-processing actors; I suppose it could use a simple int field to track how many jobs are currently being processed. I'm sure there would be some gotchas with that approach, however, such as making sure to decrement the count when an error occurs. That's why I'm wondering if Scala already provides a simpler or more encapsulated approach to this.
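To make that concrete, here is a rough sketch of such a coordinator using the scala.actors API; Job, Done, maxInFlight and runJob are made-up names for illustration, and error handling is reduced to a finally block:

    import scala.actors.Actor
    import scala.actors.Actor._
    import scala.collection.immutable.Queue

    case class Job(id: Int)
    case object Done

    val maxInFlight = 4

    // Worker: performs one job and always reports back, even if it fails.
    def runJob(job: Job, coordinator: Actor) = actor {
      try {
        // ... do the actual work for `job` here ...
      } finally {
        coordinator ! Done
      }
    }

    // Coordinator: tracks in-flight jobs and queues the overflow.
    val coordinator = actor {
      val me = self
      var running = 0
      var waiting = Queue.empty[Job]
      loop {
        react {
          case job: Job =>
            if (running < maxInFlight) { running += 1; runJob(job, me) }
            else waiting = waiting.enqueue(job)
          case Done =>
            running -= 1
            if (waiting.nonEmpty) {
              val (next, rest) = waiting.dequeue
              waiting = rest
              running += 1
              runJob(next, me)
            }
        }
      }
    }

    // The web layer would then just fire and forget:
    // coordinator ! Job(42)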
BTW I tried to ask this question a while ago but I asked it badly.
Thanks!
I'd really encourage you to have a look at Akka, an alternative Actor implementation for Scala.
http://www.akkasource.org
Akka already has a JAX-RS[1] integration and you could use that in concert with a LoadBalancer[2] to throttle how many actions can be done in parallel:
[1] http://doc.akkasource.org/rest
[2] http://github.com/jboner/akka/blob/master/akka-patterns/src/main/scala/Patterns.scala
You can override the system properties actors.maxPoolSize and actors.corePoolSize, which limit the size of the actor thread pool, and then throw as many jobs at the pool as your actors can handle. Why do you think you need to throttle your reactions?
You really have two problems here.
The first is keeping the thread pool used by actors under control. That can be done by setting the system property actors.maxPoolSize.
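A sketch of that, assuming the properties are set before the first actor (and therefore the scheduler's pool) is created; the values are arbitrary:

    // Pick values based on your hardware and workload.
    System.setProperty("actors.corePoolSize", "4")
    System.setProperty("actors.maxPoolSize", "8")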
The second is runaway growth in the number of tasks that have been submitted to the pool. You may or may not be concerned with this one; however, it is entirely possible to trigger failure conditions such as out-of-memory errors, and in some cases more subtle problems, by generating too many tasks too fast.
Each worker thread maintains a deque of tasks. The deque is implemented as an array that the worker thread will dynamically enlarge up to some maximum size. In 2.7.x the deque can grow quite large, and I've seen that trigger out-of-memory errors when combined with lots of concurrent threads. The maximum deque size is smaller in 2.8. The deque can also fill up.
Addressing this problem requires that you control how many tasks you generate, which probably means some sort of coordinator, as you've outlined. I've encountered this problem when the actors that initiate a kind of data processing pipeline are much faster than the ones later in the pipeline. To control the process, I usually have the actors later in the chain ping back the actors earlier in the chain every X messages, and have the ones earlier in the chain stop after X messages and wait for the ping-back. You could also do it with a more centralized coordinator.
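A hedged sketch of that ping-back scheme with the old scala.actors API; Work, Ack and windowSize are invented names, and the pipeline stages are reduced to counters. Note that the upstream actor uses receive, which blocks a pool thread while it waits:

    import scala.actors.Actor._

    case class Work(payload: String)
    case object Ack

    val windowSize = 100

    // Later stage: acknowledges upstream after every `windowSize` messages processed.
    val downstream = actor {
      var processed = 0
      loop {
        react {
          case Work(p) =>
            // ... process p ...
            processed += 1
            if (processed % windowSize == 0) reply(Ack)
        }
      }
    }

    // Earlier stage: stops after `windowSize` unacknowledged messages and waits.
    val upstream = actor {
      var unacked = 0
      for (i <- 1 to 10000) {
        if (unacked == windowSize) receive { case Ack => unacked -= windowSize }
        downstream ! Work("job-" + i)
        unacked += 1
      }
    }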