Can two processes or threads run at the same time in an operating system? - operating-system

Can two processes run at the same time in an operating system? What about two threads at the same time?

Yes and yes. Since modern CPUs have multiple cores, they can execute multiple threads (and hence multiple processes) truly in parallel. See https://en.wikipedia.org/wiki/Multithreading_(computer_architecture)

Related

Multicore thread processing

I understand that in a multicore environment, different threads are executed in parallel by the OS scheduler. In a single-core environment, context switching enables processing of high-priority tasks, and when a hardware interrupt occurs, the ISR is given utmost priority. Is there any context-switching criterion for assigning threads to different cores in a multicore environment? In other words, when a hardware interrupt occurs, which core among the many is interrupted for ISR execution? How does the OS choose one among the many possible cores for high-priority execution?

What is the relation between threads and concurrency?

Concurrency means the ability to make progress on more than one task at a time.
But where does threading fit in it?
What's the relation between threading and concurrency?
What is the important link between these two which will fully clear all the confusion?
Threads are one way to achieve concurrency. Concurrency can be achieved at many levels and in many ways. Here are some of them, from low level to high level, to give you a rough idea:
CPU pipelines: at the hardware level, multiple instructions are executed in parallel (each instruction is at a different stage of the pipeline).
Duplicated ALU and FPU units: a processor has several arithmetic-logic units and floating-point units that can execute instructions in parallel.
Vectorized instructions: single instructions that operate on multiple data elements at once.
Hyperthreading/SMT: duplication of the thread context, so one core can run two hardware threads.
Threads: streams of instructions within a process which can be executed in parallel.
Processes: you run both a browser and a word processor on your system.
Tasks: a higher abstraction over threads and async work.
Multiple computers: run your program on several machines at once.
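To make the "threads" level concrete, here is a short Python sketch: a thread pool runs the same function over several inputs, potentially in parallel on a multicore machine:

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # Pure function; safe to run from several threads at once.
    return n * n

# Four worker threads share the work; map preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```

The same structure with ProcessPoolExecutor would move one level up the list, using processes instead of threads.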
Nothing being executed on the CPU belongs directly to a "process" or anything else. It's all threads, scheduled and entirely managed by the kernel using a variety of algorithms to reach the expected performance for any given application. The CPU can only run n threads at once, where n equals cores * hardware threads per core. In most cases that factor is 2, so you double the core count to get the logical CPU count. What this really means is that instead of 4 (for example) threads running at once, the CPU can support up to 8.
Now, the OS may have hundreds of threads at any given time. How is that possible? The kernel uses a variety of checks, such as how frequently and how long a thread sleeps, to assign it a priority. Whenever the CPU triggers a timer interrupt, the OS will swap out threads appropriately if they've reached their allotted time slice, based on the OS's determination of their priority.
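The time-slice idea above can be sketched with a toy round-robin simulation (much simplified from any real kernel, which also weighs priorities and sleep behaviour): each "thread" needs some units of work, and on every expiry of its slice it is preempted and moved to the back of the ready queue:

```python
from collections import deque

def round_robin(work, slice_len=2):
    # work: mapping of thread name -> units of CPU time still needed.
    # Returns the order in which slices were granted.
    ready = deque(work.items())
    order = []
    while ready:
        name, remaining = ready.popleft()
        ran = min(slice_len, remaining)   # preempt when the slice expires
        order.append((name, ran))
        if remaining - ran > 0:
            ready.append((name, remaining - ran))  # back of the queue
    return order

print(round_robin({"A": 3, "B": 1, "C": 4}))
# [('A', 2), ('B', 1), ('C', 2), ('A', 1), ('C', 2)]
```

Many more threads than cores can make progress this way, because no thread holds the CPU longer than one slice at a time.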

How can kernel run all the time?

How can the kernel run all the time, when the CPU can execute only one process at a time?
That is, if the kernel is occupying the CPU all the time, how do other processes run?
Please explain. Thank you.
In the same way that you can run multiple userspace processes "at the same time": only one of them is actually using the CPU at any given instant, and interrupts force them to give it up.
Code that is part of the operating system is no different here (except that it is in control of setting up this scheduling in the first place).
You also have to distinguish between processes run by the OS in the background (I suppose that is what you are talking about here), and system calls (which are being run as part of "normal" processes that temporarily switch into supervisor mode).

Celery: per task concurrency limits (# of workers per task)?

Is it possible to set the concurrency (the number of simultaneous workers) on a per-task level in Celery? I'm looking for something more fine-grained than CELERYD_CONCURRENCY (which sets the concurrency for the whole daemon).
The usage scenario is: I have a single celeryd running different types of tasks with very different performance characteristics - some are fast, some very slow. For some I'd like to do as many as I can as quickly as I can; for others I'd like to ensure only one instance is running at any time (i.e. a concurrency of 1).
You can use automatic routing to route tasks to different queues which will be processed by celery workers with different concurrency levels.
celeryd-multi start fast slow -c:slow 3 -c:fast 5
This command launches 2 celery workers listening on the fast and slow queues, with concurrency levels 5 and 3 respectively.
CELERY_ROUTES = {"tasks.a": {"queue": "slow"}, "tasks.b": {"queue": "fast"}}
Tasks of type tasks.a will then be routed to the slow queue, and tasks.b tasks to the fast queue.
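A small sketch of how that routing table behaves (the queue_for helper is illustrative only, not part of the Celery API; the task names tasks.a and tasks.b come from the answer above):

```python
# Route map as in the answer: per-queue concurrency then comes from
# the -c:slow / -c:fast options passed to celeryd-multi.
CELERY_ROUTES = {
    "tasks.a": {"queue": "slow"},
    "tasks.b": {"queue": "fast"},
}

def queue_for(task_name, routes, default="celery"):
    # Simplified lookup: unrouted tasks fall back to the default queue.
    return routes.get(task_name, {}).get("queue", default)

print(queue_for("tasks.a", CELERY_ROUTES))  # slow
print(queue_for("tasks.c", CELERY_ROUTES))  # celery
```

With the slow queue's worker started at a concurrency of 1, this gives the "only one instance at a time" behaviour the question asks for.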

Application-level scheduling

As far as I know, Windows uses a round-robin scheduler which distributes time slices to each runnable thread.
This means that if an application/process has multiple threads, it gets a larger share of the computational resources than another application with fewer threads.
Now one could imagine an operating system scheduler that assigns an equal share of the computational resources to each application, with that share then distributed among all the threads of that application. The result would be that no application could crowd out others just because it has more threads.
Now my questions:
How is such scheduling called? I need a term so I can search for research papers regarding such scheduling.
Do operating systems exist which uses such scheduling?
I think it's a variation of "fair" scheduling - the usual term when shares are assigned per user, per group, or per process rather than per thread is "fair-share scheduling".
I expect that you will need to use synonyms for "application" when searching; for example, papers may call them "tasks" or "processes" instead.