What are the types of process and thread in an operating system?

I have been learning OS, where it is written that there are two types of process:
1) CPU-bound processes
2) I/O-bound processes
and somewhere else:
1) Independent processes
2) Cooperative processes
The same goes for threads:
1) Single-level threads
2) Multilevel threads
and
1) User-level threads
2) Kernel-level threads
Now my confusion is: if someone asks me about the types of process and thread, which ones from the above should I tell them?
Kindly make my concept clear. I shall remain thankful to you!

Processes have two classifications, along two different axes. The first one you mentioned is an event-specific categorization of processes, and the second is a categorization based on their nature. If someone asks you, you should ask for clarification as to which kind of classification they want. If they don't specify, state the first (default) category, shown below:
Event-specific category of process
a) CPU-bound process: a process that spends the majority of its time simply using the CPU (doing calculations).
b) I/O-bound process: a process that spends most of its time on input/output activity, such as reading from files.
Category of processes based on their nature
a) Independent process: a process that does not need any external factor to get triggered is an independent process.
b) Cooperative process: a process that works on the occurrence of an event, and whose outcome affects some other part of the system, is a cooperating process.
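To make the cooperative case concrete, here is a minimal sketch of mine (not from the original answer), assuming a POSIX system: parent and child cooperate through a pipe, so the outcome of one process affects the other.

/* Minimal sketch of two cooperating processes, assuming a POSIX system:
   the parent writes a message into a pipe and the child reads it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); exit(1); }

    if (pid == 0) {                        /* child: the cooperating process */
        char buf[64];
        close(fds[1]);                     /* child only reads */
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fds[0]);
        return 0;
    }

    const char *msg = "hello from parent"; /* parent: produces the event */
    close(fds[0]);                         /* parent only writes */
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);                            /* reap the child */
    return 0;
}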
Threads, however, have only one classification based on their nature (single-level threads and multi-level threads).
In modern operating systems, there are two levels at which threads operate: system (kernel) threads and user-level threads. These levels are generally not a classification, though some texts freely classify threads that way; strictly speaking, that is a misuse.
If you have further doubts, leave a comment below.

Basically there are two types of process:
Independent processes.
Cooperating processes.
During execution, a process is a mixture of CPU-bound and I/O-bound activity.
CPU bound: the time a process resides in the processor and performs its execution.
I/O bound: the time in which a process performs input/output operations, e.g. taking input from the keyboard or displaying output on the monitor.
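As a rough illustration of the two kinds of bursts, here is a small C sketch of my own (assuming a POSIX system): first a purely CPU-bound loop, then a blocking read, i.e. an I/O-bound wait.

/* Toy illustration of the two phases, assuming a POSIX system. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* CPU-bound phase: the process occupies the processor. */
    unsigned long long sum = 0;
    for (unsigned long long i = 0; i < 100000000ULL; i++)
        sum += i;
    printf("cpu burst done, sum = %llu\n", sum);

    /* I/O-bound phase: the process blocks waiting for input, using no CPU. */
    char buf[16];
    printf("type something and press enter: ");
    fflush(stdout);
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    printf("i/o burst done, read %zd bytes\n", n);
    return 0;
}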

What is a Process?
A process is a program in execution. A process is not the same as the program code; it is a lot more than that. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. Attributes held by a process include hardware state, memory, CPU, etc.
Process memory is divided into four sections for efficient working :
The Text section is made up of the compiled program code, read in from non-volatile storage when the program is launched.
The Data section is made up of the global and static variables, allocated and initialized prior to executing main.
The Heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
The Stack is used for local variables. Space on the stack is reserved for local variables when they are declared.
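To see roughly where these four sections live, here is a small C sketch (illustrative only; the exact addresses and layout depend on the platform and runtime):

/* Where the four sections live, in a typical C runtime. */
#include <stdio.h>
#include <stdlib.h>

int initialized_global = 42;   /* data section (initialized) */
static int zeroed_global;      /* data section (BSS, zero-initialized) */

int main(void) {               /* machine code for main: text section */
    int local = 7;             /* stack */
    int *heap_obj = malloc(sizeof *heap_obj);  /* heap */
    *heap_obj = 99;

    printf("text  (code)  : %p\n", (void *)main);
    printf("data  (global): %p\n", (void *)&initialized_global);
    printf("bss   (global): %p\n", (void *)&zeroed_global);
    printf("heap  (malloc): %p\n", (void *)heap_obj);
    printf("stack (local) : %p\n", (void *)&local);

    free(heap_obj);
    return 0;
}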

Category of process:
1. Independent/isolated/competing.
2. Dependent/co-operating/concurrent.
1. Independent: the execution of one process does not affect the execution of other processes, which means there is nothing shared between them.
2. Dependent: processes can share things such as buffer variables and resources (CPU, printer). If processes can share anything, then the execution of one process can affect the others; that is, the execution of one process can affect, or be affected by, the execution of another process.

Does each system call create a process?

Does each system call create a process?
Are all functions (e.g. interrupts) of programs and operating systems executed in the form of processes?
I feel that such a large number of process control blocks and so much process scheduling would waste a lot of resources.
Or are the kernel instructions of a system call regarded as part of the current process?
The short answer is - not exactly. But we have to agree on what we are going to call a "process". A process is more of an abstract idea, which encapsulates multiple instructions, each sequentially executed.
So let's start from the first question.
Does each system call create a process?
No. Each system call is issued by the currently running process, which tells the OS: "Hey OS, I need you to open this file for me, or read these here bits". In this case, the process is a bag of sequentially executed instructions, some of which are system calls, some of which are not.
Then we have.
Are all functions (e.g. interrupts) of programs and operating systems executed in the form of processes?
Well, this kind of goes back to the first question. We are not considering a system call (an operation that tells the OS to do something, and that works under very strict conditions) to be a separate process. We will NOT see that system call execution have its OWN process id (pid).
Then we have.
I feel that such a large number of process control blocks, a large number of process scheduling waste a lot of resources.
Well, I would say: do not underestimate your OS and the capabilities of your hardware. A modern processor with a modern OS on it is VERY, VERY fast, more than capable of executing billions of instructions per second. We can't really imagine how fast that is. I wouldn't worry about optimizations at such a micro level.
Okay, but let's dig deeper into this. What is a process exactly?
Informally, a process is a program in execution. The status of the current activity of a process is represented by a value, called the program counter, and the contents of the processor’s registers. The memory layout of a process is typically divided into multiple sections.
These sections include:
Text section.
Data section.
Heap section.
Stack section.
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process is represented in the OS by a process control block (PCB), as you already mentioned.
So we can see that we treat a process as a very complicated structure that is MORE than just something occupying CPU time. It has a state, storage, timing, and so on.
But since you are interested in system calls: what exactly are they?
For us, system calls provide an interface to the services made available by an OS. They are the way we tell the OS to do things FOR US. We know that systems execute thousands of system calls per second.

No, they don't.
The operating system uses a software interrupt to execute the system call operation within the same process.
You can imagine a system call as a function call, but one that is executed with kernel privileges.
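A quick way to convince yourself of this is the following C sketch of mine (assuming a POSIX system): the PID reported before and after several system calls is identical, because the calls run in kernel mode within the same process.

/* The PID is unchanged across system calls: no new process is created. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void) {
    pid_t before = getpid();

    int fd = open("/dev/null", O_WRONLY);  /* system call #1 */
    write(fd, "hi", 2);                    /* system call #2 */
    close(fd);                             /* system call #3 */

    pid_t after = getpid();
    printf("pid before: %d, pid after: %d\n", (int)before, (int)after);
    /* Both values are the same: the system calls ran inside this process. */
    return 0;
}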

How did the apples fall for threads to be conceived?

I was going through the following lecture notes on OS :
http://williamstallings.com/Extras/OS-Notes/h2.html
What I could draw is that "A process is a stream of execution, i.e. basically a sequence of statements, and so is a thread. However, the register states of one process are independent of the register states of another process, but the register states of another thread can be accessed inside a thread. For every process at least one thread is allotted or dedicated; when a process is started, the OS activities for that process are taken over by the thread (or a thread)."
What was the rationale behind conceiving the idea of threads? When the OS is running a particular process, why do we need some intermediary like a thread between them?
"However, the register states of one process are independent of the register states of another process, but the register states of another thread can be accessed inside a thread."
Can the above statement be taken to mean that in the code for a process we cannot access the register states of another process, but in the code for a thread we can access the register states of another thread?
(The above question substitutes process and thread by their definitions as codes or streams of execution.)
P.S.: The title of the question is a metaphoric one. Please forgive me if it misleads in any way. :P Could I take the liberty to broaden the question and ask: if
the processor generates a thread for every process, what does it write in the code for a thread? (What does the code for a thread look like?)
Terminology - for a system with virtual memory, threads share the same virtual memory address space, while each process has its own address space. Processes can share physical memory by having a portion of that memory mapped into their virtual address spaces (but the virtual address for each process may be different even though it is the same physical memory block).
Early (1960s) instances of multi-processing were mainframes that ran multiple processes that usually did not communicate with each other. Most of this activity was for batch-oriented jobs, with a stream of jobs to be run, often from a punched card reader, or in more advanced situations, from remote job entry sites, which were other computers with a few peripherals (card readers, tape drives, line printers, ...) that communicated with the mainframe to run jobs. There were also time-sharing applications, similar to servers, except that in many cases relatively dumb terminals were used to communicate with the mainframe. By the 1970s, APL/SV (A Programming Language / Shared Variables) was a time-sharing application / programming language that could share variables between users.
For multi-process / multi-threaded operating systems, the device drivers operate from a queue of requests (such as a file read or write). Adding a request to a device driver queue is done in a manner similar to a context switch, so there won't be conflicts between process or thread requests for I/O. Some peripherals, such as mainframe, SCSI, or ... disk drives, also operated from an internal queue, and could process I/O requests out of order to reduce random access overhead.
The basic problem that drove threads was: how can an application handle multiple tasks at the same time, and do it in a system-independent manner?
In classic Unix, a process could only do one thing at a time. If you needed to handle multiple things, you kicked off multiple processes.
In the old RSX and VMS systems (and Windows under the covers), programmers relied on software interrupts. A process could queue I/O requests to multiple devices and receive a software interrupt when a request completed, thus allowing the application to do multiple things at once.
Another approach to the multiple-things-at-once problem was to use event queues (Windows, X Windows).
The Ada programming language was the first (and still really the only) mainstream programming language to support threads (tasks) as a system-independent way to handle these kinds of problems. DoD compliance mandates drove the creation of threads.
Originally, threads were implemented through libraries ("user threads", "many-to-one model"). With the rise of multiprocessor systems, there was increasing demand to have threads execute in parallel on different processors. This drove the creation of kernel threads in operating systems. (Many operating systems still do not support kernel threads.)
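As a minimal illustration of the problem threads solve, here is a POSIX threads sketch of mine (assuming pthreads is available; compile with -pthread) in which one thread computes while another waits, inside a single process:

/* Two things at once in one process: a CPU-bound thread and a waiting thread. */
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

static void *compute(void *arg) {
    (void)arg;
    unsigned long long sum = 0;
    for (unsigned long long i = 0; i < 100000000ULL; i++)
        sum += i;                      /* CPU-bound work */
    printf("compute thread done: %llu\n", sum);
    return NULL;
}

static void *wait_for_io(void *arg) {
    (void)arg;
    sleep(1);                          /* stands in for a blocking I/O wait */
    printf("i/o thread done\n");
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, compute, NULL);
    pthread_create(&t2, NULL, wait_for_io, NULL);
    pthread_join(t1, NULL);            /* both threads ran concurrently */
    pthread_join(t2, NULL);
    return 0;
}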

Why is it called a context switch? Why not process swap or swapping, since the resources are the same?

In computing, a context switch is the process of storing and restoring the state (more specifically, the execution context) of a process or thread so that execution can be resumed from the same point at a later time.
But if it is the process or thread that is being switched, and the resources are not shared, then shouldn't it be called a process swap or thread swap?
It is not swapping threads themselves or process themselves. It is swapping execution context information stored within the CPU - various CPU registers, flag bits, etc. A process contains 1+ threads, and each thread manipulates CPU state. A context switch captures the CPU state (the context) of the current running thread and pauses it, then swaps in the state of another thread so it can then resume running where it previously left off.
Keep in mind that it was called a change in process context long before there were threads. Since we are talking history, I will ignore threads.
Making a process executable requires loading registers. The set of registers varies among systems but it typically includes:
General Registers
Processor Status
User Page Table Registers
When a process ceases to be current, its register values need to be saved. When the process becomes current, its register values need to be restored.
Nearly every processor has an instruction that performs each of these tasks to allow the switch to be performed atomically.
The processor then needs to define a data structure for holding the register values. For whatever reason, this data structure was called a "Process Context Block" or PCB (some idiotic textbooks use PCB to describe something else that does not really exist).
Someone probably thought calling it a Process "Context" was a cute name. It really does not make sense in English but that is what happened.
The instructions for loading and saving then became variants of Load Process Context and Save Process Context.
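As a purely hypothetical C sketch (the real layout is defined by each architecture and OS, not by this struct), a process context block along the lines of the register list above might look like:

/* Hypothetical sketch: what a "process context block" might hold, loosely
   following the register list above. Real layouts are defined by the
   architecture/OS and are saved/restored by privileged instructions. */
#include <stdint.h>

struct process_context_block {
    uint64_t general_regs[16];   /* general registers */
    uint64_t program_counter;    /* where to resume execution */
    uint64_t stack_pointer;
    uint64_t processor_status;   /* flags / processor status word */
    uint64_t page_table_base;    /* user page table register(s) */
};

/* Conceptually, a switch is "save the current context, load the next one";
   on real hardware this pair is performed atomically by instructions such
   as VAX's SVPCTX and LDPCTX. */
void save_process_context(struct process_context_block *pcb);
void load_process_context(const struct process_context_block *pcb);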
But if it is the process or thread that is being switched, and the resources are not shared, then shouldn't it be called a process swap or thread swap?
Swapping is an entirely different concept, so the terms "swap" and "swapping" were already taken. In the days before paging, entire processes were moved (swapped) to disk. That is why Unix has a "swap" partition; the name never got updated to "page partition."

Job, task and process: what's the difference?

what is the difference between these concepts?
“Process” is well-defined; “job” and “task” are ambiguous.
Fundamentally a job/task is what work is done, while a process is how it is done, usually anthropomorphised as who does it. A job is an overall unit of work, and is composed of tasks. In practice usage is very inconsistent, and often “task” == “process”, though formally a process performs a task.
Process is a well-defined operating systems concept, as is thread: a process is an instance of a program that is being executed, and is the basic unit of resources: a process consists of or “owns” its image, execution context, memory, files, etc.; etymologically a process is the steps done by a processor. A process consists of one or more threads, which are the unit of scheduling, and consist of some subset of a process (possibly shared with other threads): execution context and perhaps more. Traditionally a thread is the unit of execution on a processor (a thread is “what is executing”), but with multi-core processors and hardware threads, some scheduling is done even at the level of a single core. There are various kinds of processes and threads, and the exact definition varies between platforms.
Job and task are today vague, ambiguous terms, especially task. A “job” often means a set of processes, while a “task” may mean a process, a thread, a process or thread, or, distinctly, a unit of work done by a process or thread.
To give an idea how confused the naming is,
Windows Task Manager manages (running) processes, while
Windows Task Scheduler schedules programs to execute in the future, which is traditionally known as a job scheduler, and uses the .job extension!
The term “job” traditionally means a “piece of work” (as opposed to “occupation”), and is used as such in manufacturing, in the phrase “job production”, meaning “custom production”, where it is contrasted with batch production (many items at once, one step at a time) and flow production (many items at once, all steps at the same time, by item). Note that these distinctions have become blurred in computing, notably in the oxymoronic term “batch job”.
In computing, “job” originates in non-interactive processing on mainframes, notably in IBM’s Job Control Language for the DOS/360 and OS/360 of the mid-1960s, and formally means a “unit of work for an operating system”, which consists of steps, each of which is a request to execute a specific program. Early computers primarily did batch processing (running the same program over many input data), like census or billing, and a standard type of one-off job was compiling a program from source, which could then process batches of data. Later batch came to be applied to all non-interactive computing, whether one-off or multiple items.
In Unix shells, a “job” is the shell’s representation for a process group – a set of processes that can all be sent a signal – concretely a pipeline and its descendent processes; note that running a script starts a job, exactly as in mainframes. The job is not done until the processes complete, and a job can be stopped, resumed, or terminated, which corresponds to suspending, resuming, or terminating the processes. Thus while formally a job is distinct from the process group, this is a subtle distinction and thus people often use “job” to mean “set of processes”.
Traditional jobs (and batches) have finite input data and should complete processing, successfully or not. By contrast, when running a server, such as a web server, the input, such as a stream of requests, is unlimited (formally codata). This is analogous to flow production, and the process (or “job”) never completes, though it can be terminated or “canceled”. In a quip, “a server’s job is never done” (formally, exit status will be CANCELED, not COMPLETED/SUCCESS).
The term “step” makes sense for sequential computing – one step follows another – but once you have concurrent computing, you have a set of tasks, which do not necessarily run in a particular order, rather than a sequence of steps. The term “task” was popularized by OS/360, which featured “Multiprogramming with a Fixed number of Tasks (MFT)” and “Multiprogramming with a Variable number of Tasks (MVT)”, though in this case “task” was used synonymously with “process” or “thread”, as the basic task is “execute this program” (so the resulting process/thread performs the task), which is probably the source of the ambiguity.
Formally “multitasking” means “working on multiple tasks concurrently”, but in practice means an operating system (or virtual machine, or runtime, or individual process) “running multiple processes/threads concurrently”.
A clear distinction between tasks as work and processes/threads as how the work is done is given by a task queue feeding a thread pool: there is a (big, potentially unlimited) queue of incoming tasks (pending), which are performed by a (small, often fixed) set of threads, each task being performed by a single thread, and each thread performing a single task at a time: the active tasks correspond to the active threads. Concretely, consider a multithreaded web server, where the tasks are "service this web page request", and each thread fetches (from disk or memory) or renders the web page (say, from a template or PHP), then returns the result.
As you can see from this last example, it is often useful to distinguish tasks from threads or processes, and in particular contexts “job” and “task” have specific meanings, though in general they are ambiguous.
Clearest is thus to avoid using “job” or “task” and instead refer to a “set of processes”, “process”, or “thread”, and for servers to refer to requests (or queries) rather than tasks.
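For a concrete picture of that task/thread distinction, here is a minimal C thread-pool sketch of mine (assuming POSIX threads; compile with -pthread): tasks are units of work in a queue, and a small fixed set of threads is how the work gets done.

/* Minimal task queue + thread pool: NUM_TASKS units of work, NUM_THREADS workers. */
#include <stdio.h>
#include <pthread.h>

#define NUM_THREADS 3
#define NUM_TASKS   10

static int queue[NUM_TASKS];      /* the pending tasks (here just ints) */
static int next_task = 0;         /* index of the next task to hand out */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void perform(int worker, int task) {
    /* stands in for "service this web page request" */
    printf("worker %d performing task %d\n", worker, task);
}

static void *worker_main(void *arg) {
    int id = *(int *)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        if (next_task >= NUM_TASKS) {     /* queue drained: worker retires */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        int task = queue[next_task++];    /* take one pending task */
        pthread_mutex_unlock(&lock);
        perform(id, task);                /* one thread, one task at a time */
    }
}

int main(void) {
    pthread_t pool[NUM_THREADS];
    int ids[NUM_THREADS];
    for (int i = 0; i < NUM_TASKS; i++)
        queue[i] = i;                     /* enqueue the work */
    for (int i = 0; i < NUM_THREADS; i++) {
        ids[i] = i;
        pthread_create(&pool[i], NULL, worker_main, &ids[i]);
    }
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(pool[i], NULL);
    return 0;
}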
They can all be considered the same thing; it really depends on the context. A process, though, is usually an isolated entity that's managed by the operating system. A job is often more of an application-level term, or just some script that's executed to do a specific set of task(s). A task is often a part of a job - sometimes the only part.
A job is a unit of work that has been submitted by the user. It is usually associated with batch systems. A batch job might be a request to run multiple programs in succession [pg 144]. However, it can be assumed that a job is a request to run a single program. Hence, depending on the context, a job can be a program (we usually assume this) or a set of programs (e.g. batch systems) [pg 8].
A process is an active entity, which requires a set of resources, including a processor and special registers to perform its function. It is a single instance of an executable program. So from here, you can see the connection between a process and a program, hence, a job.
The Linux kernel internally represents processes as tasks [pg 742].
Source: Modern Operating Systems (3rd edition) by Tanenbaum, published by Pearson Education, Inc, 2009
A task represents the execution of a single process or multiple processes on a compute node. A collection of tasks that is used to perform a computation is known as a job. Jobs are used to reserve the resources required by tasks.
Source: Jobs and Tasks, http://msdn.microsoft.com/en-us/library/bb525214%28v=vs.85%29.aspx
Well...
This might not be as clear-cut as described here. It may very well depend on the operating system someone is dealing with.
For example, when compiling a DIGITAL Equipment OSF/1 kernel (later known as Tru64 UNIX), when that Unix was still extant, at the end of the nineties and the beginning of this century, the term TASK was dedicated to the number of parallel tasks the kernel was able to handle.
It was a fixed array of tasks the kernel could perform at a given moment.
Thus it was the sum of the processes it could spawn, as well as the internal tasks it had to do even if they were not seen as processes by ps. It was a very low-level count of actions allowed to the kernel on each NUMA node, not something accessible outside the kernel.
On the other hand, an earlier operating system like DEC VMS was known for having the job as its base OS unit (you interactively logged in under a job), executing possibly (depending on system and account parameters and privileges) many processes at a time, with an image (an executable) occupying a process and, most of the time, multiple threads at a time (the OS took care of multithreading by itself).
So, there, the job was not application-related but really OS-related.
Somewhat similarly, Windows, which does not natively support fork() as a lightweight process creator, tends to create processes (using a spawn primitive, CreateProcess, that looks very much like the one that existed on VMS / OpenVMS 40 years ago) that are heavier than the Unix ones. Here we have the same word (process) describing, in OS terms, two realities that are quite different: a Windows process tends to be closer to a VMS job than to a true Unix process.
As I have not configured/built any Unix kernel since Tru64 UNIX, I am not able to discuss the TASK kernel parameter of a Debian or Linux OS, if any. It might be interesting if someone with inner knowledge of the task limits of those kinds of OS could explain this concept in these systems further.
To conclude: task, process, job, spawn, fork, thread... the more you dig into different OSes, the more varieties you get and the more possibly contradictory definitions you face.
Gilles
[non-native English speaker, pardon my English].

Relationship between a kernel thread and a user thread

Is there a relationship between a kernel thread and a user thread?
Some operating system textbooks say that they "map one (many) user thread(s) to one (many) kernel thread(s)". What does "map" mean here?
When they say map, they mean that each kernel thread is assigned to a certain number of user mode threads.
Kernel threads are used to provide privileged services to applications (such as system calls). They are also used by the kernel to keep track of everything running on the system, how much of which resources is allocated to which process, and to schedule them.
If your applications make heavy use of system calls, the more user threads per kernel thread, the slower your applications will run. This is because the kernel thread becomes a bottleneck, since all system calls pass through it.
On the flip side though, if your programs rarely use system calls (or other kernel services), you can assign a large number of user threads to a kernel thread without much performance penalty, other than overhead.
You can increase the number of kernel threads, but this adds overhead to the kernel in general, so while individual threads will be more responsive with respect to system calls, the system as a whole will become slower.
That is why it is important to find a good balance between the number of kernel threads and the number of user threads per kernel thread.
http://www.informit.com/articles/printerfriendly.aspx?p=25075
Implementing Threads in User Space
There are two main ways to implement a threads package: in user space and in the kernel. The choice is moderately controversial, and a hybrid implementation is also possible. We will now describe these methods, along with their advantages and disadvantages.
The first method is to put the threads package entirely in user space. The kernel knows nothing about them. As far as the kernel is concerned, it is managing ordinary, single-threaded processes. The first, and most obvious, advantage is that a user-level threads package can be implemented on an operating system that does not support threads. All operating systems used to fall into this category, and even now some still do.
All of these implementations have the same general structure, which is illustrated in Fig. 2-8(a). The threads run on top of a run-time system, which is a collection of procedures that manage threads. We have seen four of these already: thread_create, thread_exit, thread_wait, and thread_yield, but usually there are more.
When threads are managed in user space, each process needs its own private thread table to keep track of the threads in that process. This table is analogous to the kernel's process table, except that it keeps track only of the per-thread properties, such as each thread's program counter, stack pointer, registers, state, etc. The thread table is managed by the run-time system. When a thread is moved to ready state or blocked state, the information needed to restart it is stored in the thread table, exactly the same way as the kernel stores information about processes in the process table.
When a thread does something that may cause it to become blocked locally, for example, waiting for another thread in its process to complete some work, it calls a run-time system procedure. This procedure checks to see if the thread must be put into blocked state. If so, it stores the thread's registers (i.e., its own) in the thread table, looks in the table for a ready thread to run, and reloads the machine registers with the new thread's saved values. As soon as the stack pointer and program counter have been switched, the new thread comes to life again automatically. If the machine has an instruction to store all the registers and another one to load them all, the entire thread switch can be done in a handful of instructions. Doing thread switching like this is at least an order of magnitude faster than trapping to the kernel and is a strong argument in favor of user-level threads packages.
However, there is one key difference with processes. When a thread is finished running for the moment, for example, when it calls thread_yield, the code of thread_yield can save the thread's information in the thread table itself. Furthermore, it can then call the thread scheduler to pick another thread to run. The procedure that saves the thread's state and the scheduler are just local procedures, so invoking them is much more efficient than making a kernel call. Among other issues, no trap is needed, no context switch is needed, the memory cache need not be flushed, and so on. This makes thread scheduling very fast.
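As an illustration of such a user-space switch, here is a minimal C sketch using the (obsolescent but still widely available) ucontext API on Linux/glibc (an assumption about the platform, not part of the text above): the "run-time system" saves one thread's registers and loads another's without any trap into the kernel.

/* User-space thread switch: save/restore PC, SP, and registers via ucontext. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;
static char thread_stack[64 * 1024];   /* the user thread's private stack */

static void user_thread(void) {
    printf("user thread: running\n");
    /* "thread_yield": save our context, reload main's, all in user space */
    swapcontext(&thread_ctx, &main_ctx);
    printf("user thread: resumed\n");
}

int main(void) {
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp = thread_stack;
    thread_ctx.uc_stack.ss_size = sizeof thread_stack;
    thread_ctx.uc_link = &main_ctx;            /* return here when it ends */
    makecontext(&thread_ctx, user_thread, 0);

    printf("main: switching to user thread\n");
    swapcontext(&main_ctx, &thread_ctx);       /* store PC/SP, load thread's */
    printf("main: back, switching again\n");
    swapcontext(&main_ctx, &thread_ctx);       /* resume where it yielded */
    printf("main: done\n");
    return 0;
}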
User-level threads also have other advantages. They allow each process to have its own customized scheduling algorithm. For some applications, for example, those with a garbage collector thread, not having to worry about a thread being stopped at an inconvenient moment is a plus. They also scale better, since kernel threads invariably require some table space and stack space in the kernel, which can be a problem if there are a very large number of threads.
Despite their better performance, user-level threads packages have some major problems. First among these is the problem of how blocking system calls are implemented. Suppose that a thread reads from the keyboard before any keys have been hit. Letting the thread actually make the system call is unacceptable, since this will stop all the threads. One of the main goals of having threads in the first place was to allow each one to use blocking calls, but to prevent one blocked thread from affecting the others. With blocking system calls, it is hard to see how this goal can be achieved readily.
The system calls could all be changed to be nonblocking (e.g., a read on the keyboard would just return 0 bytes if no characters were already buffered), but requiring changes to the operating system is unattractive. Besides, one of the arguments for user-level threads was precisely that they could run with existing operating systems. In addition, changing the semantics of read will require changes to many user programs.
Another alternative is possible in the event that it is possible to tell in advance if a call will block. In some versions of UNIX, a system call, select, exists, which allows the caller to tell whether a prospective read will block. When this call is present, the library procedure read can be replaced with a new one that first does a select call and then only does the read call if it is safe (i.e., will not block). If the read call will block, the call is not made. Instead, another thread is run. The next time the run-time system gets control, it can check again to see if the read is now safe. This approach requires rewriting parts of the system call library, is inefficient and inelegant, but there is little choice. The code placed around the system call to do the checking is called a jacket or wrapper.
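A sketch of such a jacket in C might look as follows (illustrative only; the thread_yield() mentioned in the comment is hypothetical, standing in for the run-time system's scheduler):

/* "Jacket"/"wrapper" around read(): probe with select() first, assuming POSIX. */
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>

static ssize_t jacketed_read(int fd, void *buf, size_t count) {
    fd_set readable;
    struct timeval no_wait = {0, 0};      /* poll: do not block in select */

    FD_ZERO(&readable);
    FD_SET(fd, &readable);
    if (select(fd + 1, &readable, NULL, NULL, &no_wait) > 0)
        return read(fd, buf, count);      /* safe: data is already buffered */

    /* Would block: a threads run-time would call its scheduler here,
       e.g. thread_yield() (hypothetical), and retry later. */
    return -1;
}

int main(void) {
    char buf[64];
    ssize_t n = jacketed_read(STDIN_FILENO, buf, sizeof buf);
    if (n >= 0)
        printf("read %zd bytes without blocking\n", n);
    else
        printf("read would block; would switch to another thread\n");
    return 0;
}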
Somewhat analogous to the problem of blocking system calls is the problem of page faults. We will study these in Chap. 4. For the moment, it is sufficient to say that computers can be set up in such a way that not all of the program is in main memory at once. If the program calls or jumps to an instruction that is not in memory, a page fault occurs and the operating system will go and get the missing instruction (and its neighbors) from disk. The process is blocked while the necessary instruction is being located and read in. If a thread causes a page fault, the kernel, not even knowing about the existence of threads, naturally blocks the entire process until the disk I/O is complete, even though other threads might be runnable.
Another problem with user-level thread packages is that if a thread starts running, no other thread in that process will ever run unless the first thread voluntarily gives up the CPU. Within a single process, there are no clock interrupts, making it impossible to schedule threads in round-robin fashion (taking turns). Unless a thread enters the run-time system of its own free will, the scheduler will never get a chance.
One possible solution to the problem of threads running forever is to have the run-time system request a clock signal (interrupt) once a second to give it control, but this, too, is crude and messy to program. Periodic clock interrupts at a higher frequency are not always possible, and even if they are, the total overhead may be substantial. Furthermore, a thread might also need a clock interrupt, interfering with the run-time system's use of the clock.
Another, and probably the most devastating argument against user-level threads, is that programmers generally want threads precisely in applications where the threads block often, as, for example, in a multithreaded Web server. These threads are constantly making system calls. Once a trap has occurred to the kernel to carry out the system call, it is hardly any more work for the kernel to switch threads if the old one has blocked, and having the kernel do this eliminates the need for constantly making select system calls that check to see if read system calls are safe. For applications that are essentially entirely CPU bound and rarely block, what is the point of having threads at all? No one would seriously propose computing the first n prime numbers or playing chess using threads because there is nothing to be gained by doing it that way.
User threads are managed in user space; that means scheduling, switching, etc. are not done by the kernel.
Since, ultimately, the OS kernel is responsible for context switching between "execution units", your user threads must be associated (i.e., "mapped") to a kernel-schedulable object - a kernel thread†1.
So, given N user threads, you could use N kernel threads (a 1:1 map). That allows you to take advantage of the kernel's hardware multi-processing (running on multiple CPUs) and be a pretty simplistic library - basically just deferring most of the work to the kernel. It does, however, make your app portable between OSes, as you're not directly calling the kernel thread functions. I believe that POSIX Threads (pthreads) is the preferred *nix implementation, and that it follows the 1:1 map (making it virtually equivalent to a kernel thread). That, however, is not guaranteed, as it is implementation-dependent (a main reason for using pthreads would be portability between kernels).
Or, you could use only 1 kernel thread. That would allow you to run on non-multitasking OSes, or to be completely in charge of scheduling. Windows' User Mode Scheduling is an example of this N:1 map.
Or, you could map to an arbitrary number of kernel threads - an N:M map. Windows has Fibers, which would allow you to map N fibers to M kernel threads and cooperatively schedule them. A thread pool could also be an example of this - N work items for M threads.
†1: A process has at least 1 kernel thread, which is the actual execution unit. Also, a kernel thread must be contained in a process. OSes schedule the thread to run - not the process.
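On Linux with NPTL (an assumption about the platform), the 1:1 map can be observed directly: each pthread reports a distinct kernel thread ID via gettid() (glibc 2.30+, needs _GNU_SOURCE), while all of them share one PID. A small probe:

/* Probe the 1:1 map: one PID, one kernel TID per pthread. Compile with -pthread. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

static void *report(void *arg) {
    (void)arg;
    /* same PID for every thread, distinct kernel TID per thread */
    printf("pid = %d, kernel tid = %d\n", (int)getpid(), (int)gettid());
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, report, NULL);
    pthread_create(&t2, NULL, report, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    report(NULL);   /* the main thread is a kernel thread too */
    return 0;
}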
This is a question about thread library implementation.
In Linux, a thread (or task) can run in user space or in kernel space. A process enters kernel space when it asks the kernel to do something via a syscall (read, write, or ioctl).
There are also so-called kernel threads that always run in kernel space and do not represent any user process.
According to Wikipedia and Oracle, user-level threads are actually in a layer mounted on the kernel threads; not that kernel threads execute alongside user-level threads but that, generally speaking, the only entities that are actually executed by the processor/OS are kernel threads.
For example, assume that we have a program with 2 user-level threads, both mapped to (i.e. assigned) the same kernel thread. Sometimes, the kernel thread runs the first user-level thread (and it is said that currently this kernel thread is mapped to the first user-level thread) and some other times the kernel thread runs the second user-level thread. So we say that we have two user-level threads mapped to the same kernel thread.
As a clarification:
The core of an OS is called its kernel, so the threads at the kernel level (i.e. the threads that the kernel knows of and manages) are called kernel threads, calls to the OS core for services can be called kernel calls, and so on. The only definite relation between "kernel" things is that they are strongly related to the OS core, nothing more.