I'm currently researching threads in the context of the operating system and I'm unsure if a thread is a set sequence of instructions that can be repeatedly executed or if it is filled and replaced with new instructions by the user or the operating system.
Thanks a bundle!
-Tom
I'm not quite sure what you mean - the compiled instructions for a program are stored in memory and are not changed at runtime (at least for languages which are not JIT-compiled).
A thread is an entirely separate concept from the code itself. A thread gives you the ability to be running at "two places at once" in the code. At a conceptual level, a thread is simply a container for the context that you need at any point in the execution of some code. This means that each thread has a call stack and a set of registers (which are either actually stored in the registers of a processor if the thread is running, or elsewhere if the thread is paused).
Almost all thread libraries work such that a new thread will execute some user-defined function and will then exit. This function can be long-running, just like main() (which is the function executed by the first thread in your process).
If the threads are supported by the OS (i.e. they are not "green threads"/"fibers"), they will exit by calling an OS API which tells the OS it can deallocate any data it holds that is associated with that thread.
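For example, here is a minimal sketch using POSIX threads (the worker function and its message are made up for illustration): the new thread gets its own stack and register context, runs one user-defined function, and exits when that function returns.

#include <pthread.h>
#include <stdio.h>

/* User-defined function run by the new thread; when it returns
   (or calls pthread_exit), the thread is finished. */
static void *worker(void *arg) {
    printf("hello from %s\n", (const char *)arg);
    return NULL;  /* equivalent to pthread_exit(NULL) */
}

int main(void) {
    pthread_t tid;
    /* The new thread gets its own stack and register context but
       shares the program's code and heap with main(). */
    pthread_create(&tid, NULL, worker, "the second thread");
    pthread_join(tid, NULL);  /* wait until the worker has exited */
    return 0;
}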
Sometimes, abstractions are built on top of this mechanism such that a thread or pool of threads will execute a function which simply loops over a queue of tasks to run, but the fundamental mechanism is the same. However, these abstractions are provided by user libraries built on top of the OS threading mechanisms, not by the OS itself.
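As a rough sketch of that pattern (the queue, its size, and the function names here are invented for illustration; real thread pools add shutdown handling and a growable queue), each pool thread just runs an ordinary function that loops over the task queue:

#include <pthread.h>

/* Hypothetical fixed-size task queue -- purely illustrative. */
typedef void (*task_fn)(void *);
typedef struct { task_fn fn; void *arg; } task_t;

static task_t queue[64];
static int head, tail;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

void submit(task_fn fn, void *arg) {
    pthread_mutex_lock(&lock);
    queue[tail++ % 64] = (task_t){ fn, arg };  /* no overflow check in this sketch */
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

/* Each pool thread runs this ordinary function: it loops forever,
   pulling tasks off the queue and calling them. */
void *pool_worker(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail)
            pthread_cond_wait(&nonempty, &lock);
        task_t t = queue[head++ % 64];
        pthread_mutex_unlock(&lock);
        t.fn(t.arg);  /* run the task outside the lock */
    }
    return NULL;
}

int main(void) {
    pthread_t workers[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&workers[i], NULL, pool_worker, NULL);
    /* submit(fn, arg) can now be called from anywhere; the tasks run
       on the pool threads. Joining worker 0 just keeps main alive. */
    pthread_join(workers[0], NULL);
    return 0;
}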
Related
On an AUTOSAR realtime Operating System (OS), the software architecture is layered separately (user space, system-call interface, kernel space). The switching between user context and kernel context is handled by hardware-specific infrastructure, and typically the context-switching handler is written in assembly code.
IBM® Rational® Test RealTime v8.0.1 (RTRT) currently treats embedded assembly code as described in the Q&A below.
https://www.ibm.com/support/pages/how-treat-embedded-assembly-code ( ** )
The RTRT tool uses code-insertion technology (technically known as the instrumentation process) to insert its own code to measure code coverage of the system under test.
In my case, the OS has a fully pre-emptive design and has no termination points. As a result, the OS always runs unless power is lost. If there is no work, the OS sleeps (normally an idle state, doing nothing). If any unexpected error or exception occurs, the OS shuts down and enters an infinite loop. In other words, the OS is always running.
I learnt from ( ** ) and made sure that context switching works correctly.
But I don't know how to get RTRT to finish its postprocessing (consisting of attolcov and attolpostpro) in the right way. Note that the OS has already worked correctly throughout all my tasks, as confirmed with a debugger: the SHUTDOWN OS procedure executed correctly and the OS ended up in the infinite loop (such as while(1){};).
After RTRT ends all its processes, the coverage report for the OS module is still empty.
Based on the IBM guideline for RTRT:
https://www.ibm.com/developerworks/community/forums/atom/download/attachment_14076432_RTRT_User_Guide.pdf?nodeId=de3b0048-968c-4111-897e-b73654af32af
RTRT provides two breakpoints to mark the logging point (priv_writeln) and termination point (priv_close) of its process.
I already tried to drive execution from the infinite loop (my OS) to priv_close (RTRT) by manipulating the PC register and all context-switching registers with the Lauterbach debugger, but the RTRT coverage report was still empty even though no errors occurred. No errors meant that the context switch from kernel space to user space worked and that the main() function returned correctly.
Solved the problem.
It definitely came from the context-switching process of the operating system.
In my case, I did a RAM dump to see what the user-context memory looked like (before starting the OS).
After that, I backed up all the context areas and restored them, in the exact order, when coming back from Sleep or the infinite loop.
This way, RTRT knows the return point and can reach its own main() function's _exit to finish the report-generation process.
Finally, the code coverage report has been generated.
I know that an interrupt causes the OS to switch a CPU away from its current task and to run a kernel routine. In this case, the system has to save the current context of the process running on the CPU.
However, I would like to know whether or not a context switch occurs when any random process makes a system call.
I would like to know whether or not a context switch occurs when any random process makes a system call.
Not precisely. Recall that a process can only make a system call if it's currently running -- there's no need to make a context switch to a process that's already running.
If a process makes a blocking system call (e.g., sleep()), there will be a context switch to the next runnable process, since the current process is now sleeping. But that's another matter.
There are generally two ways to cause a context switch: (1) a timer interrupt invokes the scheduler, which forcibly makes a context switch, or (2) the process yields. Most operating systems have a number of system services that will cause the process to yield the CPU.
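As a very rough sketch of both paths (this is made-up kernel pseudocode in C, not taken from any real OS; current, schedule(), and the field names are hypothetical):

enum pstate { READY, RUNNING, SLEEPING };

struct process {
    enum pstate state;
    long wakeup_time;
    int  time_slice;
};

struct process *current;        /* the process now running on this CPU */

long now(void);                 /* current time in ticks (stub) */
void schedule(void);            /* pick the next runnable process and switch to it (stub) */

/* Path (2): a blocking system call -- the caller voluntarily gives up the CPU. */
void sys_sleep(long ticks) {
    current->wakeup_time = now() + ticks;
    current->state = SLEEPING;  /* no longer runnable */
    schedule();
}

/* Path (1): the timer interrupt -- preemption forced on a process that never yields. */
void timer_interrupt(void) {
    if (--current->time_slice == 0) {
        current->state = READY; /* still runnable, just preempted */
        schedule();
    }
}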
Well, I get your point. First, let me clear up a very basic idea about system calls.
When a process/program makes a syscall, it traps into the kernel to invoke the syscall handler. The kernel stack is loaded (via the TSS) and control jumps through the syscall function table.
It's really much the same as running a different part of that program itself; the only major change is that the kernel plays a role here, and that piece of code is executed in ring 0.
Now to your question: "what will happen if a context switch happens when a random process is making a syscall?"
Well, nothing special will happen. Things will work the same way as they did before. It's just that, instead of a normal address, that random process's TSS will hold an address pointing to the kernel stack and the address of the syscall function table.
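To make the "syscall function table" idea concrete, here is a toy dispatch sketch in C (the handler names and table are hypothetical, but the shape matches most kernels): the dispatcher runs in ring 0 on the kernel stack, yet still in the address space of the calling process, so no context switch is required.

typedef long (*syscall_fn)(long, long, long);

/* Hypothetical handlers -- stubs for illustration only. */
static long sys_read_impl(long fd, long buf, long len)  { (void)fd; (void)buf; (void)len; return 0; }
static long sys_write_impl(long fd, long buf, long len) { (void)fd; (void)buf; (void)len; return 0; }

/* The syscall function table: indexed by syscall number. */
static syscall_fn syscall_table[] = {
    sys_read_impl,   /* syscall 0 */
    sys_write_impl,  /* syscall 1 */
};

/* Called from the trap entry code after switching to the kernel stack;
   still running on behalf of the calling process. */
long syscall_dispatch(long nr, long a0, long a1, long a2) {
    if (nr < 0 || nr >= (long)(sizeof syscall_table / sizeof syscall_table[0]))
        return -1;  /* a real kernel would return ENOSYS */
    return syscall_table[nr](a0, a1, a2);
}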
I am having trouble with the JVM exiting immediately in various new applications I wrote that spawn threads through the Scala 2.10 Futures + Promises framework.
It seems that at least with the default execution context, even if I'm using blocking, e.g.
future { blocking { /* work */ }}
no non-daemon thread is launched, and therefore the JVM thinks it can immediately quit.
A crude workaround is to launch a dummy Thread instance which just waits, but then I also need to make sure that this thread stops when the work is done.
So how do I force them to run on non-daemon threads?
Looking at the default ExecutionContext attached to ExecutionContext.global, it's of the fork-join variety, and the ThreadFactory it uses marks its threads as daemon threads. If you want to work around this, you could use a different ExecutionContext, one you set up yourself. If you still want the fork-join variety (and you probably do, as it scales the best), you should be able to look at what they are doing in ExecutionContextImpl via this link and create something similar. Or just use a cached thread pool via Executors.newCachedThreadPool, as that won't shut down immediately before your futures complete.
spawn processes
If this means processes and not just tasks, then scala.sys.process spawns non-daemon threads to run OS processes.
Otherwise, if you're creating a bunch of tasks, this is what Future.sequence helps with. Then just Await ready (Future sequence List(futures)) on the main thread.
This question came to mind while I was studying process scheduling.
How does the OS execute and control the execution of binary, compiled files? I thought maybe the OS copies a part of the binary to some memory location, jumps there, comes back after executing that block, and executes the next one. But then it wouldn't have any control over it (e.g. the program could jump anywhere and never come back).
In the JVM's case this makes perfect sense: the VM interprets each instruction. But in the case of binary files the instructions are real, CPU-executable instructions, so I don't think the OS acts like a VM.
It does exactly that. The operating system, in some order,
creates an entry in the process table
creates a virtual memory space for the process
loads the program code in the process memory
points the process instruction pointer to the process entry point
creates an entry in the scheduler and sets the process thread ready for execution.
Concurrency is not handled by the program being split into blocks. Switching between tasks is done via interrupts: before a process is given the CPU, a timer is set up. When the timer fires, the CPU registers an interrupt, pushes the instruction pointer onto the stack, and jumps to the interrupt handler defined by the operating system. This handler stores the CPU state in memory, swaps the virtual memory table, and restores some other thread that is ready for execution. The same swap occurs if the thread must pause for some other reason (waiting for user / disk / network...) or yields.
http://en.wikipedia.org/wiki/Multitasking#Preemptive_multitasking.2Ftime-sharing
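A very rough sketch of what such a handler does (the structure and function names here are invented; the actual register save/restore and page-table switch are architecture-specific and written in assembly):

struct cpu_context {
    unsigned long regs[16];    /* general-purpose registers */
    unsigned long sp, ip;      /* stack pointer, instruction pointer */
    unsigned long page_table;  /* root of the process's virtual memory mapping */
};

struct thread {
    struct cpu_context ctx;
    int state;                 /* READY, RUNNING, BLOCKED, ... */
};

/* Hypothetical low-level helpers (assembly in a real kernel). */
void save_registers(struct cpu_context *ctx);
void restore_registers(struct cpu_context *ctx);   /* does not return here */
void load_page_table(unsigned long root);
struct thread *pick_next_ready_thread(void);       /* the scheduler */

void on_timer_interrupt(struct thread *current) {
    save_registers(&current->ctx);                 /* store the CPU state in memory */
    struct thread *next = pick_next_ready_thread();
    load_page_table(next->ctx.page_table);         /* swap the virtual memory table */
    restore_registers(&next->ctx);                 /* resume 'next' where it left off */
}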
Note that relying on the process yielding the CPU is possible but unreliable (the process might not yield, preventing other processes from running).
http://en.wikipedia.org/wiki/Multitasking#Cooperative_multitasking.2Ftime-sharing
Security is handled by switching the CPU into protected mode, where the application code cannot run some instructions (so jumping around randomly is mostly harmless). See the link provided by @SkPhilipp.
Note that a modern JVM does not interpret each instruction (that would be slow). Instead it compiles to native code and runs that, or (in the case of just-in-time compilation) interprets at first but compiles the "hot spots" (the code that runs often enough).
As I understand it, every OS needs to have some mechanism to periodically check whether it should run some tasks and suspend others.
One way would be some kind of timer on whose expiry the OS will check if it should run/suspend some task.
Generally, say on an ARM system, that would probably be some kind of ISR.
My real question is that I have only been able to visualize this, not actually see it anywhere. Could someone point to some free/open RTOS code where I can actually see the code that handles preemption/scheduling?
freertos.org. The entire OS is open source, and right there for you to see. And there are dozens of different ports to compare and contrast. For the context switch code, you will want to look in the ports directory, in any one of many files called port.c, port.asm, etc. And yes, in the case of FreeRTOS all context switches are performed in interrupts (a tick timer ISR, or any other SysCall interrupt).
A context switch is very-much processor specific, as the list of registers to save and the assembly code to save them varies between processor families, and sometimes within a given family. As a result each port has a separate file for this code.
The scheduling (selection of next task to run), on the other hand, is done in a file called tasks.c, which is common to all ports and references the port-specific code.
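For orientation, here is a heavily simplified sketch of how the Cortex-M port's tick handler hands off to the scheduler. This is paraphrased, not the real source (the real handler also masks interrupts around the tick update), so check port.c and tasks.c for the actual code:

void xPortSysTickHandler(void) {
    /* Advance the tick count; returns pdTRUE if a context switch is needed
       (e.g. a higher-priority task was unblocked by this tick). */
    if (xTaskIncrementTick() != pdFALSE) {
        /* Pend the PendSV exception. Its handler (assembly in port.c)
           saves the current task's registers, calls vTaskSwitchContext()
           in tasks.c to pick the next task, then restores that task's
           registers and returns into it. */
        portNVIC_INT_CTRL_REG = portNVIC_PENDSVSET_BIT;
    }
}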
It is not the case that an RTOS simply context switches periodically - that is how most GPOS work. In an RTOS the scheduler runs on any scheduling event. These include the system tick, but also a message post, an event trigger, a semaphore give, or a mutex unlock, for example.
On ARM Cortex-M the CMSIS 3.x includes an RTOS API (intended primarily for RTOS developers rather than a complete RTOS itself), the source for this will include a context switching mechanism.
If you want a detailed description for a simple RTOS you might consider reading µC/OS-II: The Real-Time Kernel or the slightly more sophisticated µC/OS-III: The Real-Time Kernel.
FreeRTOS is increasingly popular, though perhaps a little unconventional architecturally. A more complete (in that it is not just a scheduling kernel but a more complete OS) and very powerful option is eCos.
You can take a look at xv6.
It's not an RTOS; it is just a skeleton OS (based on V6 Unix) meant for academic purposes.
In the xv6 book, take a look at chapter 4: there is an explanation, along with the code, of how scheduling is done on a small OS like xv6. xv6 puts a process to sleep when it is waiting for disk or some other I/O operation, and there is also a timer interrupt every 100 ms to switch processes.
There is also an explanation, with code, of how the context switch takes place, what information is saved (the context frame of a process), and how the switch from user to kernel mode happens when the scheduler has to run.
The best part is that the amount of reading you have to do to understand these concepts is very small, unlike some OS reference books :) The code is relatively small; you can in fact run xv6 on QEMU, set breakpoints in sched, swtch, and other functions, and actually see the information saved during a context switch. (How to run xv6 is in this link.)
You don't have to read the previous chapters to understand chapter 4; there isn't much dependency. xv6 uses struct proc to identify a process, ptable for all the processes currently running in the system, and proc->context, which refers to the state the process is in (register values etc.) and is saved by the scheduler.
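To give a flavour of what you'll find there, here is roughly what the per-process saved context and the scheduler loop look like in the x86 version of xv6. This is paraphrased from memory, so read proc.h, proc.c and swtch.S for the real thing:

/* proc.h: the registers swtch() saves and restores for a context switch
   (the rest are preserved by the calling convention or the trap frame). */
struct context {
  uint edi;
  uint esi;
  uint ebx;
  uint ebp;
  uint eip;
};

/* proc.c: each CPU runs this loop forever. It picks a RUNNABLE process
   from ptable, switches to its address space, and calls swtch() (assembly
   in swtch.S) to save the scheduler's context and load the process's. */
void scheduler(void) {
  struct proc *p;
  for (;;) {
    sti();                                /* enable interrupts on this CPU */
    acquire(&ptable.lock);
    for (p = ptable.proc; p < &ptable.proc[NPROC]; p++) {
      if (p->state != RUNNABLE)
        continue;
      switchuvm(p);                       /* switch to the process's page table */
      p->state = RUNNING;
      swtch(&cpu->scheduler, p->context); /* runs p until it yields back */
      switchkvm();                        /* back on the kernel page table */
    }
    release(&ptable.lock);
  }
}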
Cheers :)