Is a cursor necessary in a multitasking system?

Does a multitasking system require a mouse cursor so that the user can interact with more than one task/process at a time?

You don't need a mouse to have a multitasking system. The Wikipedia article on multitasking has some history of multitasking systems; they're a lot older than window environments and mice. The first multitasking systems ran batch jobs: you submit a task (by loading up a deck of punched cards, for example) and wait for it to finish; there could be multiple tasks in progress at any given time.
Later systems had user interaction through a command line; for example, in a purely textual unix user interface, you can use job control to run commands in the background, and control which program you get to interact with.
Even in a typical window environment, the application that has the focus (i.e. the application that you type into) isn't the only one that can get CPU time. A window environment on a multitasking operating system lets you switch to another window while an application is computing something. Additionally, pretty much any multitasking system has a bunch of tasks ready in the background, only running when some event happens (a hardware event, a packet received over the network, a timer, …). So even when there are windows and a mouse, there's no particular relationship between them and multitasking.

There's nothing about a multi-tasking system that requires any kind of involvement by the user.
To give the banal answer first: my system, a Windows 7 64-bit machine, could start up Notepad and appear to be single-process in the sense that I'm only running one program, but obviously that's far from the truth.
At the other end of the scale, you could have a system where the concept of a mouse cursor wouldn't make sense at all, let alone a display. For instance, a mainframe would fit this end of the scale: the system doesn't really have a user interface or a mouse, but is still very much a multi-user, and thus multi-process, system.
I guess my answer is really: what is your actual question?

Related

Sparc V8 RTOS Query

In the SPARC V8 architecture we have some N register windows. Generally, an RTOS pushes and pops registers during context switching. Is it possible (or has it already been done) to use each of these register windows for one thread? This would make switching to the next thread as simple as shifting the register window and pushing and popping the PSR, thus saving context-switch time and enabling a higher context-switching frequency.
Maybe, it depends on what you mean by threads and how many.
The register windows are built around the idea of function calls and returns, implemented in hardware and software traps with well-defined operation. If your threads are just functions that get called in a round-robin fashion, then yes, they will get switched in this manner, as will any other functions called from your "thread". That said, once you have more functions than register windows, they will start getting paged in and out of the register file.
From the perspective of OS and user code, you don't control what happens when you enter and leave a register window, as that is implemented as a trap, probably in the firmware as I understand it. If you change how that works, you aren't running a SPARC anymore, because its behavior there is defined in the spec.
The whole point of register windows has always been fast context switching, but other aspects of SPARC hardware, such as the TLB, can get in the way of that. In the context of a SPARC MCU with a flat address space, though, it would be really fast.

Is it theoretically possible to run software parallel to the OS?

Could you run software in conjunction with the OS? It might not be very practical, but I am curious whether there are any limitations that make this impossible, without regard to performance, etc. The way I visualize the system functioning is the same way the OS gives the illusion that multiple programs execute at the same time in order to multitask, when in reality only one program runs at a time; but in this case it would not be just the OS and its processes executing on the processor, but a separate program and the OS at the same time. The processor architecture I would base this design on is x86.
At its core, a multitasking OS is nothing more than a task switcher. There are two kinds of multitasking, which usually exist in parallel: co-operative (like Windows 3.1), where the program is responsible for sharing resources (either "I'm waiting for this, so do something else in the meantime" or "Give someone else a chance for a while"), and preemptive, where the OS steps in and says "You've had enough time, now give someone else a chance."
Even the most primitive CPUs have interrupts. Something happens (a key is pressed or a timer goes off) and a function is called to do something before returning to what it was doing. The return from interrupt command restores the registers and returns to the exact instruction that was about to be executed when the interrupt happened.
However, it does not have to return to the same place. When entering the interrupt routine, the return address and registers are on the stack. Take them off and save them somewhere referenced by the current task. Now take those you saved earlier from a different task and put those on the stack (return address last). Now returning from the interrupt will continue executing the task from earlier. You might also want to set a timer before you leave to set a time limit before switching tasks again.
That's the simplest form of task-switching as you describe.
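As a rough illustration, that save-and-swap idea can be mimicked in ordinary Python, with a simulated program counter standing in for the saved registers and return address, and a function standing in for the timer interrupt (all names here are illustrative, not from any real OS):

```python
from collections import deque

class Task:
    """Simulated task: its 'saved context' is just a name and a program counter."""
    def __init__(self, name):
        self.name = name
        self.pc = 0  # simulated return address / program counter

def timer_interrupt(current, ready):
    """Mimic the interrupt handler described above: save the outgoing task's
    context, pick a different task's saved context, and 'return' into it."""
    ready.append(current)    # save the outgoing task's context
    return ready.popleft()   # 'return from interrupt' into the next task

def run(tasks, slices):
    """Round-robin the tasks for a fixed number of time slices."""
    ready = deque(tasks)
    current = ready.popleft()
    trace = []
    for _ in range(slices):
        trace.append((current.name, current.pc))
        current.pc += 1                            # the task 'executes'
        current = timer_interrupt(current, ready)  # the timer fires
    return trace

print(run([Task("A"), Task("B")], 4))
# → [('A', 0), ('B', 0), ('A', 1), ('B', 1)]
```

Each task picks up exactly where it left off, which is the whole trick: the "return from interrupt" just returns into somebody else's saved state.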

Operating System Overhead [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am working on a time-consuming computation algorithm and want to run it as fast as possible.
How much does the presence of an operating system (Windows or Linux) underneath the running algorithm slow it down?
Is there any example of an OS implemented specifically to run a predefined program?
First of all, I'd like to mention that I am also working on a very similar time-consuming computation algorithm! So much in common here, or maybe just a coincidence...
Now, let's proceed to the answer:
How much your process (the running algorithm) is affected depends on the daemons and other user programs waiting in the ready queue, and on the scheduling algorithm your OS applies. Daemons are generally always running, and some system processes preempt lower-priority processes (possibly including yours, if it has a lower priority; system processes and daemons generally preempt all other processes). The mere presence of the OS (Windows or Linux), considering only the kernel, doesn't slow your process much, since the kernel is just the manager of all processes and tasks. But daemons and system processes are heavyweight, and they do affect your program significantly. I too wish we could just disable all the daemons, but they exist for the efficient working of the OS (mouse control, power management, etc.).
Just as an example: on Linux and Unix-based systems, the top command provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system.
So, if you run this command on a Linux system, you'll see all the heavy processes that are intensely consuming resources. Apart from your own memory-hungry process, you'll find several daemons, like powerd and moused, and other system processes, like Xorg and kdeinit4, which do affect user processes.
But one thing is clear: no single process or daemon will generally occupy more memory than your computation-intensive process; the ratio will be smaller, perhaps one-eighth or one-quarter.
UPDATE BASED ON COMMENTS:
If you're specifically looking to run the process on the native hardware without an OS, you have two choices.
The first is to develop the code in machine language, assembly, or another low-level language that runs your process directly on the hardware, without an OS to manage memory sections and without other system processes and daemons.
The second is to develop or use a very minimal OS comprising only what your algorithmic program/process requires. Such a minimal OS won't be a complete OS, and will therefore lack the daemons and the many system calls of major OSes like Windows, Linux, and Unix.
Nazar554 provided a useful link in the comment section; I'll just quote him:
if you really want to remove any possible overhead you can try:
BareMetal OS
In your case, it seems you prefer the first option. But you can achieve your task either way!
LATEST EDIT:
One more piece of feedback from my side, as I couldn't pin your requirements down precisely: it would be better to also ask this question on the Operating Systems Beta site, as there are several experts there answering queries about OS development and functionality. You'll receive stronger responses on every detail relevant to your topic that I might have missed.
Best wishes from my side...
The main idea behind giving the processor to a task is the same among all major operating systems. I've provided a diagram demonstrating it. First let me describe the diagram, then I'll answer your question.
Diagram Description
When an operating system wants to execute several tasks simultaneously, it cannot give the processor to all of them at once, because a processor can process only one operation at a time. Instead, the OS shares the processor among all tasks, time slot by time slot. In other words, each task is allowed to use the processor only during its own time slot, and it must give the processor back to the OS once its time slot has finished.
Operating systems use a dispatcher component to select a pending task and give it the processor. What differs among operating systems is how the dispatcher works. What does a typical dispatcher do? In simple words:
Pick next pending task from the queues based on a scheduling algorithm
Context switching
Decide where the removed task (from processor) should go
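As a sketch of those three duties, here is a toy round-robin dispatcher in Python (the task names and the `wants_io` flag are invented for illustration): tasks that block on I/O go to a separate waiting queue, and everything else returns to the back of the ready queue.

```python
from collections import deque

def dispatch(ready, io_wait, running):
    """One dispatcher pass: (1) pick the next pending task (round-robin),
    (2) 'context switch' to it, (3) decide where the preempted task goes."""
    # Duty 3 for the outgoing task: blocked tasks go to the I/O wait
    # queue; everything else goes to the back of the ready queue.
    if running is not None:
        if running.get("wants_io"):
            io_wait.append(running)
        else:
            ready.append(running)
    # Duties 1 and 2: pick the next task and make it the running one.
    return ready.popleft() if ready else None

ready = deque([{"name": "A"}, {"name": "B", "wants_io": True}, {"name": "C"}])
io_wait = deque()
running = None
order = []
for _ in range(5):
    running = dispatch(ready, io_wait, running)
    order.append(running["name"])
print(order)                          # → ['A', 'B', 'C', 'A', 'C']
print([t["name"] for t in io_wait])   # → ['B']
```

Note how B drops out of the rotation after its first slot: it is parked in the I/O queue, exactly the "separate queue for I/O-requesting tasks" mentioned further down in this answer.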
Answer to your question
How much presence (running algorithm under it) of Operating System (Windows or Linux) slows the process?
It depends on:
Dispatcher algorithm (i.e. which OS do you use)
Current load on the system (i.e. how many applications and daemons are running now)
What priority your task has (i.e. real-time priority, UI priority, regular priority, low, ...)
How much I/O your task is going to do (because I/O-requesting tasks are usually scheduled in a separate queue)
Excuse me for my English issues, because English isn't my native language
Hope it helps you
Try booting in single-user mode.
From debian-administration.org and debianadmin.com:
Run Level 1 is known as 'single user' mode. A more apt description would be 'rescue', or 'trouble-shooting' mode. In run level 1, no daemons (services) are started. Hopefully single user mode will allow you to fix whatever made the transition to rescue mode necessary.
I guess "no daemons" is not entirely true, with wiki.debian.org claiming:
For example, a daemon can be configured to run only when the computer is in single-user mode (runlevel 1) or, more commonly, when in multi-user mode (runlevels 2-5).
But I suppose single-user mode will surely kill most of your daemons.
It's a bit of a hack, but it may just do the job for you.

Is the change between kernel/user mode done by hardware or software?

I was just wondering whether the switch between the kernel mode and the user mode in an operating system is done by the hardware or the os itself.
I understand that when a user process wants to get into kernel mode it can make a system call and execute some kernel code. When the system call is made, the process goes into kernel mode and all memory becomes accessible, etc. For this to happen, I would assume that the interrupt handler needs to switch or alter the page table. Is this true? If not, how does the CPU know that it is running in kernel mode and does not need to page-fault when accessing restricted memory (inaccessible to the user process)?
Thanks!
The last answer is actually not true....
Changing to kernel mode doesn't go through 'real mode'. Actually, after the boot process finishes, the computer never goes back to real mode.
On normal x86 systems, changing to kernel mode involves calling 'sysenter' (after setting parameters in some registers), which causes a jump to a predefined address (saved in an MSR of the CPU) that was set when the computer booted; setting it can only be done from kernel mode, since writing that MSR is a 'privileged' operation.
So it basically involves executing a software instruction that the hardware responds to in the way it was configured while in kernel mode.
This is kind of a broad question; each hardware platform is going to do things slightly differently, but I think the basic answer is that it's done with software that leverages hardware facilities for memory protection, etc.
When a user process wants to do a system call, it executes a special CPU instruction, and the CPU switches from virtual mode (for user processes, has page tables specific to processes) to real mode (for the kernel) and jumps to the OS syscall handler. The kernel can then do what it likes.
CPU support for this is required. The CPU keeps track of which mode it is in, where the page tables are located, jumping the instruction pointer, etc. It is triggered by the user software doing the syscall, and is dependent on the kernel providing support for whatever it is trying to do. As with all computation, it's always both hardware and software. It cannot be done solely with software, however, because then there would be no way to prevent a process making a syscall from abusing the privileges it gains, e.g. it could start reading /etc/shadow.
Modern x86 computers have a special instruction just for doing system calls. Earlier x86 processors, and some current RISC ones, have an instruction to trigger an interrupt. Older architectures had other ways of switching control.
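You can poke at that user/kernel boundary yourself by issuing a raw system call. This sketch assumes Linux on x86-64, where syscall number 39 is getpid; the number is platform-specific, so treat it as purely illustrative:

```python
import ctypes
import os

# Assumption: Linux x86-64, where raw syscall number 39 is getpid.
libc = ctypes.CDLL(None, use_errno=True)
SYS_getpid = 39

raw_pid = libc.syscall(SYS_getpid)  # traps into the kernel directly
print(raw_pid == os.getpid())       # the libc wrapper takes the same path
```

Under the hood, `libc.syscall` executes exactly the "special instruction" described above (`syscall` on x86-64), and the kernel's handler runs in kernel mode before returning the result to the user process.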

How is multitasking implemented at the elementary level?

How is multitasking implemented at the basic level? To clarify my question: let's say we are given a C runtime with which to build an application that implements multitasking, and that can run only one task at a time on a single-core processor, say, by calling the main() function of this "multitasking" application.
How do standard OS kernels implement this? How does this change with multicore processors?
The OS sets an interrupt timer and lets the program run. Once the timer expires, control flow jumps to OS code for a context switch.
On the context switch, the OS saves the registers and supporting data of the current process and replaces them in the CPU with the data of the next process in the queue. Then it sets another interrupt timer and lets the next program run from where it was interrupted.
Also, a system call from the current process gives control to the OS, which decides whether it is time for a context switch (e.g. the process is waiting for an I/O operation).
The mechanics are transparent to programs.
Run. Switch. Repeat. :)
I've not done much work with multi-core processors, so I will refrain from attempting to answer that part of the query. However, with uniprocessors, two strategies come to mind when it comes to multi-tasking.
If I remember correctly, the x86 supports hardware task switching. (I've had minimal experience with this type of multi-tasking.) From what I recall, when the processor detects the conditions for a task switch, it automatically saves all the registers of the outgoing task into its Task State Segment (x86) and loads all the registers from the incoming task's Task State Segment. There are various caveats and limitations to this approach, such as the 'busy bit' being set and only being able to switch back to a 'busy task' under special conditions. Personally, I do not find this method particularly useful.
The more common solution I have seen is task switching in software. This can be broken down into cooperative task switching and pre-emptive task switching. If you are coding up a cooperative task-switching strategy, a task switch only occurs when the task voluntarily gives up the processor; in this strategy, you only need to save and load the non-volatile registers. If a pre-emptive strategy is chosen, then a task switch can occur either voluntarily or involuntarily; in this case, all the registers must be saved and loaded. When coding either scenario, you have to take extra care that you do not corrupt your register contents and that you set up your stack correctly, so that when you return from the task-switching code you are at the right place on the stack of the incoming task.
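The cooperative variety is easy to mimic with Python generators, where `yield` plays the role of the task voluntarily giving up the processor (the task names and step counts are invented for illustration):

```python
from collections import deque

def task(name, steps):
    """A cooperative task: after each unit of work it yields the CPU."""
    for i in range(steps):
        yield f"{name}{i}"  # voluntarily give up the processor

def cooperative_scheduler(tasks):
    """Round-robin over tasks; a task leaves the system when it returns."""
    ready = deque(tasks)
    trace = []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))  # resume the task until its next yield
            ready.append(t)        # it yielded: back of the ready queue
        except StopIteration:
            pass                   # task finished: drop it
    return trace

print(cooperative_scheduler([task("A", 2), task("B", 3)]))
# → ['A0', 'B0', 'A1', 'B1', 'B2']
```

The key property of the cooperative style shows up in the trace: a task that never yields would starve everyone else, which is exactly why pre-emptive systems add the timer interrupt described in the answer above.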
Hope this helps.