Computer dedicated to one program - operating-system

I would like to have a computer running only one program, so whenever the computer boots, it executes that program.
For example:
The computer board from a Tesla car, or common supermarket systems.
One example of how I would use this:
Develop a system to automate a house, so there would be a screen showing lights which can be turned on or off, and if the house loses power, when the power comes back the computer would reboot and display the light options again.
Do I have to build an OS for that?

A program that boots + runs on bare metal is called a "freestanding" program. It doesn't run under an OS, and includes everything it needs to manage the hardware, and includes all libraries it needs (statically linked).
It needs to do some of the same things an OS does (talk to hardware, install interrupt handlers, etc.) so in some respects you could call it an OS, but it's also just one program and doesn't necessarily provide any mechanism for running other programs.
The more bare-bones and light-weight the microcontroller is (and/or the program), the more obvious it is that it's just a program, not an OS. (e.g. if you don't do any dynamic memory allocation. Or you don't load any code from anywhere into RAM, just execute it from ROM).
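To make "freestanding" concrete, here is a minimal sketch in C, assuming a hypothetical memory-mapped GPIO register (the address and layout are made up for illustration; a real board's datasheet gives the actual ones):

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO output register. */
    #define GPIO_OUT (*(volatile uint32_t *)0x40020000u)

    /* Entry point reached directly from the reset vector / boot code:
       no OS underneath, no C runtime, and it never returns. */
    void _start(void)
    {
        for (;;) {
            GPIO_OUT ^= 1u;                        /* toggle an LED */
            for (volatile uint32_t i = 0; i < 100000u; i++)
                ;                                  /* crude busy-wait delay */
        }
    }

Built with something like gcc -ffreestanding -nostdlib, there is no printf, no malloc and no exit; everything the program needs, it carries with it.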
BTW, an OS kernel is a freestanding program. Not all freestanding programs are kernels, but a kernel has to be freestanding by the normal definitions of what a kernel is.
Also BTW, it's totally normal for an embedded system to run an OS, and have that OS start some specific programs. In fact the examples you cited do use OSes. So instead of writing all your own drivers, scheduling code, etc., you use an existing OS and write a program that runs under that OS.
Sometimes that OS is Linux, sometimes it's a light-weight real-time OS.
For a kiosk, sometimes that OS is even Windows. (Or in older systems, DOS, which is barely an OS.)

You should take a look at IncludeOS, which is made for exactly your purpose: it includes only what is needed.

Related

What can a computer do without an operating system?

An operating system is a program that controls all the other programs so that the user can manage his work.
What will happen if the OS ceases to exist?
Would the user still be able to perform some tasks or not?
The OS is not just needed to control other programs, but also to do all the heavy-lifting involved with dealing with the hardware, so the actual programs don't have to do it or even know about it.
The user can do whatever his programs allow him to do, while the programs need the operating system to function properly (or at all). If there were no OS, every program would have to do everything the OS was doing for it before (which is A LOT).
Among the many things an OS does, the most important for the user is that it allows several programs to run simultaneously without them interfering with each other.
Computers do not need operating systems. If the computer does not have an operating system, the application needs to perform the operating-system functions itself. Applications that act as their own operating system used to be more common than they are today. They are most common in real-time systems where the computer performs only one general function.

Is it possible to run two separate operating systems on a computer?

I imagine this would be accomplished by assigning some RAM and L3 cache to one OS, some to another, and having two hard drives and two monitors. I don't know if it's possible to do that at all, and if it is, how? A wrapper OS? Are there any functional examples?
I know that most advantages of such a system can be acquired by virtualization, but that is different than what I mean.
Theoretically, it is possible to have multiple operating systems running on a single machine, but on different cores: say one core running Windows and the other running a Linux distro. It is very hard to achieve, though, because each OS assumes it is the only king of the island and tries to rule everything, memory and devices alike. Eventually, without any exclusive locking, the two OSes will confuse the hardware or each other and crash.
So how is it even possible, theoretically? Through asymmetric multiprocessing (AMP): before executing operating system A, you hide the 2nd core, so that OS A assumes there is only one core present on the machine and sets up its environment for that core alone.
Once things are ready on this side, you ask operating system B to load up on the 2nd core, hiding the first core from it. And yes, you need a separate program besides the boot loader to do all of this work.
Now you have two OSes running, but what about memory? Devices? Yes, that's a major concern. One workaround I can see is to modify the kernels of OS A and OS B so that you can properly divide the system resources: tell OS A to use the lower 2GB of memory and treat the upper 2GB as unavailable, and modify OS B to use the upper 2GB.
The memory concern is resolved, but then it would be a little tricky to modify every device driver the same way.
I guess this is the main reason nobody tries this kind of experiment. It isn't worth it at all.
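To give a flavour of the memory half of that workaround: on Linux, the "use only the lower 2GB on one core" side could in principle be approximated with existing boot parameters instead of kernel modifications. A hypothetical boot line (paths are illustrative, and this is only half the problem, since OS B and the devices still need handling):

    kernel /boot/vmlinuz root=/dev/sda1 maxcpus=1 mem=2G

Here maxcpus=1 keeps Linux off the other core and mem=2G stops it from touching memory above 2GB.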
Outside of virtualization, it would really not be possible to do this on any current processor. When the processor receives an interrupt, which operating system handles it?

Do CPU and main memory need drivers to work?

Peripheral devices require drivers to work in a computer system (under an operating system).
Does a CPU need a driver to work?
Same question for main memory?
The answer is no.
The reason is that the motherboard comes with an (upgradable) BIOS, which takes care of making sure the CPU features function correctly (obviously, an AMD processor won't work on an Intel motherboard). You can upgrade the BIOS, but that should be avoided unless you have a good reason to.
Same goes for memory, it does not require a driver either.
Just so you know, if you have ever tried overclocking, you may have noticed that you can alter the way the RAM functions: ganged/unganged modes and so on. My point is that there is already an interface, established in code, that allows you to make changes in real time. Isn't that the very purpose of drivers, to be able to use a peripheral with an expected outcome?
On the other hand, peripheral devices are just extensions which the motherboard does not know how to handle, hence they need a set of instructions, i.e. drivers.
In a modern system both memory and the CPU require kernel mode code — as do devices — to function.
Memory requires management of virtual memory tables. The CPU requires maintenance of process control structures.
In the business, such code is not called a "driver".
Generally, one thinks of a device driver as being kernel mode code that responds to devices through the interrupt vector.
That said, on some systems there are "printer drivers" that do not fit that definition of driver.
In short, do memory and CPU have something called a "driver"? No.
Do they have something analogous to a driver? Yes.
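To illustrate the "analogous to a driver" bookkeeping: here is a sketch of a 32-bit x86 page-table entry, the kind of structure the kernel's virtual-memory code maintains. The bit layout follows the non-PAE format in the Intel manuals, but this is illustrative only (and C bitfield ordering is compiler-dependent):

    #include <stdint.h>

    /* One 32-bit x86 page-table entry (non-PAE), low bits first. */
    typedef union {
        uint32_t raw;
        struct {
            uint32_t present  : 1;  /* page is mapped in physical memory */
            uint32_t writable : 1;  /* writes are allowed */
            uint32_t user     : 1;  /* user-mode code may access it */
            uint32_t other    : 9;  /* accessed, dirty, cache bits, ... */
            uint32_t frame    : 20; /* physical frame number (addr >> 12) */
        } bits;
    } pte_t;

The kernel fills thousands of these in, and no interrupt vector is involved, which is part of why nobody calls this code a "driver".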

Is the kernel a special program that is always executing? And why are there CPU modes?

I am new to this OS stuff. Since the kernel controls the execution of all other programs and the resources they need, I think it should also be executed by the CPU. If so, where does it get executed? And if what the CPU executes is controlled by the kernel, then how does the kernel control the CPU when the CPU is executing the kernel itself?
It seems like a paradox to me... please explain. And by the way, I didn't get these CPU modes at all. If the kernel is controlling all the processes, why are there CPU modes? If they exist, are they implemented by the software (OS) or by the hardware itself?
Thanks.
A quick answer: on platforms like x86, the kernel has full control of the CPU's interrupt and context-switching abilities. So, although the kernel is not running most of the time, every so often it gets a chance to decide which program the CPU will switch to, and to let that program run for a while. This part of the kernel is called the scheduler. Other than that, the kernel gets a chance to execute every time a program makes a system call (such as a request to access some hardware, e.g. a disk drive).
P.S. The fact that the kernel can stop a running program, seize control of the CPU and schedule a different program is called preemptive multitasking.
UPDATE: About CPU modes, I assume you mean the x86-style rings? These are permission levels on the CPU for the currently executing code, allowing the CPU to decide whether the program that is currently running is "the kernel" and can do whatever it wants, or is a lower-permission-level program that cannot do certain things (such as force a context switch or fiddle with virtual memory).
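To see the "kernel runs on every system call" point in action, here is a small Linux program in C that enters the kernel explicitly via syscall(2); the write is carried out by kernel-mode code, which hands control back when it's done:

    #define _GNU_SOURCE         /* for the syscall() declaration on glibc */
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        const char msg[] = "hello from inside a system call\n";
        /* Trap into the kernel: SYS_write on fd 1 (stdout). */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }

The ordinary write() wrapper does exactly the same thing; spelling it as syscall() just makes the kernel entry explicit.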
There is no paradox:
The kernel is a "program" that runs on the machine it controls. It is loaded by the boot loader at the startup of the machine.
Its task is to provide services to applications and control applications.
To do so, it must control the machine that it is running on.
For details, read here: http://en.wikipedia.org/wiki/Operating_System

OS memory isolation

I am trying to write a very thin hypervisor that would have the following restrictions:
runs only one operating system at a time (i.e. no OS concurrency, no hardware sharing, no way to switch to another OS)
it should only be able to isolate some portions of RAM (doing some memory translation behind the OS's back: let's say I have 6GB of RAM; I want Linux / Windows not to use the first 100MB, to see just 5.9GB, and to use that without knowing what's behind it)
I searched the Internet, but found close to nothing on this specific matter, as I want to keep as little overhead as possible (the current hypervisor implementations don't fit my needs).
What you are looking for already exists, in hardware!
It's called an IOMMU[1]. Basically, like page tables, it adds a translation layer between the executed instructions and the actual physical hardware.
AMD calls it IOMMU[2]; Intel calls it VT-d (search for "intel vt-d").
[1] http://en.wikipedia.org/wiki/IOMMU
[2] http://developer.amd.com/documentation/articles/pages/892006101.aspx
Here are a few suggestions / hints, which are necessarily somewhat incomplete, as developing a from-scratch hypervisor is an involved task.
Make your hypervisor "multiboot-compliant" at first. This will enable it to reside as a typical entry in a bootloader configuration file, e.g., /boot/grub/menu.lst or /boot/grub/grub.cfg.
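For reference, "multiboot-compliant" boils down to embedding a small header that GRUB scans for in the first 8KB of the image. A sketch in C (the section name and its placement are up to your linker script):

    /* Minimal Multiboot 1 header. GRUB looks for the magic number in
       the first 8192 bytes of the image, 4-byte aligned. */
    #define MB_MAGIC 0x1BADB002u
    #define MB_FLAGS 0x0u               /* no special requests */

    struct multiboot_header {
        unsigned int magic;
        unsigned int flags;
        unsigned int checksum;          /* magic + flags + checksum == 0 */
    };

    __attribute__((section(".multiboot"), used))
    static const struct multiboot_header mb_header = {
        MB_MAGIC, MB_FLAGS, -(MB_MAGIC + MB_FLAGS)
    };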
You want to set aside your 100MB at the top of memory, e.g., from 5.9GB up to 6GB. Since you mentioned Windows, I'm assuming you're interested in the x86 architecture. The long history of x86 means that the first few megabytes are filled with all kinds of legacy device complexities; there is plenty of material on the web about the "hole" between 640K and 1MB. Older ISA devices (many of which still survive in modern systems in "Super I/O chips") are restricted to performing DMA to the first 16MB of physical memory. If you try to get between Windows or Linux and its relationship with these first few MB of RAM, you will have a lot more complexity to wrestle with. Save that for later, once you've got something that boots.
As physical addresses approach 4GB (2^32, hence the physical memory limit on a basic 32-bit architecture), things get complex again, as many devices are memory-mapped into this region. For example (referencing the other answer), the IOMMU that Intel provides with its VT-d technology tends to have its configuration registers mapped to physical addresses beginning with 0xfedNNNNN.
This is doubly true for a system with multiple processors. I would suggest you start on a uniprocessor system, disable the other processors from within the BIOS, or at least manually configure your guest OS not to enable the other processors (e.g., for Linux, include 'nosmp' on the kernel command line -- e.g., in your /boot/grub/menu.lst).
Next, learn about the "e820" map. Again there is plenty of material on the web, but perhaps the best place to start is to boot up a Linux system and look near the top of the output of 'dmesg'. This is how the BIOS communicates to the OS which portions of the physical memory space are "reserved" for devices or other platform-specific BIOS/firmware uses (e.g., to emulate a PS/2 keyboard on a system with only USB I/O ports).
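Each e820 entry, as the BIOS reports it (INT 15h, AX=E820h), is just a base address, a length and a type; in C it is commonly modeled like this:

    #include <stdint.h>

    /* One entry of the BIOS e820 memory map. */
    struct e820_entry {
        uint64_t base;    /* start physical address of the region */
        uint64_t length;  /* size of the region in bytes */
        uint32_t type;    /* 1 = usable RAM, 2 = reserved,
                             3 = ACPI reclaimable, 4 = ACPI NVS */
    } __attribute__((packed));

Marking your hypervisor's 100MB as type 2 (reserved) in the map handed to the guest is the clean way to hide it.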
One way for your hypervisor to "hide" its 100MB from the guest OS is to add an entry to the system's e820 map. A quick and dirty way to get things started is to use the Linux kernel command line option "mem=" or the Windows boot.ini / bcdedit flag "/maxmem".
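For the quick-and-dirty route, a hypothetical /boot/grub/menu.lst entry that keeps Linux out of the top 100MB of a 6GB machine (6144MB - 100MB = 6044MB; paths and names are illustrative):

    title  Linux (top 100MB reserved for the hypervisor)
    root   (hd0,0)
    kernel /boot/vmlinuz root=/dev/sda1 mem=6044M
    initrd /boot/initrd.img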
There are a lot more details and things you are likely to encounter (e.g., x86 processors begin in 16-bit mode when first powered-up), but if you do a little homework on the ones listed here, then hopefully you will be in a better position to ask follow-up questions.