List the four steps that are necessary to run a program on a completely dedicated machine—a computer that is running only that program

In my OS class, we use the textbook "Operating System Concepts" by Silberschatz.
I ran into this question and answer in a practice exercise and wanted a further explanation.
Q. List the four steps that are necessary to run a program on a completely dedicated machine—a computer that is running only that program.
A.
1. Reserve machine time
2. Manually load program into memory
3. Load starting address and begin execution
4. Monitor and control execution of program from console
Actually, I don't understand the first step, "Reserve machine time". Could you explain what each step means here?
Thank you in advance.

If the computer can run only a single program, but the computer is shared between multiple people, then you will have to agree on a time when you get to use the computer to run your program. This was common up through the 1960s. It is still common in some contexts, such as very expensive supercomputers. Time-sharing became popular during the 1970s, enabling multiple people to appear to share a computer at the same time, when in fact the computer quickly switched from one person's program to another's.

In my opinion, teaching about old batch systems in today's OS classes is not very helpful. You should use a text that is more relevant to contemporary OS design, such as the Minix book.
Apart from that, if you really want to learn about old systems, Wikipedia has a pretty good explanation:
Early computers were capable of running only one program at a time.
Each user had sole control of the machine for a scheduled period of
time. They would arrive at the computer with program and data, often
on punched paper cards and magnetic or paper tape, and would load
their program, run and debug it, and carry off their output when done.

Related

Bitdefender detects my C++ file as a virus

I am learning how to code in C++ and at the moment I am creating some basic programs that calculate something or generally do math-related things. I am using Code::Blocks for this, and every time I compile a harmless program, my antivirus, Bitdefender, detects it as a virus and immediately deletes it. I have tried whitelisting, but I make programs often, and having to whitelist every directory or program takes too much time. Can somebody explain to me why Bitdefender, which I bought and which usually works fine, is mistakenly detecting a harmless file as a virus? (The virus is described as
Gen:Variant.Ursu.'number'.)
The vast majority of users (of an anti-virus program) will never run a legitimate/safe program that the anti-virus hasn't seen before (less true for people on this site).
Much malware, on the other hand, is polymorphic, altering itself every time it is deployed.
Therefore a useful heuristic for an anti-virus is to block all executables the first time they are seen. Unfortunately, this hits software developers rather hard. Fortunately, this group is likely to be able to work out how to use exclusions to help themselves.

Is it possible to write a program that will set the computer on fire?

Let’s assume you have administrator access, and that this is a run-of-the-mill laptop or desktop. Is it possible to write a program that will result in a fire or something equally destructive?
EDIT:
To the ”how do you think bombs work” answer: valid answer, but I'm asking: if I have a pocket universe with just a laptop, is it possible to have a program that, when run, will set the computer on fire?
It isn't impossible, but with most off-the-shelf goods, it is unlikely you will find a deterministic way to do it. Groups like CSA, Underwriters, and ETL are pretty careful about what they give their stamp of approval to.
Depending on the last time you flew in the US, you may have heard various warnings that you are not to carry a certain brand of Samsung phone or Apple laptop on board; further, you are not allowed to store them in your luggage, and if you drop one between the seats, you are to notify the attendants.
These are all precautions because the FAA has determined that these devices pose a fire risk, presumably due to overheating. So, if you ran caffeinate (which prevents sleeping) along with a heavy workload, you could induce temperatures high enough to cause ignition.
But, heavy on the could. There are a lot of defenses built into the batteries themselves to prevent this; then there are system management components in the computer to prevent this; then there are monitoring components on the CPU to prevent this. So whatever you do has to line up some failure mode of all of these systems simultaneously.
Not impossible, but maybe not far from it.

Operating System Overhead [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am working on a time-consuming computation algorithm and want to run it as fast as possible.
How much does the presence of an operating system (Windows or Linux) underneath the running algorithm slow down the process?
Is there any example of an "OS" specifically implemented to run a predefined program?
First of all, I'd like to mention that I am also working on a very similar time-consuming computation algorithm! So much in common here, or maybe it's just a coincidence...
Now, let's proceed to the answer section:
The presence of your process (the running algorithm) in the OS is affected by daemons and other user programs waiting in the ready queue, depending on the scheduling algorithm applied by your OS. Generally, daemons are always running, and some system processes preempt other, lower-priority processes (maybe like yours, if your process has lower priority; generally system processes and daemons preempt all other processes). The very presence of the OS (Windows or Linux) - I am considering only the kernel here - doesn't slow things down, as the kernel is the manager of the OS and of all processes and tasks. But daemons and system processes are heavy, and they do affect your program significantly. I also wish we could just disable all the daemons, but they exist for the efficient working of the OS (mouse control, power efficiency, etc.).
Just as an example, on Linux and Unix-based systems, the top command provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system.
So, if you run top on a Linux system, you'll get a listing of the heavy processes that are intensely consuming CPU and memory. There, apart from your own memory-hungry process, you'll find several daemons like powerd and moused, and other system processes like Xorg and kdeinit4, which do affect user processes.
But one thing is clear: no single daemon or system process will generally occupy more memory than your intense computation process; the ratio may instead be more like one-eighth or one-fourth.
UPDATE BASED ON COMMENTS:
If you're specifically looking for the process to run on the native hardware, without OS facilitation, you have two choices.
Either develop the code in machine language, assembly, or another low-level language, which will run your process directly on the hardware without an OS to manage memory sections and without other system processes and daemons.
The second option is to develop or utilise a very minimal OS comprising only those pieces which are required for your algorithmic program/process. Such a minimal OS won't be a complete OS: no daemons, and far fewer system calls than in major OSes like Windows, Linux, or Unix.
Nazar554 provided a useful link in the comment section. I'll just quote him:
if you really want to remove any possible overhead you can try:
BareMetal OS
In your case, it seems you prefer the first option over the other. But you can achieve your task either way!
LATEST EDIT:
Just some feedback from my side, as I couldn't get more detail from you: it would be better to ask the same question on Operating Systems Beta, as there are several experts there answering all queries regarding OS development and functionality. There you'll receive a stronger, more thorough response on every tiny detail relevant to your topic that I might have missed.
Best wishes from my side...
The main idea in giving the processor to a task is the same among all major operating systems. I've provided a diagram demonstrating it. First let me describe the diagram; then I'll answer your question.
Diagram Description
When an operating system wants to execute several tasks simultaneously, it cannot give the processor to all of them at once, because the processor can process only one task at a time. So the OS shares the processor among all tasks, time slot by time slot. In other words, each task is allowed to use the processor only in its own time slot, and it must give the processor back to the OS once its time slot finishes.
Operating systems use a dispatcher component to select a pending task and hand the processor to it. What differs among operating systems is how the dispatcher works. What does a typical dispatcher do? In simple words:
Pick the next pending task from the queues, based on a scheduling algorithm
Context switch
Decide where the task removed from the processor should go
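The time-slot idea above can be sketched as a toy round-robin dispatcher. This is a simulation only (a real dispatcher context-switches actual CPU state); the task names and durations are made up for illustration:

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate a dispatcher handing out fixed time slots.

    tasks: dict mapping task name -> remaining time units.
    Returns the order in which tasks finish.
    """
    queue = deque(tasks.items())        # ready queue: (name, remaining)
    finished = []
    while queue:
        name, remaining = queue.popleft()    # pick the next pending task
        remaining -= quantum                 # "run" it for one time slot
        if remaining > 0:
            queue.append((name, remaining))  # preempted: back to the queue
        else:
            finished.append(name)            # task completed
    return finished

print(round_robin({"A": 3, "B": 1, "C": 2}, quantum=1))  # → ['B', 'C', 'A']
```

The short tasks B and C finish before the long task A, even though A was first in the queue: that is exactly the fairness the time-slot scheme buys.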
Answer to your question
How much does the presence of an operating system (Windows or Linux) underneath the running algorithm slow down the process?
It depends on:
The dispatcher algorithm (i.e. which OS you use)
The current load on the system (i.e. how many applications and daemons are running now)
What priority your task has (i.e. real-time priority, UI priority, regular priority, low, ...)
How much I/O your task is going to do (because I/O-requesting tasks are usually scheduled in a separate queue)
Excuse my English issues; English isn't my native language.
Hope it helps you
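The priority factor in the list above is visible from user space on Unix-like systems: a process can lower its own priority so the dispatcher favours other work. A minimal sketch using Python's os.nice (Unix only; the increment of 5 is an arbitrary choice for illustration):

```python
import os

# os.nice(increment) adds to the process's niceness and returns the
# new value; higher niceness means lower scheduling priority (Unix only).
before = os.nice(0)   # an increment of 0 just reports the current niceness
after = os.nice(5)    # ask the scheduler to deprioritise this process
print(before, after)
```

A long-running computation that calls os.nice early on will yield more readily to interactive processes, which is often the polite choice on a shared machine.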
Try booting in single-user mode.
From debian-administration.org and debianadmin.com:
Run Level 1 is known as 'single user' mode. A more apt description would be 'rescue', or 'trouble-shooting' mode. In run level 1, no daemons (services) are started. Hopefully single user mode will allow you to fix whatever made the transition to rescue mode necessary.
I guess "no daemons" is not entirely true, with wiki.debian.org claiming:
For example, a daemon can be configured to run only when the computer is in single-user mode (runlevel 1) or, more commonly, when in multi-user mode (runlevels 2-5).
But I suppose single-user mode will surely kill most of your daemons.
It's a bit of a hack, but it may just do the job for you.

Perl scripts, to use forks or threads?

I am writing a couple of scripts that go and collect data from a number of servers. The number will grow, and I'm trying to future-proof my scripts, but I'm a little stuck.
To start off with, I have a script that looks up an IP in a MySQL database, then connects to each server, grabs some information, and puts it into the database again.
What I have been thinking is that there is a limited amount of time to do this, and if I have 100 servers it will take a fair bit of time to go out to each server, get the information, and then push it to a DB. So I have thought about using either forks or threads in Perl.
Which would be the preferred option in my situation? And has anyone got any examples?
Thanks!
Edit: OK, so a bit more information is needed: I'm running on Linux, and what I thought was that I could have the master script collect the DB information, then send off each sub-process/task to connect, gather information, and push the information back to the DB.
Which is best depends a lot on your needs; but for what it's worth here's my experience:
Last time I used perl's threads, I found it was actually slower and more problematic for me than forking, because:
Threads copied all data anyway, as a thread would, but did it all upfront
Threads didn't always clean up complex resources on exit; causing a slow memory leak that wasn't acceptable in what was intended to be a server
Several modules didn't handle threads cleanly, including the database module I was using which got seriously confused.
One trap to watch for is the "forks" library, which emulates "threads" but uses real forking. The problem I faced here was many of the behaviours it emulated were exactly what I was trying to get away from. I ended up using a classic old-school "fork" and using sockets to communicate where needed.
Issues with forks (the library, not the fork command):
Still confused the database system
Shared variables still very limited
Overrode the 'fork' command, resulting in unexpected behaviour elsewhere in the software
Forking is more "resource safe" (think database modules and so on) than threading, so you might want to end up on that road.
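The fork-plus-pipes approach this answer ends up recommending can be sketched in Python (the idea carries over directly to Perl's fork and pipe). The server names and the collect routine are stand-ins for the real "connect and gather information" work:

```python
import os

def collect(server):
    # Stand-in for "connect to the server and gather information".
    return f"data-from-{server}"

def fork_workers(servers):
    """Classic fork-per-task pattern: each child does its work and
    writes the result back to the parent over a pipe."""
    pipes = []
    for server in servers:
        r, w = os.pipe()
        pid = os.fork()
        if pid == 0:                          # child process
            os.close(r)
            os.write(w, collect(server).encode())
            os.close(w)
            os._exit(0)                       # child exits without cleanup
        os.close(w)                           # parent keeps only the read end
        pipes.append((pid, r))
    results = []
    for pid, r in pipes:
        results.append(os.read(r, 4096).decode())
        os.close(r)
        os.waitpid(pid, 0)                    # reap the child
    return results

print(fork_workers(["db1", "db2", "db3"]))
```

Because each child is a real process, a misbehaving database driver in one worker cannot corrupt the others, which is the "resource safe" property the answer describes.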
Depending on your platform of choice, on the other hand, you might want to avoid fork()-ing in Perl. Quote from perlfork(1):
Perl provides a fork() keyword that corresponds to the Unix system call of the same name. On most Unix-like platforms where the fork() system call is available, Perl's fork() simply calls it.
On some platforms such as Windows where the fork() system call is not available, Perl can be built to emulate fork() at the interpreter level. While the emulation is designed to be as compatible as possible with the real fork() at the level of the Perl program, there are certain important differences that stem from the fact that all the pseudo child "processes" created this way live in the same real process as far as the operating system is concerned.

How are Operating Systems "Made"?

Creating an OS seems like a massive project. How would anyone even get started?
For example, when I pop Ubuntu into my drive, how can my computer just run it?
(This, I guess, is what I'd really like to know.)
Or, looking at it from another angle, what is the least amount of bytes that could be on a disk and still be "run" as an OS?
(I'm sorry if this is vague. I just have no idea about this subject, so I can't be very specific. I pretend to know a fair amount about how computers work, but I'm utterly clueless about this subject.)
Well, the answer lives in books: Modern Operating Systems by Andrew S. Tanenbaum is a very good one.
The simplest yet complete operating system kernel, suitable for learning or just curiosity, is Minix.
Here you can browse the source code.
Operating systems are a huge topic. The best thing I can recommend, if you want to go really in depth into how operating systems are designed and constructed, is a good book:
Operating System Concepts
If you are truly curious I would direct you to Linux from Scratch as a good place to learn the complete ins and outs of an operating system and how all the pieces fit together. If that is more information than you are looking for then this Wikipedia article on operating systems might be a good place to start.
A PC knows to look at a specific sector of the disk for the startup instructions. These instructions then tell the processor that on given processor interrupts, specific code is to be called. For example: on a periodic tick, call the scheduler code; when something arrives from a device, call the device-driver code.
Now how does an OS set everything up with the system? Well, hardware has APIs too. They are written with the systems programmer in mind.
I've seen a lot of bare-bones OSes, and this is really the absolute core. There are many embedded home-grown OSes that do just this and nothing else.
Additional features, such as requiring applications to ask the operating system for memory, or requiring special privileges for certain actions, or even processes and threads themselves are really optional though implemented on most PC architectures.
The operating system is, simply, what empowers your software to manage the hardware. Clearly some OSes are more sophisticated than others.
At its very core, a computer starts executing at a fixed address, meaning that when the computer starts up, it sets the program counter to a pre-defined address, and just starts executing machine code.
In most computers, this "bootstrapping" process immediately initializes known peripherals (like, say, a disk drive). Once initialized, the bootstrap process will use some predefined sequence to leverage those peripherals. Using the disk drive again, the process might read code from the first sector of the hard drive, place it in a known space within RAM, and then jump to that address.
These predefined sequences (the start of the CPU, the loading of the disk) allow programmers to start adding more and more code to the early parts of the CPU startup, which over time can, eventually, start up very sophisticated programs.
In the modern world, with sophisticated peripherals, advanced CPU architectures, and vast, vast resources (GBs of RAM, TBs of disk, and very fast CPUs), the operating system can support quite powerful abstractions for the developer (multiple processes, virtual memory, loadable drivers, etc.).
But for a simple system, with constrained resources, you don't really need a whole lot for an "OS".
As a simple example, many small controller computers have very small "OS"es, and some may simply be considered a "monitor", offering little more than easy access to a serial port (or a terminal, or LCD display). Certainly, there's not a lot of needs for a large OS in these conditions.
But also consider something like a classic Forth system. Here, you have a system with an "OS", that gives you disk I/O, console I/O, memory management, plus the actual programming language as well as an assembler, and this fits in less than 8K of memory on an 8-Bit machine.
Or the old days of CP/M with its BIOS and BDOS.
CP/M is a good example of where a simple OS works well as an abstraction layer, allowing portable programs to run on a vast array of hardware; even then, the system took less than 8K of RAM to start up and run.
A far cry from the MBs of memory used by modern OSes. But, to be fair, we HAVE MBs of memory, and our lives are MUCH MUCH simpler (mostly), and far more functional, because of it.
Writing an OS is fun because it's interesting to make the HARDWARE print "Hello World" shoving data 1 byte at a time out some obscure I/O port, or stuffing it in to some magic memory address.
Get a x86 emulator and party down getting a boot sector to say your name. It's a giggly treat.
Basically... your computer can just run the disk because:
The BIOS includes that disk device in the boot order.
At boot, the BIOS scans all bootable devices in order, like the floppy drive, the harddrive, and the CD ROM. Each device accesses its media and checks a hard-coded location (typically a sector, on a disk or cd device) for a fingerprint that identifies the media, and lists the location to jump to on the disk (or media) where instructions start. The BIOS tells the device to move its head (or whatever) to the specified location on the media, and read a big chunk of instructions. The BIOS hands those instructions off to the CPU.
The CPU executes these instructions. In your case, these instructions are going to start up the Ubuntu OS. They could just as well be instructions to halt, or to add 10+20, etc.
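The "fingerprint" the BIOS checks for is concrete on classic PCs: a 512-byte boot sector is treated as bootable when its last two bytes are 0x55 0xAA. A small sketch of that check (the sector contents here are fabricated for illustration):

```python
BOOT_SIGNATURE = b"\x55\xaa"   # last two bytes of a bootable MBR sector

def is_bootable(sector: bytes) -> bool:
    # BIOS-style check: a 512-byte sector is bootable
    # when it ends with 0x55 0xAA.
    return len(sector) == 512 and sector[510:512] == BOOT_SIGNATURE

blank = bytes(512)                      # all zeros: nothing to boot
boot = bytes(510) + BOOT_SIGNATURE      # empty code, but correctly signed
print(is_bootable(blank), is_bootable(boot))  # → False True
```

The real BIOS then jumps to the first byte of that sector and simply starts executing whatever machine code it finds there, which is how Ubuntu's boot loader gets control.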
Typically, an OS will start off by taking a large chunk of memory (again, directly from the CPU, since library calls like 'GlobalAlloc' etc. aren't available; they're provided by the yet-to-be-loaded OS) and start creating structures for the OS itself.
An OS provides a bunch of 'features' for applications: memory management, file system, input/output, task scheduling, networking, graphics management, access to printers, and so on. That's what it's doing before you 'get control' : creating/starting all the services so later applications can run together, not stomp on each other's memory, and have a nice API to the OS provided services.
Each 'feature' provided by the OS is a large topic. An OS provides them all so applications just have to worry about calling the right OS library, and the OS manages situations like two programs trying to print at the same time.
For instance, without the OS, each application would have to deal with a situation where another program is trying to print, and 'do something' like print anyway, cancel the other job, etc. Instead, only the OS has to deal with it; applications just tell the OS 'print this stuff', and the OS ensures one app prints while all the other apps wait until the first one finishes or the user cancels it.
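The printing example can be sketched with a lock standing in for the OS print spooler: each "application" just submits its job, and the lock guarantees jobs never interleave. The app names and page counts are made up for illustration:

```python
import threading

printer_lock = threading.Lock()   # the OS-style gatekeeper for one printer
output = []                       # what actually reaches the "printer"

def print_job(app, pages):
    # Each application just says "print this"; the lock ensures whole
    # jobs are serialized, mimicking the OS print spooler.
    with printer_lock:
        for page in range(pages):
            output.append(f"{app} page {page}")

threads = [threading.Thread(target=print_job, args=(app, 2))
           for app in ("editor", "browser")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(output)
```

Whichever job wins the lock prints both its pages before the other starts; without the lock, pages from the two apps could interleave arbitrarily.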
The least amount of bytes to be an OS doesn't really make sense, as an "OS" could imply many, or very few, features. If all you wanted was to execute a program from a CD, that would be very very few bytes. However, that's not an OS. An OS's job is to provide services (I've been calling them features) to allow lots of other programs to run, and to manage access to those services for the programs. That's hard, and the more shared resources you add (networks, and wifi, and CD burners, and joysticks, and iSight video, and dual monitors, etc, etc) the harder it gets.
http://en.wikipedia.org/wiki/Linux_startup_process you are probably looking for this.
http://en.wikipedia.org/wiki/Windows_NT_startup_process or this.
One of the most recent operating system projects I've seen that has serious backing is an MS Research project called Singularity, which is written entirely in C#/.NET from scratch.
To get an idea of how much work it takes: there are 2 core devs, but they have up to a dozen interns at any given time, and it still took them two years before they could even get the OS to a point where it would boot up and display BMP images (that's how they used to do their presentations). It took much more work before they got to a point where there was a command line (about 4 years).
Basically, there are many arguments about what an OS actually is. If you got everyone to agree on what an OS specifically is (is it just the kernel? everything that runs in kernel mode? is the shell part of the OS? is X part of the OS? is the web browser part of the OS?), your question would be answered. Otherwise, there's no specific answer to your question.
Oh, this is a fun one. I've done the whole thing at one point or another, and been there through a large part of the evolution.
In general, you start writing a new OS by starting small. The simplest thing is a bootstrap loader: a small chunk of code that pulls a chunk of code in and runs it. Once upon a time, with the Nova or PDP computers, you could enter the bootstrap loader through the front panel, entering the instructions hex number by hex number. The boot loader then read some medium into memory and set the program counter to the start address of that code.
That chunk of code usually loads something else, but it doesn't have to: you can write a program that's meant to run on the bare metal. That sort of program does something useful on its own.
A real operating system is bigger and has more pieces. You need to load programs, put them in memory, and run them; you need to provide code to run the I/O devices; and as it gets bigger, you need to manage memory.
If you want to really learn how it works, find Doug Comer's Xinu books and Andy Tanenbaum's newest operating system book on Minix.
You might want to get the book The Design and Implementation of the FreeBSD Operating System for a very detailed answer. You can get it from Amazon, or from the page for the book on FreeBSD.org's site.
Try How Computers Boot Up, The Kernel Boot Process and other related articles from the same blog for a short overview of what a computer does when it boots.
What a computer does when it starts is heavily dependent (maybe obviously?) on the CPU design and other "low-level stuff"; therefore it's kind of difficult to anticipate what your computer does when it boots.
I can't believe this hasn't been mentioned... but a classic book for an overview of operating system design is Operating Systems: Design and Implementation, written by Andrew S. Tanenbaum, the creator of MINIX. A lot of the examples in the book are geared directly towards MINIX as well.
If you would like to learn a bit more, OS Dev is a great place to start. Especially the wiki. This site is full of information as well as developers who write personal operating systems for a small project/hobby. It's a great learning resource too, as there are many people in the same boat as you on OSDev who want to learn what goes into an OS. You might end up trying it yourself eventually, too!
The operating system (OS) is the layer of software that controls the hardware. The simpler the hardware, the simpler the OS, and vice versa ;-)
In the early days of microcomputers, you could fit the OS into a 16K ROM and hard-wire the motherboard to start executing machine-code instructions at the start of the ROM address space. This 'bootstrap' process would then load the code for the drivers for the other devices like the keyboard, monitor, floppy drive, etc., and within a few seconds your machine would be booted and ready for use.
Nowadays... same principle, but a lot more and more complex hardware ;-)
Well, you have something linking the startup of the chip to a "BIOS", then to an OS; that is usually a very complicated task done by many layers of code.
If you REALLY want to know more about this, I would recommend reading a book about microcontrollers, especially one where you create a small OS in C for an 8051 or the like, or learning some x86 assembly and creating a very small "bootloader OS".
You might want to check out this question.
An OS is a program, just like any other application you write. The main purpose of this program is to let you run other programs. Modern OSes take advantage of modern hardware to ensure that programs do not clash with one another.
If you are interested in writing your own OS, check out my own question here:
How to get started in operating system development
You ask how few bytes you could put on disk and still have it run as an OS? The answer depends on what you expect of your OS, but the smallest useful OS that I know of fits in 1.7 megabytes. It is Tom's Root Boot disk, and it is a very nice, if small, OS with "rescue" applications that fits on one floppy disk. Back in the days when every machine had a floppy drive and not every machine had a CD-ROM drive, I used it frequently.
My take on it is that it is like your own life. At first, you know very little - just enough to get along. This is similar to what the BIOS provides - it knows enough to look for a disk drive and read information off of it. Then you learn a little more when you go to elementary school. This is like the boot sector being read into memory and given control. Then you go to high school, which is like the OS kernel loading. Then you go to college (drivers and other applications). Of course, this is the point at which you are liable to CRASH. HE HE.
Bottom line is that layers of more and more capability are slowly loaded on. There's nothing magic about an OS.
Reading through here will give you an idea of what it took to create Linux
https://netfiles.uiuc.edu/rhasan/linux/
Another really small operating system that fits on one disk is QNX (when I last looked at it, a long time ago, the whole OS, with a GUI, web browser, disk access, and a built-in web server, fit on one floppy disk).
I haven't heard too much about it since then, but it is a real time OS so it is designed to be very fast.
Actually, some people attend a 4-year college to get a rough idea of this...
At its core, an OS is extremely simple. Here's the beginner's guide to WHAT successful OSes are made to do:
1. Manage the CPU using a scheduler, which decides which process (a program's running instance) to schedule.
2. Manage memory, deciding which processes use it for storing instructions (code) and data (variables).
3. Manage I/O interfaces such as disk drives, alarms, keyboard, and mouse.
Now, the above 3 requirements give rise to the need for processes to communicate (and not fight!), to interact with the outside world, and for the OS to help applications do what they want to do.
To dig deeper into HOW it does that, read the Dinosaur book :)
So, you can make an OS as small as you want, as long as you manage to handle all the hardware resources.
When you boot up, the BIOS tells the CPU to start reading the bootloader (which loads the first function of the OS, residing at a fixed address in memory, something like the main() of a small C program). Then this creates functions and processes and threads and starts the big bang!
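Item 2 above, managing memory, can be sketched as a toy first-fit allocator: the OS keeps a list of free holes and hands out the first one that is big enough. The addresses and sizes here are arbitrary illustration values:

```python
def first_fit(free_list, size):
    """Toy first-fit allocator over a list of (start, length) holes.

    Returns (start, new_free_list) on success,
    or (None, free_list) when no hole is big enough.
    """
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            # Carve the request off the front of this hole;
            # keep any remainder as a smaller hole.
            remainder = []
            if length > size:
                remainder = [(start + size, length - size)]
            new_list = free_list[:i] + remainder + free_list[i + 1:]
            return start, new_list
    return None, free_list

holes = [(0, 4), (10, 8)]          # two free holes: 4 units and 8 units
addr, holes = first_fit(holes, 6)  # request 6 units
print(addr, holes)                 # → 10 [(0, 4), (16, 2)]
```

Real allocators also merge adjacent holes on free and worry about fragmentation, but this is the essence of the bookkeeping the OS does on every allocation.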
First, read, read, and read about what an OS is; then about the uses, types, nature, objectives, and needs of the different OSes.
Some links follow; a newbie will enjoy these:
Modern OS - gives an idea of OSes in general.
Start of OS - gives the basics of what it really takes to MAKE an OS, how one can make it, and how one can modify present open-source OS code oneself.
Wiki OS - gives an idea of the different OSes used in different fields and their uses (objectives/features of an OS).
Let's see in general what an OS contains (not a sophisticated Linux or Windows):
An OS needs a CPU, and to dump code onto it you need a bootloader.
An OS must have objectives to fulfill, and those objectives must be defined in a wrapper, which is called the kernel.
Inside you could have scheduling, timers, and ISRs (depending on the objective and the OS you need to make).
OS development is complicated. There are websites like OSDev or lowlevel.eu (German) dedicated to the topic. There are also some books, which others have mentioned already.
I can't help but also reference the "Write your own operating system" video series on youtube, as I'm the one who made it :-)
See https://www.youtube.com/playlist?list=PLHh55M_Kq4OApWScZyPl5HhgsTJS9MZ6M