uC/OS-III User's Manual says:
The design process of a real-time application involves splitting the work into tasks (also called threads), with each task responsible for a portion of the job.
From this quote, we can infer that an application consists of tasks (threads).
Also, in Processes and Threads from Microsoft:
An application consists of one or more processes
Why the different definitions?
Is this because uC/OS-III is for embedded environments and Microsoft is for the PC environment?
In a PC environment, a process is basically the same thing as a program. A process has an address space - a chunk of virtual memory that can only be accessed by that process. It consists of one or several threads, executing in the same address space, sharing the same memory. Different threads can run on different CPU cores, executing simultaneously.
On embedded RTOS systems, we don't really have all the dead weight of a hosted-system process. Traditionally, an RTOS therefore speaks of tasks, which are essentially the same thing as threads. Except most microcontrollers are still single-core, so multi-tasking is simulated through task switches, with everything running on one core. Older PCs worked in the same manner.
Traditional microcontrollers don't have virtual memory, but address physical memory directly. Therefore anything running on the microcontroller can access anything, by default.
Nowadays, upper-end embedded systems and hosted systems are smeared together, as are the concepts. High-end microcontrollers have memory management units (MMUs) capable of setting up virtual address spaces. PC programmers trickle down into embedded systems and start looking for threads. And so on. The various concepts are blurring.
One (of several) dictionary definitions of "application" is:
a program or piece of software designed to fulfil a particular purpose
In that sense both the Microsoft and uC/OS definitions are valid; it is simply that the structure and execution environment of an application differ between the two environments. What they describe is what an application is composed of in the context of the specific platforms and execution environments.
I would suggest that "application" has no particular technical meaning; it is simply "the purpose to which a system or software is put" - it is just English, not a specific technical concept.
The boundary of an "application" is context dependent, and a desktop software application is a very different context than an embedded microcontroller application. Equally, you could draw your application boundary to encompass entire systems comprising many computers or processors running a variety of software and other equipment.
It means whatever the writer/speaker intends and can normally be inferred by the context. Don't waste your time looking for the one true definition or be confused by different usage.
Related
We had a question in the exam about whether a desktop computer is a multiprocessor or not. We are now discussing whether the Big O PC from Origin uses a single microprocessor or more than one.
This question is really too broad for SO (for future reference), but I'll provide some insight that will hopefully improve your understanding. Your question is really broad in general, and since the term microprocessor is a bit general and not all the technical information about all the parts of modern PCs is publicly available, it's hard to give an exact number, mostly because a lot of the components and subsystems of a modern desktop PC will have some kind of processor. Generally these are microcontrollers, but they are still processors running some firmware/software to do whatever functionality is required by that subsystem.
Certainly, none of the modern PCs (like the one you mention, assuming it's this one: https://www.originpc.com/gaming/desktops/big-o/) would be considered single-processor systems. Everything from desktops to laptops to smartphones these days has at least 2-4 physical processors (i.e., cores) as part of the application SoC; these all have multiple main cores. So when you read that this system has an Intel i7-9700K, that "processor" is really made up of 8 of the same x86-64 processors all in one. It's these cores that run all your applications and operating systems, but there are many little processors running their own code to do various other functions. For example, on Intel CPUs, there is a small processor that starts up when the computer first powers on and enables various management and security features (https://en.wikipedia.org/wiki/Intel_Management_Engine). Likewise, there are processors in many of the subsystems: the audio subsystem has a small microcontroller/DSP for controlling low-level audio features, and the graphics system can have tens or more small processors depending on how you count the cores in the integrated GPU. And all of these are contained inside the package of the i7; there are even more on the motherboard and in external components. Depending on what you count, there can be hundreds of small processors in a modern computer system.
In the past, the main processor was a single-core unit that really only had one microprocessor in it; the terms "processor" and "CPU" have kind of held over, so you might say a desktop has an Intel i7 for a processor despite the fact that the chip itself contains many main processors/cores and numerous subprocessors/microcontrollers. So while you might say that particular desktop has a single "processor", there are systems that can take more than one Intel/AMD SoC; these are usually high-end workstations or servers, also called multisocketed. Note the difference between multiple processors and multiple sockets on the motherboard.
So, to directly answer your question: it depends on what is meant by processor. If the question is whether you can fit multiple i7s (i.e., whether the system is multisocketed), then no. If the question is whether a modern PC has multiple processors in terms of CPU cores, then yes. If the question is to count all the processing units in the system, including all the little microprocessors doing their particular jobs, then it's really hard to say, but there are a lot of them.
I think it is a bit of a vague question, but I was trying to get a clear understanding of how a hypervisor interacts with operating systems under the hood, and what makes the two so different. Let me walk you through my thought process.
Why do we need a virtualization manager a.k.a. a hypervisor, if we already have an operating system to manage resources which are shared?
One answer that I got was: suppose the system crashes; if we have no virtualization manager, then it's a total loss. So virtualization keeps other systems unaffected by providing isolation.
Okay, then why do we need an operating system? Well, operating systems and hypervisors have different tasks to handle: a hypervisor handles how to allocate resources (compute, networking, etc.), while an OS handles process management, the file system, and memory (hmm... we also have virtual memory, right?).
Perhaps I haven't asked the question very clearly, but I am confused, so maybe I could get a little help to clear things up.
"Virtual" roughly means "something that is not what it seems". It is a common task in computing to substitute one thing with another.
A "virtual resource" is a common approach for that. It also means that there is an entity in a system that transparently substitutes one portion of resource with another. Memory is one of the most important resources in computing systems, therefore "Virtual Memory" is one of the first terms that historically was introduced.
However, there are other resources that are worth virtualizing. One can virtualize registers, or, more specifically, their values. Input/output devices, time, number of processors, network connections — all these resources can be and are virtualized these days (see: Intel VT-d, Virtual Time papers, multicore simulators, and virtual switches and network adapters as respective examples). A combination of such things is what roughly constitutes a "virtualization technology". It is not a well-defined term, unless you talk about Intel® Virtualization Technology, which is one vendor's trade name.
In this sense, a hypervisor is such an entity that substitutes/manages chosen resources transparently to other controlled entities, which are then said to reside inside "containers", "jails", "virtual machines" — different names exist.
Both operating systems and hypervisors have different tasks to handle
In fact, they don't.
An operating system is just a hypervisor for regular user applications, as it manages resources behind their backs and transparently to them. The resources are: virtual memory, because an OS makes it seem that every application has a huge flat memory space for its own needs; virtual time, because applications do not manage context-switching points; virtual I/O, because applications use system calls to access devices instead of writing directly into their registers.
A hypervisor is a fancy way to say a "second-level operating system", as it virtualizes resources visible to operating systems. The resources are essentially the same: memory, time, I/O; a new addition is system registers.
It can go on and on, i.e., you can have hypervisors of higher levels that virtualize certain resources for entities of lower levels. For Intel systems, it roughly corresponds to the stack SMM -> VMM -> OS -> user application, where SMM (System Management Mode) is the outermost hypervisor and the user application is the innermost entity (the one that actually does the useful job of running the web browser and web server you are using right now).
Why do we need a virtualization manager aka hypervisor, if we already have an operating system to manage how the resources are shared?
We don't need it if the chosen computer architecture supports more than one level of indirection for resource management (e.g., nested virtualization). Thus, it depends on the chosen architecture. On certain IBM systems (System/360, 1960s-1970s), hypervisors were invented and used well before operating systems in the modern sense had been introduced. The more common IBM Personal Computer architecture based on Intel x86 CPUs (around 1975) had deficiencies that did not allow the required level of isolation between multiple OSes to be achieved without introducing a second layer of abstraction (hypervisors) into the architecture (which happened around 2005).
For example, I heard in class that global variables are just put in a specific location in memory. What is to prevent two programs from accidentally using the same memory location for different variables?
Also, do both programs use the same stack for their arguments and local variables? If so, what's to prevent the variables from interleaving with each other and messing up the indexing?
Just curious.
Most modern processors have a memory management unit (MMU) that provides the OS the ability to create protected, separate memory sections for each process, including a separate stack for each process. With the help of the MMU, the processor can restrict each process to modifying/accessing only memory that has been allocated to it. This prevents one process from writing into another process's memory space.
Most modern operating systems will use the features of the MMU to provide protection for each process.
Here are some useful links:
Memory Management Unit
Virtual Memory
This is something that modern operating systems do by loading each process in a separate virtual address space. Multiple processes may reference the same virtual address, but the operating system, helped by modern hardware, will map each one to a separate physical address, and make sure that one process cannot access physical memory allocated to another process.[1]
[1] Debuggers are a notable exception: operating systems often provide special mechanisms for debuggers to attach to other processes and examine their memory space.
The short answer to your question is that the operating system deals with these issues. They are very serious issues, and a significant part of an operating system's job is keeping everything in a separate space. The operating system tracks all the running programs and makes sure each is using its own space. This keeps the stacks separate too: each program runs on its own stack, assigned by the OS. How the OS does this assignment is actually a complex task.
There are lots of questions on SO asking about the pros and cons of virtualization for both development and testing.
My question is subtly different - in a world in which virtualization is commonplace, what are the things a programmer should consider when it comes to writing software that may be deployed into a virtualized environment? Some of my initial thoughts are:
Detecting if another instance of your application is running
Communicating with hardware (physical/virtual)
Resource throttling (app written for multi-core CPU running on single-CPU VM)
Anything else?
You have most of the basics covered with the three broad points. Watch out for:
Hardware communication related issues. Disk access speeds are vastly different (and may have unusually high extremes - imagine a VM that is shut down for 3 days in the middle of a disk write...). Network access may be interrupted or return unusual responses
Fancy pointer arithmetic. Try to avoid it
Heavy reliance on uncommon low-level/assembly instructions
Reliance on machine clocks. Remember that any calls you make to the clock, and any time intervals, may regularly return unusual values when running on a VM
Single CPU apps may find themselves running on multiple CPU machines, that do funky things like Work Stealing
Corner cases and unusual failure modes are much more common. You might not have to worry as much that the network card will disappear in the middle of your communication on a real machine, as you would on a virtual one
Manual management of resources (memory, disk, etc...). The more automated the work, the better the virtual environment is likely to be at handling it. For example, you might be better off using a memory-managed type of language/environment, instead of writing an application in C.
In my experience there are really only a couple of things you have to care about:
Your application should not fail because of a CPU time shortage (i.e., timeouts that are too tight)
Don't use low-priority, always-running processes to perform tasks in the background
The clock may run unevenly
Don't trust what the OS says about system load
Almost any other issue should not be handled by the application but by the virtualizer, the host OS or your preferred sys-admin :-)
I work for a small company with a .NET product that was acquired by a medium sized company with "big iron" products. Recently, the medium-sized part of the company acquired another small company with a similar .NET product and management went to have a look at their technology. They make heavy use of virtualization in their production environment and it's been decided that we will too.
Our product was not designed to be run in a virtual environment, but some accommodations can be made. For instance; there are times when we're resource bound due to customer initiated processes. This initiation is "bursty" by nature, but the processing can be made asynchronous and throttled. This is something that would need to be done for scalability anyway.
But there is other processing that we do that isn't so easily modified because we're resource bound for extended periods of time.
How do I convince management that heavy use of virtualization is probably not appropriate for us?
If I were your manager, and heard your argument (above), I'd assume that you're just resistant to change. I'd challenge you to show me the data. You haven't really made a case against virtualization. You say that your product "was not designed to be run in a virtual environment". You're in good company; very few apps ARE designed that way. It usually "just works". And if it's too slow, they just throw more resources at it. If they need to move it, make it fault tolerant, or expand or contract it, it's all transparent. Poorly behaved apps can be firewalled from other environments without having to have dedicated hardware. And so on. What's not to like about that?
You should prepare a better argument, backed up with data from testing. Or you should prepare to be steamrolled by an organization with a lot of time, $$$, and momentum invested in (insert favorite technology here).
It sounds like you're confused about how virtualization works.
You still need to provide enough resources for your virtual machines; the real benefit of virtualization is consolidating 5 machines that only run at 10-15% CPU onto a single machine that will run at 50-75% CPU and still leaves you 25-50% overhead for those "bursty" times.
If your "bursty" application is slowing down other VM's, then you need to put resource limits in place (e.g. VM#1 can't use more than 3Ghz CPU) and ensure that there are enough resources.
I've seen this in a production environment, where 20 machines were virtualized but each was using as much CPU as it could. This caused problems, as a machine would try to use more GHz than a single core could provide while the VM showed it only a single core. Once we throttled the CPU usage of each VM to the maximum available from any single core, performance skyrocketed. I've seen the same with overallocation of RAM, where the hypervisor keeps paging to disk and killing performance.
Virtualization works, given sufficient resources.
Don't fight the methods, specify requirements.
Do some benchmarks on different-sized platforms and establish a rough requirement guideline. If possible, don't say "this is the minimum needed"; it's better to say "with X resources, we do Y work units per hour; with X', we do Y'. A host that costs $Z can hold W virtual machines of X' resources" - then the bean counters will have beans to count. If after all that they decide that virtualization is cost-effective, they might be right.