How many processors does Origin's Big O use? - cpu-architecture

We had a question in the exam about whether a desktop computer is a multiprocessor or not. We are now discussing whether the Big O PC from Origin uses a single microprocessor or more than one.

This question is really too broad for SO (for future reference), but I'll provide some insight that will hopefully improve your understanding. Since the term "microprocessor" is a bit general, and not all the technical information about every part of a modern PC is publicly available, it's hard to give an exact number. A lot of the components and subsystems of a modern desktop PC contain some kind of processor; generally these are microcontrollers, but they are still processors running firmware/software to provide whatever functionality that subsystem requires.
Certainly, no modern PC (like the one you mention, assuming it's this one: https://www.originpc.com/gaming/desktops/big-o/) would be considered a single-processor system. Everything from desktops to laptops to smartphones these days has at least 2-4 physical processors (i.e., cores) as part of the application SoC. So when you read that this system has an Intel i7-9700K, that "processor" is really made up of 8 of the same x86-64 processors all in one package. It's these cores that run all your applications and operating systems; but there are also many little processors running their own code to do various other functions. For example, on Intel CPUs, there is a small processor that starts up when the computer first powers on and enables various management and security features (https://en.wikipedia.org/wiki/Intel_Management_Engine). Likewise, there are processors in many of the subsystems: the audio subsystem has a small microcontroller/DSP for controlling low-level audio features, and the graphics system can have tens or more small processors, depending on how you count the cores in the integrated GPU. All of these are contained inside the package of the i7; there are even more on the motherboard and in external components. Depending on what you count, there can be hundreds of small processors in a modern computer system.
In the past, the main processor was a single-core unit that really only had one microprocessor in it; the terms "processor" and "CPU" have kind of held over, so you might say that a desktop has an Intel i7 for a processor despite the fact that the chip itself contains many main processors/cores and numerous subprocessors/microcontrollers. So while you might say that particular desktop has a single "processor", note the difference between multiple processors and multiple sockets on the motherboard: there are systems that can take more than one Intel/AMD SoC, but these are usually high-end workstations or servers, also called multi-socketed.
So, to directly answer your question, it depends on what is meant by "processor". If the question is whether you can fit multiple i7s (i.e., is the system multi-socketed), then no. If the question is whether a modern PC has multiple processors in terms of CPU cores, then yes. If the question is to count all the processing units on the system, including all the little microprocessors doing their particular jobs, then it's really hard to say, but there are a lot of them.

Related

Definition of "Application" from Microsoft and uC/OS

uC/OS-III User's Manual says:
The design process of a real-time application involves splitting the work into tasks (also called threads), with each task responsible for a portion of the job.
From this quote, we can infer that an application consists of tasks (threads).
Also, in Processes and Threads from Microsoft:
An application consists of one or more processes
Why the different definitions?
Is this because uC/OS-III is for embedded environment and Microsoft is for PC environment?
In a PC environment, a process is basically the same thing as a program. A process has an address space - a chunk of virtual memory that can only be accessed by that process. It consists of one or several threads, executing in the same address space, sharing the same memory. Different threads can run on different CPU cores, executing simultaneously.
On embedded RTOS systems, we don't really have all the dead weight of a hosted-system process. Traditionally, an RTOS therefore speaks of tasks, which are essentially the same thing as threads, except that most microcontrollers are still single-core, so the multi-tasking is simulated through task switches, with everything running on one core. Older PCs worked in the same manner.
Traditional microcontrollers don't have virtual memory, but address physical memory directly. Therefore anything running on the microcontroller can access anything, by default.
Nowadays, upper-end embedded systems and hosted systems are smearing together, as are the concepts. High-end microcontrollers have memory management units (MMUs) capable of setting up virtual address spaces. PC programmers trickle down into embedded systems and start looking for threads. And so on. The various concepts are blurring.
One (of several) dictionary definitions of "application" is:
a program or piece of software designed to fulfil a particular purpose
In that sense both the Microsoft and uC/OS definitions are valid; it is simply that the structure and execution environment of an application differ between those environments. Each describes what an application is composed of in the context of its specific platform and execution environment.
I would suggest that "application" has no particular technical meaning; it is simply "the purpose to which a system or software is put" - it is just English, not a specific technical concept.
The boundary of an "application" is context-dependent, and a desktop software application is a very different context than an embedded microcontroller application. Equally, you could draw your application boundary to encompass entire systems comprising many computers or processors running a variety of software and other equipment.
It means whatever the writer/speaker intends and can normally be inferred by the context. Don't waste your time looking for the one true definition or be confused by different usage.

Is Virtual Memory in some way related to Virtualization Technology?

I think it is a bit of a vague question. But I was trying to get a clear understanding of how a hypervisor interacts with operating systems under the hood, and what makes the two so different. Let me just walk you through my thought process.
Why do we need a virtualization manager a.k.a. a hypervisor, if we already have an operating system to manage resources which are shared?
One answer that I got was: suppose the system crashes; if we have no virtualization manager, then it's a total loss. So virtualization keeps the other systems unaffected by providing isolation.
Okay, then why do we need an operating system? Well, operating systems and hypervisors have different tasks to handle: the hypervisor handles how to allocate resources (compute, networking, etc.), while the OS handles process management, the file system, and memory (hmm... we also have virtual memory, right?).
I know I haven't asked the question in a very precise manner, but I am confused, so maybe I could get a little help to clear things up.
"Virtual" roughly means "something that is not what it seems". It is a common task in computing to substitute one thing with another.
A "virtual resource" is a common approach for that. It also means that there is an entity in a system that transparently substitutes one portion of resource with another. Memory is one of the most important resources in computing systems, therefore "Virtual Memory" is one of the first terms that historically was introduced.
However, there are other resources that are worth virtualizing. One can virtualize registers, or, more specifically, their values. Input/output devices, time, number of processors, network connections — all these resources can be and are virtualized these days (see: Intel VT-d, Virtual Time papers, Multicore simulators, Virtual switches and network adapters as respective examples). A combination of such things is what roughly constitutes a "Virtualization Technology". It is not a well-defined term, unless you talk about Intel® Virtualization Technology, which is one-vendor trade name.
In this sense, a hypervisor is such an entity that substitutes/manages chosen resources transparently to other controlled entities, which are then said to reside inside "containers", "jails", "virtual machines" — different names exist.
Both operating systems and hypervisors have different tasks to handle
In fact, they don't.
An operating system is just a hypervisor for regular user applications, as it manages resources behind their backs, transparently to them. The resources are: virtual memory, because the OS makes it seem that every application has a huge flat memory space for its own needs; virtual time, because applications do not manage context-switching points; virtual I/O, because applications use system calls to access devices instead of writing directly into their registers.
A hypervisor is a fancy way to say a "second-level operating system", as it virtualizes resources visible to operating systems. The resources are essentially the same: memory, time, I/O; a new addition is system registers.
It can go on and on, i.e., you can have hypervisors of higher levels that virtualize certain resources for entities at lower levels. For Intel systems, it roughly corresponds to the stack SMM -> VMM -> OS -> user application, where SMM (System Management Mode) is the outermost hypervisor and the user application is the innermost entity (which actually does the useful job of running the web browser and web server you are using right now).
Why do we need a virtualization manager aka hypervisor, if we already have an operating system to manage how the resources are shared?
We don't need it if the chosen computer architecture supports more than one level of indirection for resource management (e.g. nested virtualization); thus, it depends on the architecture. On certain IBM systems (System/360, 1960s-1970s), hypervisors were invented and used well before operating systems in the modern sense had been introduced. The more common IBM PC architecture based on Intel x86 CPUs (around 1975) had deficiencies that did not allow the required level of isolation between multiple OSes to be achieved without introducing a second layer of abstraction (hypervisors) into the architecture, which happened around 2005.

Multitasking RTOS on AVR [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 2 years ago.
I have an AT90USB162 AVR chip on which I want to run a multitasking RTOS, so I am evaluating possible RTOSes for use with my AVR chip. Which multitasking RTOSes support AVR? Maybe QNX? (Is it possible to run a QNX kernel on an AVR microchip?)
Thanks in advance.
The Atmel AT90USB162 is an 8-bit AVR RISC-based microcontroller -- QNX would be a stretch, and AVR is not in their BSP directory.
Micrium supports AVR with uC/OS-II
FreeRTOS also supports AVR
When you say "RTOS", I presume you mean pre-emptive multi-tasking? I'm guessing (since this is an 8-bit AVR) you don't need a filesystem, network stack, etc.?
If you're looking for a tiny, pre-emptive multi-tasking kernel, you might want to check out the Quantum Platform - I've used it on very resource-constrained platforms like AVR & MSP430. Co-workers have used it on 8-bit 8051 and HC11 variants as well.
The QP's preemptive kernel (QK) is a run-to-completion kernel, which reduces its stack (RAM) requirements and makes context switching less resource-intensive (no TCBs, less context to save & restore).
There is a QP/C variant, which is "small", and a QP-nano variant, which is "tiny". Since those terms are absolutely meaningless without numbers, the QP-nano page has a comparison of kernel types & their typical sizes. For example (minimum figures provided): typical RTOS, 10K ROM, 10K RAM; QP/C - 8K ROM, 1K RAM; QP-nano - 2K ROM, 100 bytes of RAM.
The good thing is that all the code is available so you can download & try it & see for yourself.
QNX - not a chance! QNX is a relatively large and sophisticated OS for 32bit devices with MMU, providing not only kernel level scheduling but also file systems, fault-tolerant networking, POSIX API, GUI etc. Its most significant feature is its support for memory protection - each thread runs in its own virtual memory segment, so only runs on devices with appropriate hardware support.
What features do you want from your OS? On an 8 bit device it is only reasonable to expect basic priority based pre-emptive scheduling and IPC. Other services such as networking, filesystem, USB etc. are usually add-ons from the RTOS vendor or must be integrated yourself from third-party code.
The obvious choice if you want to spend no money is FreeRTOS. It is competent, though in some ways architecturally unconventional, even if fairly conventional at the API level. In my tests on ARM it had slower context switch times than the other kernels I compared it with, but that may not be the case on AVR, and it would only be an issue if you require real-time response times on the order of a few microseconds. AVR has a rather large register set, so context switches are typically expensive in any case.
Atmel have a list of third-party support including RTOS at http://www.atmel.com/products/AVR/thirdparty.asp#. They list the following:
CMX Systems, Inc: CMX-RTX, CMX-Tiny+ (Add-ons: CMX-MicroNet, CMX-FFS)
FreeRTOS.org: FreeRTOS
Micriµm, Inc: µC/OS-II
Nut/OS: RTOS and TCP/IP stack with a Posix-like API.
SEGGER: embOS
I have personal experience of CMX-Tiny+ (on dsPIC), embOS (on ARM), FreeRTOS (on ARM), and uC/OS-II. They are all competent. uC/OS-II has the minor restriction of allowing only a single task at each priority level (no round-robin scheduling), but consequently probably has faster context switches. In the case of embOS I have successfully integrated third-party file-system and USB code, though the vendor has their own add-ons for these as well.
Though not a direct answer to your question: since this is an 8-bit controller with limited resources, weigh the benefits before committing to an OS layer. An OS layer only pays off when the project has to handle major subsystems that are tedious to code and maintain, e.g. file system, graphics, audio, networking, etc.
Since most suppliers provide an integrated development environment and standard libraries, and you can write code in high-level languages like C and C++, sticking to your own framework will be much more manageable for simple control tasks.
Atomthreads is a lightweight RTOS with AVR support. It supports:
Preemptive scheduler with 255 priority levels
Round-robin at same priority level
Semaphore
Mutex
Message Queue
Timers
It is open source and has about 1K lines of code. By comparison, the demo project for AVR built with Eclipse produces a .bin file of 96 to 127 KB. Of course FreeRTOS has more features (like memory management, including dynamic memory) and better security, but if you only need multithreading, Atomthreads is nice.
Here is a comprehensive comparison between multiple RTOSs.

OS memory isolation

I am trying to write a very thin hypervisor that would have the following restrictions:
runs only one operating system at a time (i.e. no OS concurrency, no hardware sharing, no way to switch to another OS)
it should only be able to isolate some portions of RAM (doing some memory translation behind the OS's back - say I have 6GB of RAM and I want Linux / Windows not to use the last 100MB, but to see just 5.9GB and use that without knowing what's behind it)
I searched the Internet, but found close to nothing on this specific matter, as I want to keep as little overhead as possible (the current hypervisor implementations don't fit my needs).
What you are looking for already exists, in hardware!
It's called IOMMU[1]. Basically, like page tables, adding a translation layer between the executed instructions and the actual physical hardware.
AMD calls it IOMMU[2]; Intel calls it VT-d (search for "intel vt-d").
[1] http://en.wikipedia.org/wiki/IOMMU
[2] http://developer.amd.com/documentation/articles/pages/892006101.aspx
Here are a few suggestions / hints, which are necessarily somewhat incomplete, as developing a from-scratch hypervisor is an involved task.
Make your hypervisor "multiboot-compliant" at first. This will enable it to reside as a typical entry in a bootloader configuration file, e.g., /boot/grub/menu.lst or /boot/grub/grub.cfg.
You want to set aside your 100MB at the top of memory, e.g., from 5.9GB up to 6GB. Since you mentioned Windows, I'm assuming you're interested in the x86 architecture. The long history of x86 means that the first few megabytes are filled with all kinds of legacy device complexities; there is plenty of material on the web about the "hole" between 640K and 1MB. Older ISA devices (many of which still survive in modern systems in "Super I/O chips") are restricted to performing DMA to the first 16MB of physical memory. If you try to get in between Windows or Linux and its relationship with these first few MB of RAM, you will have a lot more complexity to wrestle with. Save that for later, once you've got something that boots.
As physical addresses approach 4GB (2^32, hence the physical memory limit on a basic 32-bit architecture), things get complex again, as many devices are memory-mapped into this region. For example (referencing the other answer), the IOMMU that Intel provides with its VT-d technology tends to have its configuration registers mapped to physical addresses beginning with 0xfedNNNNN.
This is doubly true for a system with multiple processors. I would suggest you start on a uniprocessor system, disable the other processors from within the BIOS, or at least manually configure your guest OS not to enable the other processors (e.g., for Linux, include 'nosmp' on the kernel command line -- e.g., in your /boot/grub/menu.lst).
Next, learn about the "e820" map. Again there is plenty of material on the web, but perhaps the best place to start is to boot up a Linux system and look near the top of the output of 'dmesg'. This is how the BIOS communicates to the OS which portions of the physical memory space are "reserved" for devices or other platform-specific BIOS/firmware uses (e.g., to emulate a PS/2 keyboard on a system with only USB I/O ports).
One way for your hypervisor to "hide" its 100MB from the guest OS is to add an entry to the system's e820 map. A quick and dirty way to get things started is to use the Linux kernel command line option "mem=" or the Windows boot.ini / bcdedit flag "/maxmem".
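As a concrete illustration of that quick-and-dirty approach, here is a hypothetical legacy-GRUB menu.lst entry (kernel path, device names, and sizes are placeholders, not taken from any real system) that caps Linux at 6044MB, leaving the top 100MB of a 6GB machine untouched for the hypervisor:

```
# /boot/grub/menu.lst -- illustrative entry only; paths and sizes are examples
title  Linux (memory capped for thin hypervisor)
root   (hd0,0)
kernel /vmlinuz root=/dev/sda1 ro mem=6044M   # 6144MB total - 100MB reserved
initrd /initrd.img
```

This only tells the kernel not to use the memory; a real hypervisor would also want to reflect the reservation in the e820 map so the OS never sees that range at all.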
There are a lot more details and things you are likely to encounter (e.g., x86 processors begin in 16-bit mode when first powered-up), but if you do a little homework on the ones listed here, then hopefully you will be in a better position to ask follow-up questions.

Programming considerations for virtualized applications

There are lots of questions on SO asking about the pros and cons of virtualization for both development and testing.
My question is subtly different - in a world in which virtualization is commonplace, what are the things a programmer should consider when it comes to writing software that may be deployed into a virtualized environment? Some of my initial thoughts are:
Detecting if another instance of your application is running
Communicating with hardware (physical/virtual)
Resource throttling (app written for multi-core CPU running on single-CPU VM)
Anything else?
You have most of the basics covered with the three broad points. Watch out for:
Hardware communication related issues. Disk access speeds vary vastly (and may have unusually high extremes - imagine a VM that is shut down for 3 days in the middle of a disk write...). Network access may be interrupted or return unusual responses
Fancy pointer arithmetic. Try to avoid it
Heavy reliance on unusual or uncommon low-level/assembly instructions
Reliance on machine clocks. Remember that any calls you're making to the clock, and time intervals, may regularly return unusual values when running on a VM
Single-CPU apps may find themselves running on multi-CPU machines that do funky things like work stealing
Corner cases and unusual failure modes are much more common. You might not have to worry as much that the network card will disappear in the middle of your communication on a real machine, as you would on a virtual one
Manual management of resources (memory, disk, etc...). The more automated the work, the better the virtual environment is likely to be at handling it. For example, you might be better off using a memory-managed type of language/environment, instead of writing an application in C.
In my experience there are really only a couple of things you have to care about:
Your application should not fail because of CPU time shortage (i.e. by using overly tight timeouts)
Don't use low-priority always-running processes to perform tasks on the background
The clock may run unevenly
Don't trust what the OS says about system load
Almost any other issue should not be handled by the application but by the virtualizer, the host OS or your preferred sys-admin :-)