I am wondering whether it is, in theory, possible to enable hyper-threads at runtime after they have been disabled in the BIOS, and vice versa. As it turns out, even when hyper-threading is disabled, the threads do still show up in the ACPI MADT as disabled processors. Here is sample MADT output for a processor with 4 cores and 2 threads per core, with hyper-threading disabled:
CPU 0: APIC_ID=0 ACPI_PROCESSOR_ID=0 ENABLED=1
CPU 1: APIC_ID=2 ACPI_PROCESSOR_ID=1 ENABLED=1
CPU 2: APIC_ID=4 ACPI_PROCESSOR_ID=2 ENABLED=1
CPU 3: APIC_ID=6 ACPI_PROCESSOR_ID=3 ENABLED=1
CPU 4: APIC_ID=255 ACPI_PROCESSOR_ID=4 ENABLED=0
CPU 5: APIC_ID=255 ACPI_PROCESSOR_ID=5 ENABLED=0
CPU 6: APIC_ID=255 ACPI_PROCESSOR_ID=6 ENABLED=0
CPU 7: APIC_ID=255 ACPI_PROCESSOR_ID=7 ENABLED=0
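(For reference, below is a minimal sketch of how a dump like this can be produced by walking the MADT's Processor Local APIC subtables, following the layout in ACPI spec section 5.2.12.2. Obtaining the raw table bytes, e.g. from /sys/firmware/acpi/tables/APIC on Linux, is assumed to happen elsewhere.)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Print one line per Processor Local APIC entry (subtable type 0),
     * mirroring the dump above. `madt` points at the raw MADT bytes. */
    static void print_local_apics(const uint8_t *madt)
    {
        uint32_t len;
        memcpy(&len, madt + 4, 4);   /* table length, at header offset 4 */
        uint32_t off = 44;           /* 36-byte header + LAPIC addr + flags */
        int cpu = 0;

        while (off + 2 <= len) {
            uint8_t type   = madt[off];
            uint8_t sublen = madt[off + 1];
            if (sublen < 2 || off + sublen > len)
                break;               /* malformed subtable, stop */
            if (type == 0) {         /* Processor Local APIC */
                uint32_t flags;
                memcpy(&flags, madt + off + 4, 4);
                printf("CPU %d: APIC_ID=%u ACPI_PROCESSOR_ID=%u ENABLED=%u\n",
                       cpu++, madt[off + 3], madt[off + 2], flags & 1);
            }
            off += sublen;
        }
    }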
I'm wondering (a) whether there is a way to enable these cores at runtime, without rebooting and going through the BIOS, and (b) what state, well defined or not, a hyper-thread/processor is in when it is not enabled (for example, is it executing hlt or mwait instructions with its local APIC disabled?).
What I read in the ACPI specification (5.2.12.2 Processor Local APIC Structure) is the following for the enabled flag:
If zero, this processor is unusable, and the operating system
support will not attempt to use it.
However, if someone knows, I'm interested in the actual state a disabled hyper-thread is in. For example, did the MP Initialization Protocol algorithm described in Intel's Software Developer's Manual, Volume 3 (Section 8.4.3) execute on the disabled hyper-threads during initialization?
(a) Sorry to say, but with 99.99% certainty you cannot, unless you have access to the processor initialization code or your BIOS vendor happened to comment out a few lines of it. The number of cores and threads is locked in at the end of the cold-boot process.
(b) I'm pretty sure that when the HT-disable bit is set, the second logical processor and its local APIC are disabled.
First consider the situation when there is only one operating system installed. Now I run some executable. The processor reads instructions from the executable file and performs them. Even though I can put whatever instructions I want into the file, my program can't read arbitrary areas of the HDD (or do many other potentially "bad" things).
It looks like magic, but I understand how this magic works. The operating system starts my program and puts the processor into some "unprivileged" state. "Unsafe" processor instructions are not allowed in this state, and the only way to put the processor back into the "privileged" state is to give control back to the kernel. Kernel code can use all of the processor's instructions, so it can do the potentially unsafe things my program "asked" for, if it decides they are allowed.
Now suppose we have VMWare or VirtualBox on a Windows host, and the guest operating system is Linux. I run a program in the guest, and it transfers control to the guest Linux kernel. The guest Linux kernel's code is supposed to run in the processor's "privileged" mode (it must contain "unsafe" processor instructions!). But I strongly doubt that it has unlimited access to all the computer's resources.
I do not need too many technical details; I only want to understand how this part of the magic works.
This is a great question and it really hits on some cool details regarding security and virtualization. I'll give a high-level overview of how things work on an Intel processor.
How are normal processes managed by the operating system?
An Intel processor has 4 different "protection rings" that it can be in at any time. The ring that code is currently running in determines the specific assembly instructions that may run. Ring 0 can run all of the privileged instructions whereas ring 3 cannot run any privileged instructions.
The operating system kernel always runs in ring 0. This ring allows the kernel to execute the privileged instructions it needs in order to control memory, start programs, write to the HDD, etc.
User applications run in ring 3. This ring does not permit privileged instructions (e.g. those for writing to the HDD) to run. If an application attempts to run a privileged instruction, the processor will take control from the process and raise an exception that the kernel will handle in ring 0; the kernel will likely just terminate the process.
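A minimal sketch of this in action, assuming Linux on x86: hlt is a privileged instruction, so executing it in ring 3 raises a general-protection fault, which the kernel delivers to the process as SIGSEGV rather than letting the instruction execute.

    #include <signal.h>
    #include <unistd.h>

    static void on_fault(int sig)
    {
        (void)sig;
        /* write(2) is async-signal-safe, unlike printf */
        static const char msg[] =
            "hlt trapped: the kernel blocked the privileged instruction\n";
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);
        _exit(0);
    }

    int main(void)
    {
        signal(SIGSEGV, on_fault);   /* #GP in user mode shows up as SIGSEGV */
        __asm__ volatile ("hlt");    /* privileged: faults instead of halting */
        return 1;                    /* never reached */
    }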
Rings 1 and 2 tend not to be used in practice, though they do have uses.
How does virtualization work?
Before there was hardware support for virtualization, a virtual machine monitor (such as VMWare) would need to do something called binary translation (see this paper). At a high level, this consists of the VMM inspecting the binary of the guest operating system and emulating the behavior of the privileged instructions in a safe manner.
Now there is hardware support for virtualization in Intel processors (look up Intel VT-x). In addition to the four rings mentioned above, the processor has two states, each of which contains four rings: VMX root mode and VMX non-root mode.
The host operating system and its applications, along with the VMM (such as VMWare), run in VMX root mode. The guest operating system and its applications run in VMX non-root mode. Again, both of these modes each have their own four rings, so the host OS runs in ring 0 of root mode, the host OS applications run in ring 3 of root mode, the guest OS runs in ring 0 of non-root mode, and the guest OS applications run in ring 3 of non-root mode.
When code that is running in ring 0 of non-root mode attempts to execute a privileged instruction, the processor will hand control back to the host operating system running in root mode so that the host OS can emulate the effects and prevent the guest from having direct access to privileged resources (or in some cases, the processor hardware can just emulate the effect itself without getting the host involved). Thus, the guest OS can "execute" privileged instructions without having unsafe access to hardware resources - the instructions are just intercepted and emulated. The guest cannot just do whatever it wants - only what the host and the hardware allow.
Just to clarify, code running in ring 3 of non-root mode will cause an exception to be sent to the guest OS if it attempts to execute a privileged instruction, just as an exception will be sent to the host OS if code running in ring 3 of root mode attempts to execute a privileged instruction.
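To make the exit-and-resume cycle concrete, here is a minimal sketch using Linux's KVM interface (KVM is my choice for illustration; the answer above mentions VMware, but the hardware mechanism is the same VT-x root/non-root split). The host builds a tiny real-mode guest that does an OUT and a HLT, then handles the resulting VM exits; error checking is omitted for brevity.

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Tiny 16-bit guest: write 'A' to port 0x3f8, then halt.
         * Both actions are privileged, so both cause VM exits. */
        const uint8_t code[] = {
            0xba, 0xf8, 0x03,   /* mov dx, 0x3f8 */
            0xb0, 'A',          /* mov al, 'A'   */
            0xee,               /* out dx, al    */
            0xf4,               /* hlt           */
        };

        int kvm = open("/dev/kvm", O_RDWR);
        int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

        /* One page of guest "RAM" with the code at guest address 0. */
        void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        memcpy(mem, code, sizeof(code));
        struct kvm_userspace_memory_region region = {
            .guest_phys_addr = 0,
            .memory_size     = 0x1000,
            .userspace_addr  = (uint64_t)mem,
        };
        ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
        struct kvm_run *run =
            mmap(NULL, ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0),
                 PROT_READ | PROT_WRITE, MAP_SHARED, vcpu, 0);

        struct kvm_sregs sregs;
        ioctl(vcpu, KVM_GET_SREGS, &sregs);
        sregs.cs.base = 0; sregs.cs.selector = 0;  /* real mode, code at 0 */
        ioctl(vcpu, KVM_SET_SREGS, &sregs);
        struct kvm_regs regs = { .rip = 0, .rflags = 0x2 };
        ioctl(vcpu, KVM_SET_REGS, &regs);

        /* The VM-exit loop: run non-root mode until the guest does
         * something the host must emulate, handle it, and resume. */
        for (;;) {
            ioctl(vcpu, KVM_RUN, 0);
            switch (run->exit_reason) {
            case KVM_EXIT_IO:    /* guest's OUT: emulate the I/O port */
                putchar(*((char *)run + run->io.data_offset));
                break;
            case KVM_EXIT_HLT:   /* guest's HLT: shut the VM down */
                printf("\nguest halted\n");
                return 0;
            default:
                printf("unhandled exit %u\n", run->exit_reason);
                return 1;
            }
        }
    }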
In a single-processor system, the processor starts executing the boot ROM code when powered on and works through the multiple stages of the boot. How does this work in a multiprocessor system? Does one processor act as the master? Who decides which processor is the master and which are the helpers?
How and where is it configured?
Are the page tables shared between the processors? The processor caches are obviously separate, at least the L1 caches are.
Multiprocessor Booting
1. One processor is designated the 'Bootstrap Processor' (BSP)
– The designation is done either by hardware or by the BIOS
– All other processors are designated APs (Application Processors)
2. The BIOS boots the BSP
3. The BSP learns the system configuration
4. The BSP triggers the boot of the APs
– Done by sending a Startup IPI (inter-processor interrupt) to each AP; a sketch follows below
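A hypothetical sketch of step 4, assuming the xAPIC MMIO interface: `lapic` is a mapping of the local APIC page (default physical address 0xFEE00000), `delay_us` a platform delay routine, and `vector` the page number of the AP startup trampoline (the AP begins executing at vector << 12); all three names are placeholders.

    #include <stdint.h>

    #define ICR_LOW  (0x300 / 4)   /* Interrupt Command Register, bits 0-31  */
    #define ICR_HIGH (0x310 / 4)   /* Interrupt Command Register, bits 32-63 */

    static void start_ap(volatile uint32_t *lapic, uint8_t apic_id,
                         uint8_t vector, void (*delay_us)(unsigned))
    {
        /* INIT IPI: put the target AP into the wait-for-SIPI state. */
        lapic[ICR_HIGH] = (uint32_t)apic_id << 24;  /* destination APIC ID */
        lapic[ICR_LOW]  = 0x00004500;               /* INIT, edge, assert  */
        delay_us(10000);

        /* Two Startup IPIs, as Intel's MP init protocol recommends. */
        for (int i = 0; i < 2; i++) {
            lapic[ICR_HIGH] = (uint32_t)apic_id << 24;
            lapic[ICR_LOW]  = 0x00004600 | vector;  /* SIPI + start page */
            delay_us(200);
        }
    }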
In the FreeBSD kernel, how can I first stop all the cores, then run my code (which can be a kernel module) on all the cores, and, when finished, let them restore their contexts and continue executing?
Linux has APIs like this; I believe FreeBSD also has a set of APIs to do this.
edit:
Most likely I did not clarify what I want to do. First, the machine is x86_64 SMP.
I set a timer; when it expires, I want to stop all the threads (including kernel threads) on all cores and save their contexts, run my code on one core to do some kernel work, and, when finished, restore the contexts and let them continue running. This should happen periodically. The other kernel threads and processes should not otherwise be affected (their relative priorities unchanged).
I assume that your "code" (the kernel module) already takes advantage of SMP inherently.
So, one approach is:
Set the affinity of all your processes/threads to the desired CPUs (sched_setaffinity on Linux; FreeBSD's equivalent is cpuset_setaffinity).
Set each of your threads to use Real-Time (RT) scheduling.
If it is a kernel module, you can do this manually in your module (I believe) by changing the scheduling policy of your task_struct to SCHED_RR (or SCHED_FIFO) after pinning each process to a core.
In userspace, you can use the FreeBSD rtprio command (http://www.freebsd.org/cgi/man.cgi?query=rtprio&sektion=1):
rtprio, idprio -- execute, examine or modify a utility's or process's
realtime or idletime scheduling priority
The effect will be that your code runs before any other non-essential process in the system, until it finishes.
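A minimal user-space sketch of those two steps on FreeBSD; cpuset_setaffinity(2) and rtprio(2) are FreeBSD's counterparts to the Linux calls named above, and the CPU number and priority here are arbitrary example choices.

    #include <sys/param.h>
    #include <sys/cpuset.h>
    #include <sys/rtprio.h>
    #include <err.h>

    int main(void)
    {
        /* Step 1: pin this process to CPU 0. */
        cpuset_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);
        if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
                               sizeof(mask), &mask) != 0)
            err(1, "cpuset_setaffinity");

        /* Step 2: switch to realtime scheduling (pid 0 = self). */
        struct rtprio rtp = { .type = RTP_PRIO_REALTIME, .prio = 0 };
        if (rtprio(RTP_SET, 0, &rtp) != 0)
            err(1, "rtprio");

        /* ... the periodic work would run here ... */
        return 0;
    }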
I am trying to understand hardware-assisted virtualization for a project with the ARM Cortex-A8, using the ARM TrustZone feature. I am new to this topic, so I started with Wiki entries to understand more.
Wikipedia explains hardware-assisted virtualization and adds a line to the definition:
Full virtualization is used to simulate a complete hardware
environment, or virtual machine, in which an unmodified guest
operating system (using the same instruction set as the host machine)
executes in complete isolation.
The text in bold is a bit confusing. How can the same instruction set of the processor be used to provide two isolated environments? Can someone explain it? The ARM TrustZone manual also talks of a "virtual processor core" to provide security. Please shed some light.
thanks
The phrase "using the same instruction set as the host machine" means that the guest OS is not aware of the virtualization layer and behaves as if it is executed on a real machine (with the same instruction set). This is in contrast to the para-virtualization paradigm in which the guest OS is aware of virtualization and calls some specific VMM functions, i.e. hypercalls.
No, the guest does not use additional instructions; it issues the ordinary instruction set. Without hardware assistance, the virtual machine's privileged instructions are translated by a hypervisor component called the VMM (virtual machine monitor) before being executed on the physical CPU.
A physical CPU with hardware-assisted virtualization (e.g. Intel VT-x) instead adds a new mode of operation, VMX non-root, that allows the virtual machine to execute its ring 0 code directly; only the instructions that could break isolation trap back to the hypervisor.
I have developed a Linux block device driver for a CD device. The driver works well, but now there is a requirement that it run on an SMP system. When I did a test run on the SMP system, I found the driver's performance degraded: the bit rate for a data CD has gone down tremendously compared to the single-core system. So I understand that my driver needs to be modified to make it SMP-safe.
In my driver, I have used:
1. Kernel threads
2. Mutex
3. Semaphore
4. Completions
My SMP system is : ARM Cortex-A9 Dual Core 600 MHz
Can someone please tell me what factors I should keep in mind while doing this porting?
Normally, on SMP systems, shared resources (I/O resources) and global variables must be handled so that simultaneous execution on different cores cannot overwrite or corrupt the data. You can use spinlocks, semaphores, etc. to ensure that only one core performs an operation on a given block/task at a time. That is the logical implementation: you have to identify the potentially risky areas in the device driver, such as the ISR and the read and write operations, identify your driver's multiple entry points, and work out which central task (in the driver) they converge on.
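As a concrete illustration of that rule, here is a hypothetical Linux driver fragment (the names are made up): state shared between the read path (process context) and the ISR is protected by a spinlock, with the _irqsave variant in process context so the ISR cannot deadlock against a holder on the same core.

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(cd_lock);
    static unsigned int cd_pending;       /* example shared state */

    static irqreturn_t cd_isr(int irq, void *dev)
    {
        spin_lock(&cd_lock);              /* IRQs are already off here */
        cd_pending++;
        spin_unlock(&cd_lock);
        return IRQ_HANDLED;
    }

    /* Called from the driver's read path in process context. */
    static unsigned int cd_consume(void)
    {
        unsigned long flags;
        unsigned int n;

        spin_lock_irqsave(&cd_lock, flags);   /* safe against cd_isr() */
        n = cd_pending;
        cd_pending = 0;
        spin_unlock_irqrestore(&cd_lock, flags);
        return n;
    }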