Is the x86_64 architecture continuously being updated? - x86-64

As we know, ARM updates the Arm architecture continuously, having recently released v9, I think.
But is the x86_64 architecture also being updated continuously by Intel or AMD?

x86-64 does extensions by name, with only a de-facto policy (by Intel) of having future CPUs support all the extensions previous CPUs did (i.e. backwards compatibility).
Even that is fragmenting somewhat, with Intel introducing new ISA extensions in server CPUs but not in contemporary desktop CPUs, and with movbe appearing in Atom significantly before mainstream CPUs (Haswell). Intel also kept selling Pentium / Celeron CPUs without AVX or BMI1/BMI2. (Although Ice Lake and later Pentium / Celeron parts may finally handle 256-bit vectors with AVX2, and thus decode VEX prefixes and be able to enable BMI1/BMI2 as well.)
AMD sometimes even drops support for their ISA extensions if Intel never adopts them. (Like XOP introduced in Bulldozer-family, dropped in Zen. And FMA4 again from Bulldozer, officially dropped in Zen but still works in Zen 1, really gone in Zen 2.) See also Agner Fog's blog article Stop the instruction set war.
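Because the extension landscape is fragmented like this, portable code generally has to detect features at run time rather than assume a baseline. As a minimal sketch (assuming GCC or clang, which provide the __builtin_cpu_supports builtin; the fast/fallback paths are hypothetical):

    #include <stdio.h>

    int main(void)
    {
        __builtin_cpu_init();  /* initialize the CPU feature cache (GCC/clang) */

        /* Dispatch on extensions by name, since there is no single
         * "x86-64 version number" to test against. */
        if (__builtin_cpu_supports("avx2") && __builtin_cpu_supports("bmi2"))
            puts("AVX2 + BMI2 available: use the wide/fast code path");
        else
            puts("Baseline x86-64 only: use the fallback code path");

        return 0;
    }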
There unfortunately isn't an agreed-upon mechanism between vendors for architecture versions, so, for example, atomicity for aligned loads/stores of various widths is specified by Intel in terms of "486 or later", "Pentium and later", "P6-family and later". See Why is integer assignment on a naturally aligned variable atomic on x86?
Note that the common subset of Intel's and AMD's atomicity guarantees for loads/stores to cacheable memory actually comes from AMD in this case: Intel guarantees no tearing for any 2, 4, or 8-byte store that doesn't cross a cache-line boundary, but AMD only guarantees atomicity for those sizes within an aligned 8-byte chunk, and multi-socket K10 truly does tear in transfers between sockets.
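To see what those guarantees mean in practice, here is a minimal C11 sketch (the variable names are made up): a release store to a naturally aligned 8-byte atomic compiles to an ordinary mov on x86-64, and readers can never observe a torn value.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Naturally aligned 8-byte object: the release store below compiles to a
     * plain `mov` on x86-64, yet readers never see a half-written value. */
    static _Alignas(8) _Atomic uint64_t shared_value;

    void publish(uint64_t v)
    {
        atomic_store_explicit(&shared_value, v, memory_order_release);
    }

    uint64_t consume(void)
    {
        return atomic_load_explicit(&shared_value, memory_order_acquire);
    }

    int main(void)
    {
        printf("lock-free 8-byte atomics: %d\n",
               atomic_is_lock_free(&shared_value));
        publish(0x1122334455667788ULL);
        printf("read back: %#llx\n", (unsigned long long)consume());
        return 0;
    }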
Nowhere is there a single document that covers the lowest common denominator of functionality and instruction-set extensions across modern x86-64 CPUs.

Related

Multicore CPUs, Different types of CPUs and operating systems

An operating system should support a CPU architecture and not a specific CPU. For example, if some company has three types of CPUs, all based on the x86 architecture (one a single-core processor, another a dual-core, and the last with five cores), the operating system isn't CPU-model based, it's architecture based. So how would the kernel know whether the CPU it is running on supports multi-core processing, or how many cores it even has?
Also, take timer interrupts as an example: some members of Intel's i386 processor family use the PIT and others use the APIC timer to generate periodic timed interrupts. How does the operating system recognize which one is present if it wants, for example, to configure it? (Specifically regarding timers, I know they are usually set up by the BIOS, but the ISR handling timer interrupts should also recognize which timer mechanism it is running on, in order to disable / enable / modify it when handling an interrupt.)
Is there such a thing as a CPU driver that is relevant to the OS and not the BIOS? Also, if someone could refer me to somewhere I could learn more about how multi-core processing is triggered / implemented by the kernel in terms of code, that would be great.
The operating system kernel almost always has an abstraction layer called the HAL, which provides an interface above the hardware that the rest of the kernel can easily use. This HAL is also architecture-dependent and not model-dependent. The CPU architecture therefore has to define some invocation method that allows the HAL to discover which features are and aren't present in the executing processor.
On the IA-32 / Intel 64 architecture, there is an instruction known as CPUID. You may ask another question here:
Was CPUID present from the beginning?
No, CPUID wasn't present in the earliest CPUs. In fact, it came considerably later in the development of the i386 family. Bit 21 (the ID flag) of the EFLAGS register indicates support for the CPUID instruction, according to the Intel Manual, Volume 2A.
PUSHFD
Using the PUSHFD instruction, you can copy the contents of the EFLAGS register onto the stack and check whether bit 21 (the ID flag) can be toggled; if software can flip it, CPUID is supported.
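A hedged sketch of that test in GCC-style inline assembly (64-bit, so pushfq/popfq rather than PUSHFD; on any x86-64 CPU the answer is always "yes", so this only matters for very old 32-bit parts):

    #include <stdio.h>

    static int cpuid_supported(void)
    {
        unsigned long long before, after;
        __asm__ volatile(
            "pushfq\n\t"
            "popq   %0\n\t"             /* before = RFLAGS */
            "movq   %0, %1\n\t"
            "xorq   $0x200000, %1\n\t"  /* flip bit 21, the ID flag */
            "pushq  %1\n\t"
            "popfq\n\t"                 /* try to write the flipped value */
            "pushfq\n\t"
            "popq   %1\n\t"             /* after = RFLAGS as the CPU kept it */
            : "=&r"(before), "=&r"(after)
            :
            : "cc");
        return ((before ^ after) & 0x200000) != 0;
    }

    int main(void)
    {
        printf("CPUID supported: %s\n", cpuid_supported() ? "yes" : "no");
        return 0;
    }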
How does CPUID return information, if it is just an instruction?
The CPUID instruction returns processor identification and feature information in the EAX, EBX, ECX, and EDX registers. Its output depends on the values put into the EAX and ECX registers before execution.
Each value (which is valid for CPUID) that can be put in the EAX register is known as a CPUID leaf. Some leaves have sub-leaves, i.e. they also depend on a sub-leaf value in the ECX register.
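As a small illustration using GCC/clang's <cpuid.h> wrappers (the leaves and feature bits shown are the architecturally defined ones; the rest is just a sketch):

    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = {0};

        /* Leaf 0: highest supported basic leaf in EAX, vendor string in EBX/EDX/ECX. */
        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        printf("vendor: %s, max basic leaf: %u\n", vendor, eax);

        /* Leaf 1: feature bits, e.g. ECX bit 28 = AVX. */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            printf("AVX supported: %s\n", (ecx & (1u << 28)) ? "yes" : "no");

        /* Leaf 7, sub-leaf 0 (ECX=0): extended features, e.g. EBX bit 5 = AVX2. */
        if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            printf("AVX2 supported: %s\n", (ebx & (1u << 5)) ? "yes" : "no");

        return 0;
    }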
How is multi-core support detected at the OS kernel level?
There is a standard known as ACPI (Advanced Configuration and Power Interface) which defines a set of ACPI tables. These include the MADT, the Multiple APIC Description Table. This table contains entries that have information about local APICs, I/O APICs, interrupt source overrides, and much more. Each local APIC is associated with exactly one logical processor, as you should know.
Using this table, the kernel can get the APIC ID of each local APIC present in the system (only those whose CPUs are working properly). The APIC ID itself is divided into topological ID fields, whose bit offsets are obtained using CPUID. This lets the OS know where each CPU is located: its domain, chip, core, and hyperthreading ID.
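A hedged sketch of the CPUID side of this, using leaf 0x0B (extended topology enumeration, available on newer Intel and AMD CPUs): each sub-leaf reports one topology level together with how many x2APIC-ID bits to shift away to reach the next level.

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 0x0B: each sub-leaf describes one topology level (1 = SMT, 2 = core).
         * EAX[4:0] is the number of x2APIC-ID bits to shift away to reach the
         * next level; EDX is this logical CPU's x2APIC ID. */
        for (unsigned int level = 0; ; level++) {
            if (!__get_cpuid_count(0x0B, level, &eax, &ebx, &ecx, &edx))
                break;
            unsigned int type  = (ecx >> 8) & 0xFF;   /* 0 means "no more levels" */
            unsigned int shift = eax & 0x1F;
            if (type == 0)
                break;
            printf("level %u: type=%s shift=%u x2apic_id=%u\n",
                   level, type == 1 ? "SMT" : type == 2 ? "core" : "other",
                   shift, edx);
        }
        return 0;
    }

Note this reports the fields for whichever logical CPU the code happens to run on; a kernel (or a pinned user-space tool) would repeat it per CPU, or rely on the MADT instead.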

What is a "Logical CPU Core"

I am reading some Operating Systems materials. I read this phrase that confused me a little:
"Multicore refers to a computer or processor that has more than one logical CPU core, and that can execute multiple instructions at the same time."
What is a "logical CPU core", is it a processor? Does it correspond to something physical, or is it the OS which sees logical CPU cores but in reality there is less physical processors than logical CPU cores?
A logical CPU core contains the complete architectural context of a uniprocessor. This is the unit for which the OS can do scheduling and control architectural state such as the address for exceptions (for an architecture that does not hardwire such).
There are two common cases where it will not correspond one-to-one with a physical core. First, a single physical core can implement multiple virtual processors, e.g., Intel's Hyper-Threading. In this case the OS scheduler should be aware that virtual processors may share various resources such as instruction fetch, instruction scheduling hardware, and execution units, which generally means that tasks should be scheduled to distinct physical cores to maximize performance. (This issue also applies to a lesser extent to distinct cores that share L2 cache. Such concerns are somewhat related to NUMA optimizations for multi-CPU computers.)
In the second case, a hypervisor's virtualization of the hardware can present an arbitrary number of cores to the OS. While a hypervisor would typically make visible to a guest OS no more logical processors than provided by hardware (i.e., including virtual processors associated with hardware multithreading), theoretically the hypervisor could present an arbitrary number of processors to the OS (just as an OS can present the impression of an arbitrary number of processors to the application layer by using time slicing). In such a software virtualization context, the hypervisor may not expose to the OS the nature of the processors, so the OS could only treat them as abstract units for scheduling.
Somewhat complicating this division, it is also possible for hardware to implement multithreading without providing a full virtual processor for each thread. E.g., the MIPS Multithreading Application Specific Extension makes a distinction between Virtual Processing Elements (which behave as distinct processors in terms of architectural state) and Thread Contexts (which share the system coprocessor among threads in the same VPE). As a further complication, it may be possible for Thread Contexts to be migrated among VPEs. E.g., a physical processor core might have two VPEs and five Thread Contexts and the OS might be allowed to assign a given TC to either VPE such that either VPE could have between one and four TCs. In addition, unprivileged software can FORK and YIELD threads without OS involvement if spare hardware threads are available (in the case of FORK) or at least one thread will still be active (in the case of YIELD).
For MIPS MT-ASE, the OS would generally only be concerned with Thread Contexts, but some optimizations are possible with a more complete knowledge of the actual hardware configuration and some correctness issues are possible if a Thread Context is treated as a Virtual Processing Element.
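Where this matters to software, the OS usually exposes logical CPUs as the unit you can pin work to. A hedged, Linux-specific sketch (sched_setaffinity operates on logical CPU numbers, which may be SMT siblings of the same physical core):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Pin the calling thread to logical CPU 0. Which physical core or
         * SMT sibling that is depends on the machine's topology. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("now restricted to logical CPU 0 (of %ld online)\n",
               sysconf(_SC_NPROCESSORS_ONLN));
        return 0;
    }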
It might be helpful to have some background knowledge:
Processor
A processor could describe either a single execution core or a single physical multi-core chip. The context of use will define the meaning of the term; e.g. a normal PC usually has only one processor.
Chips
A chip refers to a physical integrated circuit (IC) in a computer. A chip usually refers to an execution package built with single- or multi-core technology.
Sockets
The socket refers to a physical connector on a computer motherboard that accepts a single physical chip. Many motherboards can have multiple sockets that can, in turn, accept multi-core chips. 
Cores
The term became common with the advent of multi-core technology, such as dual-core and quad-core chips. Essentially, a core comprises a logical execution unit containing an L1 cache and functional units. Cores are able to independently execute programs or threads. Supercomputers are listed as having thousands of cores.
Hyper-threading
Hyper-threading is an Intel technology that originally preceded multi-core systems, and was used to make a single core appear logically as multiple cores on the same chip. Hyper-threading improves performance by letting more than one thread share a core's execution resources whenever possible, allowing the operating system to schedule more than one process at a time. For more, see Intel Hyper-Threading Technology.
Physical/Logical Cores
(figure: sockets and cores; two sockets, four cores per socket, four hardware threads per core)
As shown in the picture, you have 2 sockets, each socket has 4 cores, and each core can execute 4 threads concurrently (due to Hyper-Threading). In this case, if you run the command lscpu on Linux, you may see that you have 32 CPUs. Actually, you have 2 chips, 2 sockets, 8 cores, and 32 CPUs (from the Linux perspective).
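For a quick user-space sanity check of what the OS ended up counting as "CPUs", a small sketch assuming Linux/glibc:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Each entry here is one *logical* CPU: a hardware thread the
         * scheduler can dispatch to, not necessarily a physical core. */
        long configured = sysconf(_SC_NPROCESSORS_CONF);
        long online     = sysconf(_SC_NPROCESSORS_ONLN);

        printf("logical CPUs configured: %ld, online: %ld\n", configured, online);
        return 0;
    }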
I guess it refers to the ALU (Arithmetic Logic Unit) of the CPU.
The ALU of any processor is the part of the CPU responsible for performing all the arithmetic and logical operations.

On x86-64, is the "movnti" instruction atomic?

On x86-64 CPUs (either Intel or AMD), is the "movnti" instruction that writes 4/8 bytes to a 32/64-bit aligned address atomic?
Yes, movnti is atomic on naturally-aligned addresses, just like all other naturally-aligned 8/16/32/64b stores (and loads) on x86. This applies regardless of memory-type (writeback, write-combining, uncacheable, etc.) See that link for the wording of the guarantees in Intel's x86 manual.
Note that atomicity is separate from memory ordering. Normal x86 stores are release-store operations, but movnt stores are "relaxed".
Fun fact: 32-bit code can use x87 (fild/fistp) or SSE/MMX movq to do atomic 64-bit loads/stores. gcc's std::atomic implementation actually does this. It's only SSE accesses larger than 8B (e.g. movaps or movntps 16B/32B/64B vector stores) that are not guaranteed atomic. (Even 16B operations are atomic on some hardware, but there's no standard way to detect this.)
seems clearly not:
Because the WC protocol uses a weakly-ordered memory consistency model, a fencing operation such as SFENCE should be used in conjunction with MOVNTI instructions if multiple processors might use different memory types to read/write the memory location.
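Putting the two answers together, here is a hedged sketch using SSE2 intrinsics (the record layout and names are made up): each _mm_stream_si64 below emits a movnti that is an atomic 8-byte store, but because NT stores are weakly ordered you still need an sfence before publishing a "ready" flag to another thread.

    #include <immintrin.h>    /* _mm_stream_si64, _mm_sfence */
    #include <stdatomic.h>

    /* Hypothetical publish of a 2x8-byte record with non-temporal stores.
     * Each individual movnti to an aligned 8-byte slot is atomic (no tearing),
     * but NT stores are weakly ordered, so fence before the release store
     * that tells a consumer the record is ready. */
    void publish_record(long long *dst, const long long *src, atomic_int *ready)
    {
        _mm_stream_si64(&dst[0], src[0]);   /* movnti */
        _mm_stream_si64(&dst[1], src[1]);   /* movnti */

        _mm_sfence();   /* make the NT stores globally visible before the flag */
        atomic_store_explicit(ready, 1, memory_order_release);
    }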

How are the stack pointer and program status word maintained in multiprocessor architecture?

In a multi-processor architecture, how are registers organized?
For example, a 4-core processor can run 4 processes at the same time.
How are stack pointer, program status registers and program counter organized?
What about other general purpose registers?
My guess is, each core will have a separate set of registers.
Imagine 4 completely separate computers, each with a single-core CPU. A 4-core computer is like that; except:
- All CPUs share the same physical address space (and can all use the same RAM, PCI devices, etc.)
- Interrupt/IRQ controllers may be designed so the OS can tell them which CPU(s) should be interrupted by each IRQ
- CPUs are typically able to signal each other (e.g. "inter-processor interrupts")
- Some CPUs may share some caches
- Some CPUs may share some control registers (e.g. for things like power management, cache configuration, etc.)
- For modern CPUs, some CPUs may share some or all execution units (SMT, hyper-threading, etc.)
- For modern systems (where the memory controller is built into the physical chip), some CPUs may share the same memory controller
Most of this is "invisible" to most software. Unless you're writing the part of an OS that controls power management, you don't need to care whether power management is shared between CPUs; unless you're writing the OS/kernel's low-level IRQ handling, you don't need to care how IRQs reach device drivers, etc.
The same applies to how many CPUs actually exist. The OS/kernel normally ensures that applications only need to care about higher-level abstractions (e.g. "threads"). How this higher-level abstraction works depends on the OS. Normally (for most OSs) the OS/kernel attempts to provide the illusion that all threads are running at the same time by switching between them "quickly enough" (where, if there are only 4 CPUs, a maximum of 4 threads actually run at the exact same time). In practice it's usually far more complex than this (involving things like thread priorities, pre-emption rules, etc.), and (even though it's relatively rare) it may be very different: for some systems the same thread may be run on multiple CPUs at the same time for fault tolerance/redundancy purposes; for some systems there might just be a queue of functions and their data, where multiple functions run at the same time; etc.
Multiprocessor means that there are at least two discrete processors on the same platform, usually on the same motherboard.
A subset is distributed multiprocessing, where two PCs, for example, are programmed to appear as a single system with two processors.
Multicore means that most or all of the CPU is replicated many times on a single chip.
- This also means that the stack pointer, status register, program counter and all general-purpose registers are replicated per core.
Hyperthreading is a technique where each stage of the pipeline can execute instructions from different threads.
Multiprocessing, at the OS level, means that everything a process consists of is switched every now and then.
Multithreading is a lightweight variant of multiprocessing, where the threads e.g. share the same code segment, the same data segment, the same file descriptors etc., but have unique stacks (and of course unique status registers and program counters); see the sketch below.
Multiprocessing also means multiprocessing in general, at the hardware-architecture level.
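A small sketch of that last distinction using POSIX threads (names made up; compile with -pthread): both threads see the same global at the same address, but each thread's local variable lives on its own private stack.

    #include <pthread.h>
    #include <stdio.h>

    int shared_global = 42;   /* one copy, visible to every thread */

    static void *worker(void *arg)
    {
        int local_on_my_stack = (int)(long)arg;   /* each thread has its own stack */
        printf("thread %d: &shared_global=%p  &local=%p\n",
               local_on_my_stack, (void *)&shared_global,
               (void *)&local_on_my_stack);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Same &shared_global in both lines, different &local: shared data
         * segment, private stacks (each with its own stack pointer register
         * while that thread is running on a core). */
        return 0;
    }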

NUMA documentations for x86-64 processor?

I have already looked for NUMA documentation for x86-64 processors; unfortunately, I only found optimization documents for NUMA.
What I want is: how do I initialize NUMA in a system (this would include getting the system's memory topology and processor topology)? Does anyone know of good documentation about NUMA for x86-64 AMD and Intel processors?
I know that if you want the system topology, you can get that from the ACPI SLIT (System Locality Information Table) or SRAT (Static Resource Affinity Table). You can read more about this from the ACPI spec here (http://www.acpi.info/spec.htm), specifically sections 5.2.16 and 5.2.17.
Basically, you use the SRAT to determine which memory ranges are associated with which CPUs, and you use the SLIT to determine the relative cost of using a particular CPU/memory range. Both of these tables are optional, but in my experience, most NUMA systems at least have a useful SRAT.
As far as initialization goes, I don't think I can help much. You might want to look into how processors are brought up in the Linux kernel (or a BSD kernel). You'll probably need to read up on local APICs too, as they are used to init x86 APs.
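If you just want to cross-check what the firmware tables describe, a user-space sketch using Linux's libnuma (whose data ultimately comes from the SRAT/SLIT that the kernel parsed) can be handy; this assumes libnuma is installed and you link with -lnuma:

    #include <numa.h>     /* libnuma: link with -lnuma */
    #include <stdio.h>

    int main(void)
    {
        if (numa_available() < 0) {
            puts("NUMA not available (single node or no libnuma support)");
            return 0;
        }

        int max_node = numa_max_node();
        for (int n = 0; n <= max_node; n++) {
            long long free_bytes;
            long long total = numa_node_size64(n, &free_bytes);
            printf("node %d: %lld MiB total, %lld MiB free\n",
                   n, total >> 20, free_bytes >> 20);
        }

        /* numa_distance() reports the relative costs from the ACPI SLIT
         * (10 = local, larger = more remote). */
        for (int a = 0; a <= max_node; a++)
            for (int b = 0; b <= max_node; b++)
                printf("distance(%d,%d) = %d\n", a, b, numa_distance(a, b));

        return 0;
    }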