CPUs with instructions that have more than two branch destinations - cpu-architecture

Processors usually come with jump instructions that continue execution at a different fixed location, possibly depending on some condition. So the out-degree of a branch instruction is at most two: taken or fall-through.
Are there any processors out there that have a single instruction that branches to one of three or more fixed locations?

There are a lot of reasons to assume / guess no, but I'm not familiar with enough ISAs to give a definite no. Especially if we include historical early computers from the 50s and 60s; they often have very odd stuff compared to modern systems.
Normally you just use an indirect branch (target address in a register or from memory, or looked up from a compressed table with ARM tbb) if you need anything other than taken vs. fall-through, so there's very little benefit to spending an opcode on a funky direct branch instruction with 2 non-fallthrough destinations.
Also, you'd need space in the instruction encoding for either 2 separate targets, or else some special rule like fall-through, PC + offset, PC + offset*2 (i.e. jump twice as far forwards or backwards). Using it would require laying out code with targets at specific offsets. You do sometimes make a table of fixed-size blocks of instructions and compute an offset into it (instead of looking up an address from a table of addresses), but having an instruction that forced you to do that sounds unlikely.
The condition itself could be a register being - / 0 / + as a 3-way condition, or FLAGS being less-than, equal, or greater-than. Or something else.
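For comparison, here is a minimal sketch (mine, not from the question) of how such a 3-way "- / 0 / +" dispatch is normally written today: one indirect branch through a small table of code addresses, which is what compilers and hand-written code fall back on instead of a multi-target direct branch. It assumes GNU C's labels-as-values extension (computed goto).
#include <stdio.h>

/* Map the 3-way "- / 0 / +" condition onto an index 0 / 1 / 2, then take one
 * indirect branch through a table of label addresses.  A compiler is equally
 * free to lower the same logic to two ordinary conditional branches. */
static const char *sign3(long x)
{
    int idx = (x > 0) - (x < 0) + 1;                 /* -1 / 0 / +1  ->  0 / 1 / 2 */
    static void *targets[] = { &&negative, &&zero, &&positive };
    goto *targets[idx];        /* one indirect branch, three destinations */
negative: return "negative";
zero:     return "zero";
positive: return "positive";
}

int main(void)
{
    printf("%s\n", sign3(-42));   /* prints "negative" */
    return 0;
}
Either way, no single direct instruction encodes three fixed targets; the extra destinations come from an address computed at run time.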
So it sounds very unlikely, and a complication to branch-prediction (unless you just treat it as indirect, in which case why bother).
But I wouldn't be shocked if there's some combination of conditions that make it make sense on some ISA. Maybe if there's a special-case handler address in some special register, and the normal case involves taken or fall-through?
But if we allow one of the target addresses to come from a register or other internal state, any branch that can fault would count. Consider a hypothetical ISA with a compare-and-branch on memory, like Intel's macro-fused cmp [rdi], eax / jne rel32, which decodes to a single internal uop.
Then the possible targets are:
fall-through to RIP
taken to RIP+rel32
#PF fault to the page-fault handler, whose address is looked up from the IDT in memory on x86-64.

Related

What is the difference between Program Status Word (PSW) and Program Counter (PC)?

In an Operating Systems course, the instructor introduced PSW and PC when he talked about Interrupt Handling.
His explanation was
PC holds the address of the next instruction to be fetched
PSW contains execution status information
But later I searched online and found that PSW = PC + status register. This makes me quite confused.
On the one hand, I am not sure what "execution status information" refers to. On the other hand, if PSW has the functions of a PC, why do we still need it?
Appreciate any explanation.
This isn't really standardized terminology. Most architectures have some register that plays the role of a status word, containing bits to indicate things like whether an add instruction caused a carry. But different architectures give it different names, and what exactly is included can vary widely. I'm not aware of any architecture that includes the program counter as part of their status word, but if they want to do that, well, who's going to stop them?
This is the kind of thing where you just have to look at the definition given by whatever book or article you are reading (or infer it from context), and realize that a different author may use the word differently.
In general, interrupts are hardware-level subroutine calls. They do the same thing as a subroutine call (change the algorithm that the processor is executing), but they do it without warning the "executing code" that they are now running.
In order not to damage the "executing code", all information that it was using must be stored. This includes the Program Counter (usually saved to the stack by the interrupt hardware, in the same way that a subroutine call saves it) and all of the registers that the interrupt function will alter - these must be saved by pushing them onto the stack. The registers etc. must be restored before the return-from-interrupt (RETI) instruction; the PC is restored by the RETI itself.
The PSW (often called the flag register) is a very important register and must generally be saved first. It contains bits like Zero (the last calculation resulted in a zero result), Carry (the last calculation resulted in a carry, i.e. the result is bigger than the register can hold) and several other flags. I suggest that you read the data sheet of an 8-bit microcontroller for an idea of what these flags might be. Suffice it to say that these flags are needed in order to perform conditional jumps. And whilst they will often be ignored, you can't take that chance.
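For a concrete picture of what "execution status information" means, here is a sketch in C of the kind of bits a flag register / PSW typically carries; the names and bit positions below are made up for illustration and vary between architectures.
#include <stdint.h>

/* Hypothetical status-word layout, purely for illustration; real names and
 * bit positions differ from one architecture to the next. */
enum psw_bits {
    PSW_CARRY    = 1 << 0,   /* last result did not fit in the register */
    PSW_ZERO     = 1 << 1,   /* last result was zero                    */
    PSW_NEGATIVE = 1 << 2,   /* last result had its sign bit set        */
    PSW_OVERFLOW = 1 << 3,   /* signed overflow on the last operation   */
};

/* A conditional jump such as "branch if equal" is just a test of one bit. */
static inline int branch_if_equal_taken(uint8_t psw)
{
    return (psw & PSW_ZERO) != 0;
}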
You are probably correct that your instructor is using the term PSW to mean all of the registers.
The subject of interrupts contains concepts that are common to subroutine calls in general (e.g. don't leave data that you don't want overwritten in a register before entering a subroutine), and, later on in operating systems, to the context switches that occur during multi-tasking.
Peter

Data hazards in hardware platforms

I have got a list of 2 types of hazards:
1a. EX/MEM.RegisterRd = ID/EX.RegisterRs
1b. EX/MEM.RegisterRd = ID/EX.RegisterRt
2a. MEM/WB.RegisterRd = ID/EX.RegisterRs
2b. MEM/WB.RegisterRd = ID/EX.RegisterRt
I am not able to understand the intuition behind these two rules. Could someone explain the technical terms and the concept behind them? Any explanations are welcome :)
In most 3-operand ISAs (e.g. MIPS documentation like http://www.mrc.uidaho.edu/mrc/people/jff/digital/MIPSir.html uses this convention), rd is the destination register, and rs, rt are source registers, e.g. add rd, rs, rt. (rs is usually read as "register source"; rt is the second source, often read as "register target" or "temporary".)
If you're reading a register (in ID) which was recently written (the instruction writing it hasn't reached the write-back stage), that's a RAW read-after-write true dependency.
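Concretely, the four conditions in the question are the standard hazard/forwarding checks for a classic 5-stage pipeline: compare the destination register latched in EX/MEM or MEM/WB against the source registers (rs, rt) of the instruction currently in ID/EX. A small sketch, with struct and field names that are my own invention in the usual Patterson & Hennessy style:
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical pipeline-latch structs; the fields mirror the notation in the
 * question (EX/MEM.RegisterRd, ID/EX.RegisterRs, ...). */
struct id_ex  { uint8_t rs, rt; };              /* sources read by the instruction now in EX       */
struct ex_mem { uint8_t rd; bool reg_write; };  /* destination of the instruction one stage ahead  */
struct mem_wb { uint8_t rd; bool reg_write; };  /* destination of the instruction two stages ahead */

/* Hazards 1a/1b: the result we need is sitting in the EX/MEM latch. */
static bool forward_from_ex_mem(struct ex_mem em, uint8_t src)
{
    /* rd != 0 because register $zero is never a real destination on MIPS */
    return em.reg_write && em.rd != 0 && em.rd == src;
}

/* Hazards 2a/2b: the result was computed two stages ahead and has not been
 * written back to the register file yet. */
static bool forward_from_mem_wb(struct mem_wb mw, uint8_t src)
{
    return mw.reg_write && mw.rd != 0 && mw.rd == src;
}
So 1a/1b mean the value you need is one instruction ahead of you (forward it from EX/MEM), while 2a/2b mean it is two instructions ahead and not yet written back (forward it from MEM/WB).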
Out-of-order exec also introduces the possibility of write-after-write (output dependency) and write-after-read (anti-dependency) hazards. https://en.wikipedia.org/wiki/Hazard_(computer_architecture)#Data_hazards. But in a scalar in-order pipeline, only true dependencies are a concern, I think. At least if all instructions have fixed 1-cycle latency, so excluding funky stuff like MIPS mult or div that write the hi:lo pair.

In Simulink, are Goto and From blocks generally considered bad style?

I was working on a Simulink model recently and was using Goto and From blocks to keep a very busy system from becoming a twisted mess of wires. I was informed that I was not to use Goto and From blocks as they are considered bad style (at least, according to my employer).
While I hold that wires should be kept connected whenever possible, I believe that Goto and From blocks can significantly improve the readability of a system/subsystem if the model would result in lots of crossed wires otherwise; especially if the blocks can be color-coded (e.g. purple Goto block goes to all the purple From blocks).
I'd supply an image of the subsystem I'm working with, but I'm not sure I can put it on here. The subsystem itself has about 12 subsystem blocks (and possibly more later) within it, each with two bus-type outputs. The first output of each subsystem goes to a Bus Creator block, and the second output of each goes to a second Bus Creator block. Since the subsystems are aligned vertically and the Bus Creators are to the right, this results in many crossed wires. I was using Goto and From blocks to clean up the system.
I can supply an image of a smaller, but similar model that I put together for this question.
For a system with on the order of 12 subsystems, this becomes very busy. I was using Goto and From blocks to connect the subsystems and the Bus Creators without a plethora of crossed wires.
I believe my employer may be carrying the stigma of using goto statements from text-based languages and applying it to Goto/From blocks in Simulink. Generally speaking, is using Goto and From blocks in this way (or any way) considered to be bad style?
The Mathworks Automotive Advisory Board has published some modeling guidelines (PDF) that include usage of Goto/From. The rules they list are:
Do not have subsystems that are floating, i.e. with all input/output ports connected via Gotos. One of the great things about Simulink is the ability to determine signal flow with only a cursory visual inspection; do not destroy this by linking everything with Gotos. At least have one feed-forward and one feedback loop between subsystems connected by signal lines.
My personal opinion on feedback signals is that they should all be connected with signal lines, but I'm sure you can come up with cases where drawing all of them clutters the model.
The second guideline is about the scope of the Goto tag; keep the visibility local as much as possible.
I feel setting visibility to scoped is acceptable also as long as you're not using the matching From more than a couple of levels downstream from the Goto. I've yet to come across a legitimate need for a global Goto tag.
So, not all Goto usage is bad, and you're right that it can improve readability in some cases. That being said, I don't think Gotos are justified for the picture above. I realize it is just an example, but I should point out that if the buses being created are virtual, the order of the inputs at the Bus Creator doesn't matter, and rearranging Bus Creator and Mux block inputs can work wonders for readability.
The problem with the guidelines above is that there's room for bending them, and developers on your team might do just that. Even if everyone is diligent about following them at first, you may run afoul of these guidelines one day, a long time from now, when you redraw that section of the model while refining / adding functionality. Rearranging inputs and outputs can be especially irritating in the middle of implementing some cool new feature. That may be the reason your employer chose to impose a blanket ban. It is inconvenient in some cases, but it is easier to enforce.

Looking for the best equivalents of prefetch instructions for ia32, ia64, amd64, and powerpc

I'm looking at some slightly confused code that's attempted a platform abstraction of prefetch instructions, using various compiler builtins. It appears to be based on powerpc semantics initially, with Read and Write prefetch variations using dcbt and dcbtst respectively (both of these passing TH=0 in the new optional stream opcode).
On ia64 platforms we've got for read:
__lfetch(__lfhint_nt1, pTouch)
whereas for write:
__lfetch_excl(__lfhint_nt1, pTouch)
This (read vs. write prefetching) appears to match the powerpc semantics fairly well (with the exception that ia64 allows for a temporal hint).
Somewhat curiously the ia32/amd64 code in question is using
prefetchnta
Not
prefetcht1
as it would if that code were to be consistent with the ia64 implementations (#ifdef variations of that in our code for our (still live) hpipf port and our now dead windows and linux ia64 ports).
Since we are building with the Intel compiler, I should be able to make many of our ia32/amd64 platforms consistent by switching to the xmmintrin.h builtins:
_mm_prefetch( (char *)pTouch, _MM_HINT_NTA )
_mm_prefetch( (char *)pTouch, _MM_HINT_T1 )
... provided I can figure out what temporal hint should be used.
Questions:
Are there read vs. write ia32/amd64 prefetch instructions? I don't see any in the instruction set reference.
Would one of the nt1, nt2, nta temporal variations be preferred for read vs. write prefetching?
Any idea if there would have been a good reason to use the NTA temporal hint on ia32/amd64, yet T1 on ia64?
Are there read vs. write ia32/amd64 prefetch instructions? I don't see any in the instruction set reference.
Some systems support the prefetchw instruction for prefetching a line with the intent to write.
Would one of the nt1, nt2, nta temporal variations be preferred for read vs. write prefetching?
If the line is exclusively used by the calling thread, it shouldn't matter how you bring the line in; both reads and writes would be able to use it. The benefit of prefetchw mentioned above is that it will bring the line and give you ownership of it, which may take a while if the line was also used by another core. The hint level, on the other hand, is orthogonal to the MESI states and only affects how long the prefetched line will survive. This matters if you prefetch long ahead of the actual access and don't want the prefetch to get lost in that duration, or alternatively, if you prefetch right before the access and don't want the prefetches to thrash your cache too much.
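If portability across compilers that accept GCC-style builtins is enough, a minimal way to abstract read vs. write prefetch (a sketch, not the questioner's code) is __builtin_prefetch, whose second argument selects read (0) or write (1) and whose third argument is a temporal-locality hint from 0 (non-temporal) to 3 (keep in all cache levels):
/* Sketch only: the arguments after the pointer must be compile-time constants. */
static inline void prefetch_for_read(const void *p)
{
    __builtin_prefetch(p, 0, 3);   /* locality 3 typically becomes prefetcht0 on x86 */
}

static inline void prefetch_for_write(void *p)
{
    __builtin_prefetch(p, 1, 3);   /* becomes prefetchw only where the target supports it */
}
On targets without a write-intent prefetch, the compiler typically falls back to an ordinary read prefetch.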
Any idea if there would have been a good reason to use the NTA temporal hint on ia32/amd64, yet T1 on ia64?
Just speculating - perhaps larger caches and aggressive memory BW are more vulnerable to bad prefetching, and you'd want to reduce the impact through the non-temporal hint. Consider that your prefetcher is suddenly set loose to fetch anything it can: you'd end up swamped in junk prefetches that would throw away lots of useful cache lines. The NTA hint makes them overrun each other, leaving the rest undamaged.
Of course this may also be just a bug; I can't tell for sure - only whoever developed it can - but it might make sense for the reason above.
The best resource I could find on x86 prefetching hint types was the good ol' article What Every Programmer Should Know About Memory.
For the most part on x86 there aren't different instructions for read and write prefetches. The exceptions seem to be the non-temporal instructions, where a write can bypass the cache but, as far as I can tell, a read will always get cached.
It's going to be hard to backtrack through why the earlier code owners used one hint and not the other on a certain architecture. They could be making assumptions about how much cache is available on processors in that family, typical working set sizes for binaries there, long term control flow patterns, etc... and there's no telling how much any of those assumptions were backed up with good reasoning or data. From the limited background here I think you'd be justified in taking the approach that makes the most sense for the platform you're developing on now, regardless what was done on other platforms. This is especially true when you consider articles like this one, which is not the only context where I've heard that it's really, really hard to get any performance gain at all with software prefetches.
Are there any more details known up front, like typical cache miss ratios when using this code, or how much prefetches are expected to help?

The stack size used in kernel development

I'm developing an operating system and rather than programming the kernel, I'm designing the kernel. This operating system is targeted at the x86 architecture, and my target is modern computers. The estimated amount of required RAM is 256 MB or more.
What is a good size to make the stack for each thread run on the system? Should I try to design the system in such a way that the stack can be extended automatically if the maximum length is reached?
I think, if I remember correctly, that a page in RAM is 4k or 4096 bytes, and that just doesn't seem like a lot to me. I can definitely see times, especially when using lots of recursion, that I would want to have more than 1000 integers in RAM at once. Now, the real solution would be to have the program do this by using malloc and managing its own memory resources, but really I would like to know the user opinion on this.
Is 4k big enough for a stack with modern computer programs? Should the stack be bigger than that? Should the stack be auto-expanding to accommodate any types of sizes? I'm interested in this both from a practical developer's standpoint and a security standpoint.
Is 4k too big for a stack? Considering normal program execution, especially from the point of view of classes in C++, I notice that good source code tends to malloc/new the data it needs when classes are created, to minimize the data being thrown around in a function call.
What I haven't even gotten into is the size of the processor's cache memory. Ideally, I think the stack would reside in the cache to speed things up and I'm not sure if I need to achieve this, or if the processor can handle it for me. I was just planning on using regular boring old RAM for testing purposes. I can't decide. What are the options?
Stack size depends on what your threads are doing. My advice:
make the stack size a parameter at thread creation time (different threads will do different things, and hence will need different stack sizes); see the user-space sketch after this list for how existing thread APIs expose exactly this
provide a reasonable default for those who don't want to be bothered with specifying a stack size (4K appeals to the control freak in me, as it will cause the stack-profligate to, er, get the signal pretty quickly)
consider how you will detect and deal with stack overflow. Detection can be tricky. You can put guard pages--empty--at the ends of your stack, and that will generally work. But you are relying on the behavior of the Bad Thread not to leap over that moat and start polluting what lies beyond. Generally that won't happen...but then, that's what makes the really tough bugs tough. An airtight mechanism involves hacking your compiler to generate stack-checking code. As for dealing with a stack overflow, you will need a dedicated stack somewhere else on which the offending thread (or its guardian angel, whoever you decide that is--you're the OS designer, after all) will run.
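As a user-space illustration of the first point above (stack size as a per-thread creation parameter with a sensible default), this is exactly how POSIX threads expose it; a small sketch:
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;                     /* real work would go here */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 64 * 1024);   /* request a 64 KiB stack for this thread */

    pthread_t t;
    if (pthread_create(&t, &attr, worker, NULL) != 0)
        perror("pthread_create");
    else
        pthread_join(t, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}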
I would strongly recommend marking the ends of your stack with a distinctive pattern, so that when your threads run over the ends (and they always do), you can at least go in post-mortem and see that something did in fact run off its stack. A page of 0xDEADBEEF or something like that is handy.
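A sketch of that stack-painting idea (names and sizes here are hypothetical, not taken from any particular kernel):
#include <stddef.h>
#include <stdint.h>

#define STACK_PAINT 0xDEADBEEFu    /* distinctive pattern, as suggested above */

/* Fill a freshly allocated stack with the pattern at thread creation time. */
static void paint_stack(uint32_t *stack, size_t words)
{
    for (size_t i = 0; i < words; i++)
        stack[i] = STACK_PAINT;
}

/* Post-mortem (or periodic) check for a downward-growing stack: count how
 * many words at the low end still hold the pattern.  Zero means the thread
 * almost certainly ran off the end of its stack. */
static size_t untouched_words(const uint32_t *stack, size_t words)
{
    size_t n = 0;
    while (n < words && stack[n] == STACK_PAINT)
        n++;
    return n;
}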
By the way, x86 page sizes are generally 4k, but they do not have to be; large pages (4 MB, or 2 MB with PAE / long mode, and 1 GB on newer CPUs) are also available. The usual reason for larger pages is to avoid TLB misses. Again, I would make it a kernel configuration or run-time parameter.
Search for KERNEL_STACK_SIZE in the Linux kernel source code and you will find that it is very much architecture dependent - PAGE_SIZE, 2*PAGE_SIZE, etc. (below are just some of the results; much of the intermediate output has been deleted).
./arch/cris/include/asm/processor.h:
#define KERNEL_STACK_SIZE PAGE_SIZE
./arch/ia64/include/asm/ptrace.h:
# define KERNEL_STACK_SIZE_ORDER 3
# define KERNEL_STACK_SIZE_ORDER 2
# define KERNEL_STACK_SIZE_ORDER 1
# define KERNEL_STACK_SIZE_ORDER 0
#define IA64_STK_OFFSET ((1 << KERNEL_STACK_SIZE_ORDER)*PAGE_SIZE)
#define KERNEL_STACK_SIZE IA64_STK_OFFSET
./arch/ia64/include/asm/mca.h:
u64 mca_stack[KERNEL_STACK_SIZE/8];
u64 init_stack[KERNEL_STACK_SIZE/8];
./arch/ia64/include/asm/thread_info.h:
#define THREAD_SIZE KERNEL_STACK_SIZE
./arch/ia64/include/asm/mca_asm.h:
#define MCA_PT_REGS_OFFSET ALIGN16(KERNEL_STACK_SIZE-IA64_PT_REGS_SIZE)
./arch/parisc/include/asm/processor.h:
#define KERNEL_STACK_SIZE (4*PAGE_SIZE)
./arch/xtensa/include/asm/ptrace.h:
#define KERNEL_STACK_SIZE (2 * PAGE_SIZE)
./arch/microblaze/include/asm/processor.h:
# define KERNEL_STACK_SIZE 0x2000
I'll throw my two cents in to get the ball rolling:
I'm not sure what a "typical" stack size would be. I would guess maybe 8 KB per thread, and if a thread exceeds this amount, just throw an exception. However, according to this, Windows has a default reserved stack size of 1MB per thread, but it isn't committed all at once (pages are committed as they are needed). Additionally, you can request a different stack size for a given EXE at compile-time with a compiler directive. Not sure what Linux does, but I've seen references to 4 KB stacks (although I think this can be changed when you compile the kernel and I'm not sure what the default stack size is...)
This ties in with the first point. You probably want a fixed limit on how much stack each thread can get. Thus, you probably don't want to automatically allocate more stack space every time a thread exceeds its current stack space, because a buggy program that gets stuck in an infinite recursion is going to eat up all available memory.
If you are using virtual memory, you do want to make the stack growable. Forcing static allocation of stack size, as is common in user-level threading packages like Qthreads and Windows fibers, is a mess: hard to use, easy to crash. All modern OSes do grow the stack dynamically, I think usually by having a write-protected guard page or two below the current stack pointer. A write there tells the OS that the stack has stepped below its allocated space; you then allocate a new guard page below that and make the page that got hit writable. As long as no single function allocates more than a page of data, this works fine. Or you can use two or four guard pages to allow larger stack frames.
If you want a way to control stack size and your goal is a really controlled and efficient environment, but do not care about programming in the same style as Linux etc., go for a single-shot execution model where a task is started each time a relevant event is detected, runs to completion, and then stores any persistent data in its task data structure. In this way, all threads can share a single stack. Used in many slim real-time operating systems for automotive control and similar.
Why not make the stack size a configurable item, either stored with the program or specified when a process creates another process?
There are any number of ways you can make this configurable.
There's a guideline that states "0, 1 or n", meaning you should allow zero, one or any number (limited by other constraints such as memory) of an object - this applies to sizes of objects as well.