assembly code to add two integers - cpu-architecture

I have trouble understanding the following assembly code, which is used to add two integers using registers. It's not a very complicated question; I just lack a good reference for learning the syntax. If you can give me some insight line by line, I would be extremely grateful.
MOV R1, #100
MOV R2, #100
MOV (R1), #50
ADD R2,(R1)
I get the first two lines, which store the number 100 in the given registers; I just don't get the purpose of the brackets in the next two lines.
And this is not homework, just a question to clarify the theory behind it.
The question is: what are the values of R1 and R2 after the instructions have been executed?

I found the following explanation on another website, which helped me a lot to understand the use of brackets. I believe it will be helpful for other people too, so I will post it below:
Let's analyze this program:
MOV AX, 47104
MOV DS, AX
MOV [3998], 36
INT 32
... The first instruction, MOV AX, 47104, tells the computer to copy the number 47104 into the location AX. The next instruction, MOV DS, AX, tells the computer to copy the number in AX into the location DS. The next instruction, MOV [3998], 36 tells the computer to put the number 36 into memory location 3998. Finally, INT 32 exits the program by returning to the operating system.
Before we go on, I would like to explain just how this program works. Inside the CPU are a number of locations, called registers, which can store a number. Some registers, such as AX, are general purpose, and don't do anything special. Other registers, such as DS, control the way the CPU works.
DS just happens to be a segment register, and is used to pick which area of memory the CPU can write to. In our program, we put the number 47104 into DS, which tells the CPU to access the memory on the video card.
The next thing our program does is to put the number 36 into location 3998 of the video card's memory. Since 36 is the code for the dollar sign, and 3998 is the memory location of the bottom right hand corner of the screen, a dollar sign shows up on the screen a few microseconds later.
Finally, our program tells the CPU to perform what is called an interrupt. An interrupt is used to stop one program and execute another in its place. In our case, we want interrupt 32, which ends our program and goes back to MS-DOS, or whatever other program was used to start our program.
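To make the quoted numbers concrete, here is a small C sketch of the arithmetic involved (my own addition, not from the quoted article; it assumes real-mode segmentation, where a physical address is segment * 16 + offset, and the standard 80x25 colour text screen at segment 0xB800 with two bytes per character cell):

#include <stdio.h>

int main(void) {
    unsigned segment = 47104;                    /* 0xB800, the value put into DS  */
    unsigned offset  = 3998;                     /* the [3998] in MOV [3998], 36   */

    printf("physical address: 0x%X\n", segment * 16 + offset);    /* 0xB8F9E */

    /* 80 columns x 25 rows = 2000 cells, 2 bytes each (character, colour),
     * so the character byte of the bottom-right cell sits at 2*(2000-1). */
    printf("bottom-right cell offset: %d\n", 2 * (80 * 25 - 1));   /* 3998 */

    printf("character code 36 is '%c'\n", 36);                     /* '$' */
    return 0;
}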
We can see from this example that the use of brackets results in writing a value to a memory location, not into a register. Later, this value was read by the video card to display a symbol on the screen.
Credits to the writer on: http://www.swansontec.com/sprogram.html
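Coming back to the original question: assuming (R1) means "the memory location whose address is held in R1", here is a small C sketch of what the four instructions do (the register and memory names are just for illustration):

#include <stdio.h>

int main(void) {
    int mem[256] = {0};      /* a toy memory       */
    int R1, R2;              /* the two registers  */

    R1 = 100;                /* MOV R1, #100  : put the value 100 in R1            */
    R2 = 100;                /* MOV R2, #100  : put the value 100 in R2            */
    mem[R1] = 50;            /* MOV (R1), #50 : store 50 at memory address 100     */
    R2 = R2 + mem[R1];       /* ADD R2, (R1)  : add the value at address 100 to R2 */

    printf("R1 = %d, R2 = %d, mem[100] = %d\n", R1, R2, mem[R1]);
    /* prints: R1 = 100, R2 = 150, mem[100] = 50 */
    return 0;
}

So R1 still holds the address 100, while R2 ends up holding 150.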

Related

Why do registers exist and how do they work together with the CPU?

So I am currently learning Operating Systems and Programming.
I want to know how registers work in detail.
All I know is that there is main memory, and the CPU takes addresses and instructions from main memory with the help of the address bus.
And there is also something called the MCC (Memory Controller Chip), which helps fetch memory locations from RAM.
On the internet it says a register is temporary storage, and that data in registers can be accessed faster than data in RAM.
But I really want to understand the deep-down process of how they work, and what it means that they are 32 bits or 16 bits wide. I am really confused!
I'm not a native English speaker, so pardon some perhaps incorrect terminology. I hope this will be a little bit helpful.
Why do registers exist
When a user program is running on the CPU, it works in a 'dynamic' sense. That is, we need to store incoming source data and any intermediate data, and perform specific calculations on them. Memory devices are needed, and we have a choice among flip-flops, on-chip RAM/ROM, and off-chip RAM/ROM.
The term register in the programmer's model is actually a D flip-flop in the physical circuit, which is a memory device that can hold a single bit. An IC design consists of a standard-cell part (including the registers mentioned before, plus AND/OR/etc. gates) and hard macros (like SRAM). As the technology node advances, the standard cells' delays get smaller and smaller. The auto place-and-route tool places a register and its related surrounding logic close together, to make sure the logic can run at the specified 3.0/4.0 GHz speed target. For some practical reasons (which I'm not quite sure about, because I don't do layout), hard macros tend to be placed further away, leading to much longer metal wires. Because of this, plus SRAM's own characteristics, on-chip SRAM is normally slower than D flip-flops. If the memory device is off the chip, say an external flash chip or KGD (known good die), it will be slower still, since the signals have to traverse two more I/O devices, which have much larger delays.
How they work together with the CPU
Each register is assigned a different 'address' (which may not be visible to the programmer). That is implemented by adding address-decode logic. For instance, when the CPU is about to execute an instruction mov R1, 0x12, the address-decode logic sees the binary code for R1 and selects only those flip-flops corresponding to R1. Then the data 0x12 is stored (written) into those flip-flops. The same applies to the read process.
Regarding "they are also of 32 bits and 16 bits, something like that", the bit width is not a problem. Both a group of flip-flops and a word in RAM can have a bit width of N, as long as the same address selects N flip-flops, or N bits in RAM, at one time.
Registers are small memories which reside inside the processor (what you called the CPU). Their role is to hold the operands for fast processor calculations and to store the results. A register is usually designated by a name (AL, BX, ECX, RDX, cr3, RIP, R0, R8, R15, etc.) and has a size, which is the number of bits it can store (4, 8, 16, 32, 64, 128 bits). Some registers have special meanings, and their bits control the state of the processor or provide information about it.
There are not many registers (because they are very expensive). Altogether they have a capacity of only a few kilobytes, so they can't store all the code and data of your program, which can go up to gigabytes. This is the role of the central memory (what you call RAM). This big memory can hold gigabytes of data, and each byte has its own address. However, it only holds data while the computer is turned on. The RAM resides outside of the CPU chip and interacts with it via the Memory Controller Chip, which acts as the interface between the CPU and RAM.
On top of that, there is the hard drive that stores your data when you turn off your computer.
That is a very simple view to get you started.

Indexed addressing mode and implied addressing mode

Indexed addressing mode is usually used for accessing arrays, since arrays are stored contiguously. We have an index register which gets incremented on every iteration and which, when added to the base address, gives the address of the array element.
I don't understand the actual need for this addressing mode. Why can't we do this with direct addressing? We have the base address, and we could just add 1 to it every time we access. Why do we need an indexed addressing mode, which has the overhead of an index register?
I am also not sure about the instruction format for implied addressing mode. Suppose we have an instruction INC AC. Is the address of AC specified in the instruction, or is there a special opcode which means 'INC AC' so that we don't include the address of AC (the accumulator)?
I don't understand the actual need of this addressing mode. Why can't we do this with direct addressing?
You can; MIPS only has one addressing mode and compilers can still generate code for it just fine. But sometimes it has to use an extra shift + add instruction to calculate an address (if it's not just looping through an array).
The point of addressing modes is to save instructions and save registers, especially in 2-operand instruction sets like x86, where add eax, ecx overwrites eax with the result (eax += ecx), unlike MIPS or other 3-operand ISAs where addu $t2, $t1, $t0 does t2 = t1 + t0. On x86, that would require a copy (mov) and an add. (Or in that special case, lea edx, [eax+ecx]: x86 can copy-and-add (and shift) using the same instruction-encoding it uses for memory operands.)
Consider a histogram problem: you generate array indices in unpredictable order, and have to index an array. On x86-64, add dword [rbx + rdi*4], 1 will increment a 32-bit counter in memory using a single 4-byte instruction, which decodes to only 2 uops for the front-end to issue into the out-of-order core on modern Intel CPUs. (http://agner.org/optimize/). (rbx is the base register, rdi is a scaled index). Having a scaled index is very powerful; x86 16-bit addressing modes support 2 registers, but not a scaled index.
Classic MIPS only has separate shift and add instructions, although MIPS32 did add a scaled-add instruction for address calculation. That would save an instruction here. Being a load-store machine, the loads and stores always have to be separate instructions (unlike on x86 where that add decodes as a micro-fused load+add and a store. See INC instruction vs ADD 1: Does it matter?).
Probably ARM would be a better comparison for MIPS: It's also a load-store RISC machine. But it does have a selection of addressing modes, including scaled index using the barrel shifter. So instead of needing a separate shift / add for each array index, you'd use LDR R0, [R1, R2, LSL #2], add r0, r0, #1 / str with the same addressing mode.
Often when looping through an array, it is best to just increment pointers on x86. But it's also an option to use an index, especially for loops with multiple arrays using the same index, like C[i] = A[i] + B[i]. Indexed addressing mode can sometimes be slightly less efficient in hardware, though, so when a compiler is unrolling a loop it usually should use pointers, even though it has to increment all 3 pointers separately instead of one index.
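To make the two loop styles concrete, here is a small C sketch (my own illustration; exactly what a compiler emits depends on the compiler and options). A compiler targeting x86 would typically turn the first version into loads/stores with an indexed addressing mode such as [rsi + rcx*4], and the second into plain [rsi]-style accesses with three separate pointer increments:

#include <stdio.h>
#include <stddef.h>

/* One index register shared by all three arrays. */
static void add_indexed(int *C, const int *A, const int *B, size_t n) {
    for (size_t i = 0; i < n; i++)
        C[i] = A[i] + B[i];
}

/* Three pointers, each incremented separately, simple addressing. */
static void add_pointers(int *C, const int *A, const int *B, size_t n) {
    for (const int *end = A + n; A != end; A++, B++, C++)
        *C = *A + *B;
}

int main(void) {
    int A[4] = {1, 2, 3, 4}, B[4] = {10, 20, 30, 40}, C[4];
    add_indexed(C, A, B, 4);
    add_pointers(C, A, B, 4);
    printf("%d %d %d %d\n", C[0], C[1], C[2], C[3]);   /* 11 22 33 44 */
    return 0;
}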
The point of instruction-set design is not merely to be Turing complete, it's to enable efficient code that gets more work done with fewer clock cycles and/or smaller code-size, or give programmers the option of aiming for either of those goals.
The minimum threshold for a computer to be programmable is extremely low; see for example the various One Instruction Set Computer architectures. (None are implemented for real; they are just designed on paper to show that it's possible to write programs with nothing but a subtract-and-branch-if-less-than-zero instruction, with memory operands encoded in the instruction.)
There's a tradeoff between easy to decode (especially to decode in parallel) vs. compact. x86 is horrible because it evolved as a series of extensions, often without a lot of planning to leave room for future extensions. If you're interested in ISA design decisions, have a look at Agner Fog's blog for interesting discussion about designing an ISA for high-performance CPUs that combines the best of x86 (lots of work with one instruction, e.g. memory operand as part of an ALU instruction) with the best features of RISC (easy to decode, lots of registers): Proposal for an ideal extensible instruction set.
There's also a tradeoff in how you spend the bits in an instruction word, especially in a fixed instruction width ISA like most RISCs. Different ISAs made different choices.
PowerPC uses lots of the coding space for powerful bitfield instructions like rlwinm (rotate left and mask off a window of bits), and lots of opcodes. IDK if the generally unpronounceable and hard-to-remember mnemonics are related to that...
ARM uses the high 4 bits for predicated execution of any instruction based on condition codes. It uses more bits for the barrel shifter (the 2nd source operand is optionally shifted or rotated by an immediate or a count from another register).
MIPS has relatively large immediate operands, and is basically simple.
x86 32/64-bit addressing modes use a variable-length encoding, with an extra SIB (scale/index/base) byte when there's an index, and an optional disp8 or disp32 immediate displacement. (e.g. add esi, [rax + rdx + 12340] takes 2 + 1 + 4 bytes to encode, vs. 2 bytes for add esi, [rax].)
x86 16-bit addressing modes are much more limited, and pack everything except the optional disp8/disp16 displacement into the ModR/M byte.
Suppose we have an instruction INC AC. Is the address of AC specified in the instruction or is there a special opcode which means 'INC AC' and we don't include the address of AC (accumulator)?
Yes, the machine-code format for some instructions in some ISAs includes implicit operands. Many machines have push / pop instructions that implicitly use a specific register as the stack pointer. For example, in x86-64's push rax, RAX is an explicit register operand (encoded in the low 3 bits of the one-byte opcode using the push r64 short form), while RSP is an implicit operand.
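As an illustration of how that short form spends opcode bits, here is a small sketch in C (my own addition, covering only the 64-bit push r64 form):

#include <stdio.h>

/* The register number goes in the low 3 bits of the single opcode byte 0x50;
 * R8-R15 additionally need a REX.B prefix (0x41) to supply the 4th bit.
 * RSP itself is never encoded anywhere: it is the implicit operand. */
static void encode_push(int reg) {          /* 0 = RAX ... 15 = R15 */
    if (reg >= 8)
        printf("41 %02X\n", 0x50 + (reg & 7));
    else
        printf("%02X\n", 0x50 + reg);
}

int main(void) {
    encode_push(0);    /* push rax -> 50    */
    encode_push(5);    /* push rbp -> 55    */
    encode_push(8);    /* push r8  -> 41 50 */
    return 0;
}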
Older 8-bit CPUs often had instructions like DECA (to decrement the accumulator, A). i.e. there was a specific opcode for that register. This could be the same thing as having a DEC instruction with some bits in the opcode byte specifying which register (like x86 does before x86-64 repurposed the short INC/DEC encodings as REX prefixes: note the "N.E" (Not Encodeable) in the 64-bit mode column for dec r32). But if there's no regular pattern then it can definitely be considered an implicit operand.
Sometimes putting things into neat categories breaks down, so don't worry too much about whether using bits with the opcode byte counts as implicit or explicit for x86. It's a way of spending more opcode space to save code-size for commonly used instructions while still allowing use with different registers.
Some ISAs only use a certain register as the stack pointer by convention, with no implicit uses. MIPS is like this.
ARM32 (in ARM, not Thumb mode) also uses explicit operands in push/pop. Its push/pop mnemonics are just aliases for store-multiple decrement-before / load-multiple increment-after (STMDB / LDMIA), implementing a full-descending stack. See ARM's docs for LDM/STM, which explain this and what you can do with the general case of these instructions, e.g. LDMDB to decrement a pointer and then load (the opposite direction from POP).

What sort of data is stored in cpu registers

I know CPU registers are used for fast access. But could anyone give me an example of the data content stored in them? Why are these data so important that they have to be saved by the operating system during a context switch?
I would place registers in two groups:
System Registers
Registers that define the process state
System registers do not change with process contexts. Classically, the second group of registers includes:
A processor status register
General registers
Memory mapping registers
You seem to be most interested in #2, judging from your question. For simplicity, I will use the VAX processor as the working example (the Intel Kludge-On-A-Chip is overly complex).
The VAX has 16 32-bit registers (R0-R15). Some of those registers (R12-R15) have special purposes:
PC = Program Counter points to the next instruction to execute
SP = Stack Pointer points to the top of the stack for the current mode.
AP = Argument Pointer points to the arguments to a function
FP = Frame Pointer used to restore the stack after a function call completes.
That leaves R0–R11 for general use.
R6-R11 can be used by programmers at will.
R0-R5 can be used by programmers but some instructions change their values.
The registers are 32 bits. They can then store:
One-Byte signed or unsigned integer
Two-byte signed or unsigned integer
Four-byte signed or unsigned integer
Four-byte floating point
You can then do things like this:
ADDL3 R0, R1, R2 ; Add contents of R0 and R1 and store the result in R2
ADDF3 R0, R1, R2
In the first case, the processor treats the contents of R0 and R1 as 32-bit signed integers. In the second case, it treats the contents of R0 and R1 as 32-bit floating point values.
The interpretation of the register contents depends upon the instruction being executed. Thus, the two instructions above are likely to store different values in R2, even if they have the same values in R0 and R1.
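To see that effect without a VAX handy, here is a C sketch on a typical modern machine (it uses IEEE-754 floats, not the VAX floating-point formats) that reinterprets the same 32-bit patterns two ways:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    uint32_t r0 = 0x3F800000, r1 = 0x40000000;   /* bit patterns of 1.0f and 2.0f */

    uint32_t int_sum = r0 + r1;                  /* treat the bits as integers (like ADDL3) */

    float f0, f1;
    memcpy(&f0, &r0, sizeof f0);                 /* same bits, read as floats (like ADDF3)  */
    memcpy(&f1, &r1, sizeof f1);
    float float_sum = f0 + f1;

    printf("as integers: %u\n", (unsigned)int_sum);   /* 2139095040 */
    printf("as floats  : %f\n", float_sum);           /* 3.000000   */
    return 0;
}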
For larger data types, adjacent registers can be combined.
ADDD3 R0, R2, R4
This adds the contents of R0/R1, to the contents of R2/R3, and stores the result in R4/R5, treating the contents of all the register pairs as 64-bit floating point values.
You can even do
ADDH3 R0, R4, R8
This adds the contents of R0/R1/R2/R3 to the contents of R4/R5/R6/R7, and stores the result in R8/R9/R10/R11, treating the contents of all the register quads as 128-bit floating point values.
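As a rough analogy for this register pairing (my own sketch using plain integers, since the VAX D/H floating formats are not shown here, and assuming the lower-numbered register holds the low-order half):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t r0 = 0x89ABCDEF;    /* low half  */
    uint32_t r1 = 0x01234567;    /* high half */

    /* Viewed as the pair R0/R1, the two 32-bit registers form one 64-bit value. */
    uint64_t pair = ((uint64_t)r1 << 32) | r0;
    printf("R0/R1 as one 64-bit value: 0x%016llX\n", (unsigned long long)pair);
    return 0;
}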
The VAX has character and some complex matching instructions that use R0-R5 for special purposes (such as loop counters). These are instructions with long execution times that can be interrupted. Using the registers to maintain the state of the instruction allows the instruction to be restarted midstream when the process is resumed.
Programmers use R0-R5. There is no problem with that as long as you don't use the instructions that disrupt them.
By convention, R0 and R1 are used for function return values.
So these are the kinds of things you do with registers.
They are not just for fast access. They are the core of the CPU, and operations are performed on them. The CPU can add two numbers after you move them from memory into registers, for example.

How does program counter in 8085 actually work?

I have been reading about Program Counter of 8085. This material here states that the function of the program counter is to point to the memory address from which the next byte is to be fetched. When a byte (machine code) is being fetched, the program counter is incremented by one to point to the next memory location.
My question is: how is this handled when the instruction size varies? Suppose the current instruction is 3 bytes long; then the PC should end up pointing to the current address + 3. How does the PC know the size of the current instruction?
I am new to 8085, any help would be appreciated.
Thanks
The material you reference doesn't really say anything about that issue specifically - all it says is that the PC is incremented when a byte is fetched, which is correct (it doesn't say that there couldn't be multiple bytes to an instruction).
In general, a CPU will increment the program counter to point to the next instruction.
More precisely, during the instruction decoding phase, the CPU will read as many bytes as are needed for the instruction and increment the PC accordingly.
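Here is a small C sketch of such a fetch/decode loop (my own illustration; the opcodes used are real 8085 encodings, but the length table is only a tiny fragment, not the full opcode map):

#include <stdio.h>
#include <stdint.h>

static const uint8_t memory[] = { 0x3E, 0x42,         /* MVI A, 42h : 2 bytes */
                                  0x32, 0x00, 0x20,   /* STA 2000h  : 3 bytes */
                                  0x76 };             /* HLT        : 1 byte  */

/* Total instruction length implied by the opcode (toy table for these three). */
static int instr_length(uint8_t opcode) {
    switch (opcode) {
    case 0x3E: return 2;     /* opcode + one immediate byte */
    case 0x32: return 3;     /* opcode + two address bytes  */
    default:   return 1;     /* single-byte instruction     */
    }
}

int main(void) {
    unsigned pc = 0;
    while (pc < sizeof memory) {
        unsigned start  = pc;
        uint8_t  opcode = memory[pc++];              /* fetch the opcode byte, PC++        */
        for (int i = instr_length(opcode) - 1; i > 0; i--)
            pc++;                                    /* each operand byte fetched bumps PC */
        printf("opcode %02X at %u -> next PC = %u\n", opcode, start, pc);
        if (opcode == 0x76)                          /* HLT: stop the toy loop */
            break;
    }
    return 0;
}

So the PC doesn't know the size in advance; it simply gets incremented once per byte actually fetched, and the decoder decides how many bytes belong to the current instruction.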

How is a relative JMP (x86) implemented in an Assembler?

While building my assembler for the x86 platform I encountered some problems with encoding the JMP instruction:
OPCODE INSTRUCTION SIZE
EB cb JMP rel8 2
E9 cw JMP rel16 4 (because of 0x66 16-bit prefix)
E9 cd JMP rel32 5
...
(from my favourite x86 instruction website, http://siyobik.info/index.php?module=x86&id=147)
All are relative jumps, where the size of each encoding (operation + operand) is in the third column.
Now, my original (and therefore flawed) design reserved the maximum space (5 bytes) for each jump instruction. The operand is not yet known, because it's a jump to a yet-unknown location. So I implemented a "rewrite" mechanism that rewrites the operand at the correct location in memory once the jump target is known, and fills the rest with NOPs. Those NOPs are a somewhat serious concern in tight loops.
Now my problem is with the following situation:
b: XXX
c: JMP a
e: XXX
...
XXX
d: JMP b
a: XXX
(where XXX is any instruction, depending on the program to be assembled)
The problem is that I want the smallest possible encoding for a JMP instruction (and no NOP filling).
I have to know the size of the instruction at c before I can calculate the relative distance between a and b for the operand at d. The same applies for the JMP at c: it needs to know the size of d before it can calculate the relative distance between e and a.
How do existing assemblers solve this problem, or how would you do this?
This is what I am thinking, which might solve the problem:
First encode all the instructions between the JMP and its target into opcodes; if this region contains a variable-sized opcode, use the maximum size, e.g. 5 for a JMP. Then encode the relative JMP to its target, choosing the smallest possible encoding size (2, 4 or 5) after calculating the distance. If any variable-sized opcode is re-encoded, adjust all absolute operands before it, and all relative instructions that jump over this encoded instruction: they are re-encoded when their operand changes, again choosing the smallest possible size. This method is guaranteed to terminate, as variable-sized opcodes may only shrink (because the maximum size was assumed for them initially).
I wonder whether this is an over-engineered solution; that's why I am asking this question.
In the first pass you will get a very good approximation of which jmp encoding to use, by counting bytes pessimistically for all jump instructions.
On the second pass you can fill in the jumps with the pessimistically chosen opcodes. Very few jumps could then be rewritten to use a byte or two less: only those that were very close to the 8/16-bit or 16/32-bit displacement threshold originally. As the candidates are all jumps spanning many bytes, they are less likely to be in critical loop situations, so you are likely to find that further passes offer little or no benefit over a two-pass solution.
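For what it's worth, here is one way that idea could look in code (my own sketch, not the poster's implementation, using a toy instruction representation): start every jump at the pessimistic 5-byte rel32 size, then repeatedly shrink any jump whose displacement turns out to fit in rel8. Since sizes only ever shrink, the loop must terminate.

#include <stdio.h>

enum { KIND_FIXED, KIND_JMP };

typedef struct {
    int kind;
    int size;      /* current encoded size in bytes          */
    int target;    /* for KIND_JMP: index of the target item */
} Item;

int main(void) {
    /* Toy program mirroring the example above: b, c: JMP a, e, ..., d: JMP b, a. */
    Item prog[] = {
        { KIND_FIXED, 1, 0 },   /* 0: b: XXX   */
        { KIND_JMP,   5, 5 },   /* 1: c: JMP a */
        { KIND_FIXED, 1, 0 },   /* 2: e: XXX   */
        { KIND_FIXED, 1, 0 },   /* 3:    XXX   */
        { KIND_JMP,   5, 0 },   /* 4: d: JMP b */
        { KIND_FIXED, 1, 0 },   /* 5: a: XXX   */
    };
    int n = sizeof prog / sizeof prog[0];
    int addr[16];
    int changed;

    do {
        changed = 0;
        /* Recompute every item's offset from the current size estimates. */
        int off = 0;
        for (int i = 0; i < n; i++) { addr[i] = off; off += prog[i].size; }

        /* Shrink any jump whose displacement now fits in a signed byte (rel8). */
        for (int i = 0; i < n; i++) {
            if (prog[i].kind != KIND_JMP) continue;
            int next = addr[i] + prog[i].size;   /* displacement is relative to the next instruction */
            int disp = addr[prog[i].target] - next;
            int want = (disp >= -128 && disp <= 127) ? 2 : 5;
            if (want < prog[i].size) { prog[i].size = want; changed = 1; }
        }
    } while (changed);

    for (int i = 0; i < n; i++)
        printf("item %d at offset %d, size %d\n", i, addr[i], prog[i].size);
    return 0;
}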
Here's one approach I've used that may seem inefficient but turns out not to be for most real-life code (pseudo-code):
IP := 0;
do
{
    done = true;
    while (IP < length)
    {
        if Instr[IP] is jump
            if backwards
            {   Target known
                Encode short/long as needed }
            else
            {   Target unknown
                if (!marked as needing long encoding)   // see below
                    Encode short
                Record location for fixup }
        IP++;
    }
    foreach Fixup do
        if Jump > short
        {   Mark Jump location as requiring long encoding
            IP := FixupLocation;   // restart at instruction that needs size change
            done = false;
            break;                 // out of foreach fixup
        }
        else
            encode jump
} while (!done);