How does the program counter in the 8085 actually work?

I have been reading about the program counter of the 8085. The material states that the function of the program counter is to point to the memory address from which the next byte is to be fetched. When a byte (machine code) is fetched, the program counter is incremented by one to point to the next memory location.
My question is: how is this handled when the instruction size varies? Suppose the current instruction is 3 bytes long; then the PC should end up pointing at the current address + 3. How does the PC know the size of the current instruction?
I am new to 8085, any help would be appreciated.
Thanks

The material you reference doesn't really say anything about that issue specifically - all it says is that the PC is incremented when a byte is fetched, which is correct (it doesn't say that there couldn't be multiple bytes to an instruction).
In general, a CPU will increment the program counter to point to the next instruction.
More precisely, during the instruction decoding phase, the CPU will read as many bytes as are needed for the instruction and increment the PC accordingly.

Related

What is the purpose of the CIR if I have the MDR in Von Neumann Architecture?

From the fetch decode execute cycle of the Von Neumann Architecture, at a basic level, here is what I understand:
Memory address in PC is copied to MAR.
PC +=1
The instruction / data in the address of the MAR is stored in the MDR after being fetched from main memory.
Instruction from MDR is copied to CIR
Instruction / data in memory is decoded & executed by the CU.
Result from the calculation stored in ACC.
Repeat
Now, if the MDR value is copied to the CIR, why are they both necessary? I am quite new to systems architecture so I may have gotten the wrong end of the stick, but I've tried my best :)
Think about what happens if the current instruction is a load or store: does anything need to happen after the MDR? If so, how is the CPU going to remember what it's in the middle of doing if it doesn't keep track of that somehow?
Whether that requires the original instruction bits the whole time or not depends on the design.
A CPU that needs to do a lot of decoding (e.g. a CISC with a compact variable-length instruction set, like original 8086) may not keep the actual instruction itself around, but instead just some decode results. For example, actual 8086 decoded incrementally, scanning through prefixes one byte at a time until reaching the opcode. And modern x86 decodes to uops which it sends down the pipeline; the original machine-code bytes aren't needed.
But CPUs like MIPS were specifically designed so parts of the instruction word could be used directly as internal control signals. Still, it's not always necessary to keep the whole instruction around in one piece.
It might make more sense to look at the CIR as the input latches of the decoding process that produces the necessary internal control signals, or the sequence of microcode, depending on the design. A truly physical CIR separate from that is fine if you don't mind redoing decoding at any step where you need to consult it to figure out what to do next.

What is the benefit of having the registers as a part of memory in AVR microcontrollers?

Larger memories have higher decoding delay; why is the register file a part of the memory then?
Does it only mean that the registers are "mapped" SRAM registers that are stored inside the microprocessor?
If not, what would be the benefit of using registers as they won't be any faster than accessing RAM? Furthermore, what would be the use of them at all? I mean these are just a part of the memory so I don't see the point of having them anymore. Having them would be just as costly as referencing memory.
The picture is taken from The AVR Microcontroller and Embedded Systems: Using Assembly and C by Muhammad Ali Mazidi, Sarmad Naimi, and Sepehr Naimi.
AVR has some instructions with indirect addressing, for example LD (LDD) – Load Indirect From Data Space to Register using Z:
Loads one byte indirect with or without displacement from the data space to a register. [...]
The data location is pointed to by the Z (16-bit) Pointer Register in the Register File.
So now you can move from a register by loading its data-space address into Z, allowing indirect or indexed register-to-register moves. Certainly one can think of some usage where such indirect access would save the odd instruction.
what would be the benefit of using registers as they won't be any faster than accessing RAM?
Accessing the general-purpose registers is faster than accessing RAM.
First, let us define how "fast" is measured in microcontrollers: fast means how few cycles an instruction takes to execute. Look at the AVR architecture.
The general-purpose registers (GPRs) are inputs to the ALU, and the GPRs are controlled by the instruction register (2 bytes wide), which holds the next instruction fetched from code memory.
Let us examine a simple instruction, ADD Rd, Rr, where Rd and Rr are any two registers in the GPRs, so 0 <= d, r <= 31 and each of d and r can be represented in 5 bits. Now open the "AVR Instruction Set Manual", page 32: the opcode for this simple ADD instruction is 0000 11rd dddd rrrr. Because this opcode is two bytes (the code-memory width), the instruction is fetched, decoded, and executed in one cycle (under the concept of pipelining, of course). Only one cycle!
I mean these are just a part of the memory so I don't see the point of having them anymore. Having them would be just as costly as referencing memory
You suggest making all of RAM an input to the ALU; this is a bad idea: a memory address takes 2 bytes.
With 2 operands per instruction, as in ADD, you would need 4 bytes just for the operands, plus one more byte for the opcode itself: 5 bytes in total, which is a waste of code memory!
Furthermore, this architecture can only fetch 2 bytes at a time (the instruction-register width), so you would spend extra cycles fetching the instruction from code memory: a slower system.
Register numbers are only 4 or 5 bits wide, depending on the instruction, allowing 2 per instruction with room to spare in a 16-bit instruction word.
Conclusion: the GPRs are crucial for saving both code memory and program execution time.
Larger memories have higher decoding delay; why is the register file a part of the memory then?
When the CPU accesses the GPRs it only addresses the first 32 locations, not the whole data space, so the decoding delay of the larger memory does not apply.
Final comment
Don't trouble yourself with the timing diagrams of different RAM technologies, because you have no control over them. Who does? The architecture designers: they specify the maximum crystal frequency you can use with their architecture, and within that limit everything will be fine. Your only concern is the number of cycles your application consumes.

How can I learn the size of the EEPROM on a chip if documentation is unavailable?

If I have an EEPROM integrated circuit but documentation is not available for it, how can I find out how much memory is available to me?
My first thought was to write some distinct bytes to the first several sequential addresses and then loop through the memory reading each byte until I read my distinct bytes and count how many bytes exist between reading the distinct bytes the first time and the second time. But then I realised that my unsigned data type could be too small and wrap from its largest value back to zero before the last address in the EEPROM was actually reached.
Any software or hardware tricks to learn this information about an unidentified EEPROM integrated circuit would be very much appreciated.
My solution to this problem ends up being pretty close to the theory stated in my question. I write some recognisable pattern of bytes starting at byte zero of the EEPROM, then loop through the EEPROM memory, starting at byte zero, keeping track of how many bytes exist between the first time I read the "recognisable pattern of bytes" and the second time. To guard against my counting variable being too small to count up to the size of the EEPROM (which could make me read byte zero a second time before every other byte has been read once), I repeat the scan with a larger counting variable data type that can count to a higher number. If the number of bytes between the first read of the pattern and the second is the same with the two different-sized counting variable data types, then I know I have found the correct size of the EEPROM in bytes.

How is the program counter unaffected by multiple clock cycles?

If the number of clock cycles it takes to complete an instruction is more than one, does that mean the program counter gets incremented more than once in the same instruction cycle? I have this doubt because, to my knowledge, registers are updated on each clock pulse.
Does this mean that if a system is waiting on memory for 3 clock cycles, the PC will become PC + 12?
Each instruction cycle of the example processor consists of one or more machine cycles. The fetch phase consists of as many machine cycles as there are bytes to be transferred from main memory to the processor for that instruction. The duration of the execute phase depends on the type of instruction fetched.

Re: I2C, what does "clocked out" mean?

I'm not trained in EE.
I'm programming a master-receiver device which controls a MAX11644/MAX11645. The datasheet explains the read cycle, saying:
A read cycle must be initiated to obtain conversion results. Read cycles begin with the bus master issuing a START condition followed by seven address bits and a read bit (R/W = 1). If the address byte is successfully received, the MAX11644/MAX11645 (slave) issues an acknowledge. The master then reads from the slave. The result is transmitted in 2 bytes; first 4 bits of the first byte are high, then MSB through LSB are consecutively clocked out.
All of this I understand, except the very last part: "MSB through LSB are consecutively clocked out". Most significant bit? Isn't this the first bit? We already know the first bit in the first byte is hi. And what does "clocked out" mean?
Most significant bit? Isn't this the first bit?
It may or may not be. There's no unambiguous definition of "first". RS232, for example, outputs the least significant bit first. If you mean the one that happens to be output first, then yes, that's what the next part is saying.
We already know the first bit in the first byte is hi.
Right. But the device outputs it anyway.
And what does "clocked out" mean?
It means that they are produced as output on consecutive clock cycles. That is, each time the clock advances, the next bit (in the order defined there) is placed on the output pin.