Can I overlap registers in ModBus? - modbus

I want to use Modbus in my project.
I want to use it this way:
when I request (or transmit) data, the register number acts as a command code, and that code is generated by a script as a CRC16 of the function name.
It may happen that the areas covered by RegNum + RequestSize overlap each other, so a request will not have the same meaning as in classic Modbus, where reading a register truly means reading that register.
Here is some illustration of what I mean:
Classic Modbus: CMD1 (blue) reads data from register 0x00 with size 1, and CMD2 (red) reads data from register 0x07 with size 2.
My variant: CMD1 "reads data from" register 0x00 with size 3, and CMD2 "reads data from" register 0x02 with size 3. In the device there is no overlap of memory blocks, but in the Modbus requests there is, so if some program builds something like a memory map, there will be a crossover.
Is this legal in modern SCADA systems in particular, and in modern Modbus in general?
P.S. By "modern" I mean modern implementations.

There is no Modbus Police Department as far as I know.
If you have control over the devices on both sides of the bus, and you know what you are doing, why wouldn't you be able to?
You seem to have a couple of strange ideas regarding Modbus:
There is no meaning attached to registers; they are just numbers. You can read (or write) them any way you want, as long as you calculate the CRC of each transaction correctly.
Modbus is a standard, not a music style. There is no classic and/or modern Modbus. What you do have is devices that comply with the standard and others that are just inspired by it.
Obviously, if you are only reading, you will be fine no matter what you do. As soon as you start writing registers you should have a very clear understanding of what you are doing.
Maybe if you post code, somebody would be able to give opinions on whether it is legal, in the sense that you would be able to comply with the certification requirements.
From a more philosophical point of view, I can give you an example where I've seen what you describe: imagine a tool with two sensors on each side and four on its front, all of them giving integer values as their readings. On the left, we have values stored in registers 0x00 and 0x01; the right side goes to 0x06 and 0x07 and the four sensors on the front would be stored in registers 0x02 to 0x05. What would I do if I need readings from the front sensors twice as frequently as those coming from the sides? I can send a query to read registers 0x00 to 0x05 followed by another one to read 0x02 to 0x07.
As long as the refresh rate of all sensors and the timings where I need readings are correct for that particular process, my readings are overlapping registers 0x02 to 0x05 but I'm as legal as legal paper can be.
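To make that concrete, here is a minimal sketch in plain C (slave address and register ranges are purely illustrative) that builds two standard Read Holding Registers requests whose ranges overlap; nothing in the frame format or the CRC stops you:

#include <stdint.h>
#include <stdio.h>

/* Standard Modbus RTU CRC16 (polynomial 0xA001, init 0xFFFF). */
static uint16_t modbus_crc16(const uint8_t *buf, int len)
{
    uint16_t crc = 0xFFFF;
    for (int i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return crc;
}

/* Build a "Read Holding Registers" (0x03) request. Frame layout:
   slave, 0x03, start hi/lo, count hi/lo, CRC lo, CRC hi. Returns 8. */
static int build_read_request(uint8_t *frame, uint8_t slave,
                              uint16_t start, uint16_t count)
{
    frame[0] = slave;
    frame[1] = 0x03;
    frame[2] = start >> 8;
    frame[3] = start & 0xFF;
    frame[4] = count >> 8;
    frame[5] = count & 0xFF;
    uint16_t crc = modbus_crc16(frame, 6);
    frame[6] = crc & 0xFF;   /* CRC is transmitted low byte first */
    frame[7] = crc >> 8;
    return 8;
}

int main(void)
{
    uint8_t cmd1[8], cmd2[8];
    /* CMD1 reads 0x00..0x05, CMD2 reads 0x02..0x07: the two ranges
       overlap on 0x02..0x05, which is perfectly valid on the wire. */
    build_read_request(cmd1, 1, 0x0000, 6);
    build_read_request(cmd2, 1, 0x0002, 6);
    for (int i = 0; i < 8; i++) printf("%02X ", cmd1[i]);
    printf("\n");
    for (int i = 0; i < 8; i++) printf("%02X ", cmd2[i]);
    printf("\n");
    return 0;
}

The frames themselves are ordinary Modbus; the only thing the master has to know is how those overlapping ranges map onto whatever the device actually exposes.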

Related

Need clarity on Sleep mode in MLX90614 IR sensor

I am working on the MLX90614 IR sensor. In the datasheet they give some steps to put the sensor into sleep mode, but somehow I am not able to understand them clearly. A detailed description of the RAM and EEPROM access is given there; however, how to put the sensor into sleep mode is not very clear.
In another section, on commands, they give an opcode for entering sleep mode, but again there is not much information about how to use that opcode.
I have been quite successful in using the sensor to read the object's temperature, but putting it into sleep mode is getting me nowhere.
As per page 22 of the datasheet, you need to send a write with 0xFF to the sensor.
PEC is a CRC, and they apparently already did the math for you.
So you need to send:
0xB4 0xFF 0xE8
(Double check the I2C address and the read/write bit; I'm never sure if the given address is shifted or not. Edit: 0xB4 is already shifted, with the 8th bit set to 0 for write, so there is no need to do anything else.)
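As a sketch of that write, assuming a Linux i2c-dev bus (the bus path is illustrative, and 0xE8 is simply the precomputed PEC from the datasheet as quoted above):

/* 0xB4 is the 7-bit address 0x5A with the write bit; the I2C controller
   puts that byte on the bus for us, so we only send 0xFF 0xE8. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int mlx90614_sleep(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);          /* adjust bus number */
    if (fd < 0) return -1;
    if (ioctl(fd, I2C_SLAVE, 0x5A) < 0) {         /* 0xB4 >> 1 = 0x5A  */
        close(fd);
        return -1;
    }
    unsigned char cmd[2] = { 0xFF, 0xE8 };        /* sleep opcode + PEC */
    int ok = (write(fd, cmd, 2) == 2) ? 0 : -1;
    close(fd);
    return ok;
}

On a bare-metal MCU the same two bytes (0xFF, 0xE8) go out through whatever I2C/SMBus write routine your HAL provides.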

What is the purpose of the CIR if I have the MDR in Von Neumann Architecture?

From the fetch decode execute cycle of the Von Neumann Architecture, at a basic level, here is what I understand:
Memory address in PC is copied to MAR.
PC +=1
The instruction / data in the address of the MAR is stored in the MDR after being fetched from main memory.
Instruction from MDR is copied to CIR
Instruction / data is decoded & executed by the CU.
Result from the calculation stored in ACC.
Repeat
Now if the MDR value is copied to the CIR, why are they both necessary? I am quite new to systems architecture so I may have gotten the wrong end of the stick, but I've tried my best :)
Think about what happens if the current instruction is a load or store: does anything need to happen after the MDR? If so, how is the CPU going to remember what it's in the middle of doing if it doesn't keep track of that somehow?
Whether that requires the original instruction bits the whole time or not depends on the design.
A CPU that needs to do a lot of decoding (e.g. a CISC with a compact variable-length instruction set, like original 8086) may not keep the actual instruction itself around, but instead just some decode results. For example, actual 8086 decoded incrementally, scanning through prefixes one byte at a time until reaching the opcode. And modern x86 decodes to uops which it sends down the pipeline; the original machine-code bytes aren't needed.
But CPUs like MIPS were specifically designed so parts of the instruction word could be used directly as internal control signals. Still, it's not always necessary to keep the whole instruction around in one piece.
It might make more sense to look at CIR as being the input latches of the decoding process that produces the necessary internal control signals, or sequence of microcode depending on the design. Having a truly physical CIR separate from that is ok if you don't mind redoing decoding at any step you need to consult it to figure out what step to do next.
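A toy fetch/decode/execute loop in C (purely illustrative, not any real machine's encoding) shows the point: once a load or add reuses the MAR/MDR for its operand, only the CIR still remembers which instruction is in flight:

#include <stdint.h>
#include <stdio.h>

/* Toy 1-address machine: high byte = opcode, low byte = memory address. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };

int main(void)
{
    uint16_t mem[16] = {
        (OP_LOAD << 8) | 4,   /* 0: ACC = mem[4]  */
        (OP_ADD  << 8) | 5,   /* 1: ACC += mem[5] */
        (OP_HALT << 8),       /* 2: stop          */
        0, 7, 35              /* 3..5: data       */
    };
    uint16_t pc = 0, mar, mdr, cir, acc = 0;

    for (;;) {
        mar = pc++;           /* 1. address of next instruction into MAR  */
        mdr = mem[mar];       /* 2. fetched word lands in MDR             */
        cir = mdr;            /* 3. latch it into CIR...                  */
        uint8_t op = cir >> 8, addr = cir & 0xFF;
        if (op == OP_HALT) break;
        mar = addr;           /* 4. ...because a LOAD/ADD reuses MAR/MDR  */
        mdr = mem[mar];       /*    for the operand, overwriting them     */
        if (op == OP_LOAD) acc = mdr;
        else               acc += mdr;  /* CIR still says which op it was */
    }
    printf("ACC = %u\n", (unsigned)acc);  /* prints 42 */
    return 0;
}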

Why do registers exist and how do they work together with the CPU?

So I am currently learning operating systems and programming.
I want to know how registers work in detail.
All I know is that there is main memory and our CPU, which takes addresses and instructions from main memory with the help of the address bus.
There is also something called the MCC (Memory Controller Chip), which helps in fetching memory locations from RAM.
On the internet it says a register is temporary storage and that data can be accessed faster from registers than from RAM.
But I really want to understand the deep-down process of how they work, and why they are 32 bits or 16 bits or something like that. I am really confused!
I'm not a native English speaker, so pardon me for some perhaps incorrect terminology. I hope this will be a little bit helpful.
Why do registers exist
When a user program is running on the CPU, it works in a 'dynamic' sense. That is, we have to store incoming source data and intermediate data, and do specific calculations on them. Memory devices are needed, and we have a choice among flip-flops, on-chip RAM/ROM, and off-chip RAM/ROM.
The term register in the programmer's model corresponds to D flip-flops in the physical circuit; a D flip-flop is a memory device that holds a single bit. An IC design consists of a standard-cell part (including the registers mentioned before, plus AND/OR/etc. gates) and hard macros (like SRAM). As the technology node advances, the standard cells' delays get smaller and smaller. The auto place-and-route tool will place a register and the related surrounding logic close together, to make sure the logic can run at the specified 3.0/4.0 GHz speed target. For some practical reasons (which I'm not quite sure about, because I don't do layout), hard macros tend to be placed further out, leading to much longer metal wires. This, plus SRAM's own characteristics, means on-chip SRAM is normally slower than D flip-flops. If the memory device is off the chip, say an external flash chip or KGD (known good die), it will be slower still, since the signals have to traverse two more IO devices, which have much larger delays.
how do they work together with the CPU
Each register is assigned a different 'address' (which may not be visible to the programmer). This is implemented by adding address-decode logic. For instance, when the CPU is about to execute an instruction mov R1, 0x12, the address-decode logic sees the binary code for R1 and selects only those flip-flops corresponding to R1. Then the data 0x12 is stored (written) into those flip-flops. The same goes for the read process.
Regarding "they are also of 32 bits and 16 bits something like that": the bit width is not a problem. Both flip-flops and a word in RAM can have a bit width of N, as long as the same address can select N flip-flops, or N bits in RAM, at one time.
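If it helps, here is a toy C model of that address-decode idea (purely illustrative, no real hardware implied): the register number in the instruction selects which storage element is written, just as the decode logic enables one group of flip-flops:

#include <stdint.h>
#include <stdio.h>

#define NUM_REGS 8

static uint32_t regfile[NUM_REGS];   /* R0..R7, each "32 flip-flops" wide */

static void write_reg(unsigned regnum, uint32_t value)
{
    if (regnum < NUM_REGS)
        regfile[regnum] = value;     /* decode: only this row is enabled  */
}

static uint32_t read_reg(unsigned regnum)
{
    return (regnum < NUM_REGS) ? regfile[regnum] : 0;
}

int main(void)
{
    write_reg(1, 0x12);              /* the effect of "mov R1, 0x12"      */
    printf("R1 = 0x%02X\n", (unsigned)read_reg(1));
    return 0;
}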
Registers are small memories which reside inside the processor (what you call the CPU). Their role is to hold the operands for fast processor calculations and to store the results. A register is usually designated by a name (AL, BX, ECX, RDX, cr3, RIP, R0, R8, R15, etc.) and has a size, which is the number of bits it can store (4, 8, 16, 32, 64, 128 bits). Other registers have special meanings, and their bits control the state of the processor or provide information about it.
There are not many registers (because they are very expensive). Together they hold only a few kilobytes, so they cannot store all the code and data of your program, which can run to gigabytes. That is the role of the central memory (what you call RAM). This big memory can hold gigabytes of data, and each byte has its own address. However, it only holds data while the computer is turned on. The RAM resides outside the CPU chip and interacts with it via the Memory Controller Chip, which acts as the interface between CPU and RAM.
On top of that, there is the hard drive that stores your data when you turn off your computer.
That is a very simple view to get you started.

Why do we need to specify the number of flash wait cycles?

Especially when working with "faster" devices like the STM32F4xx/F7xx, we need to specify the number of flash wait cycles, based on the supply voltage and the SYSCLK frequency.
When the CPU fetches instructions or constants, this is done through the FLITF. Am I right in assuming that the FLITF holds a CPU request until it can provide the requested data, making it impossible for other bus masters to access flash in the meantime?
If that is true, why is it important for any interface to know the number of flash wait cycles? The cache prefetches instructions anyway, regardless of whether it knows how long to wait, no?
Because the flash interface isn't magic.
It has to meet the necessary setup and hold times for addressing and reading out the flash cells, which will vary somewhat depending on voltage. Taking the STM32F411 as an example (because I have that TRM handy), doing some maths with the voltage/frequency/wait-state table implies that a read from flash on one of those takes on the order of ~30ns above 2.7V, and as much as ~60ns below 2.1V.
Since the flash interface doesn't have its own asynchronous nanosecond-precision timekeeping ability (because that would be needlessly complicated, power-hungry, and silly), that translates to asserting its signals for n clock cycles, after which it can assume the data signals from the cells are stable enough to read back*. How does it know what the clock frequency is, and therefore what n should be? Simple: you, as the programmer who set the clock, tell it. Some hardware things are just infinitely easier to let software deal with.
* and then going through the further shenanigans of extracting the relevant 8, 16 or 32 bits out of the 128-bit line it's read, to finally spit that out the other side onto the AHB bus to the waiting CPU, obviously.
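For the practical side, this is roughly what that "you tell it" step looks like on an STM32F4. It is a sketch only: the register and bit names are taken from ST's CMSIS device headers, and the wait-state count must come from the voltage/frequency table for your exact part.

#include "stm32f4xx.h"   /* ST CMSIS device header; names assumed from it */

/* Before raising SYSCLK you program the new wait-state count, confirm it
   took effect, and only then switch the clock. The value (3 WS here) is
   just an example taken from the F411 table for 100 MHz at 2.7-3.6 V;
   check your own part's reference manual. */
static void flash_set_latency_before_speedup(void)
{
    FLASH->ACR = (FLASH->ACR & ~FLASH_ACR_LATENCY) | FLASH_ACR_LATENCY_3WS;
    while ((FLASH->ACR & FLASH_ACR_LATENCY) != FLASH_ACR_LATENCY_3WS)
        ;                         /* wait until the new latency is active */
    /* ...now it is safe to switch the PLL / system clock up... */
}

When lowering the clock, the order reverses: switch the clock first, then reduce the wait states.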

Communication between processor and high speed peripheral

Considering that a processor runs at 100 MHz and data arrives at the processor from an external device/peripheral at a rate of 1000 Mbit/s (8 bits per clock cycle @ 125 MHz), what is the best way to handle traffic that arrives faster than the processor can keep up with?
First off, you can't do it in software. There would be no way to sample the digital lines at a sufficient rate, or to do anything useful with the data.
You need to use a hardware FIFO buffer or memory cell. When a data burst comes in, it can be buffered in the high speed FIFO and then read out as needed by the processor.
Drop-in high-speed FIFO chips are surprisingly expensive (though most are dual-ported). To cut cost, you would be best off using an SRAM chip and a hardware adder to increment the address lines as data comes in.
This is not an uncommon situation for software. semaj said the right word: this is a system-engineering issue, and the other folks have the right answer too. If you want to look at or process all of that data with the 100 MHz processor, it is not going to happen; don't bother trying. You CAN look at snapshots of it, or have the hardware filter out the specific fraction of it that you are looking for. At the end of the day, though, it is a systems issue: what does the hardware provide, where does it put this data, and what is the software's task for this data? Does it see X buffers of data come in on the goesinta and then notify the goesouta hardware that there are X buffers ready to go? Does the hardware examine and align the buffers so that you can look at a header and then decide where to route the data? Once you do your system engineering you will know whether you can use that processor, and if you can, what its job is and how to do it.
To your direct question, what is the best way to handle it: the best way is to have hardware (FPGA, ASIC, etc.) move it into and out of some storage device (RAM of some sort, probably), not necessarily the same RAM the processor runs out of (DMA is a good thing to avoid). The hardware is something the software can talk to, but you cannot examine all of that data, so don't try. Without knowing what kind of data this is, what form it takes, what the software looks at, and how much work you are willing to push into the hardware, it is hard to give the rest of the answer. If you expect a certain (guaranteed) percentage to be bad, or not to belong to this processor, have the hardware filter that out, and then you can process what is left.
Networking is a good example of this: PCs have GigE ports but cannot process GigE line-rate data. That is why we use switches now instead of hubs; the hardware slices out a percentage of the data so the PC can handle it, and the protocols take care of the data that cannot be processed by resending it later. The switches' own processors don't look at all of the data either; the hardware slices it up so the software can examine just the header. Or sometimes the software simply manages tables that drive the hardware, and the hardware does all the work of processing the data.
Do your system engineering and the answers will simply fall out.
You buffer it. Typically, data from a device is written to a memory buffer (a circular queue) using DMA (no CPU involved). The CPU reads from the memory buffer at a constant rate. Usually devices send data in bursts; this keeps the buffer from filling up. If there is too much data, the buffer overflows.
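A minimal sketch of that buffering arrangement (the names and the ISR hook are illustrative, not any particular vendor's DMA API):

#include <stdint.h>

#define BUF_SIZE 4096                      /* power of two */

static volatile uint8_t  rx_buf[BUF_SIZE]; /* filled by DMA/hardware      */
static volatile uint32_t wr_idx;           /* advanced by the DMA/RX ISR  */
static uint32_t          rd_idx;           /* advanced by the CPU         */

/* Called from the DMA-complete or RX interrupt: record how far the
   hardware has written into the circular buffer. */
void rx_dma_isr(uint32_t bytes_received)
{
    wr_idx = (wr_idx + bytes_received) & (BUF_SIZE - 1);
}

/* Called from the main loop: drain whatever has arrived so far.
   Returns the number of bytes copied out. */
uint32_t rx_drain(uint8_t *dst, uint32_t max)
{
    uint32_t n = 0;
    while (rd_idx != wr_idx && n < max) {
        dst[n++] = rx_buf[rd_idx];
        rd_idx = (rd_idx + 1) & (BUF_SIZE - 1);
    }
    return n;   /* if wr_idx ever laps rd_idx, data has been lost */
}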
DMA (direct memory access) is possibly the solution; however, it seems unlikely that the memory bus could run faster than the processor core, so the receiving peripheral would have to accept data into a register wider than 8 bits, because 125 MHz could not be sustained. For example, a 16-bit register would allow memory writes at 62.5 MHz, which may be achievable. The receiving device would also have to be able to accept an external clock that is both faster than and asynchronous to the core clock. And of course the receiving peripheral must have DMA support.
Unless you are more specific about your hardware and the communication protocol, it is difficult to give anything other than a general answer.