Profibus synchronisation using Linux (Raspberry Pi)

I am planning to develop a simple Profibus master (FDL level) in Linux, more specifically on a Raspberry Pi. I have an RS485 transceiver based on a MAX481. The master must work on a bus where there are multiple masters.
According to the Profibus specification, you must count the number of '1' bits on the bus to determine when it is time to rotate the access token. Specifically, after 11 consecutive '1' bits the next frame starts; 11 bits is also exactly the length of one character frame (start bit, 8 data bits, parity bit, stop bit).
In Linux, how can I detect these 11 '1' bits? They won't be registered by the driver, since idle bits have no start bit. So I need a stream of bits instead of decoded bytes.
What would be the best approach?

Unfortunately, making use of a microcontroller/microprocessor UART is a BAD choice.
You can frame 11 bits by setting START_BIT, STOP_BIT, and PARITY_BIT (even) in your microcontroller's UART peripheral. With luck, you will receive whole bytes of a datagram without losses.
However, a PROFIBUS DP datagram is up to 244 bytes long, and PROFIBUS DP requires NO idle bits between bytes during datagram transmission. You would need UART hardware or a UART microcontroller peripheral with a FIFO or register that holds up to 244 bytes, which is very uncommon, since this requirement is specific to PROFIBUS.
Another aspect is the compatibility of baud rates. The full range of PROFIBUS DP baud rates is usually not available on common microcontroller UARTs.
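To illustrate the limitation: the standard Linux serial API will happily give you PROFIBUS-style 8E1 framing at one of the supported rates, but it only ever delivers decoded characters; the idle '1' bits between them are invisible. A minimal sketch, assuming the Pi's UART at /dev/ttyAMA0 and a 500 kbaud bus:

```c
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Open a serial port with PROFIBUS-style 8E1 framing at 500 kbaud.
   The device name is an assumption; adjust for your board. */
int open_profibus_port(void)
{
    int fd = open("/dev/ttyAMA0", O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                /* raw bytes, no line discipline */
    cfsetispeed(&tio, B500000);
    cfsetospeed(&tio, B500000);
    tio.c_cflag |= CLOCAL | CREAD;
    tio.c_cflag |= PARENB;          /* even parity, as PROFIBUS uses */
    tio.c_cflag &= ~PARODD;
    tio.c_cflag &= ~CSTOPB;         /* one stop bit */
    tio.c_cflag &= ~CSIZE;
    tio.c_cflag |= CS8;             /* 8 data bits */
    tcsetattr(fd, TCSANOW, &tio);

    /* read() now returns whole decoded characters; the driver gives no
       indication of how many idle bit times passed between them. */
    return fd;
}
```

This is exactly why the bit-level timing has to live below the OS, in an FPGA, ASIC, or dedicated peripheral, as in the suggestions below.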
My suggestions:
Implement this UART part in an FPGA and interface it with the Raspberry Pi using e.g. SPI. You can decide how much of the PROFIBUS stack to 'outsource' to the FPGA and how much to keep on the RPi.
Use an ASIC (maybe the ASPC2, though it is outdated) and add another compatible processor to implement the deterministic portion of the stack. Later you can interface this processor with your RPi.
Implement it on a processor dedicated to industrial communication (like the TI Sitara AM335x).

Related

STM32/ESP32/PIC32 + multiple SPI devices + Ethernet

I am developing a measurement system comprised of an MCU (STM32, ESP32, or PIC32) and multiple (let's say 8) ADCs sending data over SPI. The ADCs are triggered by a SYNC signal so that they all sample at the same time. It's crucial to access the data at the same time (or almost the same time); the sampling frequency will be 1 or 2 kHz. I'm wondering how I should approach this: use a single physical SPI bus, perhaps with DMA, or get an MCU with 8 physical SPI buses so they can operate in parallel?
Additionally, I would like this MCU to support an Ethernet connection, to send the data to a post-processing unit.
My initial thought was simply to get an MCU with 8 SPIs, but maybe that's overkill?
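A back-of-envelope calculation suggests it probably is. Assuming (my number, not stated in the question) 24-bit samples, the aggregate data rate is tiny compared with what one SPI bus sustains:

```c
#include <stdio.h>

/* Rough feasibility check for sharing one SPI bus between 8 ADCs.
   The 24-bit sample width and 10 MHz SPI clock are assumptions. */
int main(void)
{
    const double adcs = 8.0, bits_per_sample = 24.0, rate_hz = 2000.0;
    double aggregate = adcs * bits_per_sample * rate_hz;   /* 384 kbit/s */

    printf("aggregate: %.0f kbit/s\n", aggregate / 1000.0);
    printf("bus utilisation at 10 MHz SPI: %.1f %%\n",
           100.0 * aggregate / 10e6);                      /* about 4 % */
    return 0;
}
```

Note that the sampling instant is fixed by the SYNC signal, not by the readout, so reading the 8 ADCs sequentially over one bus (ideally with DMA) does not hurt simultaneity.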

How is the maximum baud rate determined for I2C?

I've seen that the standard baud rate for typical I2C is 100k, and fast mode is 400k.
How can one determine it?
These numbers are defined in the I2C standard, published by NXP (previously Philips).
In practice the limits are defined by the hardware; you would have to ask on the Electronics Stack Exchange about that.

What are IO ports, serial ports and what's the difference between them?

I'm confused.
I have recently started working on building an operating system while using bochs as an emulator and a certain manual online.
In the manual, to move the VGA framebuffer cursor, I use IO ports via the 'out' command. I get how to control it, but I don't know what it is that I'm controlling; after some reading, it seems it is addressed everywhere as an abstract thing that, for example, makes the cursor change its position on the screen.
What I want to know: what are they physically? Are they cables? If so, from where to where are they connected? Can I also input from them, as their name suggests? And why do I need the out command and can't write directly to their place in memory?
If in your answer you can also include serial ports and the difference between them and the IO ones, it will be amazing,
with respect,
revolution
(btw the operating system is 32 bits)
An IO port is basically memory on the motherboard that you can write/read; the motherboard makes some memory available other than RAM. The CPU has a control bus which allows it to "tell" the motherboard that what it outputs on the data bus is to be written somewhere other than RAM. When you output to the VGA buffer, you write to video memory on the motherboard. The out/in instructions write/read IO ports instead of RAM: they instruct the CPU to set a certain line on its control bus, telling the motherboard to write/read the byte at an IO port rather than in RAM.
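The cursor example you mention works exactly this way. A minimal sketch, assuming a 32-bit freestanding environment and the standard VGA CRT controller ports (0x3D4/0x3D5) and cursor registers (0x0E/0x0F); the function names are mine:

```c
#include <stdint.h>

/* Issue an x86 'out' instruction via inline assembly. */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Move the text-mode cursor; pos = row * 80 + col. */
void vga_move_cursor(uint16_t pos)
{
    outb(0x3D4, 0x0E);                  /* select cursor-location-high */
    outb(0x3D5, (uint8_t)(pos >> 8));
    outb(0x3D4, 0x0F);                  /* select cursor-location-low  */
    outb(0x3D5, (uint8_t)(pos & 0xFF));
}
```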
Today, a lot of the address space is used for hardware mapping instead of IO ports; the region reserved for this is often called the PCI hole. This is memory-mapped IO: you write to what looks like ordinary memory, and the data is sent to hardware such as graphics memory. All of this is transparent to OS developers; you are simply using very abstract hardware interfaces which are either conventional (open) or proprietary.
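The classic example of memory-mapped IO is the VGA text buffer at physical address 0xB8000: writing there changes what is on screen, with no in/out instruction involved. A sketch, assuming that address is identity-mapped, as it typically is during early 32-bit OS bring-up:

```c
#include <stdint.h>

/* Each text-mode cell is two bytes: the character, then an attribute
   byte selecting foreground/background colours. */
volatile uint16_t *const VGA_TEXT = (volatile uint16_t *)0xB8000;

void put_char_at(int row, int col, char c, uint8_t attr)
{
    VGA_TEXT[row * 80 + col] = ((uint16_t)attr << 8) | (uint8_t)c;
}
```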
Serial ports, meanwhile, are simply ports which are serial in nature: a serial port is defined as one where data is transferred one bit at a time. USB is serial (Universal Serial Bus), VGA is serial, and others are too. These ports are not like IO ports; you output to them indirectly, using IO ports.
IO ports offer various hardware interfaces which allow you to drive hardware. For example, if you have a VGA-compatible screen and set text mode, the motherboard makes certain IO ports available, and when you write to them, video memory changes depending on what you output. Eventually the VGA screen refreshes as the video controller streams the data written to video memory out through the actual VGA port. I'm not totally aware of how all of this works, since I'm not an electrical engineer. As far as I know, you can see the pins of the VGA port and what each of them does on Wikipedia. VGA works with RGBHV: RGB stands for red, green and blue, while HV stands for horizontal/vertical sync. As stated in the Wikipedia article on analog television:
Synchronizing pulses added to the video signal at the end of every scan line and video frame ensure that the sweep oscillators in the receiver remain locked in step with the transmitted signal so that the image can be reconstructed on the receiver screen. A sync separator circuit detects the sync voltage levels and sorts the pulses into horizontal and vertical sync.
The horizontal synchronization pulse (horizontal sync, or HSync), separates the scan lines. The horizontal sync signal is a single short pulse which indicates the start of every line. The rest of the scan line follows, with the signal ranging from 0.3 V (black) to 1 V (white), until the next horizontal or vertical synchronization pulse.
Memory itself takes various forms in hardware. Video memory is often called VRAM (Video RAM) or the frame buffer, as you can read in a Wikipedia article. In itself, video memory is an array of DRAM; a DRAM cell today is one capacitor (which stores the data) and one MOSFET transistor (which controls the flow of the data). So there is special wiring on the motherboard between the processor's data bus and the VRAM. When you output data to video memory, you write to VRAM on the motherboard; where you write, and how, just depends on the video mode you set up.
Most modern systems work with HDMI/DisplayPort along with a graphics card. Graphics cards are other hardware interfaces which are often complex, and often cannot be fully known because the drivers for the cards are provided by the manufacturers. osdev.org has information on Intel HD Graphics, which has a special interface to interact with; it can be used to gather information about the monitor and to determine what RAM address to use to write to it.

The relationship between RGMII to MDI in Ethernet communication

Let's say I am talking to a PHY chip via RGMII.
What is the relationship between the serial information transmitted on the RGMII to the signals that go out to the MDI?
I understood from the timing diagram of RGMII that 4 bits are transferred on the rising edge and 4 bits on the falling edge, so each clock cycle carries 8 bits.
For 100 Mbps, the required clock is 25 MHz, so for every clock cycle 8 bits are transmitted.
Does the PHY chip simply send each 8 bits over the MDI immediately?
If that is the case, then how do I correctly package these serial 8 bits of data into a proper Ethernet frame?
I am trying to troubleshoot a piece of hardware where the PHY does not work properly, but the only way to troubleshoot it is if I can control the RGMII. However, I do not understand the relationship between the RGMII and how it affects the MDI.
I presume that if I look at Wireshark, it will not show any packets unless I send serialized data in a proper Ethernet frame.
The PHY should have some documentation and sample code. Without them, finding out exactly how it works can be a very tedious task.
You can find the general RGMII description here: https://web.archive.org/web/20160303212629/http://www.hp.com/rnd/pdfs/RGMIIv1_3.pdf
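On the framing question: the PHY does not assemble frames; that is the MAC's job (or your software's, if you are driving the interface with raw data). For a frame to be recognised at the far end it needs the Ethernet II layout plus a CRC-32 FCS; the MAC adds the 7-byte preamble and SFD on the wire, and RGMII then carries each byte as two 4-bit nibbles. A sketch of the layout, with an illustrative builder (the function names are mine):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Reflected CRC-32 (polynomial 0xEDB88320), as used for the Ethernet FCS. */
static uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
    }
    return ~crc;
}

/* Build a minimal Ethernet II frame into buf (caller provides >= 1522
   bytes); returns the frame length including the FCS. */
static size_t build_frame(uint8_t *buf,
                          const uint8_t dst[6], const uint8_t src[6],
                          uint16_t ethertype,
                          const uint8_t *payload, size_t payload_len)
{
    size_t off = 0;
    memcpy(buf + off, dst, 6); off += 6;          /* destination MAC */
    memcpy(buf + off, src, 6); off += 6;          /* source MAC      */
    buf[off++] = ethertype >> 8;                  /* EtherType, big-endian */
    buf[off++] = ethertype & 0xFF;
    memcpy(buf + off, payload, payload_len); off += payload_len;
    while (off < 60)                              /* pad to the 60-byte */
        buf[off++] = 0;                           /* minimum before FCS */

    uint32_t fcs = crc32(buf, off);
    buf[off++] = fcs & 0xFF;                      /* FCS goes out LSB first */
    buf[off++] = (fcs >> 8) & 0xFF;
    buf[off++] = (fcs >> 16) & 0xFF;
    buf[off++] = (fcs >> 24) & 0xFF;
    return off;
}
```

If the bytes you push over RGMII follow this layout (with preamble/SFD in front and a correct FCS), a capture tool like Wireshark on the receiving side should display them as a frame; otherwise the receiving MAC will silently drop them.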

How to sync microcontroller to MIDI controller output

I am looking to receive MIDI messages to control a microcontroller-based synthesizer, and I am working on understanding the MIDI protocol so I can implement a MIDI handler. I've read that MIDI is transmitted at 31.25 kbaud without a dedicated clock line - must I sample the line at that rate with the microcontroller in order to receive MIDI bytes?
The MIDI specification says:
The hardware MIDI interface operates at 31.25 (+/- 1%) Kbaud, asynchronous, with a start bit, 8 data bits (D0 to D7), and a stop bit. […] Bytes are sent LSB first.
This describes a standard UART protocol; you can simply use the UART hardware that most microcontrollers have built in. (The baud rate of 31250 Hz was chosen because it can be derived exactly from a 1 MHz clock, or a multiple of it.)
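For example, here is a minimal sketch of a hardware-UART MIDI receiver, assuming an AVR ATmega328P running at 16 MHz (a multiple of 1 MHz, so the divider works out exactly):

```c
#define F_CPU 16000000UL
#include <avr/io.h>

/* 31250 baud, 8 data bits, no parity, 1 stop bit (8N1), per the MIDI spec.
   UBRR = F_CPU / (16 * baud) - 1 = 16000000 / (16 * 31250) - 1 = 31,
   with zero rounding error. */
static void midi_uart_init(void)
{
    UBRR0  = (F_CPU / (16UL * 31250UL)) - 1;
    UCSR0B = (1 << RXEN0);                    /* enable the receiver */
    UCSR0C = (1 << UCSZ01) | (1 << UCSZ00);   /* 8 data bits         */
}

/* Block until the UART has received a byte, then return it. */
static uint8_t midi_read_byte(void)
{
    while (!(UCSR0A & (1 << RXC0)))
        ;
    return UDR0;
}
```

No 31.25 kHz software sampling loop is needed; the UART peripheral does the bit timing itself and hands you whole bytes.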
If you really wanted to implement the receiver in software, you would sample the input signal at a higher rate to be able to reliably detect the level in the middle of each bit; for details, see What exactly is the start bit error in UART? and How does UART know the difference between data bits and start/stop bits?
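A minimal sketch of that oversampling approach, assuming a timer interrupt calls uart_rx_tick() at 4x the baud rate (125 kHz for MIDI) and a read_rx_pin() that returns the current line level (both names are mine; idle is high):

```c
#include <stdint.h>
#include <stdbool.h>

extern bool read_rx_pin(void);       /* assumed to sample the RX line */

/* Call at 4x the baud rate; returns true when a byte is complete. */
bool uart_rx_tick(uint8_t *out)
{
    static int phase = -1;           /* -1 = waiting for a start bit   */
    static int bit;                  /* next data bit to sample (0..7) */
    static uint8_t shift;

    if (phase < 0) {
        if (!read_rx_pin()) {        /* line went low: start bit edge  */
            phase = 0;
            bit = 0;
            shift = 0;
        }
        return false;
    }

    phase++;
    /* With 4 ticks per bit, tick 6 is ~1.5 bit times after the start
       edge, i.e. the middle of data bit 0; then every 4 ticks after. */
    if (bit < 8 && phase == 6 + 4 * bit) {
        shift >>= 1;
        if (read_rx_pin())
            shift |= 0x80;           /* UART (and MIDI) sends LSB first */
        bit++;
    } else if (bit == 8 && phase == 6 + 4 * 8) {
        phase = -1;                  /* middle of the stop bit; rearm   */
        if (read_rx_pin()) {         /* high level = valid stop bit     */
            *out = shift;
            return true;
        }
    }
    return false;
}
```

The 4x rate keeps each sampling point within a quarter of a bit of the bit centre, which leaves margin for the +/- 1% baud tolerance the MIDI spec allows.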