Sending data over UART from a buffer filled by DMA - stm32

I have an ADC set up with DMA to fill a buffer in circular mode. I've verified with the debugger that this buffer is being filled with the expected data.
I am using a FreeRTOS task to send this data over UART based on the interrupts provided by the DMA functionality. When the buffer is half full, I send the first half; when the buffer is full, I send the second half.
I am using a Nucleo L476RG.
Here is my code:
#include "ADC.hpp"
#include <string.h>
#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"
#include "main.h"
#include "utils/print.hpp"
#include "stm32l4xx_hal.h"
extern ADC_HandleTypeDef hadc3;
extern UART_HandleTypeDef huart2;
extern volatile SemaphoreHandle_t print_lock;
namespace ADC
{
    volatile SemaphoreHandle_t adc_lock;
    volatile uint16_t adc_buf[ADC_BUF_SIZE];
    volatile uint8_t buf[] = "abcdefg\r\n";

    void task(void *arg)
    {
        adc_lock = xSemaphoreCreateBinary();
        HAL_ADC_Start_DMA(&hadc3, (uint32_t *)adc_buf, ADC_BUF_SIZE);
        HAL_StatusTypeDef result;
        uint16_t offset = ADC_BUF_SIZE >> 1;
        while (1)
        {
            xSemaphoreTake(adc_lock, HAL_MAX_DELAY);
            // Toggle between the two halves of the buffer (offset is in samples).
            offset = (offset + (ADC_BUF_SIZE >> 1)) % ADC_BUF_SIZE;
            xSemaphoreTake(print_lock, HAL_MAX_DELAY);
            // Each sample is 2 bytes, so half the buffer is ADC_BUF_SIZE bytes.
            while ((result = HAL_UART_Transmit(&huart2, (uint8_t *)adc_buf + offset * 2, ADC_BUF_SIZE, HAL_MAX_DELAY)) == HAL_BUSY) {}
            xSemaphoreGive(print_lock);
        }
        vTaskDelete(nullptr);
    }
} // namespace ADC

void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
{
    HAL_GPIO_WritePin(LD2_GPIO_Port, LD2_Pin, GPIO_PIN_SET);
    xSemaphoreGiveFromISR(ADC::adc_lock, nullptr);
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    HAL_GPIO_WritePin(LD2_GPIO_Port, LD2_Pin, GPIO_PIN_RESET);
    xSemaphoreGiveFromISR(ADC::adc_lock, nullptr);
}
Here is a screenshot from the debugger showing correct data in adc_buf
When looking at the received data using screen /dev/ttyACM0 115200, then converting it to hex using vscode, this is what is received. I have already verified that the baud rate is correct.
I've tried adding a buffer with a long string message and sending that over and over, and it is sent and received correctly. Only data from the DMA buffer shows up as 0xFD rather than the correct bytes; all other data is sent and received correctly.
Here are the settings for the UART and the ADC/DMA.

When looking at the received data using screen /dev/ttyACM0 115200, then converting to hex using vscode, this is what is received.
Rather than performing an extra conversion step, I suggest using a better tool such as minicom, which has a built-in hex display mode.
Addendum
Switching to minicom fixed it!
So apparently there's an issue when you perform the conversion "using vscode".
This result is consistent with your other testing of "adding a buffer with a long string message and sending that over and over, and it is sent and received correctly."
In hindsight, the next step in your testing could have been to perform the same conversion "using vscode" on this text data. Presumably you would not get the expected ASCII code values, and would then suspect that this conversion step was faulty.
It's being sent from the USB-A port on the Nucleo, so maybe that's why it's ttyACM0?
Yes. That is a USB-to-USB connection using CDC ACM, a protocol that can be used for emulating serial ports over USB. The PC is the USB host (as always), and the Nucleo board is a USB gadget implementing a virtual COM port.
There is no (actual) UART or USART involved with such a USB connection, so you are not "send[ing] this data over UART", nor is the USART2 Mode and Configuration relevant.
When your Nucleo program uses the HAL to access the huart2 pseudo-device, this "UART" apparently ends up at the virtual COM port over the USB connection. Assuming that the huart2 pseudo-device simply refers to the USART2 peripheral is incorrect.

Related

How to use printf to print spi data in stm32

printf() can be used for UART data output, but how about SPI data?
I rewrote fputc() to use HAL_SPI_Transmit(); will the data be printed via SPI?
int fputc(int ch, FILE *f)
{
    LoRa_transmit(&myLoRa, &send_data[i], 128, 500);
    return ch;
}
printf can be used in UART data output, how about SPI data?
I don't know exactly what LoRa_transmit() does, but the data to be printed in this function is int ch, not send_data[i]:
int fputc(int ch, FILE *f)
{
    LoRa_transmit(&myLoRa, (uint8_t *)&ch, 1, 500); // send the single character ch
    return ch;
}
will the data be printed via SPI?
It will work, but not as you expect (I guess), because of a few details:
SPI communication always involves one master and at least one slave device. The roles cannot be changed because only the master device has control of the chip-select pin.
SPI protocols usually involve sending a control command before sending any data to the slave device.
So what will happen is that only the master device can use the "printf()" approach and it will only work if the slave device is expecting data like an input stream.
Supposing you call printf("Hello SPI");, the master device will start transmission (CS and clock), send the first byte of the string, close the transmission and repeat these steps until reaching the end of the string.
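To make that concrete, here is a minimal sketch of retargeting fputc() to SPI, sending one byte per call. It assumes a toolchain where printf() is routed through fputc() (as in the question), a CubeMX-generated handle hspi1, and a hypothetical chip-select pin CS_GPIO_Port/CS_Pin; adapt names to your own project.
#include <stdio.h>
#include "stm32l4xx_hal.h"  /* or the HAL header for your part */

extern SPI_HandleTypeDef hspi1;

int fputc(int ch, FILE *f)
{
    uint8_t byte = (uint8_t)ch;

    HAL_GPIO_WritePin(CS_GPIO_Port, CS_Pin, GPIO_PIN_RESET);  /* assert chip select */
    HAL_SPI_Transmit(&hspi1, &byte, 1, HAL_MAX_DELAY);        /* send the single byte */
    HAL_GPIO_WritePin(CS_GPIO_Port, CS_Pin, GPIO_PIN_SET);    /* release chip select */

    return ch;
}
As noted above, this only does something useful if the slave treats the incoming bytes as a plain input stream; most SPI devices expect a command byte first.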

I2C transmit with DMA and HAL not working

This seems to be a somewhat common problem, but I have been unsuccessful with any of the solutions I have found online. Specifically, I am trying to transmit a 1024-byte buffer (a full 128x64 px image) to an SSD1306 display via I2C/DMA and the HAL generated in CubeIDE. I am using an STM32L432 Nucleo board. I have no problem transmitting the buffer without DMA using HAL_I2C_Mem_Write.
Based on other questions I have seen, the problem lies in the fact that the DMA finishes while the I2C bus is still working on the transmit. I just don't know how to remedy this, and the examples given usually don't use the HAL (unfortunately, despite my efforts, I am not quite competent enough to correctly apply them to the HAL myself). I have tried using the interrupts for I2C and DMA with no luck; only about the first 254 bytes get transferred (just shy of two rows showing on the screen).
Here is my code for sending the buffer:
static void ssd1306_WriteMData_DMA(const uint8_t *data, uint16_t size)
{
    while (HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY);
    HAL_I2C_Mem_Write_DMA(&hi2c1, I2C_ADDR, SSD1306_REG_MDAT, 1, (uint8_t *)data, size);
}
and the code for each interrupt handler:
void I2C1_EV_IRQHandler(void)
{
    /* USER CODE BEGIN I2C1_EV_IRQn 0 */
    if (I2C1->ISR & I2C_ISR_TCR) {
        I2C1->CR2 |= (I2C_CR2_STOP);   // stop i2c
        I2C1->ICR |= (I2C_ICR_STOPCF); // Reset the ICR flag.
        // stop DMA
        DMA1->IFCR |= DMA_IFCR_CTCIF6;
        // clear flag
        DMA1_Channel6->CCR &= ~DMA_CCR_EN;
    }
    /* USER CODE END I2C1_EV_IRQn 0 */
    //HAL_I2C_EV_IRQHandler(&hi2c1);
    /* USER CODE BEGIN I2C1_EV_IRQn 1 */
    /* USER CODE END I2C1_EV_IRQn 1 */
}

void DMA1_Channel6_IRQHandler(void)
{
    /* USER CODE BEGIN DMA1_Channel6_IRQn 0 */
    // stop DMA
    DMA1->IFCR |= DMA_IFCR_CTCIF6;
    // clear flag
    DMA1_Channel6->CCR &= ~DMA_CCR_EN;
    /* USER CODE END DMA1_Channel6_IRQn 0 */
    HAL_DMA_IRQHandler(&hdma_i2c1_tx);
    /* USER CODE BEGIN DMA1_Channel6_IRQn 1 */
    /* USER CODE END DMA1_Channel6_IRQn 1 */
}
I think that is all the pertinent code, let me know if there is something else I am missing. All of the initialization code for the peripherals was done through cubeMX, but I can post that if need be, or the settings. I feel like it is something really simple that I'm missing, but this is a bit over my head to be honest so I don't quite grasp exactly what's going on...
Thanks for any help!
The problem is in your custom DMA1_Channel6_IRQHandler and I2C1_EV_IRQHandler. Those functions will be called right after I2C transfers 255 bytes, which is MAX_NBYTE_SIZE for NBYTES. The HAL already has all the required interrupt routines inside stm32l4xx_hal_i2c.c; a single call to HAL_I2C_Mem_Write_DMA(...):
Sets the I2C transfer IRQ handler to I2C_Master_ISR_DMA;
Checks if the data size is larger than 255 bytes and uses reload mode;
Sets the I2C DMA complete callback to I2C_DMAMasterTransmitCplt;
Starts the DMA using HAL_DMA_Start_IT();
Configures the I2C registers using I2C_TransferConfig().
The HAL driver will then handle all I2C+DMA interrupts using I2C_Master_ISR_DMA and I2C_DMAMasterTransmitCplt:
I2C_DMAMasterTransmitCplt will restart the DMA for each chunk of 255 (MAX_NBYTE_SIZE) or fewer bytes;
I2C_Master_ISR_DMA will reset the RELOAD/NBYTES registers using I2C_TransferConfig;
For the last block of data, I2C_AUTOEND_MODE is used.
So all you need to do is:
remove the "user code" from the DMA1_Channel6_IRQHandler and I2C1_EV_IRQHandler functions
enable the I2C1 event interrupt in the STM32 Device Configuration Tool
configure the DMA with data width byte/byte
perform a single call of HAL_I2C_Mem_Write_DMA(...) to start the transfer
check for HAL_I2C_STATE_READY before the next transfer
See HAL_I2C_Mem_Write_DMA, I2C_Master_ISR_DMA and I2C_DMAMasterTransmitCplt source code in stm32l4xx_hal_i2c.c to understand how it works.
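Put together, the intended usage is just a guarded single call. Here is a minimal sketch, reusing hi2c1, I2C_ADDR and SSD1306_REG_MDAT from the question's code and assuming a hypothetical 1024-byte frame buffer:
#include "stm32l4xx_hal.h"

extern I2C_HandleTypeDef hi2c1;
extern uint8_t frame[1024];   /* full 128x64 px image */

void ssd1306_flush(void)
{
    /* Wait for the previous transfer (including the final STOP) to finish. */
    while (HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY) {}

    /* One call is enough: the stock HAL splits the 1024 bytes into
       255-byte reload chunks and restarts the DMA for each chunk. */
    HAL_I2C_Mem_Write_DMA(&hi2c1, I2C_ADDR, SSD1306_REG_MDAT, 1, frame, sizeof(frame));
}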
About why the DMA finishes while the I2C is still working: the HAL driver sends I2C data over DMA in 255-byte chunks, then stops the DMA, starts the DMA, clears I2C_CR2 NBYTES/RELOAD, and enables the DMA again. The DMA could run continuously using DMA_CIRCULAR mode, but currently that is not implemented in the HAL I2C drivers. Here is an example of using I2C with DMA_CIRCULAR mode:
// DMA enabled single time
hi2c1.hdmatx->XferCpltCallback = MY_I2C_DMAMasterTransmitCplt;
HAL_DMA_Start_IT(hi2c1.hdmatx, (uint32_t)&i2cBuffer, (uint32_t)&hi2c1.Instance->TXDR, I2C_BUFFER_SIZE);
MY_I2C_TransferConfig(&hi2c1, (uint16_t)DAC_ADDR, 254, I2C_RELOAD_MODE, I2C_GENERATE_START_WRITE); // in first call using I2C_GENERATE_START_WRITE
uint32_t tmpisr = I2C_IT_TCI;
__HAL_I2C_ENABLE_IT(&hi2c1, tmpisr);
hi2c1.Instance->CR1 |= I2C_CR1_TXDMAEN;
You still need to clear I2C_CR2 NBYTES/RELOAD using MY_I2C_TransferConfig every 254 bytes (I use 254 rather than 255 so the interrupt fires on an even index in the array):
static HAL_StatusTypeDef MY_I2C_Master_ISR_DMA(struct __I2C_HandleTypeDef *hi2c, uint32_t ITFlags, uint32_t ITSources)
{
    if (__HAL_I2C_GET_FLAG(&hi2c1, I2C_FLAG_TCR) == SET)
    {
        MY_I2C_TransferConfig(&hi2c1, (uint16_t)DAC_ADDR, 254, I2C_RELOAD_MODE, I2C_NO_STARTSTOP); // in repeated calls using I2C_NO_STARTSTOP
    }
    return HAL_OK;
}
With this approach DMA circular buffer size is not limited to 255 bytes:
#define I2C_BUFFER_SIZE 1024
uint8_t i2cBuffer[I2C_BUFFER_SIZE];
Main.c should contain a MY_I2C_TransferConfig() function, which is a copy-pasted version of the private function HAL_I2C_TransferConfig() from stm32l4xx_hal_i2c.c. On earlier STM32 microcontrollers there are no NBYTES/RELOAD fields and I2C_CR2 does not need to be updated this way.
Using DMA in circular mode allows you to achieve the highest frame rate; you just need to refill the DMA buffer in time using the XferHalfCpltCallback and XferCpltCallback callbacks, as sketched below. Frames may be copied from a larger buffer using memcpy() or a DMA MEMTOMEM transfer.
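As an illustration of the refill side only (not a drop-in replacement for MY_I2C_DMAMasterTransmitCplt above), here is a sketch of such callbacks, reusing i2cBuffer/I2C_BUFFER_SIZE from above and assuming a hypothetical next_frame_chunk() that returns a pointer to the next I2C_BUFFER_SIZE/2 bytes of frame data:
#include <string.h>
#include "stm32l4xx_hal.h"

extern const uint8_t *next_frame_chunk(void);   /* hypothetical frame source */

static void my_i2c_dma_half_cplt(DMA_HandleTypeDef *hdma)
{
    (void)hdma;
    /* First half has been shifted out; refill it while the second half is sent. */
    memcpy(&i2cBuffer[0], next_frame_chunk(), I2C_BUFFER_SIZE / 2);
}

static void my_i2c_dma_cplt(DMA_HandleTypeDef *hdma)
{
    (void)hdma;
    /* Second half has been shifted out; refill it while the first half is sent. */
    memcpy(&i2cBuffer[I2C_BUFFER_SIZE / 2], next_frame_chunk(), I2C_BUFFER_SIZE / 2);
}

/* Registered once on the DMA handle before starting it, e.g.:
   hi2c1.hdmatx->XferHalfCpltCallback = my_i2c_dma_half_cplt;   */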
You haven't said which STM32 you are using. They have different bit definitions (because the I2C peripherals in the earlier released parts were rubbish) but it looks like you are using one of the later ones.
Basically you can find what you need in the bit definitions for the I2C registers in the reference manual. If you are issuing STOP before the transfer has finished, you need to look for a BUSY bit that gets cleared or a BTF (byte transfer finished) bit that gets set when it is time for you to send STOP.
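For the newer I2C peripheral found on the L4 parts, a minimal polling sketch (using the CMSIS register definitions; adapt the instance to your wiring) might look like this:
/* Wait until the bus is free before starting the next transfer.
   BUSY is a status bit in I2C_ISR on the newer I2C peripheral. */
while (I2C1->ISR & I2C_ISR_BUSY)
{
    /* previous transfer (including STOP) still in progress */
}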

STM32F446 HAL_UART_Receive_DMA writing directly to GPIOA->ODR doesn't work

I'm experimenting with the STM32F446 Nucleo board.
uint8_t data[x];
HAL_UART_Receive_DMA(&huart2, &data, x);
This piece of code works: when I send bytes to PA3, the DMA writes the x bytes I sent into data.
However, when &data is replaced with 0x40020014 (GPIOA->ODR) or the bit-band aliased address 0x42400294 for the PA5 LED, the bit for toggling the LED isn't set when I send a byte to PA3, and HAL_UART_RxCpltCallback may or may not be called depending on x. Why?
Link to code: https://github.com/pterodragon/stm32_try/tree/question

Accessing STM32L4 bootloader over USART: No ACK

I'm trying to access the bootloader for a Nucleo L476RG "slave" board.
The "master" board is a Nucleo L496ZG board. In my program, I have a DigitalOut defined on the master board called extBoot0, extReset. These go off to the boot0 and NRST pins on the slave board. Additionally, I have a Serial instance called usart on the master, which is attached to UART2 on the slave board. Also, it appears that there BOOT1 is preset to run the bootloader, i.e. it's asserted low and cannot be changed to run whatever's in SRAM.
Currently, in resetToBootloader, I set BOOT0 high and drop NRST low for 0.1 seconds, and bring it back up high. I've observed that running this function indeed resets the device and prevents the program from running.
In initBootloader, I format the serial per AN2606: 8-bit, even parity, 1 stop bit. I then send 0x7F over that serial bus to the slave board. I'm not getting any response and using a logic analyzer, I've confirmed that the slave is getting it on the right pin and there is no changes in the slave's TX input. What else needs to be done to start the bootloader?
Here's my relevant code:
DigitalOut extBoot0(D7);
DigitalOut extBoot1(D6);
DigitalOut extReset(D5);
Serial usart(/* tx, rx */ D1, D0);
uint8_t rxBuffer[1];
event_callback_t serialEventCb;
void serialCb(int events) {
    printf("something happened!\n");
}

void initBootloader() {
    wait(5); // just in case?

    // Once initialized the USART1 configuration is: 8-bits, even parity and 1 Stop bit
    serialEventCb.attach(serialCb);
    usart.format(8, SerialBase::Even, 1);

    uint8_t buffer[1024];

    // write 0x7F
    buffer[0] = 0x7F;
    usart.write(buffer, 1, 0, 0);
    printf("sending cmd\n");

    // should ack 0x79
    usart.read(rxBuffer, 1, serialEventCb, SERIAL_EVENT_RX_ALL, 0x79);
}
If it helps at all, here's a picture of my board setup.
I believe I solved this by using USART1 instead of USART2. The documentation states that both USART1 and USART2 can be used, but I only receive a 0x79 from USART1.
Additionally, I had to switch from Serial to UARTSerial. The slave first sends an incorrect packet, 0xC0 with an incorrect parity bit. I'm not really sure why it does that, but it causes the regular Serial instance to not handle the following byte.

Serial driver limitations on iMX processor

I'm developing on an embedded Linux device that uses an ARM iMX6 processor. The main purpose is to read an incoming serial stream from an external source.
Due to the atypical nature of the serial stream, I've run into a few roadblocks with the Linux serial driver for i.MX processors, though nothing that is beyond the capability of the i.MX6 itself. For example, the incoming serial stream uses inverted logic. The i.MX6 has a specific register setting to invert the RX signal, but from what I can tell the Linux driver does not expose it.
Another complication is that the incoming serial data arrives in 3ms bursts. The external source transmits continuously for 3ms, then 3ms of idle, then 3ms of data, then idle, etc. In order to sync up with the first byte of each burst, it's very useful to be able to detect when the line is idle. Again, the iMX6 has a register value specifically for indicating that the RX line is idle, but the Linux driver doesn't expose it.
I am also very confused about how buffering works in the driver. I know the i.MX6 has a 32-byte FIFO buffer, but I can't tell if the driver uses that buffer or uses external RAM for buffering. I'm running into an issue where the read command hangs for a second every so often when I'm in blocking mode, which should never happen because the data stream is continuous.
For reference, here's how I configured the serial port in my C code and read 50 bytes (I've changed it to non-blocking for now):
#include <stropts.h>
#include <asm/termios.h>
#include <unistd.h>
#include <fcntl.h>

int main()
{
    int fd;
    int i;
    struct termios2 terminal;
    unsigned char v[50];

    fd = open("/dev/ttymxc2", O_RDONLY | O_NOCTTY | O_NONBLOCK);

    ioctl(fd, TCGETS2, &terminal);
    terminal.c_cflag |= (CLOCAL | CREAD);
    terminal.c_cflag |= PARENB;   // enable parity
    terminal.c_cflag &= ~PARODD;  // even parity
    terminal.c_cflag |= CSTOPB;   // 2 stop bits
    terminal.c_cflag &= ~CSIZE;
    terminal.c_cflag |= CS8;
    terminal.c_lflag &= ~(ICANON | IEXTEN | ECHO | ECHOE | ISIG);
    terminal.c_oflag &= ~OPOST;
    terminal.c_cflag &= ~CBAUD;
    terminal.c_cflag |= BOTHER;
    terminal.c_ispeed = 100000;   // 100 kHz baud
    terminal.c_ospeed = 100000;
    ioctl(fd, TCSETS2, &terminal);

    ...

    for (i = 0; i < 50; i++)
    {
        read(fd, v + i, 1);
    }

    ...
}
So I have two questions:
What is the "proper" way to get the capability out of the serial port that the processor has available but the driver doesn't expose? I can't imagine I'm the first person to want to use such basic functionality of the processor, but I don't want to reinvent the wheel. Do I need to get into writing my own drivers?
Does comprehensive documentation on the iMX serial driver exist anywhere? The code is poorly commented and I get lost quickly trying to find my way around it. For example, I don't know where to start investigating the buffering problem that causes it to hang when receiving a continuous stream of data.
I've forgone the serial driver entirely and instead wrote some functions to access the register memory directly (modeled after the devmem2.c source code). Now I can directly set the INVR bit to invert the RX signal, use the IDLE bit to detect when the line has gone idle, and retrieve the incoming data bytes as soon as they arrive, without delay.
I found something on another forum saying that the UART DMA needs the RX line to go idle for at least 8 ms before it services the buffer. That was apparently the cause of the 1-second lag I was experiencing.
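In case it helps anyone following the same route, here is a minimal devmem2-style sketch of that kind of register access. The base address, register offset and bit mask below are placeholders only; look up the UART instance you are actually using (e.g. the block behind /dev/ttymxc2) in the i.MX6 reference manual.
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define UART_BASE   0x021EC000u   /* placeholder: physical base of the UART block */
#define REG_OFFSET  0x8Cu         /* placeholder: offset of the register of interest */
#define BIT_MASK    (1u << 9)     /* placeholder: the bit you want to set or poll */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    /* Map one page containing the UART registers. */
    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, UART_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    volatile uint32_t *reg = regs + REG_OFFSET / 4;
    printf("before: 0x%08x\n", *reg);
    *reg |= BIT_MASK;              /* set the bit, e.g. RX invert */
    printf("after:  0x%08x\n", *reg);

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}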