Failing to receive data from UART in DMA mode - STM32

I am trying to receive 8 bytes from my PC on my NUCLEO F446RE stm32 board.
Transmitting to the pc works.
The problem is, I am unable to receive data using DMA.
I saw an example with almost the same code, and it worked for that person.
If I use interrupt mode instead (just changing HAL_UART_Receive_DMA to HAL_UART_Receive_IT), it does work and the RX complete callback is called.
Here is the complete main.c (DMA is in circular mode):
https://pastebin.com/1W4BCjxB

I got it solved; it is actually ridiculous.
This is the initialization order that CubeMX generates:
MX_GPIO_Init();
MX_USART2_UART_Init();
MX_DMA_Init();
If I order it as follows:
MX_GPIO_Init();
MX_DMA_Init();
MX_USART2_UART_Init();
It works!!!
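For reference, a minimal sketch of the fixed start-up sequence (the buffer name and the loop body are illustrative, not taken from the pastebin):

uint8_t rxBuf[8];   /* illustrative receive buffer for the 8 expected bytes */

int main(void)
{
    HAL_Init();
    SystemClock_Config();

    MX_GPIO_Init();
    MX_DMA_Init();           /* DMA clock/NVIC must be ready ...              */
    MX_USART2_UART_Init();   /* ... before the UART MSP init sets up its DMA  */

    HAL_UART_Receive_DMA(&huart2, rxBuf, sizeof(rxBuf));   /* start circular RX */

    while (1)
    {
        /* application code */
    }
}

/* Called by the HAL every time 8 bytes have arrived (circular mode) */
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART2)
    {
        /* process rxBuf here */
    }
}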

I had the same problem. Here is how to fix it from within CubeMX itself.
In CubeMX -> Project Manager -> Advanced Settings you can select the order in which the init functions are generated. I moved MX_DMA_Init to the top to ensure that the DMA is ready before any other peripheral is initialised.

You haven't initialized the DMA handle or the interrupt handler for the DMA. You will need to do something along these lines.
Initialize the DMA:
__HAL_RCC_DMA1_CLK_ENABLE();   /* DMA clock must be enabled before HAL_DMA_Init() */

/* USART2_RX is mapped to DMA1 Stream 5, Channel 4 on the F446 */
hdma_usart2_rx.Instance = DMA1_Stream5;
hdma_usart2_rx.Init.Channel = DMA_CHANNEL_4;
hdma_usart2_rx.Init.Direction = DMA_PERIPH_TO_MEMORY;
hdma_usart2_rx.Init.PeriphInc = DMA_PINC_DISABLE;
hdma_usart2_rx.Init.MemInc = DMA_MINC_ENABLE;   /* increment through the receive buffer */
hdma_usart2_rx.Init.PeriphDataAlignment = DMA_PDATAALIGN_BYTE;
hdma_usart2_rx.Init.MemDataAlignment = DMA_MDATAALIGN_BYTE;
hdma_usart2_rx.Init.Mode = DMA_CIRCULAR;
hdma_usart2_rx.Init.Priority = DMA_PRIORITY_HIGH;
hdma_usart2_rx.Init.FIFOMode = DMA_FIFOMODE_DISABLE;
HAL_DMA_Init(&hdma_usart2_rx);

void DMA1_Stream5_IRQHandler(void)
{
    HAL_NVIC_ClearPendingIRQ(DMA1_Stream5_IRQn);
    HAL_DMA_IRQHandler(&hdma_usart2_rx);
}
HAL_UART_Receive_DMA only starts the transfer; it does not initialise the DMA or set up the interrupt handling for you.
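In a CubeMX project that wiring normally lives in HAL_UART_MspInit() / MX_DMA_Init(); a minimal sketch of the missing pieces, assuming the handle names above:

/* Link the DMA handle to the UART handle so the HAL can drive it */
__HAL_LINKDMA(&huart2, hdmarx, hdma_usart2_rx);

/* Enable the DMA stream interrupt in the NVIC */
HAL_NVIC_SetPriority(DMA1_Stream5_IRQn, 0, 0);
HAL_NVIC_EnableIRQ(DMA1_Stream5_IRQn);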

Related

STM32H747 DCMI DMA stops after one transfer

I have a problem with an STM32H747 Discovery board and an OV5640 camera shield.
I configured the DCMI interface and it works fine: I can see the values in the hdcmi->Instance->DR register changing, and the VSYNC and HSYNC interrupts fire.
I want to transfer a single line of pixels from the DCMI to a buffer in RAM. So I start the DMA in circular mode with the data length equal to the line-buffer size in 32-bit words, then I start the DCMI in circular mode. From then on the DMA should transfer each received line to the buffer and then call the transfer-complete callback.
But there is a problem with the DMA transfer: the first line is transferred correctly, but after that the data in the buffer never changes. The DMA transfer-complete callback is still called for every line (checked by counting HSYNC and DMA TC interrupts).
This is how DMA is initialized in Cube:
[screenshot: CubeMX DCMI DMA configuration settings]
Line buffer initialization:
uint8_t cameraLineBuffer[CAMERA_LINE_SIZE] __attribute__ ((aligned (32)));
Function starting DCMI with DMA:
HAL_StatusTypeDef DCMI_Start_DMA_line(DCMI_HandleTypeDef *hdcmi, uint32_t DCMI_Mode)
{
    /* Process Locked */
    __HAL_LOCK(hdcmi);
    /* Lock the DCMI peripheral state */
    hdcmi->State = HAL_DCMI_STATE_BUSY;
    /* Enable DCMI by setting DCMIEN bit */
    __HAL_DCMI_ENABLE(hdcmi);
    /* Configure the DCMI Mode */
    hdcmi->Instance->CR &= ~(DCMI_CR_CM);
    hdcmi->Instance->CR |= (uint32_t)(DCMI_Mode);
    /* Set DMA callbacks */
    hdcmi->DMA_Handle->XferCpltCallback = DCMI_DMA_LineTransferCompletedCallback;
    hdcmi->DMA_Handle->XferErrorCallback = DCMI_DMA_Error;
    /* Enable the DMA Stream */
    uint32_t pLineData = (uint32_t) cameraLineBuffer;
    HAL_DMA_Start_IT(hdcmi->DMA_Handle, (uint32_t)&hdcmi->Instance->DR, pLineData, CAMERA_LINE_SIZE/4);
    /* Enable Capture */
    hdcmi->Instance->CR |= DCMI_CR_CAPTURE;
    /* Release Lock */
    __HAL_UNLOCK(hdcmi);
    /* Return function status */
    return HAL_OK;
}
What can cause such strange behavior? I looked at the examples in FP-AI-VISION1 and the AN5020 application note, but I couldn't find anything that I missed.
OK, so I finally figured out the problem. The DMA writes the data to RAM, but the CPU reads it through the data cache. After the first CPU access to the buffer the data appeared to stop changing, because the CPU (and the debugger) kept seeing the stale copy in the cache rather than what the DMA had actually written to RAM.
The final solution was simply to disable the D-Cache entirely. This website explains the problem well.
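If you would rather keep the D-Cache enabled, the usual alternative is to invalidate the cache lines covering the buffer before the CPU reads it (or to place the buffer in a non-cacheable MPU region). A rough sketch of the first option using the CMSIS cache-maintenance call, with the names from the question (the exact callback body is an assumption):

#include "stm32h7xx.h"   /* CMSIS core: SCB_InvalidateDCache_by_Addr() */

/* In the line-transfer-complete callback, before the CPU touches the line:
   drop any stale cached copy so the CPU re-reads what the DMA wrote to RAM.
   CAMERA_LINE_SIZE should be a multiple of 32 so whole cache lines are covered. */
static void DCMI_DMA_LineTransferCompletedCallback(DMA_HandleTypeDef *hdma)
{
    SCB_InvalidateDCache_by_Addr((uint32_t *)cameraLineBuffer, CAMERA_LINE_SIZE);
    /* ... process or copy the line here ... */
}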

Receive UART messages in DMA

I am trying to receive messages in DMA mode on an STM32L432KCU. The pins PA2 and PA3 are configured for the USART, with DMA enabled for reception. The baud rate is 115200 and the global interrupt for USART2 is turned on. In the main function I have the initialization of the peripherals:
MX_GPIO_Init();
MX_USART2_UART_Init();
MX_DMA_Init();
These are followed by the calls that start DMA reception with idle-line detection and disable the half-transfer interrupt:
HAL_UARTEx_ReceiveToIdle_DMA(&huart2, UART2_rxBuffer, 12);
__HAL_DMA_DISABLE_IT(&hdma_usart2_rx, DMA_IT_HT);
Here I have the callback:
void HAL_UARTEx_RxEventCallback(UART_HandleTypeDef *huart, uint16_t Size){
    if(huart->Instance == USART2){
        memcpy(mainbuff, UART2_rxBuffer, Size);
        HAL_UARTEx_ReceiveToIdle_DMA(&huart2, UART2_rxBuffer, 12);
        __HAL_DMA_DISABLE_IT(&hdma_usart2_rx, DMA_IT_HT);
    }
}
It checks whether the message came from the second UART, then copies it into the main buffer that stores all the data; reception is re-armed and the half-transfer interrupt disabled again. Unfortunately, when I try to debug, the breakpoint inside the callback never gets hit. I've also tried to display the message; it didn't work. What could cause this problem?
Try changing the order of the initialization procedures.
MX_GPIO_Init();
MX_DMA_Init();
MX_USART2_UART_Init();
The DMA needs to be initialised prior to using the UART.

I2C transmit with DMA and HAL not working

This seems to be a somewhat common problem, but I have been unsuccessful with any of the solutions I have found online. Specifically, I am trying to transmit a 1024-byte buffer (a full 128x64 px image) to an SSD1306 display via I2C/DMA and the HAL generated in CubeIDE. I am using an STM32L432 Nucleo board. I have no problem transmitting the buffer without DMA using HAL_I2C_Mem_Write.
Based on other questions I have seen, the problem lies in the fact that the DMA finishes while the I2C bus is still working on the transmit. I just don't know how to remedy this, and the examples given usually don't use the HAL (unfortunately, despite my efforts, I am not quite competent enough to correctly apply them to the HAL myself). I have tried using the interrupts for I2C and DMA with no luck: only about the first 254 bytes get transferred (just shy of two rows showing on the screen).
Here is my code for sending the buffer:
static void ssd1306_WriteMData_DMA(const uint8_t *data, uint16_t size)
{
    while(HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY);
    HAL_I2C_Mem_Write_DMA(&hi2c1, I2C_ADDR, SSD1306_REG_MDAT, 1, (uint8_t*)data, size);
}
and the code for each interrupt handler:
void I2C1_EV_IRQHandler(void)
{
    /* USER CODE BEGIN I2C1_EV_IRQn 0 */
    if(I2C1->ISR & I2C_ISR_TCR){
        I2C1->CR2 |= (I2C_CR2_STOP);    // stop i2c
        I2C1->ICR |= (I2C_ICR_STOPCF);  // Reset the ICR flag.
        // stop DMA
        DMA1->IFCR |= DMA_IFCR_CTCIF6;
        // clear flag
        DMA1_Channel6->CCR &= ~DMA_CCR_EN;
    }
    /* USER CODE END I2C1_EV_IRQn 0 */
    //HAL_I2C_EV_IRQHandler(&hi2c1);
    /* USER CODE BEGIN I2C1_EV_IRQn 1 */
    /* USER CODE END I2C1_EV_IRQn 1 */
}

void DMA1_Channel6_IRQHandler(void)
{
    /* USER CODE BEGIN DMA1_Channel6_IRQn 0 */
    // stop DMA
    DMA1->IFCR |= DMA_IFCR_CTCIF6;
    // clear flag
    DMA1_Channel6->CCR &= ~DMA_CCR_EN;
    /* USER CODE END DMA1_Channel6_IRQn 0 */
    HAL_DMA_IRQHandler(&hdma_i2c1_tx);
    /* USER CODE BEGIN DMA1_Channel6_IRQn 1 */
    /* USER CODE END DMA1_Channel6_IRQn 1 */
}
I think that is all the pertinent code; let me know if there is something else I am missing. All of the peripheral initialization was done through CubeMX, but I can post it, or the settings, if need be. I feel like it is something really simple that I'm missing, but this is a bit over my head to be honest, so I don't quite grasp exactly what's going on...
Thanks for any help!
The problem is in your custom DMA1_Channel6_IRQHandler and I2C1_EV_IRQHandler. Those handlers fire as soon as the I2C has transferred 255 bytes, which is MAX_NBYTE_SIZE for NBYTES. The HAL already contains all the required interrupt routines in stm32l4xx_hal_i2c.c. When you call HAL_I2C_Mem_Write_DMA(), the driver:
sets the I2C transfer IRQ handler to I2C_Master_ISR_DMA,
checks whether the data size is larger than 255 bytes and uses reload mode if so,
sets the I2C DMA complete callback to I2C_DMAMasterTransmitCplt,
starts the DMA using HAL_DMA_Start_IT(),
and configures the I2C registers using I2C_TransferConfig().
The HAL driver then handles all I2C+DMA interrupts using I2C_Master_ISR_DMA and I2C_DMAMasterTransmitCplt:
I2C_DMAMasterTransmitCplt restarts the DMA for each chunk of 255 (MAX_NBYTE_SIZE) or fewer bytes.
I2C_Master_ISR_DMA resets the RELOAD/NBYTES registers using I2C_TransferConfig.
For the last block of data, I2C_AUTOEND_MODE is used.
So all you need to do is:
remove the "user code" from the DMA1_Channel6_IRQHandler and I2C1_EV_IRQHandler functions,
enable the I2C1 event interrupt in the STM32 Device Configuration Tool,
configure the DMA with data width byte/byte,
perform a single call of HAL_I2C_Mem_Write_DMA(...) to start the transfer (see the sketch below),
and check for HAL_I2C_STATE_READY before the next transfer.
See HAL_I2C_Mem_Write_DMA, I2C_Master_ISR_DMA and I2C_DMAMasterTransmitCplt source code in stm32l4xx_hal_i2c.c to understand how it works.
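A rough sketch of what the cleaned-up code could look like (hi2c1, hdma_i2c1_tx, I2C_ADDR and SSD1306_REG_MDAT are the names from the question; everything else is left to the CubeMX-generated defaults):

/* Stripped-down IRQ handlers: let the HAL drive the 255-byte reload sequence itself */
void I2C1_EV_IRQHandler(void)
{
    HAL_I2C_EV_IRQHandler(&hi2c1);
}

void DMA1_Channel6_IRQHandler(void)
{
    HAL_DMA_IRQHandler(&hdma_i2c1_tx);
}

/* One call starts the whole 1024-byte transfer; wait for READY before the next one */
static void ssd1306_WriteMData_DMA(const uint8_t *data, uint16_t size)
{
    while (HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY) { }
    HAL_I2C_Mem_Write_DMA(&hi2c1, I2C_ADDR, SSD1306_REG_MDAT,
                          I2C_MEMADD_SIZE_8BIT, (uint8_t *)data, size);
}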
As for why the DMA finishes while the I2C is still working: the HAL driver sends I2C data over DMA in 255-byte chunks, stopping the DMA, updating I2C_CR2 NBYTES/RELOAD and restarting the DMA for each chunk. The DMA could run continuously in DMA_CIRCULAR mode, but that is currently not implemented in the HAL I2C drivers. Here is an example of using I2C with DMA_CIRCULAR mode:
// DMA enabled single time
hi2c1.hdmatx->XferCpltCallback = MY_I2C_DMAMasterTransmitCplt;
HAL_DMA_Start_IT(hi2c1.hdmatx, (uint32_t)&i2cBuffer, (uint32_t)&hi2c1.Instance->TXDR, I2C_BUFFER_SIZE);
MY_I2C_TransferConfig(&hi2c1, (uint16_t)DAC_ADDR, 254, I2C_RELOAD_MODE, I2C_GENERATE_START_WRITE); // in first call using I2C_GENERATE_START_WRITE
uint32_t tmpisr = I2C_IT_TCI;
__HAL_I2C_ENABLE_IT(&hi2c1, tmpisr);
hi2c1.Instance->CR1 |= I2C_CR1_TXDMAEN;
You still need to reload I2C_CR2 NBYTES/RELOAD using MY_I2C_TransferConfig every 254 bytes (I do not use 255, so that the interrupt fires on an even index in the array):
static HAL_StatusTypeDef MY_I2C_Master_ISR_DMA(struct __I2C_HandleTypeDef *hi2c, uint32_t ITFlags, uint32_t ITSources)
{
    if (__HAL_I2C_GET_FLAG(&hi2c1, I2C_FLAG_TCR) == SET)
    {
        MY_I2C_TransferConfig(&hi2c1, (uint16_t)DAC_ADDR, 254, I2C_RELOAD_MODE, I2C_NO_STARTSTOP); // in repeated calls using I2C_NO_STARTSTOP
    }
    return HAL_OK;
}
With this approach the DMA circular buffer size is not limited to 255 bytes:
#define I2C_BUFFER_SIZE 1024
uint8_t i2cBuffer[I2C_BUFFER_SIZE];
main.c should contain a MY_I2C_TransferConfig() function, which is a copy-pasted version of the private function HAL_I2C_TransferConfig() from stm32l4xx_hal_i2c.c. Earlier STM32 microcontrollers have no NBYTES/RELOAD fields, so on those parts I2C_CR2 does not need to be updated this way.
Using DMA in circular mode allows you to achieve the highest frame rate; you just need to fill the DMA buffer in time using the XferHalfCpltCallback and XferCpltCallback callbacks. Frames may be copied from a larger buffer using memcpy() or a DMA MEMTOMEM transfer.
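As a rough sketch of that refill pattern, a guess at what the two callbacks could contain (frameSource, frameOffset and MY_I2C_DMAHalfCplt are illustrative names, not part of the HAL; wrap-around and end-of-frame handling are omitted):

#include <string.h>   /* memcpy */

extern uint8_t frameSource[];      /* illustrative: larger buffer holding the display data */
static uint32_t frameOffset = 0;   /* illustrative: read position inside frameSource */

/* First half of i2cBuffer has been sent: refill it while the DMA streams the second half */
static void MY_I2C_DMAHalfCplt(DMA_HandleTypeDef *hdma)
{
    memcpy(&i2cBuffer[0], &frameSource[frameOffset], I2C_BUFFER_SIZE / 2);
    frameOffset += I2C_BUFFER_SIZE / 2;
}

/* Second half has been sent: refill it while the DMA wraps around to the first half */
static void MY_I2C_DMAMasterTransmitCplt(DMA_HandleTypeDef *hdma)
{
    memcpy(&i2cBuffer[I2C_BUFFER_SIZE / 2], &frameSource[frameOffset], I2C_BUFFER_SIZE / 2);
    frameOffset += I2C_BUFFER_SIZE / 2;
}

/* Registered next to XferCpltCallback before HAL_DMA_Start_IT():
   hi2c1.hdmatx->XferHalfCpltCallback = MY_I2C_DMAHalfCplt; */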
You haven't said which STM32 you are using. They have different bit definitions (because the I2C peripherals in the earlier released parts were rubbish) but it looks like you are using one of the later ones.
Basically, you can find what you need in the bit definitions for the I2C registers in the reference manual. If you are issuing STOP before the transfer has finished, you need to look for a BUSY bit that gets cleared, or a BTF (byte transfer finished) bit that gets set, to tell you when it is time to send STOP.
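On the newer I2C peripheral found in the L432 from the question, the corresponding flags are TC/TCR and BUSY in I2C_ISR; a rough sketch of that polling, outside of reload mode:

/* Wait until the peripheral reports transfer complete before generating STOP */
while ((I2C1->ISR & I2C_ISR_TC) == 0U) { }
I2C1->CR2 |= I2C_CR2_STOP;                    /* now it is safe to send STOP      */
while ((I2C1->ISR & I2C_ISR_BUSY) != 0U) { }  /* wait for the bus to go idle      */
I2C1->ICR = I2C_ICR_STOPCF;                   /* clear the STOP detection flag    */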

STM32: Use USART with character match ISR and DMA buffer

I'm using a STM32L432 device with FreeRTOS and STM32CubeMX.
I am trying to implement M2M communication via USART, based on an ASCII protocol. The protocol sequences can differ in length, but have a maximum length and a defined end character ('\r' / 0x0D).
So I thought about collecting all USART RX data with DMA (like a FIFO) and using the character match ISR (the one cleared via the USART_ICR_CMCF flag) to detect the end character.
Initialize USART1 and enable the character match ISR:
void HAL_UART_MspInit(UART_HandleTypeDef* uartHandle) {
    GPIO_InitTypeDef GPIO_InitStruct = {0};
    if(uartHandle->Instance==USART1) {
        /* USART1 clock enable */
        __HAL_RCC_USART1_CLK_ENABLE();
        __HAL_RCC_GPIOA_CLK_ENABLE();
        GPIO_InitStruct.Pin = GPIO_PIN_9|GPIO_PIN_10;
        GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
        GPIO_InitStruct.Pull = GPIO_NOPULL;
        GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_VERY_HIGH;
        GPIO_InitStruct.Alternate = GPIO_AF7_USART1;
        HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
        /* USART1 interrupt Init */
        HAL_NVIC_SetPriority(USART1_IRQn, 5, 0);
        HAL_NVIC_EnableIRQ(USART1_IRQn);
        /* USER CODE BEGIN USART1_MspInit 1 */
        USART1->CR2 |= 0x0D000000;   // match character '\r' (0x0D) in the CR2 ADD[7:0] field
        __HAL_UART_ENABLE_IT(&huart1,UART_IT_CM);
    }
}
USART1 ISR handler:
void USART1_IRQHandler(void) {
    if (USART1->ISR & USART_ISR_CMF) {
        data = USART1->RDR;
        SET_BIT(USART1->ICR,USART_ICR_CMCF);
    }
    HAL_UART_IRQHandler(&huart1);
}
Right now the character match ISR works fine, but I have no idea how to implement the DMA / FIFO support.
BTW:
I was very surprised that the device doesn't have a USART hardware FIFO. Is using DMA to reproduce the FIFO a common approach?
The point of DMA is to not involve the CPU in every byte being transferred. If your ISR is called for every byte, the CPU gets involved anyway, so enabling DMA at the same time (if that is even possible) won't yield any performance benefit. Get rid of one of the two: the per-byte interrupts or the DMA. If you definitely want to check for a particular character as it arrives, DMA will not help.
Another popular approach for detecting the end of arbitrary-length input along with DMA is the USART idle interrupt. It is triggered when one byte time (the time required to transfer one byte at the current baud rate) elapses without any transfer. In that interrupt you can copy the DMA buffer contents to another memory location, reinitialize the DMA for future input and leave, or you can process the input then and there; you can do whatever you want in the idle ISR as long as it completes quickly.
If your input has long continuous runs of data, the idle interrupt will only trigger after a long time and you might have overwritten your buffer by then; the DMA half-complete and transfer-complete interrupts can be used to handle that case. I personally found this method to be buggy during stress testing, although there is no fundamental reason for it to be; I didn't get enough time to debug it when I tried, but you will find articles online about this technique.
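For reference, recent HAL versions wrap this idle-line-plus-DMA pattern in HAL_UARTEx_ReceiveToIdle_DMA (assuming your HAL version provides it). A rough sketch using the USART1 names from the question; the buffer, its size, StartUartRx and the hdma_usart1_rx handle name are illustrative assumptions:

#define RX_DMA_BUF_LEN 64                 /* illustrative maximum message length */
static uint8_t rxDmaBuf[RX_DMA_BUF_LEN];  /* DMA target buffer */

/* Start reception: the callback fires on idle line, buffer full, or half transfer */
void StartUartRx(void)
{
    HAL_UARTEx_ReceiveToIdle_DMA(&huart1, rxDmaBuf, RX_DMA_BUF_LEN);
    __HAL_DMA_DISABLE_IT(&hdma_usart1_rx, DMA_IT_HT);   /* optional: drop half-transfer events */
}

/* Size is the number of bytes received so far when the event occurred */
void HAL_UARTEx_RxEventCallback(UART_HandleTypeDef *huart, uint16_t Size)
{
    if (huart->Instance == USART1)
    {
        /* copy/parse rxDmaBuf[0..Size-1] here (look for the trailing '\r'), then re-arm */
        HAL_UARTEx_ReceiveToIdle_DMA(&huart1, rxDmaBuf, RX_DMA_BUF_LEN);
        __HAL_DMA_DISABLE_IT(&hdma_usart1_rx, DMA_IT_HT);
    }
}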

Problem related to programing STM32 microcontroller with CAN bus

I am new to STM32 microcontrollers and the CAN bus communication protocol, and I am working on programming an STM32F103xx microcontroller.
I want to use the CAN bus to transmit data to another microcontroller from the same family.
I set up all the necessary settings, but when debugging, the code gets stuck in the loop polling the transmit-pending function and never transmits.
I want the data to be transmitted but it is not.
I don't believe I have a problem with my hardware.
PS: I have tried both normal mode and LOOPBACK mode for the CAN handle, and neither worked.
int main(void)
{
    HAL_Init();
    SystemClock_Config();

    uint32_t BUTTON_0;
    uint32_t BUTTON_1;
    uint8_t Data_0[5] = "aaaaa";
    uint8_t Data_1[5] = "ZZZZZ";

    MX_GPIO_Init();
    MX_CAN_Init();

    if(HAL_CAN_Init(&hcan) != HAL_OK){
        Error_Handler();
    }
    if(HAL_CAN_Start(&hcan) != HAL_OK){
        Error_Handler();
    }

    while (1)
    {
        TxHeader.DLC = 5;
        TxHeader.StdId = 0x65D;
        TxHeader.IDE = CAN_ID_STD;
        TxHeader.RTR = CAN_RTR_DATA;

        BUTTON_0 = HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_0);
        BUTTON_1 = HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_1);

        if (BUTTON_0 == 0U){
            if (HAL_CAN_AddTxMessage(&hcan, &TxHeader, Data_0, &TxMailbox) != HAL_OK ){
                Error_Handler();
            }
        }
        if (BUTTON_1 == 0U){
            if (HAL_CAN_AddTxMessage(&hcan, &TxHeader, Data_1, &TxMailbox) != HAL_OK){
                Error_Handler();
            }
        }

        while (HAL_CAN_IsTxMessagePending(&hcan, TxMailbox));

        if (BUTTON_0 && BUTTON_1 == 0U){
            printf("Please Press a Button");
        }
    }
}
You are using the STM32CubeF1 HAL libraries (probably through STM32CubeMX). Please check the corresponding User Manual - section 9.2.1 recommends the following procedure:
1. Initialize the CAN low level resources by implementing HAL_CAN_MspInit():
   Enable the CAN interface clock using __HAL_RCC_CANx_CLK_ENABLE()
   Configure CAN pins
   Enable the clock for the CAN GPIOs
   Configure CAN pins as alternate function open-drain
   In case of using interrupts [...]
2. Initialize the CAN peripheral using the HAL_CAN_Init() function.
   This function resorts to HAL_CAN_MspInit() for low-level initialization.
3. Configure the reception filters using the following configuration function:
   HAL_CAN_ConfigFilter()
4. Start the CAN module using the HAL_CAN_Start() function.
   At this level the node is active on the bus:
   it can receive messages, and can send messages.
5. To manage message transmission, the following Tx control functions can be used:
   HAL_CAN_AddTxMessage() to request transmission of a new message.
   [...]
   HAL_CAN_IsTxMessagePending() to check if a message is pending in a Tx mailbox.
   [...]
6. When a message is received into the CAN Rx FIFOs,
   it can be retrieved using the HAL_CAN_GetRxMessage() function.
   The function HAL_CAN_GetRxFifoFillLevel() allows you to know how many Rx messages
   are stored in the Rx FIFO.
7. Calling the HAL_CAN_Stop() function stops the CAN module.
8. The deinitialization is achieved with the HAL_CAN_DeInit() function.
[...]
Polling mode operation / Transmission:
   Monitor the Tx mailboxes availability until at least one Tx mailbox is free,
   using HAL_CAN_GetTxMailboxesFreeLevel().
   Then request transmission of a message using HAL_CAN_AddTxMessage().
Your code sample doesn't show the sub-functions called from main(), so you have to check yourself :-) that
   the CAN/GPIO clocks are enabled before the corresponding registers are assigned, and
   the GPIO pins are configured as recommended.
Another thought - could it be that you have to check HAL_CAN_GetTxMailboxesFreeLevel() after starting the CAN, even before adding the first message for transmission?
Steps (2.), (4.), (5.) are already taken care of by your code, and
steps (3.), (6.), (7.), (8.) are not related to your problem (but only to reception / deinit).
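Putting the polling-mode transmission procedure from the manual into code, a rough sketch of the body of the while(1) loop, using the names from the question:

/* Wait for a free Tx mailbox before queueing the message, as the manual suggests */
while (HAL_CAN_GetTxMailboxesFreeLevel(&hcan) == 0U)
{
    /* all three mailboxes busy: wait here (or bail out after a timeout) */
}

if (HAL_CAN_AddTxMessage(&hcan, &TxHeader, Data_0, &TxMailbox) != HAL_OK)
{
    Error_Handler();
}

/* Optionally wait until this particular message has actually left its mailbox */
while (HAL_CAN_IsTxMessagePending(&hcan, TxMailbox))
{
    /* in normal mode this only completes once another node has ACKed the frame */
}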
If you don't want to do all the manual work yourself, you can also use one of the following tools as a starting point. Both are far from perfect (and some of our StackOverflow peers would not recommend them at all), but they often provide a basic structure with most of the relevant steps you need:
the firmware example collection (see the corresponding Application Note for details), and
the STM32CubeMX code generator.