I'm using the STM32F205's SPI1 in master mode, and the RXNE flag is never set during transactions. Here's part of the SPI initialization:
SPI1->CR1 = SPI_CR1_MSTR | SPI_CR1_SSI | SPI_CR1_SSM | SPI_CR1_SPE;
SPI1->CR2 = 0;
Then I try to transmit/receive a single byte:
while(!(SPI1->SR & SPI_FLAG_TXE)) {} // wait for completion of the previous Tx
SPI1->DR = 0xAB; // transmit some byte
while(!(SPI1->SR & SPI_FLAG_RXNE)) {} // wait for byte to be received
uint8_t result = SPI1->DR;
This code gets stuck waiting for the RXNE flag. I tried waiting for the busy flag BSY = 0 instead of RXNE = 1 and the SPI began to work. It seems RXNE is never set.
I guess your SPI communication is 8-bit? (uint8_t result = SPI1->DR). On the STM32F051 I had the same problem: the RXNE flag was not set after receiving 1 byte. The problem is that the STM32 triggers this flag after 16 bits by default.
On the STM32F0 I found a solution (it will probably be the same on the F2):
The RXNE flag is set depending on the FRXTH bit value in the SPIx_CR2 register: If FRXTH is set, RXNE goes high and stays high until the RXFIFO level is greater than or equal to 1/4 (8-bit). If FRXTH is cleared (default), RXNE goes high and stays high until the RXFIFO level is greater than or equal to 1/2 (16-bit).
To set this bit you can use the standard peripheral library function:
SPI_RxFIFOThresholdConfig(SPIx, SPI_RxFIFOThreshold_QF);
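If you are writing the registers directly (as in the question), the equivalent is setting the FRXTH bit in CR2. A minimal sketch, noting that FRXTH only exists on SPIs that have an RX FIFO (F0/F3/L4-style parts), so check the SPI_CR2 description in the reference manual for your device:
// Make RXNE fire as soon as one byte (8 bits) sits in the RX FIFO.
// Do this while configuring CR2, before relying on RXNE in 8-bit transfers.
SPI1->CR2 |= SPI_CR2_FRXTH;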
Hope it helps,
Best wishes from Poland
Suggestion only:
SPI1->CR1 = SPI_CR1_MSTR | SPI_CR1_SSI | SPI_CR1_SSM | SPI_CR1_SPE;
SPI1->CR2 = 0;
You are enabling SPI before writing to the CR2 register. Reverse the above, OR enable SPI only after writing to CR2.
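A minimal sketch of the suggested ordering, using the same flags as in the question and setting SPE last:
SPI1->CR2 = 0;                                        // configure CR2 while the SPI is still disabled
SPI1->CR1 = SPI_CR1_MSTR | SPI_CR1_SSI | SPI_CR1_SSM; // configure the mode
SPI1->CR1 |= SPI_CR1_SPE;                             // enable the SPI last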
I had the same problem with STM32L072CZ.
My solution was to enable the SPI_CR2_SSOE bit.
Note that I am still not using the SPI1 slave select pin for selecting the slave; I am using a GPIO pin controlled in software for that.
SPI1->CR1 = SPI_CR1_MSTR | SPI_CR1_BR | SPI_CR1_SSM;
SPI1->CR2 = SPI_CR2_SSOE |SPI_CR2_RXNEIE;
SPI1->CR1 |= SPI_CR1_SPE;
This seems to be a somewhat common problem, but I have been unsuccessful with any of the solutions I have found online. Specifically, I am trying to transmit a 1024-byte buffer (a full 128x64 px image) to an SSD1306 display via I2C/DMA and the HAL generated in CubeIDE. I am using an STM32L432 Nucleo board. I have no problem transmitting the buffer without DMA using HAL_I2C_Mem_Write.
Based on other questions I have seen, the problem lies in the fact that the DMA finishes while the I2C bus is still working on the transmit. I just don't know how to remedy this, and the examples given usually don't use the HAL (unfortunately, despite my efforts I am not quite competent enough to correctly apply them to the HAL myself, I guess). I have tried using the interrupts for I2C and DMA with no luck; only about the first 254 bytes get transferred (just shy of two rows showing on the screen).
Here is my code for sending the buffer:
static void ssd1306_WriteMData_DMA(const uint8_t *data, uint16_t size)
{
    while(HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY);
    HAL_I2C_Mem_Write_DMA(&hi2c1, I2C_ADDR, SSD1306_REG_MDAT, 1, (uint8_t*)data, size);
}
And here is the code for each interrupt handler:
void I2C1_EV_IRQHandler(void)
{
    /* USER CODE BEGIN I2C1_EV_IRQn 0 */
    if(I2C1->ISR & I2C_ISR_TCR){
        I2C1->CR2 |= (I2C_CR2_STOP);   // stop i2c
        I2C1->ICR |= (I2C_ICR_STOPCF); // clear the STOP flag via ICR
        DMA1->IFCR |= DMA_IFCR_CTCIF6;      // clear the transfer-complete flag
        DMA1_Channel6->CCR &= ~DMA_CCR_EN;  // stop DMA
    }
    /* USER CODE END I2C1_EV_IRQn 0 */
    //HAL_I2C_EV_IRQHandler(&hi2c1);
    /* USER CODE BEGIN I2C1_EV_IRQn 1 */
    /* USER CODE END I2C1_EV_IRQn 1 */
}
void DMA1_Channel6_IRQHandler(void)
{
    /* USER CODE BEGIN DMA1_Channel6_IRQn 0 */
    DMA1->IFCR |= DMA_IFCR_CTCIF6;      // clear the transfer-complete flag
    DMA1_Channel6->CCR &= ~DMA_CCR_EN;  // stop DMA
    /* USER CODE END DMA1_Channel6_IRQn 0 */
    HAL_DMA_IRQHandler(&hdma_i2c1_tx);
    /* USER CODE BEGIN DMA1_Channel6_IRQn 1 */
    /* USER CODE END DMA1_Channel6_IRQn 1 */
}
I think that is all the pertinent code; let me know if there is something else I am missing. All of the peripheral initialization code was done through CubeMX, but I can post that, or the settings, if need be. I feel like it is something really simple that I'm missing, but this is a bit over my head to be honest, so I don't quite grasp exactly what's going on...
Thanks for any help!
The problem is in your custom DMA1_Channel6_IRQHandler and I2C1_EV_IRQHandler. Those functions will be called right after I2C transfers 255 bytes, which is MAX_NBYTE_SIZE for NBYTES. HAL already has all the required interrupt routines inside stm32l4xx_hal_i2c.c, and HAL_I2C_Mem_Write_DMA() does the following:
Sets the I2C transfer IRQ handler to I2C_Master_ISR_DMA;
Checks if the data size is larger than 255 bytes and uses reload mode;
Sets the I2C DMA complete callback to I2C_DMAMasterTransmitCplt;
Starts DMA using HAL_DMA_Start_IT();
Configures the I2C registers using I2C_TransferConfig().
The HAL driver will handle all I2C+DMA interrupts using I2C_Master_ISR_DMA and I2C_DMAMasterTransmitCplt:
I2C_DMAMasterTransmitCplt will restart DMA for each chunk of 255 (MAX_NBYTE_SIZE) or fewer bytes.
I2C_Master_ISR_DMA will reset the RELOAD/NBYTES registers using I2C_TransferConfig.
For the last block of data, I2C_AUTOEND_MODE is used.
So all you need to do is:
remove the "user code" from the DMA1_Channel6_IRQHandler and I2C1_EV_IRQHandler functions
enable the I2C1 event interrupt in the STM32 Device Configuration Tool
configure the DMA with data width byte/byte
perform a single call of HAL_I2C_Mem_Write_DMA(...) to start the transfer
check for HAL_I2C_STATE_READY before the next transfer (a minimal sketch follows below)
See HAL_I2C_Mem_Write_DMA, I2C_Master_ISR_DMA and I2C_DMAMasterTransmitCplt source code in stm32l4xx_hal_i2c.c to understand how it works.
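A minimal sketch of what is left once the custom ISR code is removed (names taken from the question; return values should be checked in real code):
static void ssd1306_WriteMData_DMA(const uint8_t *data, uint16_t size)
{
    // Wait until the previous transfer (all of its 255-byte reload chunks) has finished.
    while (HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY) { }

    // One call is enough: the HAL splits the transfer into NBYTES/RELOAD chunks
    // internally and restarts the DMA for each chunk.
    HAL_I2C_Mem_Write_DMA(&hi2c1, I2C_ADDR, SSD1306_REG_MDAT, 1, (uint8_t*)data, size);
}

void I2C1_EV_IRQHandler(void)
{
    HAL_I2C_EV_IRQHandler(&hi2c1);      // let the HAL handle TCR/TC/STOP
}

void DMA1_Channel6_IRQHandler(void)
{
    HAL_DMA_IRQHandler(&hdma_i2c1_tx);  // let the HAL handle the DMA flags
}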
About why DMA finishes while I2C is still working: the HAL driver sends I2C data over DMA in 255-byte chunks; for each chunk it stops the DMA, starts the DMA, clears I2C_CR2 NBYTES/RELOAD, and enables the DMA. The DMA may be run continuously using DMA_CIRCULAR mode, but currently that is not implemented in the HAL I2C drivers. Here is an example of using I2C with DMA_CIRCULAR mode:
// DMA enabled single time
hi2c1.hdmatx->XferCpltCallback = MY_I2C_DMAMasterTransmitCplt;
HAL_DMA_Start_IT(hi2c1.hdmatx, (uint32_t)&i2cBuffer, (uint32_t)&hi2c1.Instance->TXDR, I2C_BUFFER_SIZE);
MY_I2C_TransferConfig(&hi2c1, (uint16_t)DAC_ADDR, 254, I2C_RELOAD_MODE, I2C_GENERATE_START_WRITE); // in first call using I2C_GENERATE_START_WRITE
uint32_t tmpisr = I2C_IT_TCI;
__HAL_I2C_ENABLE_IT(&hi2c1, tmpisr);
hi2c1.Instance->CR1 |= I2C_CR1_TXDMAEN;
You still need to clear I2C_CR2 NBYTES/RELOAD using MY_I2C_TransferConfig every 254 bytes (I do not use 255, to align interrupt firing to an even index in the array):
static HAL_StatusTypeDef MY_I2C_Master_ISR_DMA(struct __I2C_HandleTypeDef *hi2c, uint32_t ITFlags, uint32_t ITSources)
{
    if (__HAL_I2C_GET_FLAG(&hi2c1, I2C_FLAG_TCR) == SET)
    {
        MY_I2C_TransferConfig(&hi2c1, (uint16_t)DAC_ADDR, 254, I2C_RELOAD_MODE, I2C_NO_STARTSTOP); // in repeated calls using I2C_NO_STARTSTOP
    }
    return HAL_OK;
}
With this approach DMA circular buffer size is not limited to 255 bytes:
#define I2C_BUFFER_SIZE 1024
uint8_t i2cBuffer[I2C_BUFFER_SIZE];
Main.c should have a MY_I2C_TransferConfig() function, which is a copy-pasted version of the private function I2C_TransferConfig() from stm32l4xx_hal_i2c.c. On earlier STM32 microcontrollers there are no NBYTES/RELOAD fields and I2C_CR2 does not need to be updated this way.
Using DMA in circular mode allows you to achieve the highest frame rate; you just need to fill the DMA buffer in time using the XferHalfCpltCallback and XferCpltCallback callbacks. Frames may be copied from a larger buffer using memcpy() or a DMA MEMTOMEM transfer.
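As a sketch of that last point (frameBuf and frameIdx are assumptions, not part of the original code), the two callbacks could refill whichever half of the circular buffer has just gone out on the bus:
#include <string.h>

extern uint8_t frameBuf[];            // hypothetical larger source buffer
static uint32_t frameIdx;

static void MY_I2C_DMAHalfCplt(DMA_HandleTypeDef *hdma)
{
    // First half has been sent; refill it while the second half is streaming out.
    memcpy(&i2cBuffer[0], &frameBuf[frameIdx], I2C_BUFFER_SIZE / 2);
    frameIdx += I2C_BUFFER_SIZE / 2;
}

static void MY_I2C_DMAMasterTransmitCplt(DMA_HandleTypeDef *hdma)
{
    // Second half has been sent; refill it.
    memcpy(&i2cBuffer[I2C_BUFFER_SIZE / 2], &frameBuf[frameIdx], I2C_BUFFER_SIZE / 2);
    frameIdx += I2C_BUFFER_SIZE / 2;
}

// Registered once, next to the XferCpltCallback assignment shown above:
// hi2c1.hdmatx->XferHalfCpltCallback = MY_I2C_DMAHalfCplt;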
You haven't said which STM32 you are using. They have different bit definitions (because the I2C peripherals in the earlier released parts were rubbish) but it looks like you are using one of the later ones.
Basically you can find what you need in the bit definitions for the I2C registers in the reference manual. If you are setting stop before it has finished you need to look for a BUSY bit that gets cleared or BTF (byte transfer finished) bit that gets set when it is time for you to send stop.
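For example, on the older (F1/F4-style) I2C peripheral the end-of-transfer check before STOP might look like this; a sketch only, not tied to the HAL code above:
while (!(I2C1->SR1 & I2C_SR1_BTF)) { }  // wait until the last byte has fully shifted out
I2C1->CR1 |= I2C_CR1_STOP;              // only now is it safe to generate STOP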
I'm trying to toggle an LED at PC13 by toggling PC14. The problem is that the interrupt handler keeps being called without toggling PC14, and the pending interrupt is not cleared using the EXTI->PR register, nor cleared manually using the debugger. I also tried clearing it in NVIC->ICPR; I'm not sure why there are two registers for clearing the same interrupt.
Here is my code, and you can find the header at:
https://github.com/AymenSekhri/tinyHAL-STM32f103/tree/master/STM32F103-HAL/tinyHAL
/*
 * Description:
 *    Toggle LED at C13 whenever C14 goes from HIGH to LOW.
 *
 */
#include "tinyHAL/stm32f103_hal.h"

int main(){
    //Enable AFIO clock from RCC
    enablePeripheralClock(Peripheral_AFIO);
    //Enable and configure C13 & C14
    enablePeripheralClock(Peripheral_GPIOC);
    configureGPIO(Peripheral_GPIOC, 13, GPIO_MODE_OUT_50MHZ, GPIO_CONF_OUT_PUSHPULL);
    configureGPIO(Peripheral_GPIOC, 14, GPIO_MODE_IN, GPIO_CONF_IN_PUSHUP_PULLDOWN);
    //Link EXTI14 to C14
    AFIO->EXTICR[3] = (AFIO->EXTICR[3] & ~(0xF<<8)) | 2;
    //Configure interrupt at EXTI14 falling edge
    EXTI->FTSR |= 1<<14;
    //Unmask interrupt 40 (EXTI10-15)
    EXTI->IMR |= 1<<14;
    //Set priority of interrupt 40 (EXTI10-15)
    NVIC->IP[40] |= 0x10;
    //Enable interrupt 40 (EXTI10-15)
    NVIC->ISER[40>>5] |= (1 << (40&0x1F));
    while(1);
}

void EXTI15_10_IRQHandler(void){
    toggleGPIOBit(Peripheral_GPIOC, 13);
    if (EXTI->PR & (1 << 14)){
        EXTI->PR |= (1 << 14);
    }
    //NVIC->ICPR[40>>5] |= (1 << (40&0x1F));
    __COMPILER_BARRIER();
}
The best solution to get rid of the electronic noise at the pin that (over-)triggers your EXTI is to improve the hardware - but this is the software board, not the electronics one.
If you had a TIM channel connected to that pin, I would recommend using it to filter the incoming signal. But I think that PC14 doesn't have a timer channel.
The second-best solution (and this is where the workarounds already start!) is to use a timer (the TIM itself, not one of its channels), either to establish a periodic time base to sample the pin (by DMA or by an ISR, feeding the samples into software-based filtering, as sketched below) - or to deactivate the EXTI interrupt in the EXTI ISR, start the timer, and re-activate the EXTI interrupt when the timer expires.
Both of these µC-based approaches are clumsy and clearly inferior to developing good hardware. That is not to say that even with "good" hardware you shouldn't add some debouncing or noise protection in your software!
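A rough sketch of the "timer time base plus software filter" idea, assuming some TIM (TIM2 here, purely as an example) has been configured elsewhere for a 1 ms update interrupt:
void TIM2_IRQHandler(void)
{
    static uint8_t history = 0xFF;           // last 8 samples of PC14, one bit each
    TIM2->SR = ~TIM_SR_UIF;                  // clear the update interrupt flag

    history = (uint8_t)((history << 1) | ((GPIOC->IDR >> 14) & 1u));

    if (history == 0x80) {                   // was HIGH, then LOW for seven consecutive samples
        toggleGPIOBit(Peripheral_GPIOC, 13); // debounced falling edge on PC14
    }
}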
As #P__J__ suggests, add some debouncing logic. There are two methods for de-bouncing: using an RC filter, or using software de-bouncing logic.
Due to noise on the pin, the ISR is getting executed continuously.
You can check one more thing: try pulling the pin up/down and observe the behaviour. The ISR should not get executed if the logic level on the pin doesn't change.
Maybe it's because the TAMPER/RTC second output is redirected to PC13. I also faced the same problem before when using PC13 as an EXTI. Set BKP_RTCCR->ASOE to 1 to force the pin to be the alarm output, which will never trigger because I don't set any RTC alarm.
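A sketch of how that could be done on an F103-type part with direct register access (bit names from the F1 CMSIS headers; backup-domain write protection must be removed first):
RCC->APB1ENR |= RCC_APB1ENR_PWREN | RCC_APB1ENR_BKPEN; // clock the PWR and BKP blocks
PWR->CR      |= PWR_CR_DBP;                             // allow writes to the backup domain
BKP->RTCCR   |= BKP_RTCCR_ASOE;                         // alarm output instead of second/tamper on PC13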
I'm using a STM32L432 device with FreeRTOS and STM32CubeMX.
I am trying to implement M2M communication via USART based on an ASCII protocol. The protocol sequences can differ in length but have a maximum length and a defined end character ('\r' / 0x0D).
So I thought about collecting all RX USART data with DMA (like a FIFO) and using the address-match ISR (based on the USART_ICR_CMCF flag) to detect the end character.
Initialize USART1 and enable the address-match ISR:
void HAL_UART_MspInit(UART_HandleTypeDef* uartHandle) {
    GPIO_InitTypeDef GPIO_InitStruct = {0};
    if(uartHandle->Instance==USART1) {
        /* USART1 clock enable */
        __HAL_RCC_USART1_CLK_ENABLE();
        __HAL_RCC_GPIOA_CLK_ENABLE();

        GPIO_InitStruct.Pin = GPIO_PIN_9|GPIO_PIN_10;
        GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
        GPIO_InitStruct.Pull = GPIO_NOPULL;
        GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_VERY_HIGH;
        GPIO_InitStruct.Alternate = GPIO_AF7_USART1;
        HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);

        /* USART1 interrupt Init */
        HAL_NVIC_SetPriority(USART1_IRQn, 5, 0);
        HAL_NVIC_EnableIRQ(USART1_IRQn);
        /* USER CODE BEGIN USART1_MspInit 1 */
        USART1->CR2 |= 0x0D000000; // match address = '\r' (0x0D)
        __HAL_UART_ENABLE_IT(&huart1,UART_IT_CM);
    }
}
USART1 ISR handler:
void USART1_IRQHandler(void) {
    if (USART1->ISR & USART_ISR_CMF) {
        data = USART1->RDR;
        SET_BIT(USART1->ICR,USART_ICR_CMCF);
    }
    HAL_UART_IRQHandler(&huart1);
}
Right now the address-match ISR works fine, but I don't have an idea how to implement the DMA / FIFO support.
BTW:
I was very surprised that the device doesn't have a hardware USART FIFO. Is my idea of using DMA to reproduce the FIFO a common approach?
The point of DMA is to not involve the CPU in every byte being transferred. If your ISR is being called for every byte, then the CPU gets involved anyway, so simultaneously enabling DMA, if that is possible at all, won't yield any performance benefit. Get rid of one of the two - the per-byte interrupts or the DMA. If you most definitely want to check for a particular character as it arrives, then DMA would not help.
Another popular approach to detect the end of arbitrary-length input when using DMA is the USART idle interrupt. This interrupt is triggered when one byte time (the time required to transfer one byte at the current baud rate) elapses without any transfer. In this interrupt you can copy the DMA buffer contents to another memory location, then reinitialize the DMA for future input and leave. Or you can process the input then and there. You can do whatever you want in the idle ISR as long as the ISR completes execution quickly.
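A rough sketch of that idea with the HAL (buffer name and length are assumptions; the RX DMA channel is assumed to be configured as circular in CubeMX):
#define RX_BUF_LEN 128
static uint8_t rxBuf[RX_BUF_LEN];

void StartUartRx(void)
{
    __HAL_UART_ENABLE_IT(&huart1, UART_IT_IDLE);       // interrupt when the line goes idle
    HAL_UART_Receive_DMA(&huart1, rxBuf, RX_BUF_LEN);  // background reception into rxBuf
}

void USART1_IRQHandler(void)
{
    if (__HAL_UART_GET_FLAG(&huart1, UART_FLAG_IDLE))
    {
        __HAL_UART_CLEAR_IDLEFLAG(&huart1);
        // Bytes received so far = buffer length minus what the DMA still has to write.
        uint16_t received = RX_BUF_LEN - __HAL_DMA_GET_COUNTER(huart1.hdmarx);
        // ... copy/process rxBuf[0..received-1] here, or hand it off to a task ...
    }
    HAL_UART_IRQHandler(&huart1);
}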
If your input has large continuous runs of data, then the idle interrupt would trigger only after a long time, and you might have overwritten your buffer by then. You can use other DMA interrupts, like Half Complete and Full Complete, to handle this, so that can be taken care of too. I personally found this method to be buggy during stress testing, but there is no reason for it to be so; I didn't get enough time to debug it when I tried to use it, but you will find articles online about this technique.
I am using an STM32 to read the DS18B20 with the HAL library.
I think the init is correct, but the read and write are not.
Can anyone tell me why it is not right?
For the write, here is the code:
if ((data & (1 << i)) != 0)
{
    MX_GPIO_Set(1);
    HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);
    delay_ms(1);
    MX_GPIO_Set(0);
    delay_ms(60);
}
else
{
    MX_GPIO_Set(1);
    HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);
    delay_ms(60);
    MX_GPIO_Set(0);
}
This writes one bit of data.
And for the read, here is the code:
MX_GPIO_Set(1);
HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);
delay_ms(2);
MX_GPIO_Set(0);
if (HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_1) == GPIO_PIN_SET)
{
    value |= 1 << i;
}
delay_ms(60);
MX_GPIO_Set(1) means setting the GPIO to output.
Where am I going wrong?
Please do not tell me to use a library or code from GitHub. I want to write the code myself so I can understand the DS18B20.
The DS18B20 uses the One-Wire protocol.
https://en.wikipedia.org/wiki/1-Wire
Each bit takes about 60 microseconds to transmit.
1s are HIGH during most of the transmission and 0s are LOW during most of the transmission. The start of the next bit is indicated by a pulse.
One thing that stands out to me is that you're using delay_ms (milliseconds), when you likely want to be using delay_us (microseconds).
Also, you're relying on the bit's timing to be exact (which it probably won't be). Instead, base your timing on the pulse.
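For the write slot, just switching to microsecond delays gets you close to the datasheet timing; a rough sketch, assuming delay_us() and MX_GPIO_Set() exist as in your code:
void ow_write_bit(uint8_t bit)
{
    MX_GPIO_Set(1);                                        // drive the bus
    HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);  // pull the line low
    if (bit) {
        delay_us(6);        // short low pulse starts a '1' slot
        MX_GPIO_Set(0);     // release; the pull-up takes the line high
        delay_us(64);       // fill out the rest of the ~70 us slot
    } else {
        delay_us(60);       // keep the line low for most of the slot for a '0'
        MX_GPIO_Set(0);     // release
        delay_us(10);       // recovery time before the next slot
    }
}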
It's more complicated than that.
When reading, you need to be continually checking the pin's value and interpreting what it means rather than putting in delays and hoping that the timing matches up.
I have not tested this code and it's incomplete.
This is just to illustrate a technique.
To start off, we're going to set our output to LOW and wait for the sensor to go LOW for at least 200us. (Ideally 500us; 200us is our minimum requirement.) This is the "RESET" sequence that tells us that new data is about to start.
const int SleepIntervalMicroseconds = 5;

// Start off by setting our output to LOW (aka GPIO_PIN_RESET).
MX_GPIO_Set(1);
HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);
// Switch back to reading mode.
MX_GPIO_Set(0);

const int ResetRequiredMicroseconds = 200;
int pinState = GPIO_PIN_SET;
int resetElapsedMicroseconds = 0;
while (pinState != GPIO_PIN_RESET || resetElapsedMicroseconds < ResetRequiredMicroseconds) {
    pinState = HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_1);
    if (pinState != GPIO_PIN_RESET) {
        resetElapsedMicroseconds = 0;
        continue;
    }
    delay_us(SleepIntervalMicroseconds);
    // Note that the elapsed microseconds is really an estimate here.
    // The actual elapsed time will be 5us + the amount of time it took to execute the code.
    resetElapsedMicroseconds += SleepIntervalMicroseconds;
}
This only gets us started.
After you've received the reset signal, you need to indicate to the other side that you've received it by setting your value HIGH for a certain amount of time.
I'm unable to comment on your code, because important parts like the GPIO setup and the source for the functions called are missing. However, if the bit timing gives you trouble, you might try this.
Using a UART to Implement a 1-Wire Bus Master
Then you don't have to deal with delays and timings, other than calculating the UART baud rate.
All STM32 UARTs support one wire operation with an open drain GPIO pin. Connect the I/O pin of the device to an UART TX pin. Configure the pin as open-drain alternate function output, with the alternate function number for the UART if applicable. Enable the UART and set it to single wire operation in the control registers.
Set the UART baud rate to 7407 and send 0xF0; that's the reset pulse. Wait for RXNE and read the UART data register. If it's not 0xF0, then the device is answering with a presence pulse.
Set the UART baud rate to 133333, and you can start communicating. To send a 0 bit, write 0x00 to the UART data register. To send a 1 bit, write 0xFF to the UART data register. To receive a bit, write 0xFF, wait for RXNE, and read the data register. If the byte read is 0xFF, then it's a 1, otherwise (any other value read) it's a 0.
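A sketch of that technique as helper functions. Here ow_uart_set_baud() and ow_uart_exchange() are hypothetical wrappers you would implement yourself: one reprograms BRR, the other sends one frame on the single-wire UART and returns the frame read back:
#include <stdint.h>

extern void    ow_uart_set_baud(uint32_t baud);   // reprogram the UART baud rate
extern uint8_t ow_uart_exchange(uint8_t tx);      // send one frame, return the frame received back

// Reset / presence detect: one 0xF0 frame at ~9600 baud equivalent.
int ow_reset(void)
{
    ow_uart_set_baud(7407);
    uint8_t echo = ow_uart_exchange(0xF0);
    ow_uart_set_baud(133333);
    return echo != 0xF0;           // anything other than 0xF0 back = presence pulse seen
}

// One bit per UART frame at 133333 baud: 0x00 sends a 0, 0xFF sends a 1.
uint8_t ow_touch_bit(uint8_t bit)
{
    uint8_t echo = ow_uart_exchange(bit ? 0xFF : 0x00);
    return (echo == 0xFF) ? 1 : 0; // when sending 0xFF this samples the slave's response
}

uint8_t ow_read_bit(void)
{
    return ow_touch_bit(1);        // reading = write a 1 slot and look at the echo
}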
I am having a problem when sending a char from an STM32F411 to a PC: it reads as garbage. But when I do the opposite operation, the MCU correctly reads the char sent.
I perform the following actions:
Enable the GPIOA clock and configure pins 9 and 10 for their alternate function.
Enable USART1, leave default values for M (word length), stop bits, DMA.
Set USARTDIV to result in 9600 baud at 16 MHz (HSI) *
Configure the USART to send an idle frame as the first transmission.
* I have also tried with a 100 MHz APB2 bus frequency, with the same result.
Configuring USART
// 1. Enable USART
SET_BIT(USART1->CR1, USART_CR1_UE);
// 5. Select the desired baud rate in BRR
SET_BIT(USART1->BRR, 0x683); // USARTDIV
// 6. Set TE in CR1 to send an idle frame as first transmission
SET_BIT(USART1->CR1, USART_CR1_TE);
After that I am trying to receive a character in RealTerm 2.0 with the following configuration: 9600 8N1 None.
The character is sent by the following code:
void SendChar_USART(char pChar)
{
    // Transmitter 7, 8
    // 7. Write the data to send in the DR register (this clears TXE)
    USART1->DR = pChar;
    while(!READ_BIT(USART1->SR, USART_SR_TXE));
}
Update 1
Switching to USART2 with the exact same configuration solves the problem, and it is possible to receive text in the serial terminal; however, this leaves the question unanswered: why does USART1 not work as expected?
There is a capacitor on the way to the PA9 pin of the extension connector, filtering out the USART1 TX signal. Peter Harrison explains the issue very well, I think.
http://www.micromouseonline.com/2013/05/05/using-usart1-on-the-stm32f4discovery/