Disabling SPI peripheral on STM32H7 between two transmissions?

I'm still doing SPI experiments between two Nucleo STM32H743 boards.
I've configured SPI in full-duplex mode, with CRC enabled and an SPI frequency of 25 MHz (so the slave can transmit without issue).
DSIZE is 8 bits and the FIFO threshold is 4.
On the master side, I'm sending 4 bytes, then waiting for 5 bytes from the slave. I know I could use half-duplex or simplex mode, but I want to understand what's going on in full-duplex mode.
// SPI1 register pointers (SPI1 base address 0x40013000 on STM32H7)
volatile unsigned long *CR1   = (unsigned long *)0x40013000; // control register 1
volatile unsigned long *CR2   = (unsigned long *)0x40013004; // control register 2 (TSIZE)
volatile unsigned long *TXDR  = (unsigned long *)0x40013020; // transmit data register
volatile unsigned long *RXDR  = (unsigned long *)0x40013030; // receive data register
volatile unsigned long *SR    = (unsigned long *)0x40013014; // status register
volatile unsigned long *IFCR  = (unsigned long *)0x40013018; // interrupt/status flag clear register
volatile unsigned long *TXCRC = (unsigned long *)0x40013044; // TX CRC register
volatile unsigned long *RXCRC = (unsigned long *)0x40013048; // RX CRC register
volatile unsigned long *CFG2  = (unsigned long *)0x4001300C; // configuration register 2
unsigned long SPI_TransmitCommandFullDuplex(uint32_t Data)
{
    // size of transfer (TSIZE = 4 bytes)
    *CR2 = 4;
    /* Enable SPI peripheral */
    *CR1 |= SPI_CR1_SPE;
    /* Master transfer start */
    *CR1 |= SPI_CR1_CSTART;
    // one 32-bit write to TXDR pushes all 4 data frames into the TX FIFO
    *TXDR = Data;
    while ( ((*SR) & SPI_FLAG_EOT) == 0 );
    // clear flags
    *IFCR = 0xFFFFFFFF;
    // disable SPI
    *CR1 &= ~SPI_CR1_SPE;
    return 0;
}
void SPI_ReceiveResponseFullDuplex(uint8_t *pData)
{
    // size of transfer (TSIZE = 5 bytes)
    *CR2 = 5;
    /* Enable SPI peripheral */
    *CR1 |= SPI_CR1_SPE;
    /* Master transfer start */
    *CR1 |= SPI_CR1_CSTART;
    // clock out 5 dummy bytes: one 32-bit write (4 frames) plus one 8-bit write
    *TXDR = 0;
    *((volatile uint8_t *)TXDR) = 0;
    while ( ((*SR) & SPI_FLAG_EOT) == 0 );
    // read the 5 received bytes: one 32-bit read plus one 8-bit read
    *((uint32_t *)pData) = *RXDR;
    *((uint8_t *)(pData + 4)) = *((volatile uint8_t *)RXDR);
    // clear flags
    *IFCR = 0xFFFFFFFF;
    // disable SPI
    *CR1 &= ~SPI_CR1_SPE;
}
This is working fine (both functions are just called in sequence in main).
Then I tried to remove the SPI disabling between the two steps (i.e. I don't clear and set the SPE bit again) and I got stuck in the while loop in SPI_ReceiveResponseFullDuplex.
Is it necessary to disable SPI between two transmissions, or did I make a mistake in the configuration?
The behaviour of the SPE bit is not very clear in the reference manual. For example, it is clearly written that, in half-duplex mode, the SPI has to be disabled to change the direction of communication. But there is nothing about full-duplex mode (or I missed it).

This errata item might be relevant here.
Master data transfer stall at system clock much faster than SCK
Description
With the system clock (spi_pclk) substantially faster than SCK (spi_ker_ck divided by a prescaler), SPI/I2S master data transfer can stall upon setting the CSTART bit within one SCK cycle after the EOT event (EOT flag raise) signaling the end of the previous transfer.
Workaround
Apply one of the following measures:
• Disable then enable SPI/I2S after each EOT event.
• Upon EOT event, wait for at least one SCK cycle before setting CSTART.
• Prevent EOT events from occurring, by setting the transfer size to undefined (TSIZE = 0) and by triggering transmission exclusively by TXFIFO writes.
Your SCK frequency is 25 MHz; the system clock can be 400 or 480 MHz, i.e. 16 or 19 times SCK. When you remove the lines clearing and setting SPE, only these lines remain in effect after EOT is detected:
*IFCR = 0xFFFFFFFF;
*CR2 = 5;
*CR1 |= SPI_CR1_CSTART;
When this sequence (quite probably) takes fewer than 16 system clock cycles, you get the problem described above. It looks like someone did sloppy work on the SPI clock system again. What you did first, clearing and setting SPE, is one of the recommended workarounds.
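If you'd rather keep SPE set between transfers, the second workaround also fits: after EOT, wait at least one SCK cycle before setting CSTART again. A rough sketch of what that could look like with the register pointers from the question (the loop count is an assumption; size it so the delay comfortably exceeds one 25 MHz SCK period at your system clock):
// after EOT: clear flags, then wait before restarting (errata workaround 2)
*IFCR = 0xFFFFFFFF;
// crude busy-wait; each iteration takes several CPU cycles, so this
// comfortably covers one SCK period (40 ns) even at 480 MHz
for (volatile int i = 0; i < 20; i++);
*CR2 = 5;
*CR1 |= SPI_CR1_CSTART;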
I would just set TSIZE = 9 at the start, then write the command and the dummy bytes in one go; it makes no difference in full-duplex mode. A sketch of this follows below.
Keep in mind that in full-duplex mode, another 4 bytes are received which must be read and discarded before getting the real answer. This was not a problem with your original code, because clearing SPE discards data still in the receive FIFO, but it would become one if the modified code worked, e.g. if there were some more delay before enabling CSTART again.
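A minimal sketch of that single-transfer variant, reusing the register pointers from the question (untested; it assumes the 16-byte FIFO of SPI1 can take the whole 9-byte payload up front):
void SPI_CommandResponseFullDuplex(uint32_t Command, uint8_t *pResponse)
{
    volatile uint32_t discard;
    // one 9-byte transfer: 4 command bytes + 5 dummy bytes
    *CR2 = 9;
    *CR1 |= SPI_CR1_SPE;
    *CR1 |= SPI_CR1_CSTART;
    *TXDR = Command;                  // 4 command bytes
    *TXDR = 0;                        // 4 dummy bytes
    *((volatile uint8_t *)TXDR) = 0;  // 9th (dummy) byte
    while ( ((*SR) & SPI_FLAG_EOT) == 0 );
    // discard the 4 bytes clocked in while the command went out
    discard = *RXDR;
    (void)discard;
    // then read the 5-byte answer
    *((uint32_t *)pResponse) = *RXDR;
    pResponse[4] = *((volatile uint8_t *)RXDR);
    *IFCR = 0xFFFFFFFF;
    *CR1 &= ~SPI_CR1_SPE;
}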


Unreset GPIO Pins While System Resetting

Is it possible to keep some GPIO pins unreset while using the NVIC_SystemReset function on an STM32?
Thanks in advance for your answers.
Best regards.
I tried stepping into the NVIC_SystemReset function, but what happens inside it is not clear to me.
Also, this project is running on Keil.
Looks like it is not possible: NVIC_SystemReset issues a general reset of all subsystems.
But instead of a system reset, you can probably just reset all peripherals except the one you need to keep working, using the peripheral reset registers in the Reset and Clock Control module (RCC): RCC_AHB1RSTR, RCC_AHB2RSTR, RCC_APB1RSTR, RCC_APB2RSTR.
Example:
// Issue reset of SPI2, SPI3, I2C1, I2C2, I2C3 on the APB1 bus
RCC->APB1RSTR = RCC_APB1RSTR_I2C3RST | RCC_APB1RSTR_I2C2RST | RCC_APB1RSTR_I2C1RST
              | RCC_APB1RSTR_SPI3RST | RCC_APB1RSTR_SPI2RST;
__DMB();           // data memory barrier
RCC->APB1RSTR = 0; // deassert all reset signals
See the detailed information in the RCC register descriptions in the Reference Manual for your MCU.
You may also need to disable all interrupts in NVIC:
for (int i = 0 ; i < 8 ; i++) NVIC->ICER[i] = 0xFFFFFFFFUL; // disable all interrupts
for (int i = 0 ; i < 8 ; i++) NVIC->ICPR[i] = 0xFFFFFFFFUL; // clear all pending flags
If you want to restart the program, you can reload the stack pointer with its top value, located at offset 0 of the flash memory, and jump to the start address stored at offset 4. Note: flash memory is mapped starting at address 0x08000000 in the address space.
uint32_t stack_top = *((volatile uint32_t*)FLASH_BASE);
uint32_t entry_point = *((volatile uint32_t*)(FLASH_BASE + 4));
__set_MSP(stack_top); // set stack top
__ASM volatile ("BX %0" : : "r" (entry_point | 1) ); // jump to the entry point with bit 0 set (Thumb state)

two wire ADC, SPI read

I'm using an Arduino UNO (ATmega328) and a 12-bit ADC (AD7893ANZ-10); the datasheet is available at https://www.analog.com/media/en/technical-documentation/data-sheets/AD7893.pdf
The problem:
I have already tried a few different pieces of code, but the read value is always wrong; even when the SDATA pin of the ADC is not connected to the MISO pin of the Arduino, I get the same values (see figure 1 here). I simulated it in Proteus (see figure 2 here) and used the virtual serial monitor in Proteus. The MOSI and SS pins of the Arduino are not connected, but I set the SS pin LOW and HIGH in code to fulfill the library's requirements. More information about the ADC timing is included as comments in the code below, and is also available in the datasheet. I would be thankful if you took a look at it, because I can't figure out what I did wrong.
PS: The ADC has just two pins to communicate over SPI: 1. SDATA (slave out) and 2. SCLK. The CONVST pin on the ADC initiates a conversion.
#include <SPI.h>
//source of code https://www.gammon.com.au/spi
void setup() {
Serial.begin (115200);
pinMode(7, OUTPUT); // this pin is connected to convst on adc to initiate conversion
// Put SCK, MOSI, SS pins into output mode (instructions from the source)
// also put SCK, MOSI into LOW state, and SS into HIGH state.
// Then put SPI hardware into Master mode and turn SPI on
pinMode(SCK, OUTPUT);
pinMode(MOSI, OUTPUT);
pinMode(SS, OUTPUT);
digitalWrite(SS, HIGH);
digitalWrite(SCK, LOW);
digitalWrite(MOSI, LOW);
SPCR = (1<<MSTR);
SPI.begin ();
SPI.beginTransaction(SPISettings(2000000, MSBFIRST, SPI_MODE1)); // set the clk frequency; take the MSB first;
//mode1 for CPOL=0 and CPHA=1 according to datasheet p.9 "CPOL bit set to a logic zero and its CPHA bit set to a logic one."
}
int transferAndWait (const byte what) // method to read data
{
  int a = SPI.transfer(what);  // send zero (MOSI not connected), read the first 8 bits and save them
  Serial.println (a);          // value shown in the serial monitor: -1
  a = (a << 8);                // shift the saved first 8 bits to the left
  Serial.println (a);          // value shown in the serial monitor: 255
  a = a | SPI.transfer(what);  // read and save the last 8 bits
  Serial.println (a);          // value shown in the serial monitor: -256
  delayMicroseconds (10);
  return a;
}
void loop() {
  int k;
  digitalWrite(7, HIGH);    // set pin high to initiate the conversion
  delayMicroseconds(9);     // the conversion time needed, actually 6 microseconds
  digitalWrite(SS, LOW);    // enable Slave Select to get the library working
  k = transferAndWait (0);  // call the method
  delayMicroseconds(1);
  digitalWrite(7, LOW);
  digitalWrite(SS, HIGH);   // disable chip select
  delay(2000);              // delay just to keep the overview in the serial monitor
  Serial.println (k);       // value shown in the serial monitor: -1
}
First, the return variable should be an unsigned int instead of a signed int.
Also, CONVST should only be low for a short period, as the conversion is started afterwards. See the timing sequence in the datasheet.
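A sketch of the read routine with those two fixes applied, keeping the pin numbers and timing from the question (readSample is a hypothetical helper name; verify the CONVST pulse polarity against the datasheet):
unsigned int transferAndWait (const byte what)
{
  // unsigned, so the 12-bit result is not misread as a negative number
  unsigned int a = SPI.transfer(what);  // first 8 bits
  a = (a << 8) | SPI.transfer(what);    // last 8 bits
  return a;
}
void readSample ()
{
  digitalWrite(7, LOW);        // short low pulse on CONVST...
  delayMicroseconds(1);
  digitalWrite(7, HIGH);       // ...conversion starts afterwards
  delayMicroseconds(9);        // wait out the ~6 us conversion time
  digitalWrite(SS, LOW);
  unsigned int k = transferAndWait (0);
  digitalWrite(SS, HIGH);
  Serial.println (k);
}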

Is sampling data missing or incorrect in the DMA memory buffer when using the ADC with DMA circular mode?

My goal is to sample a signal on an ADC channel, with DMA moving the data, on an STM32Fx board. I feed a square wave into the ADC channel. When using DMA mode, some of the data is out of order or garbled. The same result happens on an STM32F207 and an STM32F373 board.
(1) When I collect the converted data using the ADC EOC interrupt, the data array looks like a square wave. This is OK.
(2) I then tried DMA circular mode instead of the EOC IRQ, but the data array seems to be messed up; some data is missing or incorrect. It gets worse as the sampling rate increases. Below are my test results.
The picture shows my test result: EOC IRQ vs. DMA circular mode.
EOC IRQ vs. DMA at a 62.5 kHz sampling rate: the waveform in DMA mode became shorter.
The picture shows a much worse result for DMA at a 200 kHz sampling rate: the data in DMA mode is messed up, but it is consistent when using the ADC EOC IRQ.
<<<< ADC config >>>>
/* ADC Common Init */
ADC_CommonInitStructure.ADC_Mode = ADC_Mode_Independent;
ADC_CommonInitStructure.ADC_Prescaler = ADC_Prescaler_Div8;
ADC_CommonInitStructure.ADC_DMAAccessMode = ADC_DMAAccessMode_Disabled;
ADC_CommonInitStructure.ADC_TwoSamplingDelay = ADC_TwoSamplingDelay_5Cycles;
ADC_CommonInit(&ADC_CommonInitStructure);
/* ADC1 DeInit */
ADC_StructInit(&ADC_InitStructure);
/* Configure the ADC1 in continuous mode */
ADC_InitStructure.ADC_Resolution = ADC_Resolution_12b;
ADC_InitStructure.ADC_ScanConvMode = ENABLE;
ADC_InitStructure.ADC_ContinuousConvMode = ENABLE;
ADC_InitStructure.ADC_DataAlign = ADC_DataAlign_Right;
ADC_InitStructure.ADC_NbrOfConversion = 1;
ADC_Init(ADC1, &ADC_InitStructure);
/* ADC1 regular channels 6 configuration */
ADC_RegularChannelConfig(ADC1, ADC_Channel_6, 1, ADC_SampleTime_480Cycles);
ADC_EOCOnEachRegularChannelCmd(ADC1, ENABLE);
#ifdef __DMA_ENABLE__
/* Enable DMA request after last transfer (Single-ADC mode) */
ADC_DMARequestAfterLastTransferCmd(ADC1, ENABLE);
/* Enable ADC1 DMA since ADC1 is the Master*/
ADC_DMACmd(ADC1, ENABLE);
#else
ADC_ITConfig(ADC1, ADC_IT_EOC, ENABLE);
ADC_ITConfig(ADC1, ADC_IT_OVR, ENABLE);
NVIC_InitStructure.NVIC_IRQChannel = ADC_IRQn;
NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 0;
NVIC_InitStructure.NVIC_IRQChannelSubPriority = 1;
NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
NVIC_Init(&NVIC_InitStructure);
#endif
/* Enable ADC1 */
ADC_Cmd(ADC1, ENABLE);
<<<< DMA config >>>>
/* DMA2 clock enable */
RCC_AHB1PeriphClockCmd( RCC_AHB1Periph_DMA2, ENABLE );
/* DMA2 Stream0 config */
DMA_DeInit(DMA2_Stream0);
DMA_DoubleBufferModeConfig(DMA2_Stream0, (uint32_t)ADCValB, DMA_Memory_1);
DMA_DoubleBufferModeCmd(DMA2_Stream0, ENABLE);
DMA_InitStructure.DMA_Channel = DMA_Channel_0;
DMA_InitStructure.DMA_Memory0BaseAddr = (uint32_t)ADCValA;
DMA_InitStructure.DMA_PeripheralBaseAddr = ((uint32_t) ADC1) + 0x4C;
DMA_InitStructure.DMA_DIR = DMA_DIR_PeripheralToMemory;
DMA_InitStructure.DMA_BufferSize = NUM_OF_ADC; // 512 buffer size
DMA_InitStructure.DMA_PeripheralInc = DMA_PeripheralInc_Disable;
DMA_InitStructure.DMA_MemoryInc = DMA_MemoryInc_Enable;
DMA_InitStructure.DMA_PeripheralDataSize = DMA_PeripheralDataSize_HalfWord;
DMA_InitStructure.DMA_MemoryDataSize = DMA_MemoryDataSize_HalfWord;
DMA_InitStructure.DMA_Mode = DMA_Mode_Circular;
DMA_InitStructure.DMA_Priority = DMA_Priority_Medium;
DMA_InitStructure.DMA_FIFOMode = DMA_FIFOMode_Disable;
DMA_InitStructure.DMA_FIFOThreshold = DMA_FIFOThreshold_HalfFull;
DMA_InitStructure.DMA_MemoryBurst = DMA_MemoryBurst_Single;
DMA_InitStructure.DMA_PeripheralBurst = DMA_PeripheralBurst_Single;
DMA_Init(DMA2_Stream0, &DMA_InitStructure);
DMA_ITConfig(DMA2_Stream0, DMA_IT_TC, ENABLE);
NVIC_InitStructure.NVIC_IRQChannel = DMA2_Stream0_IRQn;
NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 0;
NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0;
NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
NVIC_Init(&NVIC_InitStructure);
/* DMA2 Stream0 enable */
DMA_Cmd(DMA2_Stream0, ENABLE);
Finally, I expect the result to be the same as when using the ADC EOC IRQ.
I have encountered the same problem. As part of my more complicated application, the ADC continuously reads a PWM input signal and is configured with DMA requests in circular mode.
First, I stored the converted data in a memory buffer on each End Of Conversion interrupt. This works well, so I am confident that the ADC is converting the data correctly.
Second, I started the ADC with DMA requests in circular mode (on each End Of Conversion, the DMA fetches the converted ADC data and stores it in a memory buffer). At this stage, when I check the memory buffer via the debugger, the data looks messed up, as if the DMA were skipping values.
Third, I wanted to verify whether it was a DMA problem or a debugger problem, so I started the ADC with DMA requests in "normal" mode. All of a sudden, when opening the debugger, the memory buffer data was stored correctly.
To summarize, the main difference between the second and third methods when opening the debugger is that in circular mode the DMA is still running, so the debugger cannot display the memory buffer correctly given the speed of the ADC conversion time / DMA requests (~1 µs), whereas in normal mode the DMA stops once it has filled the buffer.
To conclude, the DMA works fine, and you can output the square wave on the DAC using your buffer. If you want to view the data correctly in the debugger, you need to stop the DMA (after a period of time, for example):
HAL_ADC_Stop_DMA(&hadc);
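For example, a minimal sketch of that idea using the HAL conversion-complete callback (the flag name is a placeholder):
volatile uint8_t adc_buffer_ready = 0;
// called by the HAL once the DMA has filled the whole buffer
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    HAL_ADC_Stop_DMA(hadc);  // freeze the buffer so the debugger shows a stable snapshot
    adc_buffer_ready = 1;    // the main loop can now inspect the buffer safely
}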
Based on the tabular view of the data you provided, it appears that the DMA is configured properly, as the data looks nearly identical to the data obtained from the ADC EOC IRQ.
The main difference between relying on DMA and an IRQ is that there may be unexpected bus "collisions" between the DMA controller and the CPU: unlike with an IRQ, both run concurrently, potentially resulting in wait states.
From the STM32 Reference Manual Section 13.4:
The DMA controller performs direct memory transfer by sharing the system bus with the Cortex-M4 ® F core. The DMA request may stop the CPU access to the system bus for some bus cycles, when the CPU and DMA are targeting the same destination (memory or peripheral).
Your observed sampling-rate-dependent degradation in performance corroborates this hypothesis, as the bus matrix must arbitrate increasingly frequent accesses between the DMA controller and the CPU.
Without seeing the rest of your code that sets up and reads from the buffer, it is hard to say what aspect of your application code may be causing this issue.

STM32 SPI data is sent the reverse way

I've been experimenting with writing to an external EEPROM over SPI and I've had mixed success. The data does get shifted out, but in the opposite order. The EEPROM requires a start bit and then an opcode, which is essentially a 2-bit code for read, write and erase. The start bit and the opcode are combined into one byte. I'm creating a 32-bit unsigned int and bit-shifting the values into it. When I transmit it, I see the actual data first, then the SB+opcode, and then the memory address. How do I reverse this so that the opcode comes first, then the memory address, and then the actual data? As seen in the image below, the data is BCDE, the SB+opcode is 07 and the memory address is 3F. The correct sequence should be 07, 3F and then BCDE (I think!).
Here is the code:
uint8_t mem_addr = 0x3F;
uint16_t data = 0xBCDE;
uint32_t write_package = (ERASE << 24 | mem_addr << 16 | data);
while (1)
{
/* USER CODE END WHILE */
/* USER CODE BEGIN 3 */
HAL_SPI_Transmit(&hspi1, &write_package, 2, HAL_MAX_DELAY);
HAL_Delay(10);
}
/* USER CODE END 3 */
It looks like your SPI interface is set up to process 16-bit halfwords at a time. Therefore it would make sense to break the data to be sent into 16-bit halfwords too. That takes care of the ordering.
uint8_t mem_addr = 0x3F;
uint16_t data = 0xBCDE;
uint16_t write_package[2] = {
(ERASE << 8) | mem_addr,
data
};
HAL_SPI_Transmit(&hspi1, (uint8_t *)write_package, 2, HAL_MAX_DELAY);
EDIT
Added an explicit cast. As noted in the comments, without the explicit cast it wouldn't compile as C++ code, and it would cause warnings as C code.
You're packing your information into a 32-bit integer; on line 3 of your code you decide which bits of data are placed where in the word. To change the order, you can replace that line with:
uint32_t write_package = ((data << 16) | (mem_addr << 8) | (ERASE));
That shifts data 16 bits left into the most significant 16 bits of the word, shifts mem_addr up by 8 bits and ORs it in, and then puts ERASE in the least significant bits.
Your problem is endianness.
By default the STM32 uses little-endian, so the lowest byte of the uint32_t is stored at the first address.
If I'm right, this is the declaration of the transmit function you are using:
HAL_StatusTypeDef HAL_SPI_Transmit(SPI_HandleTypeDef *hspi, uint8_t *pData, uint16_t Size, uint32_t Timeout)
It requires a pointer to uint8_t as data (not a uint32_t), so you should get at least a warning when compiling your code.
If you want to write code that is independent of the endianness used, you should store your data in an array instead of one "big" variable.
uint8_t write_package[4];
write_package[0] = ERASE;
write_package[1] = mem_addr;
write_package[2] = (data >> 8) & 0xFF;
write_package[3] = (data & 0xFF);
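Transmitting the array could then look like this (a sketch; it assumes the SPI data size is configured as 8 bits):
// the 4 bytes leave the wire in array order: ERASE, mem_addr, data high, data low
HAL_SPI_Transmit(&hspi1, write_package, 4, HAL_MAX_DELAY);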

Cortex-M0 SysTick interrupt doesn't happen

I am writing firmware for an STM32F072.
The problem is that the SysTick interrupt doesn't happen.
Here is the simple code configuring SysTick:
SysTick_Config(1000);
This function is taken from CMSIS's core_cm0.h file:
__STATIC_INLINE uint32_t SysTick_Config(uint32_t ticks)
{
  if ((ticks - 1) > SysTick_LOAD_RELOAD_Msk) return (1);      /* Reload value impossible */
  SysTick->LOAD = (ticks & SysTick_LOAD_RELOAD_Msk) - 1;      /* set reload register */
  NVIC_SetPriority (SysTick_IRQn, (1<<__NVIC_PRIO_BITS) - 1); /* set Priority for Systick Interrupt */
  SysTick->VAL = 0;                                           /* Load the SysTick Counter Value */
  SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |
                  SysTick_CTRL_TICKINT_Msk |
                  SysTick_CTRL_ENABLE_Msk;                    /* Enable SysTick IRQ and SysTick Timer */
  return (0);                                                 /* Function successful */
}
The system timer counts as expected.
The COUNTFLAG bit in SysTick->CTRL gets set to 1, but no interrupt happens! The firmware never jumps to SysTick_Handler().
What am I missing? This code is enough for STM32F1 and STM32F4 devices, but it doesn't work on the STM32F0.
I recommend taking a look at the Code Snippets from ST. These are low-level programs for the F0 (and L0) families. Some of them use the SysTick (e.g. the first two examples from the CLOCK CONTROLLER projects) and everything is preconfigured, so hopefully they work on your board too. They were originally written for the STM32F072 Discovery board; I used them on my custom board with some tiny modifications.
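For reference, the bare minimum such a setup boils down to is roughly this (a sketch, assuming the CMSIS device header and a correctly set SystemCoreClock; note that the handler must be named exactly SysTick_Handler so it matches the vector table entry, otherwise the default handler runs and the interrupt appears to never happen):
#include "stm32f0xx.h"  // CMSIS device header (assumed)
volatile uint32_t tick = 0;
// name must match the vector table entry exactly (beware C++ name mangling)
void SysTick_Handler(void)
{
  tick++;
}
int main(void)
{
  SysTick_Config(SystemCoreClock / 1000);  // 1 ms tick
  while (1)
  {
    // application code
  }
}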