Currently, I'm facing a weird problem with an STM32. I just generated code with the STM32Cube IDE for the chosen MCU (STM32L031G6). I barely changed anything, except configuring one GPIO as an output and trying to blink a connected LED.
Now the problem:
If I run the code, nothing happens: no blinking at all.
Stepping through the code, the LED turns on once when the WritePin call is executed. One step further, the LED is off again, although no further WritePin call has executed, and it never comes back on.
What could be the problem with this code? There is nothing special about it. Did I miss something that is required for generated STM32 code?
In the following code I removed unused lines and comments.
#include "main.h"
void SystemClock_Config(void);
static void MX_GPIO_Init(void);
int main(void) {
    HAL_Init();
    SystemClock_Config();
    MX_GPIO_Init();
    while (1) {
        //HAL_GPIO_TogglePin(LED_GPIO_Port, LED_Pin);
        HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_SET);
        HAL_Delay(1000);
        HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_RESET);
        HAL_Delay(1000);
    }
}
void SystemClock_Config(void) {
    RCC_OscInitTypeDef RCC_OscInitStruct = {0};
    RCC_ClkInitTypeDef RCC_ClkInitStruct = {0};
    __HAL_PWR_VOLTAGESCALING_CONFIG(PWR_REGULATOR_VOLTAGE_SCALE1);
    RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_MSI;
    RCC_OscInitStruct.MSIState = RCC_MSI_ON;
    RCC_OscInitStruct.MSICalibrationValue = 0;
    RCC_OscInitStruct.MSIClockRange = RCC_MSIRANGE_6;
    RCC_OscInitStruct.PLL.PLLState = RCC_PLL_NONE;
    if (HAL_RCC_OscConfig(&RCC_OscInitStruct) != HAL_OK) {
        Error_Handler();
    }
    RCC_ClkInitStruct.ClockType = RCC_CLOCKTYPE_HCLK|RCC_CLOCKTYPE_SYSCLK
                                 |RCC_CLOCKTYPE_PCLK1|RCC_CLOCKTYPE_PCLK2;
    RCC_ClkInitStruct.SYSCLKSource = RCC_SYSCLKSOURCE_MSI;
    RCC_ClkInitStruct.AHBCLKDivider = RCC_SYSCLK_DIV1;
    RCC_ClkInitStruct.APB1CLKDivider = RCC_HCLK_DIV1;
    RCC_ClkInitStruct.APB2CLKDivider = RCC_HCLK_DIV1;
    if (HAL_RCC_ClockConfig(&RCC_ClkInitStruct, FLASH_LATENCY_0) != HAL_OK) {
        Error_Handler();
    }
}
static void MX_GPIO_Init(void) {
    GPIO_InitTypeDef GPIO_InitStruct = {0};
    __HAL_RCC_GPIOA_CLK_ENABLE();
    HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_RESET);
    GPIO_InitStruct.Pin = LED_Pin;
    GPIO_InitStruct.Mode = GPIO_MODE_OUTPUT_PP;
    GPIO_InitStruct.Pull = GPIO_NOPULL;
    GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
    HAL_GPIO_Init(LED_GPIO_Port, &GPIO_InitStruct);
}
void Error_Handler(void) {
    __disable_irq();
    while (1) {}
}
Update 1:
As seen in the comments, HAL_Delay is not working properly. But how do I fix it? And why doesn't the code make the LED flicker when HAL_Delay is removed?
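From what I understand, HAL_Delay() only advances if the SysTick interrupt fires and calls HAL_IncTick(), so a stalled tick would explain the hang. As a diagnostic (not a fix) I can replace it with a crude busy-wait; a minimal sketch, assuming the ~4 MHz MSI range configured above:
/* Diagnostic sketch only: HAL_Delay() relies on the SysTick interrupt
 * calling HAL_IncTick(). This crude busy-wait avoids SysTick entirely,
 * which helps separate a dead tick from a dead GPIO. The iteration count
 * is a rough guess for the ~4 MHz MSI range configured above. */
static void crude_delay_ms(uint32_t ms)
{
    while (ms--) {
        for (volatile uint32_t i = 0; i < 500; ++i) {
            __NOP();
        }
    }
}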
Update 2:
It is also not possible to use the loop in the following way; the LED does not turn on at all.
while (1) {
HAL_GPIO_TogglePin(LED_GPIO_Port, LED_Pin);
}
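As an additional sanity check I can bypass the HAL entirely and drive the pin at register level; a minimal sketch using the Cube-generated LED_GPIO_Port and LED_Pin defines from main.h:
/* Sketch: toggle the pin via the port registers to rule out the HAL layer.
 * The empty loop is only a crude visible pause. */
while (1) {
    LED_GPIO_Port->ODR ^= LED_Pin;                      /* toggle the output bit */
    for (volatile uint32_t i = 0; i < 100000; ++i) { }  /* crude pause */
}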
See the following images for the configuration.
Update 3:
When executing the code on the STM32L031G6, the debugger halts pretty soon after starting. Stepping through the code works (sometimes). Here is the debug log after clicking "Run" in the STM32Cube IDE.
SEGGER J-Link GDB Server V7.58 Command Line Version
JLinkARM.dll V7.58 (DLL compiled Nov 4 2021 16:23:13)
Command line: -port 2331 -s -device STM32L031G6 -endian little -speed 4000 -if swd -vd
-----GDB Server start settings-----
GDBInit file: none
GDB Server Listening port: 2331
SWO raw output listening port: 2332
Terminal I/O port: 2333
Accept remote connection: localhost only
Generate logfile: off
Verify download: on
Init regs on start: off
Silent mode: off
Single run mode: on
Target connection timeout: 0 ms
------J-Link related settings------
J-Link Host interface: USB
J-Link script: none
J-Link settings file: none
------Target related settings------
Target device: STM32L031G6
Target interface: SWD
Target interface speed: 4000kHz
Target endian: little
Connecting to J-Link...
J-Link is connected.
Firmware: J-Link V11 compiled Dec 9 2021 14:14:49
Hardware: V11.00
S/N: 261014681
OEM: SEGGER-EDU
Feature(s): FlashBP, GDB
Checking target voltage...
Target voltage: 3.34 V
Listening on TCP/IP port 2331
Connecting to target...
Connected to target
Waiting for GDB connection...Connected to 127.0.0.1
GDB closed TCP/IP connection (Socket 1132)
Connected to 127.0.0.1
Reading all registers
Read 4 bytes @ address 0x1FF000FC (Data = 0x89B8D002)
Read 2 bytes @ address 0x1FF000FC (Data = 0xD002)
Received monitor command: WriteDP 0x2 0xF0
O.K.
Received monitor command: ReadAP 0x2
O.K.:0xF0000003
Read 4 bytes @ address 0x1FF000E4 (Data = 0x05408A28)
Read 2 bytes @ address 0x1FF000E4 (Data = 0x8A28)
Read 4 bytes @ address 0x1FF000E4 (Data = 0x05408A28)
Read 2 bytes @ address 0x1FF000E4 (Data = 0x8A28)
Reading 32 bytes @ address 0xF0000FD0
Connected to 127.0.0.1
Reading all registers
Read 4 bytes @ address 0x1FF000FC (Data = 0x89B8D002)
Read 2 bytes @ address 0x1FF000FC (Data = 0xD002)
Received monitor command: reset
Resetting target
Downloading 192 bytes @ address 0x08000000 - Verified OK
Downloading 6072 bytes @ address 0x080000C0 - Verified OK
Downloading 28 bytes @ address 0x08001878 - Verified OK
Downloading 8 bytes @ address 0x08001894 - Verified OK
Downloading 4 bytes @ address 0x0800189C - Verified OK
Downloading 4 bytes @ address 0x080018A0 - Verified OK
Downloading 12 bytes @ address 0x080018A4 - Verified OK
Writing register (PC = 0x 80006d0)
Starting target CPU...
GDB closed TCP/IP connection (Socket 1128)
Debugger requested to halt target...
...Target halted (PC = 0x1FF000E4)
Reading all registers
Read 4 bytes @ address 0x1FF000E4 (Data = 0x05408A28)
Read 2 bytes @ address 0x1FF000E4 (Data = 0x8A28)
GDB closed TCP/IP connection (Socket 1152)
Restoring target state and closing J-Link connection...
Shutting down...
On the other hand, the same code works on an STM32L031K6 on the Nucleo board with the ST-LINK disconnected.
Update 4:
Since I'm using a custom board, there may be a flaw in the schematics. I don't see any issues with the circuit, but maybe you do. There is no crystal, since according to the datasheet it shouldn't be required; internal oscillators are available.
The TOUCH net is just a circuit that connects the pin to GND when a button is pressed.
This is the circuit of the STM32L031G6U6.
This is the circuit of the LEDs that should be controlled. In the previous code I only try to control the LED with the net label STATUS_LED. Since I got the LED to blink while stepping through the code, the MOSFET circuit should work.
I'm very confused about why I'm running into so many problems. I tried a second and a third PCB with the same circuit, but the problems are the same.
I figured out that I cannot use every clock configuration the STM32Cube IDE offers. The MSI just doesn't work at some frequencies; the code stalls in SystemClock_Config while setting the oscillator or the clock.
Why does HAL_Delay sometimes work and sometimes not?
Why doesn't the system start at all when running the code (even with everything disconnected and just the power supply reconnected)?
Why does stepping through the code work while running it does not?
Problem solved. The Altium package I downloaded was for the wrong package variant of the STM32L031: it is for the STM32L031G6U6S, not the STM32L031G6U6.
Related
I am using an STM32MP157C-DK2 board to communicate via CAN. To do this, I have created a program in STM32CubeIDE and set the FDCAN parameters:
FDCAN configuration
I have been testing in engineering mode and was able to receive and send CAN messages (connected to an IXXAT device) using the following code in the main function:
/* USER CODE BEGIN 3 */
if (HAL_FDCAN_GetRxFifoFillLevel(&hfdcan1, FDCAN_RX_FIFO0) > 0)
{
    HAL_GPIO_TogglePin(GPIOF, GPIO_PIN_3);
    // Fetch the message from the queue
    if (HAL_FDCAN_GetRxMessage(&hfdcan1, FDCAN_RX_FIFO0, &RxHeader, RxData) != HAL_OK)
    {
        Error_Handler();
    }
    // Send it back out
    if (HAL_FDCAN_AddMessageToTxFifoQ(&hfdcan1, &TxHeader, TxData) != HAL_OK)
    {
        Error_Handler();
    }
}
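For context, the snippet above assumes the FDCAN peripheral was started beforehand; in my init code that part looks roughly like this (a sketch with the Cube-generated hfdcan1 handle; the global filter settings are just an example):
/* Sketch: accept everything into RX FIFO 0 and start the peripheral.
 * This runs once after MX_FDCAN1_Init(); the filter values are examples. */
if (HAL_FDCAN_ConfigGlobalFilter(&hfdcan1, FDCAN_ACCEPT_IN_RX_FIFO0, FDCAN_ACCEPT_IN_RX_FIFO0,
                                 FDCAN_FILTER_REMOTE, FDCAN_FILTER_REMOTE) != HAL_OK)
{
    Error_Handler();
}
if (HAL_FDCAN_Start(&hfdcan1) != HAL_OK)
{
    Error_Handler();
}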
Then I carried out the tests in production mode. In this case, the A7 processor is initialized first and then the M4 program is started with the following commands (Core_Communication is the name of the program):
echo Core_Communication_CM4.elf > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
In this case, I try to send CAN messages through IXXAT but I get the following error:
ACK error
Is there a parameter that I have to configure?
I want to use a GPIO pin as an additional chip select for SPI on an Up Squared board. The Up Squared uses an Intel Pentium N4200, so it's an x86 machine. I have managed to do this on a Raspberry Pi using Device Tree overlays, but as this is an x86 machine I may have to use ACPI overlays.
The Up Squared has two SPI buses available, and they propose here to use ACPI overlays, this repo, which actually works very well. Below is one of the ASL files they use:
/*
* This ASL can be used to declare a spidev device on SPI0 CS0
*/
DefinitionBlock ("", "SSDT", 5, "INTEL", "SPIDEV0", 1)
{
External (_SB_.PCI0.SPI1, DeviceObj)
Scope (\_SB.PCI0.SPI1)
{
Device (TP0) {
Name (_HID, "SPT0001")
Name (_DDN, "SPI test device connected to CS0")
Name (_CRS, ResourceTemplate () {
SpiSerialBus (
0, // Chip select
PolarityLow, // Chip select is active low
FourWireMode, // Full duplex
8, // Bits per word is 8 (byte)
ControllerInitiated, // Don't care
1000000, // 1 MHz
ClockPolarityLow, // SPI mode 0
ClockPhaseFirst, // SPI mode 0
"\\_SB.PCI0.SPI1", // SPI host controller
0 // Must be 0
)
})
}
}
}
I compiled this file using
$ sudo iasl spidev1.0.asl > /dev/null
$ sudo mv spidev1.0.aml /lib/firmware/acpi-upgrades
$ sudo update-initramfs -u -k all
Then I reboot and I can see a device and communicate through it.
up@up:~$ ls /dev/spi*
/dev/spidev1.0
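For completeness, this is roughly how I talk to the spidev device from user space (a sketch; the device path, SPI mode, speed, and payload bytes are placeholders):
/* Minimal user-space SPI transfer sketch using the spidev ioctl API.
 * The device path, mode, speed and payload are placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>

int main(void)
{
    int fd = open("/dev/spidev1.0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    uint8_t mode = SPI_MODE_0;
    uint32_t speed = 1000000;
    ioctl(fd, SPI_IOC_WR_MODE, &mode);
    ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

    uint8_t tx[4] = { 0x01, 0x02, 0x03, 0x04 };
    uint8_t rx[4] = { 0 };
    struct spi_ioc_transfer tr = {
        .tx_buf = (unsigned long)tx,
        .rx_buf = (unsigned long)rx,
        .len = sizeof(tx),
        .speed_hz = speed,
        .bits_per_word = 8,
    };
    if (ioctl(fd, SPI_IOC_MESSAGE(1), &tr) < 0)
        perror("SPI_IOC_MESSAGE");

    close(fd);
    return 0;
}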
Thus, I decided to write my own overlay based on the meta-acpi samples from Intel, and I wrote this:
/*
* This ASL can be used to declare a spidev device on SPI0 CS2
*/
DefinitionBlock ("", "SSDT", 5, "INTEL", "SPIDEV2", 1)
{
External (_SB_.PCI0.SPI1, DeviceObj)
External (_SB_.PCI0.GIP0.GPO, DeviceObj)
Scope (\_SB.PCI0.SPI1)
{
Name (_CRS, ResourceTemplate () {
GpioIo (Exclusive, PullUp, 0, 0, IoRestrictionOutputOnly,
"\\_SB.PCI0.GIP0.GPO", 0) {
22 // pin 22 is BCM25 or 402 in linux
}
})
Name (_DSD, Package() {
ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
Package () {
Package () { "compatible", "spidev" }, // not sure if this is needed
Package () {
"cs-gpios", Package () {
0,
0,
^SPI1, 0, 0, 0, // index 0 in _CRS -> pin 22
}
},
}
})
Device (TP2) {
Name (_HID, "SPT0001")
Name (_DDN, "SPI test device connected to CS2")
Name (_CRS, ResourceTemplate () {
SpiSerialBus (
2, // Chip select
PolarityLow, // Chip select is active low
FourWireMode, // Full duplex
8, // Bits per word is 8 (byte)
ControllerInitiated, // Don't care
1000000, // 1 MHz
ClockPolarityLow, // SPI mode 0
ClockPhaseFirst, // SPI mode 0
"\\_SB.PCI0.SPI1", // SPI host controller
0 // Must be 0
)
})
}
}
}
But I cannot see the new device. What am I missing?
Edit:
I have modified the code above to a version that actually works. I can now see a device at /dev/spidev1.2.
However, the CS on pin 22 is low all the time, which shouldn't be the case. Is the pin number correct? I'm using the pin numbering from here.
Edit 2:
Here is my kernel version:
Linux up 5.4.65-rt38+ #1 SMP PREEMPT_RT Mon Feb 28 13:42:31 CET 2022 x86_64 x86_64 x86_64 GNU/Linux
I compiled this UP board Linux repository with the RT patch for the matching kernel version.
I also installed the upboard-extras package, and I'm able to communicate over SPI with the devices /dev/spidev1.0 and /dev/spidev1.1, so I think I have configured the Up Squared correctly.
There is no ngpio file under /sys/class/gpio:
up@up:~/aru$ ls /sys/class/gpio
export gpiochip0 gpiochip267 gpiochip310 gpiochip357 gpiochip434 unexport
I can set the GPIO to 1 or 0 and see the output on a multimeter, so I think I have the right permissions for GPIO.
Edit 3:
Please find in this link the .dat result from acpidump -o up2-tables.dat
I assume that you are using this board. To be able to use the I/O pins (I2C, SPI, etc.), you first need to enable them. An easy way to check whether they are already enabled is to type the following in a terminal:
uname -a
The output will look something like this:
Linux upxtreme-UP-WHL01 5.4.0-1-generic #2~upboard2-Ubuntu SMP Thu Jul 25 13:35:27 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Here, the #2~upboard2-Ubuntu part can differ according to your board type. If you don't see a similar result, you haven't configured your board yet. Another way to check is to go to the /sys/class/gpio folder and look at the ngpio file; it should contain 28.
To use any of the I/O pins (I2C, SPI, etc.), you don't need to change anything on the BIOS side, because they come enabled by default.
Please check the UP wiki page and update your board's kernel after installing Linux; then your I/O configuration will be enabled. See the UP wiki main page.
I have a simple program that uses the ADC to convert 2 inputs, with DMA in circular mode. Everything works great. The only problem I have is that in order to reflash the STM32, I first have to erase the chip. This only happens when I set the ADC up in circular mode.
Hopefully someone can help with this. Thanks!
Here is the code for setting up DMA/ADC
void HAL_ADC_MspInit(ADC_HandleTypeDef *hadc)
{
GPIO_InitTypeDef GPIO_InitStruct = {0};
if (hadc->Instance == ADC1)
{
__HAL_RCC_ADC1_CLK_ENABLE();
GPIO_InitStruct.Pin = GPIO_PIN_0|GPIO_PIN_1;
GPIO_InitStruct.Mode = GPIO_MODE_ANALOG;
GPIO_InitStruct.Pull = GPIO_NOPULL;
HAL_GPIO_Init(GPIOB, &GPIO_InitStruct);
hdma_adc1.Instance = DMA2_Stream0;
hdma_adc1.Init.Channel = DMA_CHANNEL_0;
hdma_adc1.Init.Direction = DMA_PERIPH_TO_MEMORY;
hdma_adc1.Init.PeriphInc = DMA_PINC_DISABLE;
hdma_adc1.Init.MemInc = DMA_MINC_ENABLE;
hdma_adc1.Init.PeriphDataAlignment = DMA_PDATAALIGN_HALFWORD;
hdma_adc1.Init.MemDataAlignment = DMA_MDATAALIGN_HALFWORD;
hdma_adc1.Init.Mode = DMA_CIRCULAR;
hdma_adc1.Init.Priority = DMA_PRIORITY_LOW;
hdma_adc1.Init.FIFOMode = DMA_FIFOMODE_ENABLE;
hdma_adc1.Init.FIFOThreshold = DMA_FIFO_THRESHOLD_HALFFULL;
if (HAL_DMA_Init(&hdma_adc1) != HAL_OK)
{
Error_Handler();
}
__HAL_LINKDMA(hadc,DMA_Handle,hdma_adc1);
}
}
void MX_ADC1_Init(void)
{
ADC_ChannelConfTypeDef sConfig = {0};
hadc1.Instance = ADC1;
hadc1.Init.ClockPrescaler = ADC_CLOCK_SYNC_PCLK_DIV4;
hadc1.Init.Resolution = ADC_RESOLUTION_12B;
hadc1.Init.ScanConvMode = ENABLE;
hadc1.Init.ContinuousConvMode = ENABLE;
hadc1.Init.DiscontinuousConvMode = DISABLE;
hadc1.Init.ExternalTrigConvEdge = ADC_EXTERNALTRIGCONVEDGE_NONE;
hadc1.Init.ExternalTrigConv = ADC_SOFTWARE_START;
hadc1.Init.DataAlign = ADC_DATAALIGN_RIGHT;
hadc1.Init.NbrOfConversion = 2;
hadc1.Init.DMAContinuousRequests = ENABLE;
hadc1.Init.EOCSelection = ADC_EOC_SINGLE_CONV;
if (HAL_ADC_Init(&hadc1) != HAL_OK)
{
Error_Handler();
}
sConfig.Channel = ADC_CHANNEL_8;
sConfig.Rank = 1;
sConfig.SamplingTime = ADC_SAMPLETIME_3CYCLES;
if (HAL_ADC_ConfigChannel(&hadc1, &sConfig) != HAL_OK)
{
Error_Handler();
}
sConfig.Channel = ADC_CHANNEL_9;
sConfig.Rank = 2;
sConfig.SamplingTime = ADC_SAMPLETIME_3CYCLES;
if (HAL_ADC_ConfigChannel(&hadc1, &sConfig) != HAL_OK)
{
Error_Handler();
}
}
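The conversion itself is started from main() roughly like this (a sketch; adc_buf is a name I use here for illustration, sized for the two configured ranks):
/* Sketch: start circular ADC conversions into a two-sample buffer.
 * Runs once after the Cube-generated DMA and ADC init calls. */
static uint16_t adc_buf[2];

if (HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adc_buf, 2) != HAL_OK)
{
    Error_Handler();
}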
And here is the output from the J-Link. Again, if I erase the chip first, everything works fine.
/Applications/SEGGER/JLink_V686c/JLinkGDBServerCLExe -select USB -device STM32F401RE -endian little -if SWD -speed auto -ir -noLocalhostOnly
SEGGER J-Link GDB Server V6.86c Command Line Version
JLinkARM.dll V6.86c (DLL compiled Oct 6 2020 14:16:39)
Command line: -select USB -device STM32F401RE -endian little -if SWD -speed auto -ir -noLocalhostOnly
-----GDB Server start settings-----
GDBInit file: none
GDB Server Listening port: 2331
SWO raw output listening port: 2332
Terminal I/O port: 2333
Accept remote connection: yes
Generate logfile: off
Verify download: off
Init regs on start: on
Silent mode: off
Single run mode: off
Target connection timeout: 0 ms
------J-Link related settings------
J-Link Host interface: USB
J-Link script: none
J-Link settings file: none
------Target related settings------
Target device: STM32F401RE
Target interface: SWD
Target interface speed: auto
Target endian: little
Connecting to J-Link...
J-Link is connected.
Firmware: J-Link EDU Mini V1 compiled Oct 2 2020 17:00:40
Hardware: V1.00
S/N: 801018944
Feature(s): FlashBP, GDB
Checking target voltage...
Target voltage: 3.28 V
Listening on TCP/IP port 2331
Connecting to target...
Connected to target
Waiting for GDB connection...Connected to 127.0.0.1
Reading all registers
Read 2 bytes @ address 0x0800EF10 (Data = 0xB508)
Read 2 bytes @ address 0x080080A0 (Data = 0x4770)
Read 4 bytes @ address 0x00000000 (Data = 0x20018000)
Read 2 bytes @ address 0x00000000 (Data = 0x8000)
Downloading 408 bytes @ address 0x08000000
Downloading 16048 bytes @ address 0x080001C0
Downloading 16144 bytes @ address 0x08004070
Downloading 16048 bytes @ address 0x08007F80
Downloading 15968 bytes @ address 0x0800BE30
Downloading 16000 bytes @ address 0x0800FC90
Downloading 7876 bytes @ address 0x08013B10
Downloading 5908 bytes @ address 0x080159D8
Downloading 956 bytes @ address 0x080170EC
Downloading 1032 bytes @ address 0x080174A8
Downloading 24 bytes @ address 0x080178B0
Downloading 4 bytes @ address 0x080178C8
Downloading 2496 bytes @ address 0x080178CC
ERROR: Failed to prepare for programming.
RAM check failed @ addr 0x20000E6C.
RAM check failed while testing 0x1050 bytes @ addr 0x200006CC.
Writing register (PC = 0x 800c8f0)
Read 2 bytes @ address 0x0800EF10 (Data = 0xB508)
Read 2 bytes @ address 0x080080A0 (Data = 0x4770)
Received monitor command: reset
Resetting target
Resetting target
Debugger connected to 127.0.0.1:2331
Starting target CPU...
Hopefully someone is able to help me with my problem.
I'm currently setting up an I2C bus with an FT2232H as master and an STM32F407VGT6 (Discovery board) as a slave. I am able to send the slave address correctly from the master, but I'm not sure whether the slave is set up correctly, because I don't get any ACK bit as a response from the slave. I have looked over the settings quite a few times.
The start and stop conditions are sent correctly as well.
Here is my code for the slave setup:
I2C_InitTypeDef i2c_init;
GPIO_InitTypeDef gpio_init;
NVIC_InitTypeDef NVIC_InitStructure, NVIC_InitStructure2;
I2C_DeInit(I2C1);
I2C_SoftwareResetCmd(I2C1, ENABLE);
I2C_SoftwareResetCmd(I2C1, DISABLE);
RCC_APB1PeriphClockCmd(RCC_APB1Periph_I2C1, ENABLE);
RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOB, ENABLE);
gpio_init.GPIO_Pin = GPIO_Pin_7 | GPIO_Pin_8;
gpio_init.GPIO_Mode = GPIO_Mode_AF;
gpio_init.GPIO_Speed = GPIO_Speed_50MHz;
gpio_init.GPIO_PuPd = GPIO_PuPd_UP;
gpio_init.GPIO_OType = GPIO_OType_OD;
GPIO_Init(GPIOB, &gpio_init);
GPIO_PinAFConfig(GPIOB, GPIO_PinSource7, GPIO_AF_I2C1); // SDA
GPIO_PinAFConfig(GPIOB, GPIO_PinSource8, GPIO_AF_I2C1); // SCL
NVIC_PriorityGroupConfig(NVIC_PriorityGroup_2);
NVIC_InitStructure.NVIC_IRQChannel = I2C1_EV_IRQn;
NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 0;
NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0;
NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
NVIC_Init(&NVIC_InitStructure);
NVIC_InitStructure2.NVIC_IRQChannel = I2C1_ER_IRQn;
NVIC_InitStructure2.NVIC_IRQChannelPreemptionPriority = 0;
NVIC_InitStructure2.NVIC_IRQChannelSubPriority = 0;
NVIC_InitStructure2.NVIC_IRQChannelCmd = ENABLE;
NVIC_Init(&NVIC_InitStructure2);
I2C_ITConfig(I2C1, I2C_IT_EVT, ENABLE);
I2C_ITConfig(I2C1, I2C_IT_ERR, ENABLE);
I2C_ITConfig(I2C1, I2C_IT_BUF, ENABLE);
i2c_init.I2C_ClockSpeed = 50000;
i2c_init.I2C_Mode = I2C_Mode_I2C;
i2c_init.I2C_DutyCycle = I2C_DutyCycle_2;
i2c_init.I2C_OwnAddress1 = 0x21;
i2c_init.I2C_Ack = I2C_Ack_Enable;
i2c_init.I2C_AcknowledgedAddress = I2C_AcknowledgedAddress_7bit;
I2C_Init(I2C1, &i2c_init);
I2C_StretchClockCmd(I2C1, ENABLE);
I2C_Cmd(I2C1, ENABLE);
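The event and buffer interrupts enabled above are serviced in the event handler (plus an I2C1_ER_IRQHandler for the error interrupt), which looks roughly like this. This is a sketch only: the event names are from the Standard Peripheral Library, and the byte handling is placeholder logic.
// Sketch of the slave event handler; real data handling is omitted.
void I2C1_EV_IRQHandler(void)
{
    switch (I2C_GetLastEvent(I2C1))
    {
    case I2C_EVENT_SLAVE_RECEIVER_ADDRESS_MATCHED:
        // ADDR is cleared by the SR1/SR2 reads inside I2C_GetLastEvent()
        break;
    case I2C_EVENT_SLAVE_BYTE_RECEIVED:
        (void)I2C_ReceiveData(I2C1);              // read the incoming byte
        break;
    case I2C_EVENT_SLAVE_TRANSMITTER_ADDRESS_MATCHED:
    case I2C_EVENT_SLAVE_BYTE_TRANSMITTING:
        I2C_SendData(I2C1, 0x00);                 // placeholder reply byte
        break;
    case I2C_EVENT_SLAVE_STOP_DETECTED:
        I2C_GetFlagStatus(I2C1, I2C_FLAG_STOPF);  // read SR1 ...
        I2C_Cmd(I2C1, ENABLE);                    // ... then write CR1 to clear STOPF
        break;
    default:
        break;
    }
}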
And the address I'm sending should be correct for that configuration.
Thanks for your help :)
Edit:
The problem might also be that the STM32F4 isn't reading the I2C lines at all. But if that's the case, I have no idea which setting is wrong. Or do I need to start the peripheral in some other way? I thought that once the PE bit is set, the I2C peripheral runs and responds automatically. Isn't that the case?
Finally solved the problem. The default HAL initialization had configured the GPIO alternate functions wrong.
There were two candidate pins each for the SDA and SCL lines. In the end, both possible SDA pins were configured with the I2C alternate function, which caused the peripheral to listen on, or respond on, the wrong line.
Hope that helps someone else too.
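For anyone hitting the same thing, the actual alternate-function mapping can be checked at runtime by reading the AF registers; a small sketch (for I2C1 on PB7/PB8 both values should read 4, i.e. AF4):
// Sketch: dump the AF selection of PB7 (SDA) and PB8 (SCL).
// AFR[0] covers pins 0..7 (4 bits per pin), AFR[1] covers pins 8..15.
uint32_t af_pb7 = (GPIOB->AFR[0] >> (7 * 4)) & 0xF;   // expect 4 (AF4 = I2C1)
uint32_t af_pb8 = (GPIOB->AFR[1] >> (0 * 4)) & 0xF;   // expect 4 (AF4 = I2C1)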
I'm trying to get the at86rf230 kernel driver running on a BeagleBone Black to communicate with my radio. I have confirmed that I am able to interact with the device using some userspace SPI code. Here's the fragment of the DTS file I'm working with:
fragment@0 {
target = <&am33xx_pinmux>;
__overlay__ {
spi1_pins_s0: spi1_pins_s0 {
pinctrl-single,pins = <
0x040 0x37 /* DIG2 GPIO_9.15 I_PULLUP | MODE7-GPIO1_16 */
0x044 0x17 /* SLPTR GPIO_9.23 O_PULLUP | MODE7-GPIO1_17 */
0x1AC 0x17 /* RSTN GPIO_9.25 O_PULLUP | MODE7-GPIO3_21 */
0x1A4 0x37 /* IRQ GPIO_9.26 I_PULLUP | MODE7-GPIO3_19 */
0x190 0x33 /* SCLK mcasp0_aclkx.spi1_sclk, INPUT_PULLUP | MODE3 */
0x194 0x33 /* MISO mcasp0_fsx.spi1_d0, INPUT_PULLUP | MODE3 */
0x198 0x13 /* MOSI mcasp0_axr0.spi1_d1, OUTPUT_PULLUP | MODE3 */
0x19c 0x13 /* SCS0 mcasp0_ahclkr.spi1_cs0, OUTPUT_PULLUP | MODE3 */
>;
};
};
};
fragment@3 {
target = <&spi1>;
__overlay__ {
#address-cells = <1>;
#size-cells = <0>;
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&spi1_pins_s0>;
at86rf230@0 {
spi-max-frequency = <1000000>;
reg = <0>;
compatible = "at86rf230";
interrupts = <19>;
interrupt-parent = <&gpio3>;
};
};
};
On loading the module I get the following error in dmesg:
[ 352.668833] at86rf230 spi1.0: no platform_data
[ 352.668945] at86rf230: probe of spi1.0 failed with error -22
I am trying to work out the right way to attach platform_data to the SPI overlay. Here's what I'd like to attach:
platform_data {
rstn = <&gpio3 21 0>;
slp_tr = <&gpio1 17 0>;
dig2 = <&gpio1 16 0>;
};
Unfortunately, just sticking it in as-is doesn't work so well when I use dtc to compile the DTS. I get the following error:
syntax error: properties must precede subnodes
FATAL ERROR: Unable to parse input tree
I feel that I'm ridiculously close to solving this, and I just need a little shove in the right direction ;)
First of all, the GPIO property names in your excerpt are wrong. According to the latest code in linux-next, they are:
pdata->rstn = of_get_named_gpio(spi->dev.of_node, "reset-gpio", 0);
pdata->slp_tr = of_get_named_gpio(spi->dev.of_node, "sleep-gpio", 0);
There are only two of them.
Second, you have to adjust the DTS for your exact board. The entire DTS should be considered the platform data for all devices found on the board (some supported, some perhaps not). The configuration for a specific device should be described in its device node.
So, a good starting point is to check what already exists upstream, namely arch/arm/boot/dts/am335x-boneblack.dts; don't forget to check the included files as well.
And the example for this specific driver is in Documentation/devicetree/bindings/net/ieee802154/at86rf230.txt.
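Putting those property names together with the GPIOs from the attempted platform_data block, the device node would look roughly like this (a sketch only; the GPIO flags and the interrupt specifier should be checked against the binding document):
at86rf230@0 {
    compatible = "at86rf230";
    spi-max-frequency = <1000000>;
    reg = <0>;
    interrupts = <19>;
    interrupt-parent = <&gpio3>;
    reset-gpio = <&gpio3 21 0>;
    sleep-gpio = <&gpio1 17 0>;
};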