DM6446 GPIO Bank 0 request_irq returns -22 - linux-device-driver

I'm trying to set up an interrupt handler in my driver for the DM6446 GPIO Bank 0 interrupt, but request_irq returns -22. The data sheet states that the interrupt number for GPIO Bank 0 is 56. Below are the GPIO settings in my code; I want an interrupt on GPIO-10.
while((REG_VAL(PTSTAT) & 0x1) != 0); // Wait for power state transition to finish
REG_VAL(MDCTL26) = 0x00000203; // Enable GPIO module and the EMURSTIE bit as stated in SPRUE14 for the state transition
REG_VAL(PTCMD) = 0x1; // Start power state transition for ALWAYSON
while((REG_VAL(PTSTAT) & 0x1) != 0); // Wait for power state transition to finish
REG_VAL(PINMUX0) = REG_VAL(PINMUX0) & 0x80000000; // Disable other functionality on Bank 0 pins
printk(KERN_DEBUG "I2C: PINMUX0 = %x\n",REG_VAL(PINMUX0));
REG_VAL(DIR01) = REG_VAL(DIR01) | 0xFFFFFFFF; //Set direction as input for GPIO 0 and 10
REG_VAL(BINTEN) = REG_VAL(BINTEN) | 0x00000001; //Enable Interrupt for GPIO Bank 0
REG_VAL(SET_RIS_TRIG01) = REG_VAL(SET_RIS_TRIG01) | 0x00000401; // Enable rising edge interrupt of GPIO BANK 0 PIN 0 PIN 10
REG_VAL(CLR_FAL_TRIG01) = REG_VAL(CLR_FAL_TRIG01) | 0x00000401; // Disable falling edge interrupt of Bank 0
Result = request_irq(56,Gpio_Interrupt_Handler,0,"gpio",I2C_MAJOR);
if(Result < 0)
{
printk(KERN_ALERT "UNABLE TO REQUEST GPIO IRQ %d ",Result);
}
A little help would be appreciated.
Thank you.
I have also tried gpio_to_irq for PIN-10 of BANK-0, but it returns an IRQ number of 72, while the DM6446 data sheet only lists interrupt numbers up to 63.

I got it. If I use gpio_to_irq, it returns a valid IRQ number, but one different from the interrupt number (which I guess is also called the IRQ number) given in the processor's data sheet. If I look at /proc/interrupts, there is an entry for the IRQ returned from gpio_to_irq, but it is listed under the GPIO chip rather than the processor's interrupt controller, which in my case (ARM) is the AINTC. All other interrupts are of the AINTC type.
Moreover, even if request_irq succeeds with the interrupt number stated in the data sheet, /proc/stat will report interrupts at both IRQ numbers, i.e. under both the AINTC and GPIO types.
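For completeness, here is a minimal sketch of the gpio_to_irq route described above (not the original driver: the GPIO number, label and handler names are illustrative, and the legacy integer GPIO API is assumed):
#include <linux/gpio.h>
#include <linux/interrupt.h>
static irqreturn_t gpio10_isr(int irq, void *dev_id)
{
    /* handle the rising edge on Bank 0 / GPIO 10 here */
    return IRQ_HANDLED;
}
static int setup_gpio10_irq(void)
{
    int irq, ret;
    ret = gpio_request(10, "gpio10");   /* DaVinci GPIO number, not the AINTC number */
    if (ret)
        return ret;
    gpio_direction_input(10);
    irq = gpio_to_irq(10);              /* virtual IRQ (e.g. 72), not 56 from the data sheet */
    ret = request_irq(irq, gpio10_isr, IRQF_TRIGGER_RISING, "gpio10", NULL);
    if (ret)
        gpio_free(10);
    return ret;
}
The number passed to request_irq is whatever gpio_to_irq returns, and that is the entry that shows up in /proc/interrupts under the GPIO chip.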

Related

Unreset GPIO Pins While System Resetting

Is it possible to keep some GPIO pins unreset while using the NVIC_SystemReset function on an STM32?
Thanks in advance for your answers
Best Regards
I tried to look into the NVIC_SystemReset function, but what happens inside it is not clear to me.
Also, this project is built with Keil.
Looks like it is not possible: NVIC_SystemReset issues a general reset of all subsystems.
But instead of a full system reset, you can probably just reset all peripherals except the ones you need to keep working, using the peripheral reset registers in the Reset and Clock Control (RCC) module: RCC_AHB1RSTR, RCC_AHB2RSTR, RCC_APB1RSTR, RCC_APB2RSTR.
Example:
// Issue reset of SPI2, SPI3, I2C1, I2C2, I2C3 on APB1 bus
RCC->APB1RSTR = RCC_APB1RSTR_I2C3RST | RCC_APB1RSTR_I2C2RST | RCC_APB1RSTR_I2C1RST
| RCC_APB1RSTR_SPI3RST | RCC_APB1RSTR_SPI2RST;
__DMB(); // Data memory barrier
RCC->APB1RSTR = 0; // deassert all reset signals
See the RCC register descriptions in the reference manual of your MCU for details.
You may also need to disable all interrupts in NVIC:
for (int i = 0 ; i < 8 ; i++) NVIC->ICER[i] = 0xFFFFFFFFUL; // disable all interrupts
for (int i = 0 ; i < 8 ; i++) NVIC->ICPR[i] = 0xFFFFFFFFUL; // clear all pending flags
If you want to restart the program, you can reload the stack pointer with its initial value, which is located at offset 0 of the flash memory, and jump to the entry point whose address is stored at offset 4. Note: flash memory is mapped starting at address 0x08000000 in the address space.
uint32_t stack_top = *((volatile uint32_t*)FLASH_BASE);
uint32_t entry_point = *((volatile uint32_t*)(FLASH_BASE + 4));
__set_MSP(stack_top); // set stack top
__ASM volatile ("BX %0" : : "r" (entry_point | 1) ); // jump to the entry point with bit 0 set (Thumb state)

STM32 Use DMA to generate bit pattern on GPIO PIN

I am trying to generate a bit pattern on a GPIO pin. I have set up the DMA engine to transfer from an array of GPIO pin states to the GPIO BSRR register.
Here is the code I am using to configure the DMA:
hdma_tim16_ch1_up.Instance = DMA1_Channel3;
hdma_tim16_ch1_up.Init.Direction = DMA_PERIPH_TO_MEMORY;
hdma_tim16_ch1_up.Init.PeriphInc = DMA_PINC_DISABLE;
hdma_tim16_ch1_up.Init.MemInc = DMA_MINC_ENABLE;
hdma_tim16_ch1_up.Init.PeriphDataAlignment = DMA_PDATAALIGN_WORD;
hdma_tim16_ch1_up.Init.MemDataAlignment = DMA_MDATAALIGN_WORD;
hdma_tim16_ch1_up.Init.Mode = DMA_NORMAL;
hdma_tim16_ch1_up.Init.Priority = DMA_PRIORITY_LOW;
if (HAL_DMA_Init(&hdma_tim16_ch1_up) != HAL_OK)
{
Error_Handler();
}
/* Several peripheral DMA handle pointers point to the same DMA handle.
Be aware that there is only one channel to perform all the requested DMAs. */
__HAL_LINKDMA(tim_baseHandle,hdma[TIM_DMA_ID_CC1],hdma_tim16_ch1_up);
__HAL_LINKDMA(tim_baseHandle,hdma[TIM_DMA_ID_UPDATE],hdma_tim16_ch1_up);
Here is the code I use to set up the transfer:
uint32_t outputbuffer[] = {
0x0000100,0x01000000,
0x0000100,0x01000000,
0x0000100,0x01000000,
0x0000100,0x01000000,
0x0000100,0x01000000,
0x0000100,0x01000000,
0x0000100,0x01000000
/* ... */
};
if (HAL_DMA_Start_IT(htim16.hdma[TIM_DMA_ID_UPDATE], (uint32_t)outputbuffer, (uint32_t)&GPIOG->BSRR, 14) != HAL_OK)
{
/* Return error status */
return HAL_ERROR;
}
__HAL_TIM_ENABLE_DMA(&htim16,TIM_DMA_UPDATE);
HAL_TIM_Base_Start_IT(&htim16);
I am expecting that every time the counter overflows, the DMA transfers 32 bits from the array and increments to the next array position, until the DMA CNDTR register reads 0.
I set up a GPIO pin to toggle every time the timer overflows, and I set up an alternating bit pattern in the array. I would expect the two GPIO pins to be similar in their output shape, but instead I get one long pulse on the line connected to the DMA. Any tips would be greatly appreciated.
configure TIM2 as input capture direct mode (TIM2_CH1)
configure TIM2 DMA direction "memory to peripheral"
configure TIM2 data width Half word / Half word
configure GPIO pins as GPIO_OUTPUT, for example 16 pins GPIOD0..GPIOD15
copy and paste HAL_TIM_IC_Start_DMA() function from HAL library and give it a new name MY_TIM_IC_Start_DMA()
find HAL_DMA_Start_IT() function call in MY_TIM_IC_Start_DMA()
replace (uint32_t)&htim->Instance->CCR1 with (uint32_t)&GPIOD->ODR
if (HAL_DMA_Start_IT(htim->hdma[TIM_DMA_ID_CC1], (uint32_t)&GPIOD->ODR, (uint32_t)pData, Length) != HAL_OK)
Now you can start DMA to GPIO transfer by calling
MY_TIM_IC_Start_DMA(&htim2, TIM_CHANNEL_1,(uint32_t*)gpioBuffer,GPIO_BUFFER_SIZE);
The actual transfer must be triggered by providing pulses on the TIM2_CH1 input pin (for example, using an output-compare pin of another timer channel). Those pulses were originally used to save Timer 2 CCR1 register values to the DMA buffer; the code is tweaked to transfer DMA buffer values to the GPIOD ODR register instead.
For a GPIO-to-memory transfer, change the TIM2 DMA direction to "peripheral to memory", configure the GPIO pins as GPIO_INPUT, and use GPIOD->IDR instead of ODR in the HAL_DMA_Start_IT parameters of the modified MY_TIM_IC_Start_DMA() function.
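To tie the steps above together, here is a minimal usage sketch for the memory-to-GPIO direction (the buffer contents and GPIO_BUFFER_SIZE are illustrative; half-word data width as configured above):
#define GPIO_BUFFER_SIZE 8
static uint16_t gpioBuffer[GPIO_BUFFER_SIZE] = {
    0x0001, 0x0000, 0x0003, 0x0000, 0x0007, 0x0000, 0x000F, 0x0000
};
/* One half-word is copied from gpioBuffer to GPIOD->ODR per TIM2_CH1 pulse. */
MY_TIM_IC_Start_DMA(&htim2, TIM_CHANNEL_1, (uint32_t*)gpioBuffer, GPIO_BUFFER_SIZE);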

STM32F405 bare metal spi slave - MISO data messed up sometimes

I've set up two STM32 Boards, one as SPI-master, the other one as slave.
I write directly to registers without any framework.
Master to slave communication is working perfectly. But the slave sends garbage sometimes.
I first tried interrupts, but the slave would always send garbage and often receive garbage.
Now I have implemented DMA. This works much better: the slave now always receives correct data. But sending is still an issue.
If the transmission is 3 to 5 bytes long, the data from the slave is correct in 95% of all cases.
If the transmission is longer than 5 bytes, then after the 4th or 5th byte there is just random byte foo. But the first 4 bytes are nearly always (95%) correct.
The signals are clean; I checked them with an oscilloscope. The data which the master receives shows up properly on MISO. So I guess the slave somehow writes garbage into the SPI DR, or the data register gets messed up.
I know SPI slaves on non-FPGA devices are tricky, but this really is unexpected...
Can anyone point me in a direction? I'm desperate and thankful for any bit of advice.
This is the code:
void DMA1_Stream3_IRQHandler( void )
{
if (spi2_slave)
{
while( (spi_spc->SR & (1<<1)) == 0 ); // must wait for TXE to be set!
while( spi_spc->SR & (1<<7) ); // must wait for busy to clear!
DMA1_Stream3->CR &= ~(1<<0); // Disable stream 3
while((DMA1_Stream3->CR & (1<<0)) != 0); // Wait till disabled
DMA1_Stream3->NDTR = 3; // Amount of data to receive
DMA1_Stream3->CR |= (1<<0); // Enable DMA1_Stream3 (TX)
DMA1->LIFCR = (1<<27); // clear Transfer complete in Stream 3
// fire SPI2 finished CBF
if (spi2_xfer_done != 0)
{
if (spi2_xfer_len > 0)
{
spi2_xfer_done(spi2_rx_buffer, spi2_xfer_len);
}
}
}
else
{
while( spi_spc->SR & (1<<7) ); // must wait for busy to clear!
GPIOB->ODR |= (1<<12); // Pull up SS Pin
spi_spc->CR2 &= ~((1<<0) | (1<<1)); // Disable TX and RX DMA request lines
spi_spc->CR1 &= ~(1<<6); // 6:disableSPI
DMA1->LIFCR = (1<<27); // clear Transfer complete in Stream 3
// fire SPI2 finished CBF
if (spi2_xfer_done != 0)
{
spi2_xfer_done(spi2_rx_buffer, spi2_xfer_len);
}
while( (spi_spc->SR & (1<<1)) == 0 ); // must wait for TXE to be set!
}
}
// For Slave TX DMA
void DMA1_Stream4_IRQHandler( void )
{
DMA1_Stream4->CR &= ~(1<<0); // Disable stream 4
while((DMA1_Stream4->CR & (1<<0)) != 0); // Wait till disabled
spi_spc->CR2 &= ~(1<<1); // Disable TX DMA request lines
DMA1->HIFCR = (1<<5); // clear Transfer complete in Stream 4
}
void mcu_spi_spc_init_slave(void (*xfer_done)(uint8_t* data, uint32_t dlen))
{
spi2_slave = 1;
spi2_xfer_done = xfer_done;
for (int c=0;c<SPI2_BUFFER_SIZE;c++)
{
spi2_tx_buffer[c] = 'X';
spi2_rx_buffer[c] = 0;
}
// Enable the SPI2 peripheral clock
RCC->APB1ENR |= RCC_APB1ENR_SPI2EN;
// Enable port B Clock
RCC->AHB1ENR |= (1<<1);
// Enable DMA1 Clock
RCC->AHB1ENR |= RCC_AHB1ENR_DMA1EN;
// Reset the SPI2 peripheral to initial state
RCC->APB1RSTR |= RCC_APB1RSTR_SPI2RST;
RCC->APB1RSTR &= ~RCC_APB1RSTR_SPI2RST;
/*
* SPC SPI2 SS: Pin33 PB12
* SPC SPI2 SCK: Pin34 PB13
* SPC SPI2 MISO: Pin35 PB14
* SPC SPI2 MOSI: Pin36 PB15
*/
// Configure the SPI2 GPIO pins
GPIOB->MODER |= (2<<24) | (2<<26) | (2<<28) | (2<<30);
GPIOB->PUPDR |= (02<<26) | (2<<28) | (2<<30);
GPIOB->OSPEEDR |= (3<<24) | (3<<26) | (3<<28) | (3<<30); // "very High speed"
GPIOB->AFR[1] |= (5<<16) | (5<<20) | (5<<24) | (5<<28); // Alternate function 5 (SPI2)
//-------------------------------------------------------
// Clock Phase and Polarity = 0
// CR1 = LSByte to MSByte, MSBit first
// DFF = 8bit
// 6 MHz Clock (48MHz / 8)
spi_spc->CR1 = (7<<3) | (0<<2) | (0<<1) | (1<<0) // 0:CPHA, 1:CPOL, 2:MASTER, 3:CLOCK_DIVIDER
| (0<<7) | (0<<11); // 7:LSB first, 11:DFF(8Bit)
spi_spc->CR2 = (0<<2) | (1<<1) | (1<<0); // 2:SSOE, 0:Enable RX DMA IRQ, 1:Enable TX DMA IRQ
// DMA config (Stream3: RX p2mem, Stream4: TX mem2p)
// DMA for RX Stream 3 Channel 0
DMA1_Stream3->CR &= ~(1<<0); // EN = 0: disable and reset
while((DMA1_Stream3->CR & (1<<0)) != 0); // Wait
DMA1_Stream4->CR &= ~(1<<0); // EN = 0: disable and reset
while((DMA1_Stream4->CR & (1<<0)) != 0); // Wait
DMA1->LIFCR = (0x3D<<22); // clear all ISRs related to Stream 3
DMA1->HIFCR = (0x3D<< 0); // clear all ISRs related to Stream 4
DMA1_Stream3->PAR = (uint32_t) (&(spi_spc->DR)); // Peripheral address
DMA1_Stream3->M0AR = (uint32_t) spi2_rx_buffer; // Memory address
DMA1_Stream3->NDTR = 3; // Amount of data to receive
DMA1_Stream3->FCR &= ~(1<<2); // ENABLE Direct mode by CLEARING Bit 2
DMA1_Stream3->CR = (0<<25) | // 25:Channel selection(0)
(1<<10) | // 10:increment mem_ptr,
(0<<9) | // 9: Do not increment periph ptr
(0<<6) | // 6: Dir(P -> Mem)
(1<<4); // 4: finish ISR
// DMA for TX Stream 4 Channel 0
DMA1_Stream4->PAR = (uint32_t) (&(spi_spc->DR)); // Peripheral address
DMA1_Stream4->M0AR = (uint32_t) spi2_tx_buffer; // Memory address
DMA1_Stream4->NDTR = 1; // Amount of data to send (dummy)
DMA1_Stream4->FCR &= ~(1<<2); // ENABLE Direct mode by CLEARING Bit 2
DMA1_Stream4->CR = (0<<25) | // 25:Channel selection(0)
(1<<10) | // 10:increment mem_ptr,
(0<<9) | // 9: Do not increment periph ptr
(1<<6) | // 6: Dir(Mem -> P)
(1<<4);
// Setup the NVIC to enable interrupts.
// Use 4 bits for 'priority' and 0 bits for 'subpriority'.
NVIC_SetPriorityGrouping( 0 );
uint32_t pri_encoding = NVIC_EncodePriority( 0, 1, 0 );
NVIC_SetPriority( DMA1_Stream4_IRQn, pri_encoding );
NVIC_EnableIRQ( DMA1_Stream4_IRQn );
NVIC_SetPriority( DMA1_Stream3_IRQn, pri_encoding );
NVIC_EnableIRQ( DMA1_Stream3_IRQn );
DMA1_Stream3->CR |= (1<<1); // Enable DMA1_Stream3 (RX)
spi_spc->CR1 |= (1<<6); // 6:EnableSPI
}
In the future the system has to send and receive roughly 500 bytes.
So, I did it. It was a whole bunch of things. Also, my assumption in the question was wrong: my slave did not receive/send valid data.
The signals were shown as clean by the oscilloscope, but the scope itself was adding noise to the lines that was not visible on the scope. Not probing the lines helped.
I put 100 ohm resistors close to the master pins. That did not work, so out of desperation I put the resistors close to the slave instead, and suddenly I got valid data. (This had been the main culprit all along.)
Following Ashley Miller's comment, I implemented a circular buffer where I always send a fixed length, so the slave knows exactly what to expect. This mitigated errors that could be produced by switching off or resetting the DMA shortly after the transmission.
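As a rough idea of that fixed-length approach (a sketch only, in the same register-level style as the code above; FRAME_LEN is illustrative and error handling is omitted):
#define FRAME_LEN 16                       /* fixed frame length, illustrative */
DMA1_Stream4->CR &= ~(1u << 0);            /* EN = 0: disable before reconfiguring */
while (DMA1_Stream4->CR & (1u << 0));      /* wait until the stream is really off */
DMA1_Stream4->M0AR = (uint32_t) spi2_tx_buffer;
DMA1_Stream4->NDTR = FRAME_LEN;            /* always transfer a whole frame */
DMA1_Stream4->CR |= (1u << 8);             /* CIRC: NDTR and M0AR reload automatically */
DMA1_Stream4->CR |= (1u << 0);             /* EN = 1: arm the stream */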
The UART tricked me as well. When receiving too much data at once (as little as 20 or 30 bytes!), my terminal program glitched and threw the bytes around randomly. So part of the problem was just that... I'm using GtkTerm, for those who are interested.
Clock mode CPOL = 0 and CPHA = 0 doesn't work at all: I set both master and slave to the same setting and the master just received garbage. If I loop the master back on itself (connect MISO to MOSI, i.e. exclude the slave), it works regardless of clock mode.
This seems to stem from a timing issue where the slave has to react too quickly and cannot keep up even at the slowest possible speed (approx. 100 kHz). I did not go into details on this.
I hope I could help someone with this.

two wire ADC, SPI read

I use an Arduino UNO (ATmega328) and a 12-bit ADC (AD7893ANZ-10); the datasheet is available at https://www.analog.com/media/en/technical-documentation/data-sheets/AD7893.pdf
The problem:
I have already tried a few different pieces of code, but the read value is always wrong; even when the SDATA pin of the ADC is not connected to the MISO pin of the Arduino, I get the same values (see figure 1). I simulated it in Proteus (see figure 2) and used the virtual serial monitor there. The MOSI and SS pins of the Arduino are not connected, but I set the SS pin LOW and HIGH in code to fulfil the library's requirements. More information about the ADC timing is given as comments in the code below and is also available in the datasheet. I would be thankful if you could take a look at it, because I can't figure out what I did wrong.
PS: The ADC has just two pins for SPI communication: 1. SDATA (slave out) and 2. SCLK. The CONVST pin on the ADC initiates a conversion.
#include <SPI.h>
//source of code https://www.gammon.com.au/spi
void setup() {
Serial.begin (115200);
pinMode(7, OUTPUT); // this pin is connected to convst on adc to initiate conversion
// Put SCK, MOSI, SS pins into output mode (instructions from the source)
// also put SCK, MOSI into LOW state, and SS into HIGH state.
// Then put SPI hardware into Master mode and turn SPI on
pinMode(SCK, OUTPUT);
pinMode(MOSI, OUTPUT);
pinMode(SS, OUTPUT);
digitalWrite(SS, HIGH);
digitalWrite(SCK, LOW);
digitalWrite(MOSI, LOW);
SPCR = (1<<MSTR);
SPI.begin ();
SPI.beginTransaction(SPISettings(2000000, MSBFIRST, SPI_MODE1)); // set the clk frequency; take the MSB first;
//mode1 for CPOL=0 and CPHA=1 according to datasheet p.9 "CPOL bit set to a logic zero and its CPHA bit set to a logic one."
}
int transferAndWait (const byte what) // method to read data
{
int a = SPI.transfer(what); //send zero(MOSI not connected)and read the first 8 bits and save
Serial.println (a); // show the value, in serial monitor -1
a =(a << 8); //shift the saved first 8 bits to left
Serial.println (a); // show the value, in serial monitor 255
a = a | SPI.transfer(what); //read and save the last 8 bits
Serial.println (a); // show the value, in serial monitor -256
delayMicroseconds (10);
return a;
}
void loop() {
int k;
digitalWrite(7, HIGH); //set pin high to initiate the conversion
delayMicroseconds(9); // the conversion time needed, actually 6 microseconds
digitalWrite(SS, LOW); // enable Slave Select to get the library working
k = transferAndWait (0); //call the method
delayMicroseconds(1);
digitalWrite(7, LOW);
digitalWrite(SS, HIGH); //disable chip select
delay(2000); //delay just to keep the overview on serial monitor
Serial.println (k); // show the value, in serial monitor -1
}
First, the return variable should be an unsigned int instead of a signed int.
Also, CONVST should only be low for a short period, as the conversion is started afterwards. See the timing sequence in the datasheet.
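A minimal sketch of the first point (unsigned int and uint16_t are the same width on the ATmega328; names as in the question):
uint16_t transferAndWait (const byte what) // method to read data, now with an unsigned result
{
  uint16_t a = ((uint16_t) SPI.transfer(what)) << 8; // first (most significant) byte
  a |= SPI.transfer(what);                           // append the second byte
  delayMicroseconds (10);
  return a;
}
The variable receiving the result in loop() should then be declared unsigned as well.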

Hardware interrupt between Raspberry pi and Atmega 328

I have connected my RPi and ATmega328 together in order to control the start of an event on my Arduino. To do so, GPIO 25 (RPi) is connected directly to pin 7 (Arduino PD7). I have a Python script on the RPi which sets GPIO 25 high and then back to LOW:
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(25, GPIO.OUT)
GPIO.output(25, 1)
#Do some stuff
GPIO.output(25, 0)
The Arduino waits in a loop for either a physical button to be pressed or pin 7 to be set HIGH by the RPi:
const int interrupt = 7;
const int button = 13;
const int led = 9;
void setup() {
Serial.begin(9600);
pinMode(interrupt, INPUT);
pinMode(button,INPUT);
pinMode(led, OUTPUT);
digitalWrite(led, LOW);
}
void loop() {
bool on = false;
bool buttonOn = false;
while (!on || !buttonOn) {
on = digitalRead(interrupt);
buttonOn = digitalRead(button);
digitalWrite(led, LOW);
}
digitalWrite(led, HIGH);
delay(1000);
}
Now, unfortunately, this doesn't work. I have checked the logic levels of the ATmega328 (https://learn.sparkfun.com/tutorials/logic-levels) and it seems that 3.3 V is good enough for a HIGH signal.
Am I missing something with the pull-up/pull-down resistors? I know PD7 on the ATmega is specified as follows:
Port D is an 8-bit bi-directional I/O port with internal pull-up
resistors (selected for each bit). The Port D output buffers have
symmetrical drive characteristics with both high sink and source
capability. As inputs, Port D pins that are externally pulled low will
source current if the pull-up resistors are activated. The Port D pins
are tristated when a reset condition becomes active, even if the clock
is not running.
EDIT:
I have done more testing and I am reading the HIGH or LOW value correctly. It seems that the issue comes from:
while ((!on) || (!buttonOn)) {
Is there an issue with Arduino and the OR operator in a while loop? Even when one condition is true and the other one is false, it never exits the loop.
while ((!on) || (!buttonOn)) {
}
That loop will run as long as at least one of the variables is false; it only exits when both are true.
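If the intent is to leave the loop as soon as either input goes HIGH, a minimal sketch of the corrected condition (same variable names as in the question) would be:
bool on = false;
bool buttonOn = false;
while (!on && !buttonOn) {       // keep waiting only while BOTH inputs are LOW
  on = digitalRead(interrupt);
  buttonOn = digitalRead(button);
  digitalWrite(led, LOW);
}
digitalWrite(led, HIGH);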
Yesterday, when reading your code, I was for some reason thinking that you were reading the interrupt pin twice.
A 3.3 V output should be enough to drive the Arduino input high.
You may have a wiring problem, or your Raspberry Pi may be so fast that the Arduino misses the pulse.
Change your program on the Raspberry Pi to leave the output high long enough (e.g. 10 seconds) that you can measure it with a multimeter to verify you are setting the right pin.
Does the Arduino now see the input?