Raspberry Pi: how to trigger an event on a pull-down interrupt pin

I have a sensor with its interrupt output connected to an input pin on my Raspberry Pi. My goal is to trigger an event from the sensor interrupt. The data sheet for my sensor says that once an interrupt is triggered on the sensor, the appropriate bit in the interrupt status register is set to 1 and stays that way until it is cleared; while the status register has a status bit of 1, the interrupt pad on the sensor is pulled down.
My problem is that I can see the status register correctly reflect an interrupt when I physically trigger the sensor. But when I read the pin from my Pi, I never see any change reflected. Here's the gist of my code:
import Sensor
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.IN, pull_up_down = GPIO.PUD_UP)
s = Sensor.start()
while True:
    print 'sensor int reg: ', s.readIntReg() # I do not clear interrupt
    print 'pin value: ', GPIO.input(11)
The first print changes according to my interaction with the sensor, as expected. The second print shows the pin stuck at 1 or 0, depending on whether it is configured with a pull-up or pull-down, respectively.
It seems the problem is that whenever the interrupt fires, the sensor pulls the pin down while the Pi pulls it up... How should I handle this?
The sensor is the VCNL4010 [https://www.adafruit.com/products/466]

I suppose you have the gpio driver installed and active on the Pi?
Then you'll probably never see the interrupt trigger at the Python level, since the kernel driver will already have serviced it (and reset the flag) in the background.

I added an external 10k pull-up resistor to 3.3V and that did the trick... not sure why the internal pull-up on the Pi didn't do the same; perhaps I configured it wrong.
UPDATE: That turned out not to be the issue at all. I was neglecting to explicitly set the sensor to free run mode. Part of my code had the unintended side effect of setting that mode, so while tweaking things for testing it sometimes worked. The pull-up on the Pi works fine.
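With free run mode set and the Pi's internal pull-up enabled, an edge-detect callback can replace the polling loop above. A rough sketch, assuming the same Sensor wrapper as before (the callback body is a placeholder, and clearing the sensor's interrupt status register is still up to the caller):
import time
import Sensor
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # sensor INT is open-drain, idles high via the Pi's pull-up

s = Sensor.start()

def on_interrupt(channel):
    # placeholder handler: read the status register here and clear it as the data sheet requires
    print 'interrupt on pin', channel, '- status:', s.readIntReg()

# run the callback whenever the sensor pulls the (normally high) line low
GPIO.add_event_detect(11, GPIO.FALLING, callback=on_interrupt)

while True:
    time.sleep(1)  # nothing to do here; the callback runs in the background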

Related

STM32G474 discovery kit: HRTIM ISR HRTIM1_TIMA_IRQHandler not hit for channel set interrupt SETx1 and channel reset interrupt RSTx1

I have enabled the interrupts for channel set (SETx1) and channel reset (RSTx1) by setting the SETx1IE and RSTx1IE bits. I am trying to drive the output to the active/inactive state via software by writing to the SST and RST bits of timer A.
Inside the TIMAISR register I can see that the SETx1 and RSTx1 interrupt event flags are set, which indicates that the output active/inactive state changes are generating the interrupt events.
But I cannot see the corresponding ISR, HRTIM1_TIMA_IRQHandler, getting hit. Other interrupts work fine and I can see their ISRs execute; it is only for SETx1/RSTx1 and SETx2/RSTx2 that I see this problem.
Are there any settings I could be missing that might cause this issue?
Image that shows the global interrupt for Timer A is enabled in the NVIC
Image that shows the HRTIM peripheral interrupts are enabled for SET and RST
Image that shows the interrupt occurred inside the HRTIM peripheral for SET and RST

STM32 CDC_Receive_FS callback never called

I am trying to use the USB Device library on STM32Cube, but whether I run under the debugger or try to turn an LED on in CDC_Receive_FS, execution never reaches that point.
Here is how I set everything up:
My board is a NUCLEO-F746ZG
I enabled USB_OTG_FS in Device_Only mode and activated VBUS and SOF. Everything else was left at the defaults, and the USB On The Go FS global interrupt is enabled.
I set up USB_DEVICE: Class For FS IP set to Virtual Port Com, everything else left at the defaults
Main loop left empty
CDC_Receive_FS: put breakpoint in it and/or HAL_GPIO_WritePin(LD1_GPIO_Port, LD1_Pin, GPIO_PIN_SET);
I have TIM2 set up for the things I would like to do once this works
Then I tried to send data to the board, first from Python using pyserial with a baud rate of 921600, but got nothing. Then using PuTTY with a baud rate of 9600, still nothing...
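The Python attempt was roughly the sketch below (the device node is an assumption; for a CDC ACM virtual COM port the baud rate parameter is forwarded to the device but normally does not affect whether data arrives):
import serial

# assumed device node on Linux; on Windows it would be a COMx port instead
port = serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=1)
port.write(b"hello\r\n")   # anything written here should end up in CDC_Receive_FS on the board
port.close()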
If anyone has a clue, I have been struggling with it the whole day.
Here is the whole project: https://ecloud.global/s/cjGYqK6z9g58Lm4

STM32 UART in DMA mode stops receiving after receiving from a host with wrong baud rate

The scenario: I have an STM32 MCU which uses a UART in DMA mode with the idle interrupt for RS485 data transfer. The baud rate of the UART is set in CubeMX, in this case to 115200. My code works fine when the host uses the correct baud rate, and it is also stable over long runs, no issues or worries.
BUT: when I set the wrong baud rate at the host, e.g. 57600 instead of 115200, the UART stops receiving data. Even if I later set the baud rate at the host back to the one the microcontroller uses, it won't work. The only way to solve this so far is to reset the MCU and connect again with the correct baud rate.
To give you some (Pseudo-)Code:
uint8_t UART_Buf[128];
HAL_UART_Receive_DMA(&huart2, UART_Buf, 128);
__HAL_UART_ENABLE_IT(&huart2, UART_IT_IDLE);
In plain words: there is a UART buffer for DMA (UART_Buf[128]), the UART is started with HAL_UART_Receive_DMA(...), DMA Rx is set to circular mode in CubeMX, and the idle interrupt is activated with the HAL macro __HAL_UART_ENABLE_IT(...). This code works fine so far.
Works fine means:
When I transmit data from my PC to the micro, the (one) idle interrupt is triggered correctly by the MCU. In the ISR I set a flag to start the data parsing afterwards. I receive exactly the number of bytes I have sent, and all is fine.
BUT: when I pick the wrong setting in my terminal program and the baud rate menu is set to e.g. 57600 instead of the correct 115200, the trouble begins:
The idle interrupt will still trigger after each transmission.
But it triggers 2-4 times in a quick "burst" (depending on the baud rate) and the number of bytes received is 0. I'd expect at least some garbage data, but there are exactly 0 bytes in the buffer, which I can check with the debugger. Obviously nothing is received. When I change the baud rate in my terminal program and restart it, there is still nothing received on the MCU.
I could live with 0 received bytes if the baud rate of the host is incorrect, but it's pretty uncool that one incoming transmission from a host with the wrong baud rate disables the UART until a hardware reset is done.
My attempts to resolve this so far:
Counting the "idle interrupt bursts" in combination with 0 received bytes to trigger a "self reset" routine that stops the UART and restarts it using MX_USART2_UART_Init(). With zero effect: I can see the idle interrupt is still triggered correctly, but the buffer remains empty and no data is transferred into it. The UART remains in a non-receiving state.
The Question
Has anyone out there experienced similar issues, and if yes: how did you solve that?
Additional info: this happens on an STM32F030 as well as on an STM32G03x.
When you send to the UART at the wrong baud rate it will appear to the receiver as framing errors and/or noise errors. It could also appear as random characters being received correctly, but this is less likely, so don't be surprised to have nothing in your buffer.
When you are receiving with DMA, it is normal to turn the error interrupt on or else poll the error bits. When an error is detected you would then re-initialize everything and restart the DMA. This sounds like what you are trying to do by counting the idle interrupts, but you are just not checking the right bits.
If you don't want to do that, it is conceivable to do nothing at the driver level and attempt the resynchronisation at a higher level (e.g. start reading again and discard everything until a newline character), but you will have to bear in mind at least two things:
First, make sure you clear the DDRE bit in the USART_CR3 register. The name "DMA Disable on Reception Error" speaks for itself.
Second, the UART peripheral is able to self resynchronize, as long as you have an idle gap between bytes. If you switch the transmitter to the correct baud rate but keep blasting out data then the receiver may never correctly identify which bit is a start bit.
After investigating this issue a little further, I found a solution.
Abstract:
When a host transmits to the MCU's UART at a different baud rate than the UART is set to, the UART goes into an error state and the DMA transfer into the RX buffer stops. You can check whether there is an error with the HAL_UART_GetError(...) function. If there is an error, stop the UART/DMA and restart it.
The Details:
First of all, it was not the DDRE bit in the USART_CR3 register. This was set to 0 by CubeMX. But the hint from Tom V led me in the right direction.
I tried to recover the UART by playing around with the register bits. I read through the UART section of the reference manual multiple times and tried to figure out which bits to set, and in which order, to resolve the error condition manually.
What I found out:
When a transmission with the wrong baud rate is received by the UART the following changes in the UART Registers occur (on an STM32F030):
Control register 1 (USART_CR1) - Bit 8 (PEIE) goes from 1 to 0. PEIE is the Parity Interrupt Enable Bit.
Control register 2 (USART_CR2) - remains unchanged
Control register 3 (USART_CR3) - changes from 0d16449 to 0d16384, which means
Bit 0 (EIE - Error Interrupt enable) goes from 1 to 0
Bit 6 (DMAR - DMA enable receiver) goes from 1 to 0
Bit 14 (DEM - Driver enable mode) remains unchanged at 1
USART_CR3.DEM makes sense. I am using the RS485-Functionality of the F030, so the UART handles the Driver-Enable GPIO by itself.
The transitions from 1 to 0 of USART_CR3.EIE and USART_CR3.DMAR are most probably the reason why no more data is transferred to the DMA buffer.
Besides that, the error flags in the interrupt and status register (USART_ISR) for ORE and FE are set. ORE stands for overrun error and FE for framing error. Although these bits can be cleared by writing a 1 to the corresponding bits of the interrupt flag clear register (USART_ICR), the ErrorCode in the huart struct remains at the initial error value.
At the end of my trial-and-error process I managed to get all registers back to the same values they had during valid transmissions, but there were still no bytes received. Whatever I tried, it had no effect; the UART remained in a non-receiving state. So I decided to use the "brute force" approach and use the HAL functions, which I know work.
Finally the solution is pretty simple:
if an Idle Interrupt is detected, but the number of received bytes is 0
=> check the Error-Status of the UART with HAL_UART_GetError(...)
If there is an error, stop the UART with HAL_UART_DMAStop(...) and restart it with HAL_UART_Receive_DMA(...)
The code:
if(RxLen) {
    // normal execution, number of received bytes > 0
    if(UA_RXCallback[i]) (*UA_RXCallback[i])(hUA);        // exec RX callback function
} else {
    if(HAL_UART_GetError(&huart2)) {
        HAL_UART_DMAStop(&huart2);                        // STOP Uart
        MX_USART2_UART_Init();                            // INIT Uart
        HAL_UART_Receive_DMA(&huart2, UA2_Buf, UA2_BufSz); // START Uart DMA
        __HAL_UART_CLEAR_IDLEFLAG(&huart2);               // Clear Idle IT-Flag
        __HAL_UART_ENABLE_IT(&huart2, UART_IT_IDLE);      // Enable Idle Interrupt
    }
}
I had a similar issue. I'm using DMA to receive data and then periodically checking how many bytes were received. After a bit error, it would not recover. The solution for me was to first subscribe to ErrorCallback on the UART_HandleTypeDef.
In the error handler, I then call UART_Start_Receive_DMA(...) again. This seems to restart the UART and DMA without issue.

Sending commands to a UART in Python

I am trying to write a pyserial command to the UART port to control a robot arm.
I have the manual:
manual for arm
manual command example
I use pyserial like that:
import serial
from time import sleep
port = serial.Serial("/dev/ttyUSB0", baudrate=9600, parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE, bytesize=8, timeout=1)
port.write(b"\x055\x55\x0B\x03\x02\x20\x03\x02\xB0\x04\x09\xFC\x03\xaa")
sleep(0.3)
#port.write(b"\x05")
#sleep(0.3)
#port.write(b"\x06")
#sleep(0.03)
#port.write(b"\x08\x01\x00")
print('send')
At first I tried to send it as one line; the buzzer beeps as if the command was accepted, but the arm does not move.
Then I tried splitting it up: the header in one write, the length in the next, and the command with parameters in the next.
Tell me how I can send these commands to the port; maybe there is something ready-made to do this in Python?
The LSC Series Servo Controller Communication Protocol V1.2 manual says:
If the user transmits the correct data to the servo controller, the blue LED2 on the controller will flash once, indicating that the correct data have been received. If the user transmits wrong data, the blue LED2 will not react and will stay bright, and the buzzer will beep twice to remind the user of the data error.
The only thing in that manual about the buzzer is that it beeps twice if there is a data error...
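One thing to check in the code above: in a Python byte string, b"\x055..." encodes the byte 0x05 followed by the ASCII character '5' (0x35), not two 0x55 bytes, so if the frame header is 0x55 0x55 as in the command example, the controller sees a malformed frame and reports a data error. A rough sketch of building the frame from its parts (command and parameter bytes are copied from the question; the trailing \xaa is left out here and should be checked against the manual):
import serial
from time import sleep

port = serial.Serial("/dev/ttyUSB0", baudrate=9600, parity=serial.PARITY_NONE,
                     stopbits=serial.STOPBITS_ONE, bytesize=8, timeout=1)

HEADER = b"\x55\x55"                                   # frame header as in the command example
payload = b"\x03\x02\x20\x03\x02\xB0\x04\x09\xFC\x03"  # command + parameters from the question
length = bytes([len(payload) + 1])                     # 0x0B here: the length byte counts itself plus the payload
frame = HEADER + length + payload

port.write(frame)       # send the whole frame in a single write
sleep(0.3)
print('sent', frame.hex())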

BeagleBone Black I2C2 issues

I am having trouble using an I2C sensor with the BeagleBone Black (BBB). The BBB is running a freshly flashed Ubuntu 18.04 image built specifically for the BBB.
I wired the sensor (VIN, GND, SCL, SDA) to the corresponding I2C2 pins (4, 2, 19, 20) on the BBB using the below pinout.
The sensor is supposed to be using address 0x40, but scanning I2C2 (using i2cdetect -r 2) does not show the sensor.
I have tested this with two separate sensors as I thought at first I may have fried the original sensor somehow, but the results are the same. In fact, running the I2C2 scan command yields the exact same results when nothing is connected at all.
I have read in many places that I2C2 may not be enabled by default, but I assume it is enabled in my case as I can scan I2C2 without getting an error. Is this assumption incorrect? Again, this is a freshly flashed BBB, and I have not enabled/disabled anything - it should be in the default state.
I have also verified the connectivity of my wires between the sensor and BBB. The voltage between VIN and GND on the chip is 3.3V, so it is definitely being powered.
Why can't I connect to my I2C sensors using the BBB?
It could be that the source you are using is outdated or not a viable entry point for I2C.
Also, you could use these commands to make sure the I2C2 pins are available:
config-pin p9.21 i2c
config-pin p9.22 i2c
That may work. If it does not, please reply with your entire source.
Seth
P.S. Also, if you have time, you may want to pick up an I2C library rather than writing your own. There is smbus2, which you can install with pip, and other I2C libraries out there as well.
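A minimal smbus2 sketch for checking from Python whether the device ACKs, once the pins are muxed for I2C (the bus number and register address are assumptions; use whatever i2cdetect -l reports and the sensor's register map):
from smbus2 import SMBus

BUS = 2       # assumed Linux bus number for the I2C2 pins; confirm with i2cdetect -l
ADDR = 0x40   # 7-bit address the sensor should answer on

with SMBus(BUS) as bus:
    try:
        value = bus.read_byte_data(ADDR, 0x00)   # read one byte from register 0x00
        print('device ACKed, register 0x00 =', hex(value))
    except OSError as err:
        print('no ACK from 0x40 on bus', BUS, '-', err)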
Here are a few things you should check (in no particular order).
List all I2C buses with i2cdetect -l and try them all. Depending on the platform, the I2C bus number in Linux may differ from the peripheral number used in the datasheet and pinout (e.g. "I2C2" might be bus i2c-1 or i2c-3 in Linux).
Use an oscilloscope or logic analyzer to see whether the SCL and SDA lines are being driven. If they aren't, check the bus number as above. If they are, check whether the device gives an ACK; if it doesn't, nothing else will ever work: double-check the chip's slave address. There are cheap logic analyzers that you can buy and use with PulseView.
Simply load the Linux driver for your chip (see the kernel docs on how to do it from userspace for a quick test), then check whether the device appears, or use dmesg to look for kernel error messages while probing.