I am trying to read NMEA messages sent by a FLARM device via the UART port on a Raspberry Pi Pico using MicroPython.
Before connecting to the FLARM device, I tried reading messages from my Raspberry Pi 3B+; that worked well.
Here is the code I use:
import time
from machine import Pin, UART

pin = Pin(25, Pin.OUT)
uart = UART(1, baudrate=19200)

while True:
    if uart.any():
        buf = uart.readline()
        print(buf)
        print(buf.decode('ascii'))  # print the received message
    else:
        time.sleep(1)
When connecting to the FLARM device, I get an error saying "UnicodeError" right after printing the variable buf.
I get values of buf such as:
b'\xca\xab\xfc\xe7~\xa3\xc3\x90\x00'
The baud rate is correctly set to 19200 bps. FLARM messages are NMEA sentences as described here: https://flarm.com/wp-content/uploads/man/FTD-012-Data-Port-Interface-Control-Document-ICD.pdf
Any idea why I can't display the messages?
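In the meantime, to keep the loop from crashing on those bytes, I could guard the decode. Here is a minimal sketch (reusing the uart object from the code above) that only decodes lines starting with b'$', i.e. lines that at least look like NMEA sentences:

# Sketch: decode only lines that look like NMEA sentences; print anything else raw
while True:
    if uart.any():
        buf = uart.readline()
        if buf and buf.startswith(b'$'):
            print(buf.decode('ascii'))
        else:
            print('non-NMEA bytes:', buf)
    else:
        time.sleep(1)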
For some reason, the TX and GND pins were switched, and that's why I was getting wrong bytes.
I don't understand why, but when I plugged the FLARM GND pin into my Pico's RX pin and the FLARM TX pin into the Pico's GND, I got clean NMEA messages... Weird!
I am having trouble using an I2C sensor with the BeagleBone Black (BBB). The BBB is running a newly flashed Ubuntu 18.04 image built specifically for the BBB.
I wired the sensor (VIN, GND, SCL, SDA) to the corresponding I2C2 pins (4, 2, 19, 20) on the BBB using the below pinout.
The sensor is supposed to be using address 0x40, but scanning I2C2 (using i2cdetect -r 2) does not show the sensor.
I have tested this with two separate sensors as I thought at first I may have fried the original sensor somehow, but the results are the same. In fact, running the I2C2 scan command yields the exact same results when nothing is connected at all.
I have read in many places that I2C2 may not be enabled by default, but I assume it is enabled in my case as I can scan I2C2 without getting an error. Is this assumption incorrect? Again, this is a freshly flashed BBB, and I have not enabled/disabled anything - it should be in the default state.
I have also verified the connectivity of my wires between the sensor and BBB. The voltage between VIN and GND on the chip is 3.3V, so it is definitely being powered.
Why can't I connect to my I2C sensors using the BBB?
It could be that the source you are using is outdated or not a viable entry for I2C.
Also, you could use these commands to make sure the I2C2 pins are available:
config-pin p9.21 i2c
config-pin p9.22 i2c
This may also work. If it does not, please reply with your entire source.
Seth
P.S. If you have time, you may also want to pick up an existing I2C library in case your software falls short of setting up I2C on its own. There is smbus2, which you can install with pip, and other I2C libraries out there as well.
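For instance, a minimal smbus2 read might look like this (a sketch only; 0x00 is a placeholder register address, so substitute one from your sensor's datasheet):

from smbus2 import SMBus

# Sketch: read one byte from the device expected at address 0x40 on I2C bus 2.
# 0x00 is a placeholder register; use a real register from the sensor's datasheet.
with SMBus(2) as bus:
    value = bus.read_byte_data(0x40, 0x00)
    print(hex(value))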
Here are a few things you should check (in random order).
List all I2C buses with i2cdetect -l and try them all. Depending on the platform, the I2C bus number in Linux may differ from the peripheral number used in the datasheet and pinout; e.g. "I2C2" might be bus i2c-1 or i2c-3 in Linux.
Use an oscilloscope or logic analyzer to see if the SCL and SDA lines are being driven. If they aren't, check the bus number as above. If they are, check whether the device gives an ACK; if it doesn't, nothing else will ever work, so double-check the chip's slave address. There are cheap logic analyzers that you can buy and use with PulseView.
Simply load the Linux driver for your chip (see the kernel docs on how to do it from userspace for a quick test). Then check whether the device appears, or use dmesg to look for kernel error messages while it probes.
I'm experimenting with the STM32 Nucleo board STM32F446.
uint8_t data[x];
HAL_UART_Receive_DMA(&huart2, &data, x);
This piece of code works: when I send bytes to PA3, the DMA writes the x bytes I sent into data.
However, when &data is replaced with 0x40020014 (GPIOA->ODR) or with the bit-band aliased address 0x42400294 for the PA5 LED, the bit that toggles the LED isn't set when I send a byte to PA3, and HAL_UART_RxCpltCallback may or may not be called depending on x. Why?
Link to code: https://github.com/pterodragon/stm32_try/tree/question
I'm trying to access the bootloader for a Nucleo L476RG "slave" board.
The "master" board is a Nucleo L496ZG board. In my program, I have a DigitalOut defined on the master board called extBoot0, extReset. These go off to the boot0 and NRST pins on the slave board. Additionally, I have a Serial instance called usart on the master, which is attached to UART2 on the slave board. Also, it appears that there BOOT1 is preset to run the bootloader, i.e. it's asserted low and cannot be changed to run whatever's in SRAM.
Currently, in resetToBootloader, I set BOOT0 high and drop NRST low for 0.1 seconds, and bring it back up high. I've observed that running this function indeed resets the device and prevents the program from running.
In initBootloader, I format the serial per AN2606: 8 data bits, even parity, 1 stop bit. I then send 0x7F over that serial bus to the slave board. I'm not getting any response; using a logic analyzer, I've confirmed that the slave is receiving it on the right pin and that there is no change on the slave's TX line. What else needs to be done to start the bootloader?
Here's my relevant code:
DigitalOut extBoot0(D7);
DigitalOut extBoot1(D6);
DigitalOut extReset(D5);
Serial usart(/* tx, rx */ D1, D0);
uint8_t rxBuffer[1];
event_callback_t serialEventCb;
void serialCb(int events) {
    printf("something happened!\n");
}

void initBootloader() {
    wait(5); // just in case?

    // Once initialized the USART1 configuration is: 8-bits, even parity and 1 Stop bit
    serialEventCb.attach(serialCb);
    usart.format(8, SerialBase::Even, 1);

    uint8_t buffer[1024];

    // write 0x7F
    buffer[0] = 0x7F;
    usart.write(buffer, 1, 0, 0);
    printf("sending cmd\n");

    // should ack 0x79
    usart.read(rxBuffer, 1, serialEventCb, SERIAL_EVENT_RX_ALL, 0x79);
}
If it helps at all, here's a picture of my board setup.
I believe I solved this by using USART1 instead of USART2. The documentation states that both USART1 and USART2 can be used, but I only receive a 0x79 from USART1.
Additionally, I had to switch from Serial to UARTSerial. The slave first sends an incorrect packet, 0xC0 with an incorrect parity bit. I'm not really sure why it does that, but it causes the regular Serial instance to mishandle the following byte.
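For anyone wanting to sanity-check the handshake without the mbed master, the same exchange can be driven from a PC over a USB-serial adapter. Here is a rough pyserial sketch (the port name, baud rate, and timeout are assumptions):

import serial

# Sketch: AN2606-style bootloader sync over UART with 8 data bits, even parity, 1 stop bit.
# '/dev/ttyUSB0' is an assumed port name; the bootloader auto-detects the baud rate from 0x7F.
with serial.Serial('/dev/ttyUSB0', 115200, bytesize=8,
                   parity=serial.PARITY_EVEN, stopbits=1, timeout=1) as port:
    port.write(bytes([0x7F]))   # sync byte
    ack = port.read(1)          # expect 0x79 (ACK) from the bootloader
    print('ACK' if ack == b'\x79' else 'no ACK: %r' % ack)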
I have a sensor with the interrupt output connected to a input pin on my RaspberryPi. My goal is to trigger an event from the sensor interrupt. The data sheet for my sensor says that once an interrupt is triggered on the sensor, the interrupt status register will have the appropriate bit set to 1 and stay that way until it is cleared; while the status register has a status bit of 1, the interrupt pad on the sensor will be pulled down.
My problem is that I can see the status register correctly reflect an interrupt when I physically trigger the sensor. But when I read the pin from my Pi, I never see any change reflected. Here's the gist of my code:
import Sensor
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.IN, pull_up_down = GPIO.PUD_UP)
s = Sensor.start()
while True:
    print 'sensor int reg: ', s.readIntReg() # I do not clear interrupt
    print 'pin value: ', GPIO.input(11)
The first print will change according to my interaction with the sensor as expected. The second print shows the pin holds 1 or 0 depending on whether it is set to pull up or down, respectively.
It seems like the problem lies in that whenever the interrupt fires, the sensor is pulling the pin down and the Pi is pulling it up... How should I handle this?
The sensor is the VCNL4010 [https://www.adafruit.com/products/466]
I suppose you have the gpio driver installed and active on the Pi?
Then you'll probably never see the interrupt trigger at the Python level, since the kernel driver will already have serviced it (and reset the flag) in the background.
I added a 10k external pull-up resistor with 3.3V and that did the trick... not sure why the internal pull-up on the Pi didn't do the same, perhaps I configured it wrong.
UPDATE: That turned out not to be the issue at all. I was neglecting to explicitly set the sensor to free-run mode. Part of my code had the unintended side effect of setting that mode, so while tweaking things for testing it sometimes worked. The pull-up on the Pi works fine.
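With that fixed, the polling loop can be replaced by an edge-triggered callback. Here is a sketch using RPi.GPIO's event detection (falling edge, since the sensor pulls the line low), reusing the pin setup and Sensor object from the code above:

# Sketch: call a handler on the falling edge instead of polling.
# Assumes GPIO.setup(11, ...) and s = Sensor.start() from the code above.
def on_interrupt(channel):
    print 'interrupt on pin %d' % channel
    s.readIntReg()  # read the status register here; clear it to re-arm the interrupt

GPIO.add_event_detect(11, GPIO.FALLING, callback=on_interrupt, bouncetime=10)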
I have two Series 1 XBees. They are configured as end devices working in API mode and talking to each other. The first XBee is attached to a Raspberry Pi, while the other is on my PC, where I watch the Terminal tab of the XCTU program. The baud rate I use is 125000.
From the Raspberry Pi I try to send a JPG image, which is 30 kB. I send data frames 100 bytes long (the maximum, according to the XBee documentation). Inside a loop I create and send the packets; I also have a cout statement that prints the loop number. Everything is fine and all bytes are sent. When I comment out the cout statement, not all bytes are sent.
From what I understand, the cout statement acts as a delay between packets, but I still cannot understand why this happens, since I am supposedly only using half of the available speed...
I hope I was clear and I look forward to a reply.
UPDATE
Just to summarize: I changed the baud rate to 250000, where the behavior is the same as at 125000. I also implemented hardware flow control by checking the CTS signal. When the XBees are in transparent mode I need a delay of around 150 µs between characters; the same goes for API mode. The difference at 125000 baud in API mode was that the required delay only had to sit between data packets, whereas at 250000 the delay is needed between each byte I send. If I do the above, everything works.
The next thing I did was to plug both XBees into my PC in transparent mode. I went to the Terminal tab of the XCTU software, chose Assemble Packet, and sent around 3000 bytes to the other XBee. The result was the same: the second XBee received about 1500 bytes, and then each time I sent one more byte from the first to the second, the "lost bytes" arrived in chunks of about 1000. :/
So does anyone know what I am doing wrong?
You should connect the /CTS pin from the XBee module into the Raspberry Pi, and have your routine stop sending data when the XBee de-asserts it.
At higher baud rates, it's possible to stream data into the XBee module faster than it can send to the remote module. The local XBee module uses the /CTS pin to notify the host when its buffers are almost full and the host should stop sending. People refer to this as hardware flow control.
It may be necessary to modify the serial driver on the Raspberry Pi to make use of that signal -- it should pause the transmit buffer when de-asserted, and automatically resume sending when re-asserted.
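If modifying the driver isn't an option, a rough workaround is to poll a GPIO wired to /CTS from your sending code and hold off while the XBee de-asserts it. A sketch in Python (pin 36 is an assumed choice, and uart stands for whatever serial handle you already use to talk to the XBee):

import time
import RPi.GPIO as GPIO

CTS_PIN = 36  # assumed header pin wired to the XBee's /CTS output (active low)

GPIO.setmode(GPIO.BOARD)
GPIO.setup(CTS_PIN, GPIO.IN)

def send_when_clear(uart, payload):
    # /CTS high means the XBee's buffer is nearly full: wait before sending more.
    while GPIO.input(CTS_PIN):
        time.sleep(0.001)
    uart.write(payload)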