Packets lost with XBee Series 1 - Raspberry Pi

I have two XBee Series 1 modules. I have them as end devices working in API mode and talking to each other. The first XBee is attached to a Raspberry Pi, while the other is on my PC, where I watch the terminal tab of the XCTU program. The baud rate I use is 125000.
From the Raspberry Pi I try to send a jpg image which is 30 Kbytes. I send data frames 100 bytes long (the maximum stated in the XBee documentation). Inside a loop I create and send the packets, and I also have a cout statement that prints the loop number. Everything is fine and all bytes are sent. When I comment out the cout statement, not all bytes are sent.
From what I have understood, the cout statement works as a delay between packets, but I still cannot understand why this is happening, since at 125000 baud I am supposedly driving the link at only half of the 250 kbps RF data rate ...
I hope I was clear and look forward to a reply.
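For reference, building one of those 100-byte frames looks roughly like this (a sketch of a Series 1 TX request with 16-bit addressing, not my actual code; the destination address is a placeholder):

#include <stdint.h>
#include <stddef.h>

/* Sketch: XBee Series 1 TX request (API ID 0x01, 16-bit addressing). */
size_t build_tx_frame(uint8_t *out, const uint8_t *data, size_t len /* <= 100 */)
{
    size_t flen = 5 + len;            /* API ID .. payload, checksum excluded */
    size_t i = 0;
    uint8_t sum = 0;
    out[i++] = 0x7E;                  /* start delimiter */
    out[i++] = (uint8_t)(flen >> 8);  /* length MSB */
    out[i++] = (uint8_t)flen;         /* length LSB */
    out[i++] = 0x01; sum += 0x01;     /* API ID: TX request, 16-bit address */
    out[i++] = 0x01; sum += 0x01;     /* frame ID (non-zero: request TX status) */
    out[i++] = 0x00;                  /* destination address MSB (placeholder) */
    out[i++] = 0x01; sum += 0x01;     /* destination address LSB (placeholder) */
    out[i++] = 0x00;                  /* options */
    for (size_t k = 0; k < len; k++) { out[i++] = data[k]; sum += data[k]; }
    out[i++] = (uint8_t)(0xFF - sum); /* checksum over the frame data */
    return i;                         /* total bytes to write to the serial port */
}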
UPDATE
Just to summarize: I changed the baud rate to 250000, where there is the same behavior as at 125000. I also implemented hardware flow control by checking the CTS signal. When the XBees are in transparent mode I need a delay of around 150 us between sending characters. The same goes for API mode too. The difference at 125000 baud in API mode was that the needed delay was enough between each data packet, but at 250000 the delay is needed between each byte that I send. If I do the above, everything goes well.
The next thing I did was to plug both XBees into my PC in transparent mode. I went to the terminal tab of the XCTU software, where I chose "assemble packet" and sent around 3000 bytes to the other XBee. The result was the same. The second XBee received about 1500 bytes, and then each time I sent one byte from the first to the second, the "lost bytes" were delivered in chunks of about 1000. :/
So, does anyone know what I am doing wrong?

You should connect the /CTS pin from the XBee module into the Raspberry Pi, and have your routine stop sending data when the XBee de-asserts it.
At higher baud rates, it's possible to stream data into the XBee module faster than it can send to the remote module. The local XBee module uses the /CTS pin to notify the host when its buffers are almost full and the host should stop sending. People refer to this as hardware flow control.
It may be necessary to modify the serial driver on the Raspberry Pi to make use of that signal -- it should pause the transmit buffer when de-asserted, and automatically resume sending when re-asserted.
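A minimal sketch of that host-side throttling, assuming the XBee's /CTS is wired to the Pi's CTS input and the port is already opened with termios (the chunk size is illustrative):

#include <stddef.h>
#include <sys/ioctl.h>
#include <termios.h>
#include <unistd.h>

/* Non-zero while the XBee is ready to accept more data. */
static int cts_asserted(int fd)
{
    int status = 0;
    ioctl(fd, TIOCMGET, &status);   /* read the modem-status lines */
    return status & TIOCM_CTS;
}

static void send_with_flow_control(int fd, const unsigned char *buf, size_t len)
{
    const size_t chunk = 64;        /* stay well under the XBee's buffer */
    for (size_t off = 0; off < len; off += chunk) {
        while (!cts_asserted(fd))
            usleep(100);            /* back off until /CTS is re-asserted */
        size_t n = (len - off < chunk) ? (len - off) : chunk;
        write(fd, buf + off, n);
    }
}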

How to read Analog Output Holding Registers on an Advantech ADAM 6717 through Modbus TCP

I've been exploring the ADAM 6717 from Advantech.
This is the Modbus address table for the device:
At first I wanted to modify the value of digital output channel 0 (DO0); as can be seen from the picture above, its address is 0x0017.
I succeeded at this by using a Modbus tool and the following settings:
Sending either "On" or "Off" turns on and off an LED connected to that output. Everything runs smoothly according to my expectations up to this point.
The problem arises when I want to read Analog Input channel 6 or, equivalently, addresses 40043~40044.
Since that address lies in the Analog Output Holding Registers part of the address table, I thought that the following settings would accomplish the job:
However, as can be seen above, the reading shows 0.0 when there is actually 6V applied to that input (from a potentiometer).
It is worth mentioning that I've made sure to enable the AI6 channel as well as set it to voltage mode instead of current. Also, the web utility for the device shows the AI6 reading correctly as I change the potentiometer's resistance.
So the problem doesn't lie in the connection from the potentiometer to AI6 but somewhere else.
Setting aside what I think I know on this topic, I thought of changing the function code from 0x03 to 0x04:
However, the response is exactly the same.
It bugs me that I can read and write values to the output coils but not the Analog output holding registers.
Is there any configuration that I might be missing over here?
Thanks in advance.
Device settings:
IP address: 10.0.0.1
Port on which the Modbus service is running: 5020
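For reference, a minimal libmodbus sketch of the read being attempted (the zero-based protocol address 42 for table address 40043, and the register count of 2, are assumptions; 4xxxx table addresses are one-based while the wire address is zero-based, so an off-by-one in the address field is a classic reason for reading the wrong register):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <modbus.h>

int main(void)
{
    modbus_t *ctx = modbus_new_tcp("10.0.0.1", 5020);
    if (ctx == NULL || modbus_connect(ctx) == -1) {
        fprintf(stderr, "connect: %s\n", modbus_strerror(errno));
        return 1;
    }

    uint16_t regs[2];
    /* Function 0x03 (read holding registers); swap in
       modbus_read_input_registers() to try function 0x04. */
    if (modbus_read_registers(ctx, 42, 2, regs) == -1)
        fprintf(stderr, "read: %s\n", modbus_strerror(errno));
    else
        printf("AI6 raw: 0x%04X 0x%04X\n", regs[0], regs[1]);

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}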

STM32 UART in DMA mode stops receiving after receiving from a host with the wrong baud rate

The scenario: I have an STM32 MCU which uses a UART in DMA mode with the idle interrupt for RS485 data transfer. The baud rate of the UART is set in CubeMX, in this case to 115200. My code works fine when the host uses the correct baud rate, and it is also stable over long runs, no issues or worries.
BUT: when I set the wrong baud rate at the host, e.g. 57600 instead of 115200, the UART stops receiving data. Even if I later set the baud rate at the host to the same baud rate the microcontroller uses, it won't work. The only way to solve this issue so far is to reset the MCU and connect again with the correct baud rate.
To give you some (Pseudo-)Code:
uint8_t UART_Buf[128];                        // DMA receive buffer
HAL_UART_Receive_DMA(&huart2, UART_Buf, 128); // start reception (circular DMA)
__HAL_UART_ENABLE_IT(&huart2, UART_IT_IDLE);  // enable the idle-line interrupt
Or in plain words: there is a UART buffer for DMA (UART_Buf[128]) and the UART is started with HAL_UART_Receive_DMA(...). DMA Rx is set to circular mode in CubeMX, and the idle interrupt is activated using the HAL macro __HAL_UART_ENABLE_IT(...). This code works fine so far.
Works fine means:
when I transmit data from my PC to the micro, the (one) idle interrupt is triggered (correctly) by the MCU. In the ISR I set a flag to start the data parsing afterwards. I receive exactly the number of bytes I have sent, and all is fine.
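The ISR looks roughly like this (a sketch, not my exact code; RxLen and rx_flag are illustrative names, and the byte count is derived from the remaining DMA transfer count):

#include "stm32f0xx_hal.h"   /* assumption: F030 target */

extern UART_HandleTypeDef huart2;

volatile uint16_t RxLen   = 0;
volatile uint8_t  rx_flag = 0;

void USART2_IRQHandler(void)
{
    if (__HAL_UART_GET_FLAG(&huart2, UART_FLAG_IDLE)) {
        __HAL_UART_CLEAR_IDLEFLAG(&huart2);
        /* bytes received = buffer size minus remaining DMA count */
        RxLen   = 128 - __HAL_DMA_GET_COUNTER(huart2.hdmarx);
        rx_flag = 1;                  /* parse the data in the main loop */
    }
    HAL_UART_IRQHandler(&huart2);     /* let HAL service the other flags */
}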
BUT: when I make the wrong setting in my terminal program and, instead of the (correct) baud rate of 115200, the baud rate select menu is set to e.g. 57600, the trouble begins:
The idle interrupt will still trigger after each transmission.
But it triggers 2-4 times in a quick "burst" (depending on the baud rate), and the number of bytes received is 0. I'd expect at least some garbage data, but there are exactly 0 bytes in the buffer, which I can check with the debugger. Obviously nothing is received. When I change the baud rate in my terminal program and restart it, there is still nothing received on the MCU.
I could live with 0 received bytes if the baud rate of the host is incorrect, but it's pretty uncool that one incoming transmission from a host with the wrong baud rate disables the UART until a hardware reset is done.
My attempts to resolve this so far:
Count the "idle interrupt bursts" in combination with 0 received bytes to trigger a "self reset" routine that stops the UART and restarts it, using the MX_USART2_UART_Init() routine. With zero effect. I can see the idle interrupt is still triggered correctly, but the buffer remains empty and no data is transferred into it. The UART remains in a non-receiving state.
The Question
Has anyone out there experienced similar issues, and if so, how did you solve them?
Additional info: this happens on an STM32F030 as well as on an STM32G03x.
When you send to the UART at the wrong baud rate, it will appear to the receiver as framing errors and/or noise errors. It could also appear as random characters being received correctly, but this is less likely, so don't be surprised to have nothing in your buffer.
When you are receiving with DMA, it is normal to turn the error interrupt on, or else poll the error bits. When an error is detected you would then re-initialize everything and restart the DMA. This sounds like what you are trying to do by counting the idle interrupts, but you are just not checking the right bits.
If you don't want to do that, it is not impossible to imagine that you have nothing to do at the driver level and want to try to do the resynchronisation at a higher level (e.g. start reading again and discard everything until a newline character), but you will have to bear in mind at least two things:
First, make sure you clear the DDRE bit in the USART_CR3 register. The name "DMA Disable on Reception Error" speaks for itself.
Second, the UART peripheral is able to self resynchronize, as long as you have an idle gap between bytes. If you switch the transmitter to the correct baud rate but keep blasting out data then the receiver may never correctly identify which bit is a start bit.
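A minimal sketch of the "poll the error bits" approach, using the flag and clear macros from the STM32F0/G0 HAL and the buffer from the question's pseudo-code:

/* Somewhere in the main loop: if ORE/FE/NE is set, clear the flags
   and re-arm the circular DMA reception. */
if (__HAL_UART_GET_FLAG(&huart2, UART_FLAG_ORE) ||
    __HAL_UART_GET_FLAG(&huart2, UART_FLAG_FE)  ||
    __HAL_UART_GET_FLAG(&huart2, UART_FLAG_NE))
{
    __HAL_UART_CLEAR_FLAG(&huart2, UART_CLEAR_OREF | UART_CLEAR_FEF | UART_CLEAR_NEF);
    HAL_UART_DMAStop(&huart2);                    // abort the stalled reception
    HAL_UART_Receive_DMA(&huart2, UART_Buf, 128); // re-arm circular DMA
}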
After investigating this issue a little further, I found a solution.
Abstract:
When a host talks to a UART on the MCU with a different baud rate than the UART is set to, the UART will go into an error state and stop DMA transfers into the RX buffer. You can check whether there is an error with the HAL_UART_GetError(...) function. If there is an error, stop the UART/DMA and restart it.
The Details:
First of all, it was not the DDRE bit in the USART_CR3 register. This was set to 0 by CubeMX. But the hint from Tom V led me in the right direction.
I tried to recover the UART by playing around with the register bits. I read through the UART section of the reference manual multiple times and tried to figure out which bits to set, and in which order, to resolve the error condition manually.
What I found out:
When a transmission with the wrong baud rate is received by the UART, the following changes occur in the UART registers (on an STM32F030):
Control register 1 (USART_CR1) - Bit 8 (PEIE) goes from 1 to 0. PEIE is the Parity Interrupt Enable Bit.
Control register 2 (USART_CR2) - remains unchanged
Control register 3 (USART_CR3) - changes from 0d16449 to 0d16384, which means
Bit 0 (EIE - Error Interrupt enable) goes from 1 to 0
Bit 6 (DMAR - DMA enable receiver) goes from 1 to 0
Bit 14 (DEM - Driver enable mode) remains unchanged at 1
USART_CR3.DEM makes sense: I am using the RS485 functionality of the F030, so the UART handles the driver-enable GPIO by itself.
The transitions from 1 to 0 of USART_CR3.EIE and USART_CR3.DMAR are most probably the reason why no more data is transferred to the DMA buffer.
Besides that, the error flags for ORE and FE are set in the interrupt and status register (USART_ISR). ORE stands for overrun error and FE for framing error. Although these bits can be cleared by writing a 1 to the corresponding bits of the interrupt flag clear register (USART_ICR), the ErrorCode in the hUART struct remains at the initial error value.
At the end of my trial-and-error process I managed to get all registers to the same values they had during valid transmissions, but there were still no bytes received. Whatever I tried had no effect; the UART remained in a non-receiving state. So I decided to use the "brute force" approach and use the HAL functions, which I know work.
Finally the solution is pretty simple:
If an idle interrupt is detected, but the number of received bytes is 0:
=> check the error status of the UART with HAL_UART_GetError(...)
If there is an error, stop the UART with HAL_UART_DMAStop(...) and restart it with HAL_UART_Receive_DMA(...)
The code:
if (RxLen) {
    // normal execution, number of received bytes > 0
    if (UA_RXCallback[i]) (*UA_RXCallback[i])(hUA);        // exec RX callback function
} else {
    if (HAL_UART_GetError(&huart2)) {
        HAL_UART_DMAStop(&huart2);                         // STOP UART
        MX_USART2_UART_Init();                             // INIT UART
        HAL_UART_Receive_DMA(&huart2, UA2_Buf, UA2_BufSz); // START UART DMA
        __HAL_UART_CLEAR_IDLEFLAG(&huart2);                // clear idle IT flag
        __HAL_UART_ENABLE_IT(&huart2, UART_IT_IDLE);       // enable idle interrupt
    }
}
I had a similar issue. I'm using DMA to receive data and then periodically checking how many bytes were received. After a bit error it would not recover. The solution for me was to first subscribe to the ErrorCallback on the UART_HandleTypeDef.
In the error handler I then call UART_Start_Receive_DMA(...) again. This seems to restart the UART and DMA without issue.
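A sketch of that handler, overriding the weak HAL callback (HAL_UART_Receive_DMA is the public equivalent of UART_Start_Receive_DMA; huart2, UA2_Buf and UA2_BufSz are taken from the answer above):

void HAL_UART_ErrorCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART2) {
        HAL_UART_DMAStop(huart);                         // abort the stalled DMA
        __HAL_UART_CLEAR_FLAG(huart, UART_CLEAR_OREF | UART_CLEAR_FEF | UART_CLEAR_NEF);
        HAL_UART_Receive_DMA(huart, UA2_Buf, UA2_BufSz); // re-arm reception
    }
}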

Sending commands to a UART in Python

I am trying to write a pyserial command to the UART port to control a robot arm.
I have some manuals:
manual for arm
manual command example
I use pyserial like this:
import serial
from time import sleep
port = serial.Serial("/dev/ttyUSB0", baudrate=9600, parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE, bytesize=8, timeout=1)
port.write(b"\x055\x55\x0B\x03\x02\x20\x03\x02\xB0\x04\x09\xFC\x03\xaa")
sleep(0.3)
#port.write(b"\x05")
#sleep(0.3)
#port.write(b"\x06")
#sleep(0.03)
#port.write(b"\x08\x01\x00")
print('send')
At first I tried to send it in one line; the buzzer beeps as if the command was accepted, but the arm does not move.
Then I tried to split it up: the header separately, the length on the next line, and the command with parameters on the one after that.
How can I send these commands to the port? Maybe there is something ready-made for this in Python?
The LSC Series Servo Controller Communication Protocol V1.2 manual says:
If the user transmits the correct data to the servo controller, the blue LED 2 on the controller will flash one time, indicating that the correct data has been received. If the user transmits the wrong data, then the blue LED 2 will not have any reaction and will keep bright, and the buzzer will beep twice to remind the user of the data error.
The only thing in that manual about that buzzer is that it beeps twice if there is a data error... Note that b"\x055\x55..." starts with the bytes 0x05 0x35 0x55 (Python parses \x05 and then a literal "5"), not with a 0x55 0x55 header, which would explain why the controller reports a data error.

Can't receive from USB bulk endpoint although Windows enumerates the device and libusb reads the descriptor of an STM32 custom device class

For a fast ADC sampling USB device, I am using the USB 2.0 High Speed capable STM32F733 with the embedded USB-HS PHY. In USBView I can see that the device is enumerated, and the libusb code opens the device and claims the interface, but when I try to receive data with libusb_bulk_transfer, the operation times out (return code -12). Things I have tried: I have confirmed that when I request data with libusb_bulk_transfer, the device is interrupted. Note: I have DMA enabled in my class configuration C file and it is not clear to me how that is triggered. I have verified that the transfer size and packet count registers are being set correctly by the LL library function, and that when I request data from
Any tips on debugging such problems will be much appreciated - this board is my undergrad thesis, due in under two months!
Desktop sequence:
libusb_get_device_list, libusb_get_device_descriptor, libusb_open, libusb_get_string_descriptor_ascii, libusb_free_device_list, libusb_bulk_transfer(devh, fat_EPIN_ADDR, inframe, fat_EPIN_SIZE, &gotBytes, 100), where gotBytes is an int and inframe is a large array.
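Condensed into code, the host-side sequence looks roughly like this (a sketch; the VID/PID pair is a placeholder, and libusb_open_device_with_vid_pid stands in for the list/open calls above):

#include <stdio.h>
#include <libusb-1.0/libusb.h>

int main(void)
{
    libusb_context *ctx = NULL;
    libusb_init(&ctx);

    /* placeholder VID/PID, replace with the device's real IDs */
    libusb_device_handle *devh =
        libusb_open_device_with_vid_pid(ctx, 0x0483, 0x5740);
    if (!devh) { fprintf(stderr, "device not found\n"); return 1; }

    libusb_claim_interface(devh, 0);

    unsigned char inframe[512];                    /* fat_EPIN_SIZE = 0x200 */
    int gotBytes = 0;
    int rc = libusb_bulk_transfer(devh, 0x81, inframe, sizeof inframe,
                                  &gotBytes, 100 /* ms timeout */);
    printf("rc=%d (%s), got %d bytes\n", rc, libusb_error_name(rc), gotBytes);

    libusb_release_interface(devh, 0);
    libusb_close(devh);
    libusb_exit(ctx);
    return 0;
}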
Device firmware:
MX_USB_DEVICE_Init();
uint8_t txBuffer[10*fat_EPIN_SIZE];
while (1)
{
    USBD_LL_Transmit(&hUsbDeviceHS, Custom_fat_EPIN_ADDR, txBuffer, Custom_fat_EPIN_SIZE);
    HAL_Delay(1);
}
Custom_fat_EPIN_SIZE is 0x200 and the endpoint address is 0x81 (EP IN 1)
The installed driver for the device is WinUSB (verified in Device Manager to be winusb.sys), and I am linking libusb-1.0 into my desktop program. You can find my source code at https://gitlab.com/tywonemi-school-stuff/silicon-radar-fun; the firmware is in My SW/v1 and the desktop software is a Qt Creator project in My SW/Viewer, of note is usb.cpp. You can also compare with testing project/HIDTest, code that I tested with an STM32F303 Nucleo dev board, where I was able to read an array through an IN bulk endpoint with the Viewer application. However, the F3 has the USB peripheral, while the F7 has USB_OTG, and I am now attempting USB 2.0 compliant HS, so there may be more protocol-based pitfalls. You can also find the output of the device descriptor etc. from USBView in My SW/USBView_broken.txt.
EDIT 1: I have finally found a concrete error in the STM32's behavior. The DMAADDR is set for EP IN 0x81 and never increments, despite the DMA being enabled. I have gone through literally every occurrence of the word "DMA" in the USB_OTG peripheral.
I thought it might be that my linker script places the array in DTCM or similar, and that the OTG DMA can't access it, but the address of txBuffer is 0x2003EBEC, which is in SRAM2. The AHB matrix in the reference manual clearly shows that the USB OTG HS DMA is the master of a bus that SRAM2 is a slave of. And DTCM is connected too. I will look for application notes on the USB OTG HS DMA - it just seems to be refusing to copy data!
I have fixed my issue by disabling the DMA setting. I have re-read the relevant portions of the reference manual and still don't know how exactly the values propagate into the Tx FIFOs. It is possible that DMA-less operation will be a major bottleneck in my project, so I might return to this later.

NRF24L01+ TX and RX Timing Issue?

I use an STM32F103C8T6 as a transmitter and an Arduino Uno as a receiver.
I cannot receive the values I am interested in. I have changed the delay durations after each send and also the CE pulse width. I sometimes get it working to spec by playing with the delay durations.
For example, I add a 200 ms delay after the TX function is executed and the receiver receives very well; then I disconnect the receiver, and when I connect it again, it receives nothing but zeros.
Very similar situations occur when I play with the CE pulse duration.
I cannot get it to work with either auto-ack or simple RX-TX operation.
I would like to mention that when I swap the roles (the Arduino is the TX device and the STM32 the RX device), everything works perfectly.
I've checked the STM32 with a logic analyzer to see whether the payload is filled properly or not and could not see any problem.
After I fill the payload, I check the FIFO_STATUS register and everything is fine.
After I apply a pulse of a certain duration, I check the STATUS register and can see that the TX_DS bit is set.
I've discovered that applying just a 10 us pulse on CE may not be enough. It can take up to 500 us.
Then I decided to keep the CE pin high until the TX_DS bit is set, but this method also did not work.
void TX_Mode(uint8_t data2send)
{
    Flush_TX();
    CleanInterrupts();
    SetPRIM(PRIM_TX);                                            // set as transmitter

    csn_low();                                                   // CSN = 0
    HAL_SPI_Transmit(&hspi2, &COMD_W_TX_PAYLOAD, 1, 150);        // send command to write to payload
    HAL_SPI_TransmitReceive(&hspi2, &data2send, &dummy, 1, 150); // fill the payload
    while (HAL_SPI_GetState(&hspi2) != HAL_SPI_STATE_READY);
    csn_high();

    ChipEnable_high();
    while (!TXDS_Bit_Is_Set());                                  // wait until payload is sent
    ChipEnable_low();
}
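One thing the loop above does not cover: with auto-ack enabled, a failed transfer sets MAX_RT instead of TX_DS, so the while loop spins forever. A hedged sketch of a variant that bounds the CE pulse and watches both flags (MAXRT_Bit_Is_Set(), ClearMaxRT() and delay_us() are hypothetical helpers in the style of the code above):

void TX_Pulse_And_Wait(void)
{
    ChipEnable_high();
    delay_us(15);                   // >= 10 us minimum CE high time (datasheet)
    ChipEnable_low();               // PLL settles ~130 us, then TX starts

    for (;;) {
        if (TXDS_Bit_Is_Set())      // payload sent (and ACKed, if auto-ack is on)
            break;
        if (MAXRT_Bit_Is_Set()) {   // retransmission limit hit, no ACK received
            ClearMaxRT();           // must be cleared, otherwise TX stalls
            Flush_TX();
            break;
        }
    }
}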