LoRa SX1278 can't set LNA gain - STM32

I'm trying to configure my SX1278 Ra-2 LoRa module via an STM32 Nucleo board and ran into a problem.
While initializing the LNA register (0x0C), I write 0x23 -> 001 in bits 7:5 (LnaGain = G1, max gain) and 11 in bits 1:0 (LnaBoostHf = boost on), which is supposed to give me max gain and boost. But after reading that register back, I receive 0x03.
Is this normal?
While the SX1278 is in sleep mode it returns 0x03, with the 3 MSBs not shown. In standby mode, however, it reads back 0x23 as it is supposed to.

Have you set AgcAutoOn to 0? Otherwise it will automatically set the LnaGain bits.
Source:
page 60:
When AgcAutoOn=0, the LNA gain is manually selected by choosing LnaGain bits in RegLna.
page 95:
Note:
Reading this address always returns the current LNA gain (which may be different from what had been previously selected if AGC is enabled).
Page 96: set bit 3 to 0 in 0x0D to disable AgcAutoOn.
Page 95: for the boost on / max gain, you need to set bits 0-1 (LnaBoostHf) and 5-7 (LnaGain). Because of your writing style I suspect you are only writing to the lower ones.
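To make that concrete, here is a minimal write/read sketch, assuming hypothetical sx1278_write_reg()/sx1278_read_reg() helpers that wrap the SPI transfer (per the datasheet's SPI protocol, a write sets the MSB of the address byte); register addresses are from the SX1276/7/8 datasheet:
#define REG_OP_MODE 0x01                 // RegOpMode
#define REG_LNA     0x0C                 // RegLna

void    sx1278_write_reg(uint8_t addr, uint8_t val);  // hypothetical SPI helpers
uint8_t sx1278_read_reg(uint8_t addr);

sx1278_write_reg(REG_OP_MODE, 0x81);     // LoRa mode, standby - not sleep
                                         // (LongRangeMode itself may only be changed while in sleep)
// ...clear AgcAutoOn here, as described above...
sx1278_write_reg(REG_LNA, 0x23);         // bits 7:5 = 001 (G1, max gain), bits 1:0 = 11 (boost on)
uint8_t lna = sx1278_read_reg(REG_LNA);  // expect 0x23 in standby; in sleep the gain bits read 0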



STM32 UART in DMA mode stops receiving after receiving from a host with wrong baud rate

The scenario: I have an STM32 MCU which uses a UART in DMA mode with the idle interrupt for RS485 data transfer. The baud rate of the UART is set in CubeMX, in this case to 115200. My code works fine when the host uses the correct baud rate; it is also stable over long runs, with no issues or worries.
BUT: when I set the wrong baud rate at the host, e.g. 57600 instead of 115200, the UART stops receiving data. Even if I later set the baud rate at the host to the same baud rate the microcontroller uses, it won't work. The only way to solve this issue so far is to reset the MCU and connect again with the correct baud rate.
To give you some (pseudo-)code:
uint8_t UART_Buf[128];
HAL_UART_Receive_DMA(&huart2, UART_Buf, 128);  // start DMA reception into the buffer
__HAL_UART_ENABLE_IT(&huart2, UART_IT_IDLE);   // enable the idle-line interrupt
Or in plain words: there is a UART buffer for DMA (UART_Buf[128]), and the UART is started with HAL_UART_Receive_DMA(...). DMA Rx is set to circular mode in CubeMX, and the idle interrupt is activated using the HAL macro __HAL_UART_ENABLE_IT(...). This code works fine so far.
Works fine means:
when I transmit data from my PC to the micro, the (single) idle interrupt is triggered correctly by the MCU. In the ISR I set a flag to start the data parsing afterwards. I receive exactly the number of bytes I have sent, and all is fine.
BUT: when I make the wrong setting in my terminal program and, instead of the correct baud rate of 115200, the baud rate menu is set to e.g. 57600, the trouble begins:
The idle interrupt will still trigger after each transmission.
But it triggers 2-4 times in a quick "burst" (depending on the baud rate), and the number of bytes received is 0. I'd expect at least some garbage data, but there are exactly 0 bytes in the buffer, which I can check with the debugger. Obviously nothing is received. When I change the baud rate in my terminal program and restart it, there is still nothing received on the MCU.
I could live with 0 received bytes while the host's baud rate is incorrect, but it's pretty uncool that one incoming transmission from a host with the wrong baud rate disables the UART until a hardware reset is done.
My attempt to resolve this so far:
count the "idle interrupt bursts" in combination with 0 received bytes to trigger a "self reset" routine that stops the UART and restarts it, using the MX_USART2_UART_Init() routine. It had zero effect: I can see the idle interrupt is still triggered correctly, but the buffer remains empty and no data is transferred into it. The UART remains in a non-receiving state.
The Question
Has anyone out there experienced similar issues, and if yes: how did you solve that?
Additional info: this happens on an STM32F030 as well as on an STM32G03x.
When you send to the UART at the wrong baud rate, it will appear to the receiver as framing errors and/or noise errors. It could also appear as random characters being received correctly, but this is less likely, so don't be surprised to have nothing in your buffer.
When you are receiving with DMA, it is normal to turn on the error interrupt, or else poll the error bits. When an error is detected, you would then re-initialize everything and restart the DMA. This sounds like what you are trying to do by counting the idle interrupts, but you are just not checking the right bits.
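As a minimal sketch of the polling variant (__HAL_UART_GET_FLAG and the clear macros below are the standard HAL names on the F0/G0 parts; how you re-initialize is up to you):
if (__HAL_UART_GET_FLAG(&huart2, UART_FLAG_ORE) ||  // overrun error?
    __HAL_UART_GET_FLAG(&huart2, UART_FLAG_FE))     // framing error?
{
    __HAL_UART_CLEAR_OREFLAG(&huart2);              // clear the flags via the ICR
    __HAL_UART_CLEAR_FEFLAG(&huart2);
    // ...re-initialize the UART and restart the DMA here...
}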
If you don't want to do that, it is conceivable that there is nothing to do at the driver level and you want to attempt the resynchronisation at a higher level (e.g. start reading again and discard everything until a newline character), but then you will have to bear in mind at least two things:
First, make sure you clear the DDRE bit in the USART_CR3 register. The name "DMA Disable on Reception Error" speaks for itself.
Second, the UART peripheral is able to self resynchronize, as long as you have an idle gap between bytes. If you switch the transmitter to the correct baud rate but keep blasting out data then the receiver may never correctly identify which bit is a start bit.
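For the first point, clearing DDRE is a one-liner with the CMSIS register definitions (USART_CR3_DDRE is defined for these parts; the reference manual notes this bit may only be written while the UART is disabled):
huart2.Instance->CR3 &= ~USART_CR3_DDRE;  // don't disable DMA requests on reception errors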
After investigating this issue a little further, I found a solution.
Abstract:
When a host connects to the MCU's UART with a different baud rate than the UART is set to, the UART goes into an error state and stops the DMA transfer to the RX buffer. You can check whether there is an error with the HAL_UART_GetError(...) function. If there is an error, stop the UART/DMA and restart it.
The Details:
First of all, it was not the DDRE bit in the USART_CR3 register; that was already set to 0 by CubeMX. But the hint from Tom V led me in the right direction.
I tried to recover the UART by playing around with the register bits. I read through the UART section of the reference manual multiple times and tried to figure out which bits to set, and in which order, to resolve the error condition manually.
What I found out:
When a transmission with the wrong baud rate is received by the UART the following changes in the UART Registers occur (on an STM32F030):
Control register 1 (USART_CR1) - bit 8 (PEIE) goes from 1 to 0. PEIE is the parity error interrupt enable bit.
Control register 2 (USART_CR2) - remains unchanged
Control register 3 (USART_CR3) - changes from 0x4041 (16449) to 0x4000 (16384), which means:
Bit 0 (EIE - Error Interrupt enable) goes from 1 to 0
Bit 6 (DMAR - DMA enable receiver) goes from 1 to 0
Bit 14 (DEM - Driver enable mode) remains unchanged at 1
USART_CR3.DEM makes sense: I am using the RS485 functionality of the F030, so the UART handles the driver-enable GPIO by itself.
The transitions from 1 to 0 of USART_CR3.EIE and USART_CR3.DMAR are most probably the reason why no more data is transferred to the DMA buffer.
Besides that, the error flags for ORE and FE are set in the interrupt and status register (USART_ISR). ORE stands for overrun error and FE for framing error. Although these bits can be cleared by writing a 1 to the corresponding bits of the interrupt flag clear register (USART_ICR), the ErrorCode in the hUART struct remains at the initial error value.
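For reference, that ICR clear looks like this with the HAL macros (UART_CLEAR_OREF and UART_CLEAR_FEF are the F0 flag names):
__HAL_UART_CLEAR_FLAG(&huart2, UART_CLEAR_OREF | UART_CLEAR_FEF);  // clears ISR.ORE/FE via the ICR
// ...but huart2.ErrorCode still holds the old error value until the UART is re-initialized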
At the end of my trial & error process I managed to get all registers to the same values they had during valid transmissions, but there were still no bytes received. Whatever I tried, it had no effect; the UART remained in a non-receiving state. So I decided to use the "brute force" approach and use the HAL functions, which I know work.
Finally the solution is pretty simple:
if an Idle Interrupt is detected, but the number of received bytes is 0
=> check the Error-Status of the UART with HAL_UART_GetError(...)
If there is an error, stop the UART with HAL_UART_DMAStop(...) and restart it with HAL_UART_Receive_DMA(...)
The code:
if(RxLen) {
    // normal execution, number of received bytes > 0
    if(UA_RXCallback[i]) (*UA_RXCallback[i])(hUA);         // exec RX callback function
} else {
    if(HAL_UART_GetError(&huart2)) {
        HAL_UART_DMAStop(&huart2);                         // stop UART DMA
        MX_USART2_UART_Init();                             // re-init UART
        HAL_UART_Receive_DMA(&huart2, UA2_Buf, UA2_BufSz); // restart UART DMA
        __HAL_UART_CLEAR_IDLEFLAG(&huart2);                // clear idle IT flag
        __HAL_UART_ENABLE_IT(&huart2, UART_IT_IDLE);       // re-enable idle interrupt
    }
}
I had a similar issue. I'm using DMA to receive data and then periodically checking how many bytes have been received. After a bit error, it would not recover. The solution for me was to first subscribe to the ErrorCallback on the UART_HandleTypeDef.
In the error handler, I then call UART_Start_Receive_DMA(...) again. This seems to restart the UART and DMA without issue.
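A minimal sketch of that approach, using the default weak-callback scheme rather than registered callbacks (UA2_Buf/UA2_BufSz reuse the buffer names from the snippet above):
void HAL_UART_ErrorCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart2) {
        HAL_UART_DMAStop(huart);                          // abort the stalled transfer
        HAL_UART_Receive_DMA(huart, UA2_Buf, UA2_BufSz);  // re-arm circular reception
    }
}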

How can I change the start address on flash?

I'm using STM32F746ZG and FreeRTOS.
The start address of the flash is 0x08000000, but I want to change it to 0x08040000. I've searched for this issue on Google but didn't find a solution.
I changed the linker script as follows:
MEMORY
{
RAM (xrw) : ORIGIN = 0x20000000, LENGTH = 320K
/* FLASH (rx) : ORIGIN = 0x8000000, LENGTH = 1024K */
FLASH (rx) : ORIGIN = 0x8040000, LENGTH = 768K
}
If I only change that and run the debugger, there is a problem.
If I change VECT_TAB_OFFSET from 0x00 to 0x40000, it works fine:
/* #define VECT_TAB_SRAM */
#define VECT_TAB_OFFSET 0x40000 /* 0x00 */
SCB->VTOR = FLASH_BASE | VECT_TAB_OFFSET;
But if I don't use the debugger, nothing works at all.
That means it only works when using the ST-Link.
Please let me know if you know the solution.
Thank you in advance for your reply.
The boot address can be set in the option bytes.
You can set any address in the flash in 16K increments. There are two 16-bit registers in the option bytes area; one is used when the boot pin is low at reset, the other when the pin is high. Write the desired address shifted right by 14 bits, i.e. divided by 16384.
To boot from 0x08040000, write 0x2010 into the register as described in the Option bytes programming chapter of the reference manual.
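As a sketch with the STM32F7 HAL's option-byte API (assuming the F7-specific BootAddr0 field and OPTIONBYTE_BOOTADDR_0 constant; check the names against your HAL version):
FLASH_OBProgramInitTypeDef ob = {0};

HAL_FLASH_Unlock();
HAL_FLASH_OB_Unlock();
ob.OptionType = OPTIONBYTE_BOOTADDR_0;  // boot address used when the BOOT pin is low
ob.BootAddr0  = 0x08040000 >> 14;       // = 0x2010, boot from 0x08040000
HAL_FLASHEx_OBProgram(&ob);
HAL_FLASH_OB_Launch();                  // reload option bytes (triggers a reset)
HAL_FLASH_OB_Lock();
HAL_FLASH_Lock();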
You could also write a bootloader. The bootloader sits at address 0x08000000 and loads your application firmware, i.e. jumps to it.
This is the other way to do it.
You need to place 8 bytes at the original beginning of the flash. The STM32 always boots from address 0x00000000, which is aliased to one of the memories (depending on the boot pins and option bytes).
The first word contains the initial stack pointer, the second one the address of your reset handler. Without them you never get to your code, as the MCU always boots from the same address.
You will need to modify your linker script and the startup files where the vectors are defined.
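On the bootloader side, the jump itself can be sketched as follows, assuming the application is linked at APP_ADDR with its vector table at that address (APP_ADDR and jump_to_app are illustrative names; __set_MSP is a CMSIS function):
#define APP_ADDR 0x08040000UL

typedef void (*pFunction)(void);

void jump_to_app(void)
{
    uint32_t app_sp    = *(volatile uint32_t *)APP_ADDR;        // word 0: initial stack pointer
    uint32_t app_entry = *(volatile uint32_t *)(APP_ADDR + 4);  // word 1: reset handler address

    // in real code, disable interrupts and de-init peripherals first
    SCB->VTOR = APP_ADDR;        // point the vector table at the application
    __set_MSP(app_sp);           // load the application's stack pointer
    ((pFunction)app_entry)();    // jump to the application's reset handler
}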

What is the meaning of the CAN bus function mode initialization settings for STM32?

I want to understand the meaning of the following function mode definitions. There is an explanation in the library, but I don't understand it because the explanations are very short and not sufficient. I searched the net but couldn't find any information about them.
CAN_InitStructure.CAN_TTCM = DISABLE;
CAN_InitStructure.CAN_ABOM = DISABLE;
CAN_InitStructure.CAN_AWUM = DISABLE;
CAN_InitStructure.CAN_NART = ENABLE;
CAN_InitStructure.CAN_RFLM = DISABLE;
CAN_InitStructure.CAN_TXFP = ENABLE;
These are the names of bits located in the CAN master control register (CAN_MCR), so the proper source for their meaning is the reference manual. My answer below is partly copied from the reference manual, but I will try to explain these bits in detail.
TTCM (Time triggered communication mode): This bit activates the Time Triggered Communication (TTCAN) mode, which is an extension to the CAN standard. I don't know much about TTCAN, but as I understand, it assigns time windows to messages to satisfy some real-time requirements. So, normally this bit should remain 0.
ABOM (Automatic bus-off management): If the transmit error counter (TEC) becomes greater than 255, the CAN hardware switches to bus-off state. To recover, it must wait for the recovery sequence, 128 occurrences of 11 consecutive recessive bits. Only after that, the CAN hardware may return to the normal operating state. This bit controls the returning behavior. If it's 1, returning to normal state is automatic. Otherwise, software should make the request, provided that the recovery sequence has been observed.
AWUM (Automatic wakeup mode): The CAN module can be in one of 3 modes: Initialization mode, normal mode or sleep (low power) mode. Sleep mode is requested by the software. However, you have 2 options to exit sleep mode. If this bit is 0, then you have to exit sleep mode manually. You may enable CAN wakeup interrupt to inform you about bus activity, then exit the sleep mode in ISR. But if this bit is 1, the hardware returns to normal mode automatically when it detects bus activity.
NART (No automatic retransmission): Normally, CAN hardware retries to transmit a message if its previous attempts fail, because of arbitration lost etc. But if you make this bit 1, the transmitter does not retry. This is required when you use Time Triggered Communication (TTCAN). Otherwise, you should keep this bit 0.
RFLM (Receive FIFO locked mode): Your receive FIFOs are 3 mailboxes deep, meaning each can store at most 3 messages before it is overrun. This bit controls what happens in case of FIFO overrun. The default behavior is to keep the oldest 2 messages and the newest one; for example, if you receive 5 messages, the FIFO keeps messages 1, 2 & 5. However, if you set this bit to 1, the FIFO keeps messages 1, 2 & 3 and discards the new arrivals.
TXFP (Transmit FIFO priority): You have 3 transmit mailboxes. When you fill more than one, the hardware must decide which one to transmit first. Normally, one can assume that a message with a lower ID number is more important and should be transmitted first. But if you want to transfer them in a first-come-first-served fashion for some reason, you need to set this bit to 1. Of course, this is just a local priority; on the physical bus, messages with lower IDs always win arbitration.
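For anyone using the newer HAL driver instead of the SPL shown in the question, the same settings map onto CAN_InitTypeDef roughly as follows (field names are from the reworked HAL bxCAN driver; note that NART is inverted into AutoRetransmission):
hcan.Init.TimeTriggeredMode    = DISABLE;  // CAN_TTCM
hcan.Init.AutoBusOff           = DISABLE;  // CAN_ABOM
hcan.Init.AutoWakeUp           = DISABLE;  // CAN_AWUM
hcan.Init.AutoRetransmission   = DISABLE;  // CAN_NART = ENABLE (i.e. no retransmission)
hcan.Init.ReceiveFifoLocked    = DISABLE;  // CAN_RFLM
hcan.Init.TransmitFifoPriority = ENABLE;   // CAN_TXFP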

Brown-Out Disable PicBasic Pro

Hi there. I'm currently working with a PIC16F877 and I need to disable brown-out detect (BOD) on the PIC, as it interferes with the rest of the program when I'm switching power between two modules.
Is there a simple way of disabling BOD using PicBasic Pro?
The BOD can be set by adjusting the configuration bits (configuration word).
This can only be done at programming time, because the address (2007h) is outside the user program memory space.
The default value of BODEN, the brown-out reset enable bit (bit 6 of the configuration word), is '1', so you will have to change BODEN to '0' in the compiler settings and reprogram the microcontroller.

How is a new interrupt from the same source held off until its handler is done?

Note that a single interrupt source (timer, keyboard, etc.)
will not signal a new interrupt to the processor until
the processor has indicated that handling of the previous interrupt from that source is "done", even if the system-wide interrupt-enable flag is on.
Who tells the PIC the current interrupt is over, and what does "system-wide interrupt-enable flag" mean?
That has been covered in my comment in the other question. :)
OK... Some more details...
If we're talking about the PIC (the 8259 interrupt controller chip) and its usual operation (as in the BIOS and DOS), there are 16 IRQ lines. They are mapped (in the PIC) to interrupt vectors 8 through 0Fh (IRQ0 through IRQ7) and 70h through 77h (IRQ8 through IRQ15).
By reprogramming the PIC you can change this assignment (see the 8259 documentation). Changing this assignment is often more than just desirable in protected mode, because various important exceptions are hardwired to interrupt vectors 0 through about 1Fh (e.g. the general protection exception (AKA #GP) is at vector 0Dh, which is IRQ5 in this default assignment).
IRQ0 is the periodic timer (AKA PIT)
IRQ1 is the keyboard
IRQ2 is used to chain the 2nd PIC (each PIC handles at most 8 IRQs, so you have 2 PICs for 16 IRQs; IRQ8 through IRQ15 are, in fact, delivered through this IRQ2)
IRQ3 and IRQ4 are used for COM1 and COM2 serial ports
IRQ6 is used for the FDD
IRQ7 is used for the parallel port (where we used to connect our printers; nowadays printers usually connect via USB)
IRQ8 is used for another timer, the real-time clock (AKA RTC)
IRQ12 is normally used for the PS/2 mouse
IRQ14 and IRQ15 are used for HDDs/CDROMs
Other IRQs aren't very fixed.
The PIC itself is connected to the CPU at I/O ports 20h and 21h (PIC1) and 0A0h and 0A1h (PIC2).
The CPU signals completion of IRQ handling by sending the EOI command to the corresponding PIC, from where this IRQ has come.
Thus, for IRQ0 through IRQ7 the ISR typically ends with this code:
...
mov al, 20h
out 20h, al ; send EOI to PIC1
; restore al using pop or mov
iret
For IRQ8 through IRQ15 the same thing looks like this:
mov al, 20h
out 0a0h, al ; send EOI to PIC2
out 20h, al ; send EOI to PIC1
; restore al using pop or mov
iret
In this latter case each PIC gets an EOI because, as I mentioned earlier, PIC2 doesn't deliver IRQs directly to the CPU but rather through PIC1 (on PIC1's IRQ2; this effectively limits the number of usable IRQs to 15), so both PICs are involved. And PIC2 is an interrupt source to PIC1 just like, say, the keyboard. So, 2 EOIs.
Further, some devices (may) have their own equivalents of EOI. For example, XT keyboards waited for a bit pulse (from 1 to 0) in one of their registers as an indication that keyboard interrupt handling was complete. In such cases you send EOIs to both the device and the PIC(s).
EDIT: Most likely the text you're referring to means FLAGS.IF by "system-wide interrupt-enable flag".