RTOS μC/OS-II not running as expected

I'm using an ST STM32F101xB with μC/OS-II. The old board used an external clock (HSE) and ran fine. On the new board we want to use the internal clock (HSI), but the RTOS (AppMainTask()) does not run with the internal clock. I changed my code as shown below; any idea what is wrong with the change?
void BSP_Init (void)
{
    RCC_DeInit();
    //RCC_HSEConfig(RCC_HSE_ON);
    //RCC_WaitForHSEStartUp();
    RCC_HCLKConfig(RCC_SYSCLK_Div1);
    RCC_PCLK2Config(RCC_HCLK_Div1);       // APB2 clock divide by 1 => 64MHz
    RCC_PCLK1Config(RCC_HCLK_Div2);       // APB1 clock divide by 2 => 32MHz

    FLASH_SetLatency(FLASH_Latency_2);
    FLASH_PrefetchBufferCmd(FLASH_PrefetchBuffer_Enable);

    //RCC_PLLConfig(RCC_PLLSource_HSE_Div1, RCC_PLLMul_8); // 64MHz
    RCC_PLLConfig(RCC_PLLSource_HSI_Div2, RCC_PLLMul_8);
    RCC_PLLCmd(ENABLE);
    RCC_LSEConfig(RCC_LSE_OFF);

    while (RCC_GetFlagStatus(RCC_FLAG_PLLRDY) == RESET)
    {
        ;
    }

    RCC_SYSCLKConfig(RCC_SYSCLKSource_PLLCLK);
    while (RCC_GetSYSCLKSource() != 0x08)
    {
        ;
    }

    // Set the Vector Table base location at 0x08000000
    //NVIC_SetVectorTable(NVIC_VectTab_FLASH, 0x0);

    // Need to finalize and arrange a priority for each interrupt in future,
    // so that one interrupt won't block another interrupt.
    NVIC_PriorityGroupConfig(NVIC_PriorityGroup_3);
}
void main()
{
    INT8U err;

    cpuObj = new Cstm32f10x();

    BSP_Init();
    BSP_IntDisAll();            /* Disable all ints until we are ready to accept them. */

    OSInit();

    err = OSTaskCreateExt(AppMainTask,
                          (void *)0,
                          (OS_STK *)&AppMainTaskStk[APP_MAIN_TASK_STK_SIZE - 1],
                          APP_MAIN_TASK_PRIO,
                          APP_MAIN_TASK_ID,
                          (OS_STK *)&AppMainTaskStk[0],
                          APP_MAIN_TASK_STK_SIZE,
                          (void *)0,
                          OS_TASK_OPT_STK_CHK | OS_TASK_OPT_STK_CLR);

    OSStart();                  // Start multitasking (i.e. give control to uC/OS-II)
}
void AppMainTask (void *p_arg)
{
    OS_CPU_SysTickInit();

    while (TRUE)
    {
        OSTimeDly(1);
    }
}
Thanks.

Setting up the PLL is normally performed in the CMSIS start-up code provided by ST/ARM. This code executes as part of the runtime environment start-up, before main() is called. I recommend that you use this code for chip initialisation, because static data initialisation and static object constructors also run before main() and may otherwise run before possibly critical initialisation is complete.
The CMSIS, with Cortex-M3 core and STM32F1xx-specific device support, is included in the STM32 Standard Peripheral Library. The file that actually does the work is system_stm32f10x.c. Other things you are doing in BSP_Init(), such as setting the flash latency, are also dealt with by the CMSIS start-up code. Even if you customise this code, I strongly recommend that you use this method of early environment initialisation.
Another possibility is to use the STM32CubeMX utility to generate configuration code. This appears to be a replacement for the apparently now unavailable STM32 MicroXplorer utility.
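For comparison, here is a minimal sketch of an HSI-based clock configuration using the same Standard Peripheral Library calls as in the question. The function name SetSysClock_HSI_PLL is made up for illustration, and it assumes the usual 8 MHz HSI, so HSI/2 × 8 gives a 32 MHz SYSCLK (not the 64 MHz mentioned in the comments above); it could equally live in a customised SystemInit()/SetSysClock() in system_stm32f10x.c rather than in BSP_Init():

/* Sketch only: HSI (8 MHz) / 2 * 8 = 32 MHz SYSCLK, assuming SPL v3.x APIs. */
static void SetSysClock_HSI_PLL(void)
{
    RCC_DeInit();                                       /* HSI on, HSI as SYSCLK         */

    FLASH_SetLatency(FLASH_Latency_1);                  /* 1 wait state for 24..48 MHz   */
    FLASH_PrefetchBufferCmd(FLASH_PrefetchBuffer_Enable);

    RCC_HCLKConfig(RCC_SYSCLK_Div1);                    /* AHB  = 32 MHz                 */
    RCC_PCLK2Config(RCC_HCLK_Div1);                     /* APB2 = 32 MHz                 */
    RCC_PCLK1Config(RCC_HCLK_Div2);                     /* APB1 = 16 MHz                 */

    RCC_PLLConfig(RCC_PLLSource_HSI_Div2, RCC_PLLMul_8);
    RCC_PLLCmd(ENABLE);
    while (RCC_GetFlagStatus(RCC_FLAG_PLLRDY) == RESET)
    {
        ;                                               /* wait for PLL lock             */
    }

    RCC_SYSCLKConfig(RCC_SYSCLKSource_PLLCLK);
    while (RCC_GetSYSCLKSource() != 0x08)
    {
        ;                                               /* wait until PLL is SYSCLK      */
    }
}

If the SysTick setup in OS_CPU_SysTickInit() assumes a hard-coded 64 MHz while the PLL actually produces 32 MHz, the μC/OS-II tick (and therefore OSTimeDly()) will be off by a factor of two, which is worth checking alongside the clock configuration.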

FreeRTOS ignores osDelay [STM32]

I am new here, but have often benefited from questions and their answers.
Now I have a problem that I have not been able to solve for days. It concerns an STM32L431RCT6 running FreeRTOS with three tasks: two of them handle CANopenNode (one sends data, one receives), and one is a custom controller. The code works as it is, but as soon as I enable the CRC unit (MX_CRC_Init()) the problem appears. With the CRC unit enabled, only the task that receives the CANopen data is executed; the other tasks remain in the "Ready" state. What I also notice is that the osDelay() used in the receive task seems to be ignored: it does not seem to matter whether I use osDelay(1) or osDelay(10000).
void start_CO_rec_Thread(void *argument)
{
    /* USER CODE BEGIN start_CO_TI_Thread */
    /* Infinite loop */
    for (;;)
    {
        canopen_app_interrupt();
        osDelay(1);
    }
    osThreadTerminate(NULL);
    /* USER CODE END start_CO_TI_Thread */
}
One more observation: if I remove functions and variables from the task that starts my own controller, the tasks are processed correctly again. From this I concluded that the tasks need more stack memory, but that is not the solution to the puzzle either...
Working Code:
void Start_Controller(void *argument)
{
    Sensor = new DS18B20(htim6);
    Regler = new Smithpredictor(Sensor);
    // osTimerStart(LifetimerHandle, 1000);
    for (;;)
    {
        if (global_state == 4)
            actual_Temperatur = Regler->run(target_Temperature);
        osDelay(MBC_intervall_s * 1000);
    }
    osThreadTerminate(NULL);
}
Not Working Code:
void Start_Controller(void *argument)
{
    Sensor = new DS18B20(htim6);
    Regler = new Smithpredictor(Sensor);
    osTimerStart(LifetimerHandle, 1000);
    for (;;)
    {
        if (global_state == 4)
            actual_Temperatur = Regler->run(target_Temperature);
        osDelay(MBC_intervall_s * 1000);
    }
    osThreadTerminate(NULL);
}
I hope someone has an idea how to proceed or can help me, even if it is just another way to narrow the problem down further. Maybe I have simply done something silly; unfortunately I am not a computer scientist.
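Since a stack problem is suspected, one way to test that hypothesis is FreeRTOS's uxTaskGetStackHighWaterMark(), which reports the minimum amount of stack that has ever been free for a task. A minimal sketch follows; it assumes INCLUDE_uxTaskGetStackHighWaterMark is set to 1 in FreeRTOSConfig.h, and the function and parameter names are only examples:

#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"

/* Call this periodically (or pass NULL to query the calling task) to see how
 * close a task has come to exhausting its stack. The value is reported in
 * words (StackType_t units), not bytes.                                     */
void report_stack_headroom(TaskHandle_t task_handle)
{
    UBaseType_t words_free = uxTaskGetStackHighWaterMark(task_handle);

    /* A value near zero means the stack was almost (or actually) overflowed. */
    printf("stack high-water mark: %lu words\r\n", (unsigned long)words_free);
}

Enabling configCHECK_FOR_STACK_OVERFLOW together with a vApplicationStackOverflowHook() is another common way to catch overflows as they happen.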

STM32 Flash Erase fails with a "Programming Parallelism error" and "Programming Sequence error"

I have an application running on an STM32F4 which uses the STM32 HAL framework + FreeRTOS. I occasionally need to store some settings in flash during runtime and have written the following function to erase the data at my target address of 0x08060000UL (I believe this is SECTOR_6 of this particular MCU).
HAL_StatusTypeDef Flash::erase(uint32_t address)
{
    HAL_StatusTypeDef status;

    HAL_FLASH_Unlock();     // unlock the flash API
    __disable_irq();        // disable all interrupts
    vTaskSuspendAll();      // suspend all FreeRTOS tasks

    FLASH_EraseInitTypeDef eraseConfig = {0};
    uint32_t sectorError;
    uint32_t flashError = 0;

    eraseConfig.TypeErase    = FLASH_TYPEERASE_SECTORS;
    eraseConfig.Sector       = this->getSector(address);
    eraseConfig.NbSectors    = 1;
    eraseConfig.VoltageRange = FLASH_VOLTAGE_RANGE_3;

    status = HAL_FLASHEx_Erase(&eraseConfig, &sectorError); // <---- FAILS HERE
    if (status != HAL_OK)
    {
        flashError = HAL_FLASH_GetError();
    }

    status = HAL_FLASH_Lock();

    xTaskResumeAll();       // resume all FreeRTOS tasks
    __enable_irq();         // re-enable interrupts

    return status;
}
The flashError variable ends up getting set to 6, which means the following two errors occurred during the call to HAL_FLASHEx_Erase()
#define HAL_FLASH_ERROR_PGS 0x00000002U /*!< Programming Sequence error */
#define HAL_FLASH_ERROR_PGP 0x00000004U /*!< Programming Parallelism error */
I can't be 100% sure, but I think this code worked fine prior to implementing FreeRTOS. Regardless, what kind of behavior might cause such an error? I thought disabling all ISRs as well as suspending all tasks (even though there is only one running during this operation) would cover me, but no combination of these attempts alleviates the error 🤷‍♂️.
Turns out I had to clear some flash status flags prior to using the HAL Flash API. Why? I don't know, but clearing all the flags before calling the API fixed my problem.
__HAL_FLASH_CLEAR_FLAG(FLASH_FLAG_EOP);
__HAL_FLASH_CLEAR_FLAG(FLASH_FLAG_OPERR);
__HAL_FLASH_CLEAR_FLAG(FLASH_FLAG_WRPERR);
__HAL_FLASH_CLEAR_FLAG(FLASH_FLAG_PGAERR);
__HAL_FLASH_CLEAR_FLAG(FLASH_FLAG_PGPERR);
__HAL_FLASH_CLEAR_FLAG(FLASH_FLAG_PGSERR);
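For reference, a sketch of where these clears could sit in the erase routine above: right after HAL_FLASH_Unlock() and before the erase is configured. On the STM32F4 HAL, __HAL_FLASH_CLEAR_FLAG() writes the given bits to the FLASH status register, so combining the flags into a single call should be equivalent to the six separate calls, but verify that against your HAL version. The helper name flash_clear_stale_flags is an assumption for illustration:

#include "stm32f4xx_hal.h"

/* Sketch only: clear stale FLASH status/error flags before starting a new
 * erase or program sequence (call immediately after HAL_FLASH_Unlock()).   */
static void flash_clear_stale_flags(void)
{
    __HAL_FLASH_CLEAR_FLAG(FLASH_FLAG_EOP    | FLASH_FLAG_OPERR  |
                           FLASH_FLAG_WRPERR | FLASH_FLAG_PGAERR |
                           FLASH_FLAG_PGPERR | FLASH_FLAG_PGSERR);
}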

STM32 HAL UART receive by interrupt cleaning buffer

I'm working on an application where I process fixed-length commands received via UART.
I'm also using FreeRTOS, and the task that handles the incoming commands is suspended until the UART interrupt handler is called, so my code looks like this:
void USART1_IRQHandler()
{
    HAL_UART_IRQHandler(&huart1);
}

void HAL_UART_ErrorCallback(UART_HandleTypeDef *huart)
{
    HAL_UART_Receive_IT(&huart1, uart_rx_buf, CMD_LEN);
}

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    BaseType_t higherTaskReady = pdFALSE;
    HAL_UART_Receive_IT(&huart1, uart_rx_buf, CMD_LEN);  // restart interrupt reception
    xSemaphoreGiveFromISR(uart_mutex, &higherTaskReady); // release the semaphore
    portYIELD_FROM_ISR(higherTaskReady);
}
I am using the ErrorCallback in case an overflow occurs. I now successfully catch every correct command, even if the characters are sent one by one.
However, I'm trying to make the system more error-proof by handling the case where more characters are received than expected.
The command length is 4, but if I receive, for example, 5 chars, the first 4 are processed normally, and when the next command arrives it starts from the last unprocessed char, so another 3 chars are needed before I can process commands correctly again.
Luckily, the ErrorCallback is called whenever I receive more than 4 chars, so I know when this happens, but I need a robust way of cleaning the UART buffer so the leftover chars are gone.
One solution I can think of is to receive one char at a time until nothing more can be received, but is there a better way to simply flush the buffer?
Yes, the problem is the lack of a delimiter, because every byte can carry a value from 0 to 255. So, how can you detect the inconsistency?
My solution is a checksum byte in the protocol. If the checksum fails, a blocking-mode HAL_UART_Receive() is called in order to move the rest of the data from the "system buffer" into a "disposable buffer". In my example the fixed size of the protocol is 6, I use USART6, and I have a global variable RxBuffer. Here is the code:
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *UartHandle)
{
    if (UartHandle->Instance == USART6)
    {
        if (your_checksum_is_ok)
        {
            // You can process the incoming data
        }
        else
        {
            char TempBuffer;
            HAL_StatusTypeDef hal_status;
            do
            {
                hal_status = HAL_UART_Receive(&huart6, (uint8_t*)&TempBuffer, 1, 10);
            } while (hal_status != HAL_TIMEOUT);
        }
        HAL_UART_Receive_IT(&huart6, (uint8_t*)RxBuffer, 6);
    }
}

void HAL_UART_ErrorCallback(UART_HandleTypeDef *UartHandle)
{
    if (UartHandle->Instance == USART6)
    {
        HAL_UART_Receive_IT(&huart6, (uint8_t*)RxBuffer, 6);
    }
}
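The your_checksum_is_ok condition above is only a placeholder. Purely as an illustration, a simple additive checksum over a 6-byte frame (5 payload bytes followed by one checksum byte) could look like the sketch below; the frame layout and function name are assumptions, not part of the original answer:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative checksum for a 6-byte frame: byte 5 must equal the low 8 bits
 * of the sum of bytes 0..4. Adapt this to whatever your protocol defines.   */
static bool frame_checksum_is_ok(const uint8_t frame[6])
{
    uint8_t sum = 0;
    for (int i = 0; i < 5; i++)
    {
        sum = (uint8_t)(sum + frame[i]);
    }
    return sum == frame[5];
}

In the callback, your_checksum_is_ok would then become frame_checksum_is_ok((uint8_t*)RxBuffer).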

STM32 FreeRTOS - UART Deferred Interrupt Problem

I am trying to read data of unknown size using the UART receive interrupt. In the callback function I re-enable the Rx interrupt so that characters are read one by one until a \n is received. When a \n arrives, a higher-priority task, the deferred interrupt handler, should be woken. The problem is that although I read the bytes one at a time in the callback and try to put each character into a buffer, the buffer never receives any character. Moreover, the deferred interrupt handler is never woken.
My STM32 board is STM32F767ZI, and my IDE is KEIL.
Some important notes before sharing the code:
1. rxIndex and gpsBuffer are declared as globals.
2. The periodic function works without any problem.
Here is my code:
Periodic Function, Priority = 1
void vPeriodicTask(void *pvParameters)
{
    const TickType_t xDelay500ms = pdMS_TO_TICKS(500UL);

    while (1)
    {
        vTaskDelay(xDelay500ms);
        HAL_UART_Transmit(&huart3, (uint8_t*)"Imu\r\n", sizeof("Imu\r\n"), 1000);
        HAL_GPIO_TogglePin(GPIOB, GPIO_PIN_7);
    }
}
Deferred Interrupt, Priority = 3
void vHandlerTask(void *pvParameters)
{
    const TickType_t xMaxExpectedBlockTime = pdMS_TO_TICKS(1000);

    while (1)
    {
        if (xSemaphoreTake(xBinarySemaphore, xMaxExpectedBlockTime) == pdPASS)
        {
            HAL_UART_Transmit(&huart3, (uint8_t*)"Semaphore Acquired\r\n",
                              sizeof("Semaphore Acquired\r\n"), 1000);
            // Some important processes will be added here
            rxIndex = 0;
            HAL_GPIO_TogglePin(GPIOB, GPIO_PIN_14);
        }
    }
}
Callback function:
void HAL_UART_RxCptlCallBack(UART_HandleTypeDef *huart)
{
    gpsBuffer[rxIndex++] = rData;
    if (rData == 0x0A)
    {
        BaseType_t xHigherPriorityTaskWoken;
        xSemaphoreGiveFromISR(xBinarySemaphore, &xHigherPriorityTaskWoken);
        portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
    }
    HAL_UART_Receive_IT(huart, (uint8_t*)&rData, 1);
}
Main function
HAL_UART_Receive_IT(&huart3, &rData, 1);

xBinarySemaphore = xSemaphoreCreateBinary();
if (xBinarySemaphore != NULL)
{
    // success
    xTaskCreate(vHandlerTask,  "Handler",  128, NULL, 1, &vHandlerTaskHandler);
    xTaskCreate(vPeriodicTask, "Periodic", 128, NULL, 3, &vPeriodicTaskHandler);
    vTaskStartScheduler();
}
Using the HAL for this is a good way to get into trouble. It uses HAL_Delay(), which is SysTick dependent, and you should rewrite that function to read the RTOS tick instead.
I use queues to pass the data (references to the data), but your approach should work too. There is always a big question mark over using HAL functions.
void HAL_UART_RxCptlCallBack(UART_HandleTypeDef *huart)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    gpsBuffer[rxIndex++] = rData;
    if (rData == 0x0A)
    {
        if (xSemaphoreGiveFromISR(xBinarySemaphore, &xHigherPriorityTaskWoken) == pdFALSE)
        {
            /* some error handling */
        }
    }
    HAL_UART_Receive_IT(huart, (uint8_t*)&rData, 1);
    portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}
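For completeness, here is a minimal sketch of the queue-based variant mentioned above, passing each received byte to the handler task instead of signalling a semaphore. It reuses the question's rData, rxIndex and gpsBuffer globals; the queue handle xRxQueue and its creation parameters are assumptions for illustration:

/* Created once at start-up, before vTaskStartScheduler():
 *   xRxQueue = xQueueCreate(64, sizeof(uint8_t));                        */
QueueHandle_t xRxQueue;

void HAL_UART_RxCptlCallBack(UART_HandleTypeDef *huart)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* Push the received byte into the queue; the handler task decides
     * what to do when it sees the 0x0A terminator.                      */
    xQueueSendFromISR(xRxQueue, &rData, &xHigherPriorityTaskWoken);

    HAL_UART_Receive_IT(huart, (uint8_t*)&rData, 1);
    portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}

void vHandlerTask(void *pvParameters)
{
    uint8_t byte;

    while (1)
    {
        if (xQueueReceive(xRxQueue, &byte, portMAX_DELAY) == pdPASS)
        {
            gpsBuffer[rxIndex++] = byte;
            if (byte == 0x0A)
            {
                /* A full line is now in gpsBuffer; process it here. */
                rxIndex = 0;
            }
        }
    }
}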
In conclusion: whenever I use the HAL together with an RTOS, I always modify the way the HAL handles timeouts.
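One common way to do that, sketched here under the assumption that the FreeRTOS tick runs at 1 kHz (configTICK_RATE_HZ == 1000): the HAL declares HAL_GetTick() and HAL_Delay() as weak symbols, so they can be overridden to use the RTOS tick once the scheduler is running.

#include "FreeRTOS.h"
#include "task.h"
#include "stm32f7xx_hal.h"

/* Override the HAL's weak tick functions so HAL timeouts keep working when
 * the RTOS owns SysTick. Assumes configTICK_RATE_HZ == 1000 so one RTOS
 * tick equals one millisecond.                                            */
uint32_t HAL_GetTick(void)
{
    return (uint32_t)xTaskGetTickCount();
}

void HAL_Delay(uint32_t Delay)
{
    vTaskDelay(pdMS_TO_TICKS(Delay));
}

Note that this only behaves sensibly after vTaskStartScheduler() has been called; HAL calls made from ISRs or before the scheduler starts need separate handling.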

Play sounds synchronously using snd_pcm_writei

I need to play sounds upon certain events and want to minimize processor load, because some image processing is being done too and processor performance is limited.
For the present, I play only one sound at a time, and I do it as follows:
At program startup, the sounds are read from .wav files and the raw PCM data are loaded into memory.
A sound device is opened (snd_pcm_open() in mode SND_PCM_NONBLOCK).
A worker thread is started which continuously calls snd_pcm_writei() as long as it is fed with data (data->remaining > 0).
Somewhat summarized, the worker thread function is:
static void *Thread_Func (void *arg)
{
    thrdata_t *data = (thrdata_t *)arg;
    snd_pcm_sframes_t res;

    while (1)
    {
        pthread_mutex_lock (&lock);

        if (data->shall_stop)
        {
            data->shall_stop = false;
            snd_pcm_drop (data->pcm_device);
            snd_pcm_prepare (data->pcm_device);
            data->remaining = 0;
        }

        if (data->remaining > 0)
        {
            res = snd_pcm_writei (data->pcm_device, data->bufptr, data->remaining);
            if (res == -EAGAIN)
            {   // device not ready yet: release the lock and retry later
                pthread_mutex_unlock (&lock);
                usleep (sleep_us);
                continue;
            }
            if (res < 0)    // error
            {
                fprintf (stderr, "snd_pcm_writei() error: %s\n", snd_strerror (res));
                snd_pcm_recover (data->pcm_device, res, 0);
            }
            else            // another chunk has been handed over to sound hw
            {
                data->bufptr += res * bytes_per_frame;
                data->remaining -= res;
            }
            if (data->remaining == 0) snd_pcm_prepare (data->pcm_device);
        }

        pthread_mutex_unlock (&lock);
        usleep (sleep_us);  // processor relief
    }
} // Thread_Func
OK, so this works well for one sound at a time. How do I play several at once?
I found dmix, but it seems to be a user-level tool for mixing streams coming from separate programs.
Furthermore, I found the Simple Mixer Interface in the ALSA Project C Library Interface, but without any hint, example or tutorial on how to use all these functions, each described by a single line of text.
As a last resort I could calculate the mean value of all the buffers to be played simultaneously. So far I have been avoiding that, hoping that an ALSA solution might use sound hardware resources, thus relieving the main processor.
I'd be thankful for any hint about how to continue.
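To make the "last resort" concrete, here is a minimal sketch of mixing two buffers of 16-bit signed PCM in software before handing the result to snd_pcm_writei(). It sums the samples and clamps to the 16-bit range (saturating addition rather than averaging, so a single sound does not lose half its volume); the S16 interleaved format and the function name mix_s16 are assumptions:

#include <stdint.h>
#include <stddef.h>

/* Mix two blocks of interleaved signed 16-bit samples into `out`.
 * `nsamples` counts individual samples (frames * channels).
 * Saturating addition keeps the result inside the int16_t range.        */
static void mix_s16 (const int16_t *a, const int16_t *b,
                     int16_t *out, size_t nsamples)
{
    for (size_t i = 0; i < nsamples; i++)
    {
        int32_t sum = (int32_t)a[i] + (int32_t)b[i];
        if (sum >  32767) sum =  32767;
        if (sum < -32768) sum = -32768;
        out[i] = (int16_t)sum;
    }
}

The mixed buffer can then be fed to the existing worker thread unchanged; the mixing itself stays on the CPU, so this does not achieve the hoped-for hardware offload.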