FreeRTOS ignores osDelay [STM32]

I am new here, but have often benefited from questions and their answers.
Now I have a problem that I have not been able to solve for days. It concerns an STM32L431RCT6 running FreeRTOS with three tasks: two of them handle CANopenNode (one sends data, one receives), and one is a custom controller. The code works as it is, but as soon as I enable the CRC unit (MX_CRC_Init()) the problem appears: only the task that receives the CANopen data is executed, and the other tasks stay in the "Ready" state. What I notice is that the osDelay() used in the receive task seems to be ignored; it does not seem to matter whether I use osDelay(1) or osDelay(10000).
void start_CO_rec_Thread(void *argument)
{
  /* USER CODE BEGIN start_CO_TI_Thread */
  /* Infinite loop */
  for (;;)
  {
    canopen_app_interrupt();
    osDelay(1);
  }
  osThreadTerminate(NULL);
  /* USER CODE END start_CO_TI_Thread */
}
One more observation: if I remove functions and variables from the task that runs my own controller, the tasks are processed again. From this I concluded that the tasks need more stack memory, but that is not the solution to the puzzle either...
Working Code:
void Start_Controller(void *argument)
{
  Sensor = new DS18B20(htim6);
  Regler = new Smithpredictor(Sensor);
  // osTimerStart(LifetimerHandle, 1000);
  for (;;)
  {
    if (global_state == 4)
      actual_Temperatur = Regler->run(target_Temperature);
    osDelay(MBC_intervall_s * 1000);
  }
  osThreadTerminate(NULL);
}
Not Working Code:
void Start_Controller(void *argument)
{
  Sensor = new DS18B20(htim6);
  Regler = new Smithpredictor(Sensor);
  osTimerStart(LifetimerHandle, 1000);
  for (;;)
  {
    if (global_state == 4)
      actual_Temperatur = Regler->run(target_Temperature);
    osDelay(MBC_intervall_s * 1000);
  }
  osThreadTerminate(NULL);
}
I hope someone has an idea how to proceed or can help me, even if it is just another way to narrow the problem down further. Maybe I have also done something silly; unfortunately, I am not a computer scientist.
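Since stack size is already a suspect, here is a minimal sketch of how that hypothesis could be checked with FreeRTOS's own diagnostics. It assumes the native FreeRTOS API is reachable next to the CMSIS-RTOS wrapper, that printf goes somewhere useful, and that configCHECK_FOR_STACK_OVERFLOW and INCLUDE_uxTaskGetStackHighWaterMark are enabled in FreeRTOSConfig.h; report_stack_headroom is an invented helper name.

```c
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical helper: report the minimum amount of stack (in words) a task
   has ever had left. Requires INCLUDE_uxTaskGetStackHighWaterMark = 1.
   Pass NULL to query the calling task. */
static void report_stack_headroom(TaskHandle_t task, const char *name)
{
    UBaseType_t words_free = uxTaskGetStackHighWaterMark(task);
    printf("%s: %lu stack words never used\r\n", name, (unsigned long)words_free);
}

/* Called by the kernel if configCHECK_FOR_STACK_OVERFLOW is set to 1 or 2.
   (Older FreeRTOS versions declare pcTaskName as signed char *.) */
void vApplicationStackOverflowHook(TaskHandle_t xTask, char *pcTaskName)
{
    (void)xTask;
    (void)pcTaskName;
    taskDISABLE_INTERRUPTS();
    for (;;) { /* halt here and inspect pcTaskName in the debugger */ }
}
```

Calling report_stack_headroom(NULL, "controller") at the bottom of each task loop shows whether any task runs close to its stack limit once the CRC unit is enabled; with CMSIS-RTOS2, osThreadGetStackSpace() gives the same information.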

Related

STM32 HAL UART receive by interrupt cleaning buffer

I'm working on an application where I process commands of fixed length received via UART.
I'm also using FreeRTOS, and the task that handles the incoming commands is suspended until the UART interrupt handler is called, so my code looks like this:
void USART1_IRQHandler()
{
    HAL_UART_IRQHandler(&huart1);
}

void HAL_UART_ErrorCallback(UART_HandleTypeDef *huart)
{
    HAL_UART_Receive_IT(&huart1, uart_rx_buf, CMD_LEN);
}

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    BaseType_t higherTaskReady = pdFALSE;
    HAL_UART_Receive_IT(&huart1, uart_rx_buf, CMD_LEN);   // restart interrupt reception
    xSemaphoreGiveFromISR(uart_mutex, &higherTaskReady);  // release the semaphore
    portYIELD_FROM_ISR(higherTaskReady);                  // switch now if a higher-priority task was woken
}
I am using the ErrorCallback in case an overflow occurs. Now I successfully catch every correct command, even if it is issued char by char.
However, I'm trying to make the system more error-proof by considering the case where more characters are received than expected.
The command length is 4, but if I receive, for example, 5 chars, the first 4 are processed normally; the next command then starts from the leftover unprocessed char, so another 3 chars are needed before I can process commands correctly again.
Luckily, the ErrorCallback is called whenever I receive more than 4 chars, so I know when it happens, but I need a robust way of cleaning the UART buffer so the previous chars are gone.
One solution I can think of is using UART receive 1 char at a time until it can't receive anymore, but is there a better way to simply flush the buffer?
Yes, the problem is the lack of a delimiter, because every byte can carry a value to be processed from 0 to 255. So, how can you detect the inconsistency?
My solution is a checksum byte in the protocol. If the checksum fails, a blocking-mode UART_Receive function is called in order to move the rest of the data from the "system buffer" into a "disposable buffer". In my example the fixed size of the protocol is 6, I use USART6, and I have a global variable RxBuffer. Here is the code:
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *UartHandle)
{
    if (UartHandle->Instance == USART6) {
        if (your_checksum_is_ok) {
            // You can process the incoming data
        } else {
            char TempBuffer;
            HAL_StatusTypeDef hal_status;
            do {
                hal_status = HAL_UART_Receive(&huart6, (uint8_t*)&TempBuffer, 1, 10);
            } while (hal_status != HAL_TIMEOUT);
        }
        HAL_UART_Receive_IT(&huart6, (uint8_t*)RxBuffer, 6);
    }
}

void HAL_UART_ErrorCallback(UART_HandleTypeDef *UartHandle)
{
    if (UartHandle->Instance == USART6) {
        HAL_UART_Receive_IT(&huart6, (uint8_t*)RxBuffer, 6);
    }
}
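For reuse, the drain loop above can be wrapped in a small helper. This is only a sketch of the same drain-until-timeout technique; uart_flush_rx is an invented name, the "main.h" include is the usual CubeMX convention (an assumption about your project), and the timeout value is something you would tune to your baud rate.

```c
#include "main.h"   /* brings in the HAL headers in a CubeMX-style project (assumption) */

/* Read and discard bytes in blocking mode until the line has been quiet for
   `quiet_ms` milliseconds; afterwards the caller re-arms HAL_UART_Receive_IT(). */
static void uart_flush_rx(UART_HandleTypeDef *huart, uint32_t quiet_ms)
{
    uint8_t scratch;
    while (HAL_UART_Receive(huart, &scratch, 1, quiet_ms) != HAL_TIMEOUT) {
        /* byte discarded */
    }
}
```

In the callback above, the else branch then reduces to uart_flush_rx(&huart6, 10); before reception is restarted. As in the original snippet, this relies on the HAL tick still advancing in the context where it is called.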

STM32 FreeRTOS - UART Deferred Interrupt Problem

I am trying to read data of unknown size using the UART receive interrupt. In the callback function, I re-enable the Rx interrupt in order to read characters until '\n' is received; when '\n' arrives, a higher-priority task acting as a deferred interrupt handler should be woken. The problem is that I read the bytes one by one in the callback and try to put each character into a buffer, but the buffer never receives any characters. Moreover, the deferred interrupt handler is never woken.
My STM32 board is STM32F767ZI, and my IDE is KEIL.
Some Important notes before sharing the code:
1. rxIndex and gpsBuffer are declared as global.
2. Periodic function works without any problem.
Here is my code:
Periodic Function, Priority = 1
void vPeriodicTask(void *pvParameters)
{
    const TickType_t xDelay500ms = pdMS_TO_TICKS(500UL);
    while (1) {
        vTaskDelay(xDelay500ms);
        HAL_UART_Transmit(&huart3, (uint8_t*)"Imu\r\n", sizeof("Imu\r\n"), 1000);
        HAL_GPIO_TogglePin(GPIOB, GPIO_PIN_7);
    }
}
Deferred Interrupt, Priority = 3
void vHandlerTask(void *pvParameters)
{
    const TickType_t xMaxExpectedBlockTime = pdMS_TO_TICKS(1000);
    while (1) {
        if (xSemaphoreTake(xBinarySemaphore, xMaxExpectedBlockTime) == pdPASS) {
            HAL_UART_Transmit(&huart3, (uint8_t*)"Semaphore Acquired\r\n",
                              sizeof("Semaphore Acquired\r\n"), 1000);
            // Some important processes will be added here
            rxIndex = 0;
            HAL_GPIO_TogglePin(GPIOB, GPIO_PIN_14);
        }
    }
}
Callback function:
void HAL_UART_RxCptlCallBack(UART_HandleTypeDef *huart)
{
    gpsBuffer[rxIndex++] = rData;
    if (rData == 0x0A) {
        BaseType_t xHigherPriorityTaskWoken;
        xSemaphoreGiveFromISR(xBinarySemaphore, &xHigherPriorityTaskWoken);
        portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
    }
    HAL_UART_Receive_IT(huart, (uint8_t*)&rData, 1);
}
Main function
HAL_UART_Receive_IT(&huart3, &rData, 1);
xBinarySemaphore = xSemaphoreCreateBinary();
if (xBinarySemaphore != NULL) {
    // success
    xTaskCreate(vHandlerTask, "Handler", 128, NULL, 1, &vHandlerTaskHandler);
    xTaskCreate(vPeriodicTask, "Periodic", 128, NULL, 3, &vPeriodicTaskHandler);
    vTaskStartScheduler();
}
Using the HAL for this is a good way to get into trouble. It relies on HAL_Delay(), which is SysTick-dependent, and you should rewrite that function to read the RTOS tick instead.
I use queues to pass the data (or references to the data), but your semaphore approach should work too. There is always a big question mark over the HAL functions.
void HAL_UART_RxCptlCallBack(UART_HandleTypeDef *huart)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    gpsBuffer[rxIndex++] = rData;
    if (rData == 0x0A) {
        if (xSemaphoreGiveFromISR(xBinarySemaphore, &xHigherPriorityTaskWoken) == pdFALSE)
        {
            /* some error handling */
        }
    }
    HAL_UART_Receive_IT(huart, (uint8_t*)&rData, 1);
    portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}
In conclusion: whenever I combine the HAL with an RTOS, I modify the way the HAL handles timeouts.
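As a sketch of the queue-based hand-off mentioned above (not the original poster's code; the queue name and sizes are invented for illustration), the receive callback can push each byte into a FreeRTOS queue and a task can assemble the line, so rxIndex and the buffer are no longer shared between ISR and task:

```c
#include "main.h"     /* HAL handles and UART_HandleTypeDef in a CubeMX-style project (assumption) */
#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

static QueueHandle_t gpsQueue;   /* created before the scheduler starts */
static uint8_t rData;            /* single-byte HAL receive buffer */

/* ISR context: forward each received byte to the queue, wake the reader if needed. */
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    xQueueSendFromISR(gpsQueue, &rData, &xHigherPriorityTaskWoken);
    HAL_UART_Receive_IT(huart, &rData, 1);            /* re-arm reception */
    portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}

/* Task context: assemble bytes into a line and process it when '\n' arrives. */
void vGpsTask(void *pvParameters)
{
    static uint8_t line[128];
    size_t idx = 0;
    for (;;) {
        uint8_t byte;
        if (xQueueReceive(gpsQueue, &byte, portMAX_DELAY) == pdPASS) {
            if (idx < sizeof(line) - 1) {
                line[idx++] = byte;
            }
            if (byte == '\n') {
                line[idx] = '\0';
                /* parse the complete sentence here */
                idx = 0;
            }
        }
    }
}
```

The queue would be created before the scheduler starts, for example gpsQueue = xQueueCreate(256, sizeof(uint8_t));, with reception armed once via HAL_UART_Receive_IT(&huart3, &rData, 1);.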

RTOS μC/OS-II not running as expected

I'm using an ST STM32F101xB with μC/OS-II. The old board used the external clock (HSE) and ran fine. On the new board we want to use the internal clock (HSI); however, the RTOS (AppMainTask()) does not run with the internal clock. I changed my code as shown below. Any idea what's wrong with the change?
void BSP_Init (void)
{
    RCC_DeInit();
    //RCC_HSEConfig(RCC_HSE_ON);
    //RCC_WaitForHSEStartUp();
    RCC_HCLKConfig(RCC_SYSCLK_Div1);
    RCC_PCLK2Config(RCC_HCLK_Div1);   // APB2 clock divide by 1 => 64MHz
    RCC_PCLK1Config(RCC_HCLK_Div2);   // APB1 clock divide by 2 => 32MHz
    FLASH_SetLatency(FLASH_Latency_2);
    FLASH_PrefetchBufferCmd(FLASH_PrefetchBuffer_Enable);
    //RCC_PLLConfig(RCC_PLLSource_HSE_Div1, RCC_PLLMul_8);  // 64MHz
    RCC_PLLConfig(RCC_PLLSource_HSI_Div2, RCC_PLLMul_8);
    RCC_PLLCmd(ENABLE);
    RCC_LSEConfig(RCC_LSE_OFF);
    while (RCC_GetFlagStatus(RCC_FLAG_PLLRDY) == RESET) {
        ;
    }
    RCC_SYSCLKConfig(RCC_SYSCLKSource_PLLCLK);
    while (RCC_GetSYSCLKSource() != 0x08) {
        ;
    }
    // Set the Vector Table base location at 0x08000000
    //NVIC_SetVectorTable(NVIC_VectTab_FLASH, 0x0);
    // Need to finalize and arrange the priority of each interrupt in future,
    // so that one interrupt won't block another.
    NVIC_PriorityGroupConfig(NVIC_PriorityGroup_3);
}
void main()
{
    INT8U err;
    cpuObj = new Cstm32f10x();
    BSP_Init();
    BSP_IntDisAll();   /* Disable all ints until we are ready to accept them. */
    OSInit();
    err = OSTaskCreateExt(AppMainTask,
                          (void *)0,
                          (OS_STK *)&AppMainTaskStk[APP_MAIN_TASK_STK_SIZE-1],
                          APP_MAIN_TASK_PRIO,
                          APP_MAIN_TASK_ID,
                          (OS_STK *)&AppMainTaskStk[0],
                          APP_MAIN_TASK_STK_SIZE,
                          (void *)0,
                          OS_TASK_OPT_STK_CHK | OS_TASK_OPT_STK_CLR);
    OSStart();   // Start multitasking (i.e. give control to uC/OS-II)
}

void AppMainTask (void *p_arg)
{
    OS_CPU_SysTickInit();
    while (TRUE)
    {
        OSTimeDly(1);
    }
}
Thanks.
Setting up the PLL is normally performed in the CMSIS start-up code provided by ST/ARM. This code executes as part of the runtime-environment start-up, before main() is called. I recommend that you use this code for chip initialisation, because static data initialisation and static object constructors run before main() and would otherwise run before possibly critical initialisation.
The CMSIS with Cortex-M3 core and STM32F1xx specific device support is included in the STM32 Standard Peripheral Library. The file that actually does the work is system_stm32f10x.c. Other functions you are performing in BSP_Init() such as flash latency are also dealt with by the CMSIS start-up code. Even if you customise this code, I strongly recommend that you use this method of early environment initialisation.
Another possibility is to use the STM32CubeMX utility to generate configuration code. This appears to be a replacement for the apparently now unavailable STM32 MicroXplorer utility.
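As a rough illustration of that flow (a sketch against the SPL-era CMSIS files, not the poster's project): the start-up file runs SystemInit() from system_stm32f10x.c before main(), so by the time main() begins the PLL and flash latency are already configured, and the application can simply read the resulting clock back via the CMSIS-provided SystemCoreClock variable.

```c
#include "stm32f10x.h"   /* CMSIS device header from the Standard Peripheral Library */

int main(void)
{
    /* SystemInit() (in system_stm32f10x.c) has already run from the start-up file,
       so the PLL, flash latency and clock tree are configured before static
       constructors and before main(). */
    SystemCoreClockUpdate();               /* recompute SystemCoreClock from the RCC registers */
    uint32_t hclk_hz = SystemCoreClock;    /* e.g. feed this into the SysTick/OS tick setup */
    (void)hclk_hz;

    /* BSP_Init() then only needs board-specific work (NVIC priority grouping, GPIO, ...),
       followed by OSInit(), OSTaskCreateExt() and OSStart() as in the original main(). */
    for (;;) { }                           /* placeholder; OSStart() never returns */
}
```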

How to run ant from an Eclipse plugin, send output to an Eclipse console, and capture the build result (success/failure)?

From within an Eclipse plugin, I'd like to run an Ant build script. I also want to display the Ant output to the user, by displaying it in an Eclipse console. Finally, I also want to wait for the Ant build to be finished, and capture the result: did the build succeed or fail?
I found three ways to run an Ant script from eclipse:
Instantiate an org.eclipse.ant.core.AntRunner, call some setters and call run() or run(IProgressMonitor). The result is either normal termination (indicating success), or a CoreException with an IStatus containing a BuildException (indicating failure), or else something else went wrong. However, I don't see the Ant output anywhere.
Instantiate an org.eclipse.ant.core.AntRunner and call run(Object), passing a String[] containing the command line arguments. The result is either normal termination (indicating success), or an InvocationTargetException (indicating failure), or else something else went wrong. The Ant output is sent to Eclipse's stdout, it seems; it is not visible in Eclipse itself.
Call DebugPlugin.getDefault().getLaunchManager(), then on that call getLaunchConfigurationType(IAntLaunchConfigurationConstants.ID_ANT_BUILDER_LAUNCH_CONFIGURATION_TYPE), then on that set attribute "org.eclipse.ui.externaltools.ATTR_LOCATION" to the build file name (and attribute DebugPlugin.ATTR_CAPTURE_OUTPUT to true) and finally call launch(). The Ant output is shown in an Eclipse console, but I have no idea how to capture the build result (success/failure) in my code. Or how to wait for termination of the launch, even.
Is there any way to have both console output and capture the result?
Edit 05/16/2016: @Lii alerted me to the fact that any output between the ILaunchConfigurationWorkingCopy#launch call and when the IStreamListener is appended will be lost. He made a contribution to this answer here.
Original Answer
I realize this is an old post, but I was able to do exactly what you want in one of my plugins. If it doesn't help you at this point, maybe it will help someone else. I originally did this in 3.2, but it has been updated for 3.6 API changes...
// show the console
final IWorkbenchPage activePage = PlatformUI.getWorkbench()
        .getActiveWorkbenchWindow()
        .getActivePage();
activePage.showView(IConsoleConstants.ID_CONSOLE_VIEW);

// let the launch manager handle the Ant script so output is directed to the Console view
final ILaunchManager manager = DebugPlugin.getDefault().getLaunchManager();
ILaunchConfigurationType type = manager.getLaunchConfigurationType(IAntLaunchConstants.ID_ANT_LAUNCH_CONFIGURATION_TYPE);
final ILaunchConfigurationWorkingCopy workingCopy = type.newInstance(null, [*** GIVE YOUR LAUNCHER A NAME ***]);
workingCopy.setAttribute(ILaunchManager.ATTR_PRIVATE, true);
workingCopy.setAttribute(IExternalToolConstants.ATTR_LOCATION, [*** PATH TO ANT SCRIPT HERE ***]);
final ILaunch launch = workingCopy.launch(ILaunchManager.RUN_MODE, null);

// watch the error stream so we can tell whether the build failed
final boolean[] buildSucceeded = new boolean[] { true };
((AntProcess) launch.getProcesses()[0]).getStreamsProxy()
        .getErrorStreamMonitor()
        .addListener(new IStreamListener() {
            @Override
            public void streamAppended(String text, IStreamMonitor monitor) {
                if (text.indexOf("BUILD FAILED") > -1) {
                    buildSucceeded[0] = false;
                }
            }
        });
// wait for the launch (ant build) to complete
manager.addLaunchListener(new ILaunchesListener2() {
    public void launchesTerminated(ILaunch[] launches) {
        boolean patchSuccess = false;
        try {
            if (!buildSucceeded[0]) {
                throw new Exception("Build FAILED!");
            }
            for (int i = 0; i < launches.length; i++) {
                if (launches[i].equals(launch)
                        && buildSucceeded[0]
                        && !((IProgressMonitor) launches[i].getProcesses()[0]).isCanceled()) {
                    [*** DO YOUR THING... ***]
                    break;
                }
            }
        } catch (Exception e) {
            [*** DO YOUR THING... ***]
        } finally {
            // get rid of this listener
            manager.removeLaunchListener(this);
            [*** DO YOUR THING... ***]
        }
    }

    public void launchesAdded(ILaunch[] launches) {
    }

    public void launchesChanged(ILaunch[] launches) {
    }

    public void launchesRemoved(ILaunch[] launches) {
    }
});
I'd like to add one thing to happytime harry's answer.
Sometimes the first writes to the stream happen before the stream listener is added; streamAppended is then never called for those writes, so the output is lost.
See for example this bug. I think happytime harry's solution might have this problem. I registered my stream listener in ILaunchListener.launchChanged, and this happened in 4 out of 5 runs.
If one wants to be sure to get all the output from a stream then the IStreamMonitor.getContents method can be used to fetch the output that happened before the listener got added.
The following is an attempt at a utility method that handles this. It is based on the code in ProcessConsole.
/**
 * Adds the listener to the monitor, and calls the listener with any content the
 * monitor already has.
 * NOTE: This method synchronises on the monitor while the listener is called. The
 * listener must not wait on any thread that itself waits to synchronise on the
 * monitor, as that would result in deadlock.
 */
public static void addAndNotifyStreamListener(IStreamMonitor monitor, IStreamListener listener) {
    // Synchronise on the monitor to prevent writes to the stream while we are adding the listener.
    // It's weird to synchronise on the monitor because it's a shared object, but that's
    // what ProcessConsole does.
    synchronized (monitor) {
        String contents = monitor.getContents();
        if (!contents.isEmpty()) {
            // Call to unknown code while synchronising on the monitor. This is deadlock-prone!
            // The listener must not wait for other threads that are waiting in line to
            // synchronise on the monitor.
            listener.streamAppended(contents, monitor);
        }
        monitor.addListener(listener);
    }
}
PS: There is some weird stuff going on in ProcessConsole.java. Why is the content buffering switched off in the ProcessConsole.StreamListener constructor?! If ProcessConsole.StreamListener runs before this one, maybe this solution doesn't work.

Threading concept

Can somebody help me on this:
private Thread workerThread;
private EventWaitHandle waitHandle;

if (workerThread == null)
{
    workerThread = new Thread(new ThreadStart(Work));
    workerThread.Start();
    //workerThread.Join();
}
else if (workerThread.ThreadState == ThreadState.WaitSleepJoin)
{
    waitHandle.Set();
}

private void Work()
{
    while (true)
    {
        string filepath = RetrieveFile();
        if (filepath != null)
            ProcessFile(filepath);
        else
            // If no files are left to process, then wait
            waitHandle.WaitOne();
    }
}

private void ProcessFile(string filepath)
{
    XMLCreation myXML = new XMLCreation();
    myXML.WriteXml(filepath, XMLFullFilePath);
}

private string RetrieveFile()
{
    if (workQueue.Count > 0)
        return workQueue.Dequeue();
    else
        return null;
}
This is how it all works: I have a FileSystemWatcher event that fires only when a new file is added to that folder. The problem is that this is a small part of a bigger application, and when the watcher fires, another process is still accessing the file, so I get an error saying the file is being used by another process. I tried to solve this with threading, but with the above code only some of the files are processed, although the log shows all of them being processed. Is this the right way to do it, or am I missing something?
Thanks in advance.
You will have to use a mutex to control who accesses the file, so that only one process works with it at a time. If there is a possibility that more than one thread will be waiting to work with the same file, you will have to implement a producer-consumer threading system with a queue.
Here is the best documentation about threads you can find in .NET:
http://www.albahari.com/threading/