Connect a VTK callback with a Qt slot - callback

I'm trying to connect a VTK callback to a Qt slot, so that the slot fires when the callback happens.
I'm using a QVTKWidget to render a point cloud that has been added into a PCLVisualizer (from the point cloud library, PCL).
Let's show some code:
PointCloud.h
class PointCloud : public QObject {
    Q_OBJECT
private:
    static void loadStartCallback(
        vtkObject *caller,
        unsigned long eventId,
        void *clientData,
        void *callData
    );
    static void loadEndCallback(
        vtkObject *caller,
        unsigned long eventId,
        void *clientData,
        void *callData
    );
    void load(void);
    // more funcs and methods
private:
    QVTKWidget* widget;
    pcl::visualization::PCLVisualizer* visualizer;
    unsigned long observerStartTag;
    unsigned long observerEndTag;
    // more attributes
};
PointCloud.cpp
void PointCloud::loadStartCallback(
    vtkObject* caller,
    unsigned long eventId,
    void* clientData,
    void* callData
) {
    qDebug() << "\t\tPointCloud - loadCallback started\n";
    if(clientData) {
        PointCloud* self = reinterpret_cast<PointCloud*>( clientData );
        self->widget->GetRenderWindow()->RemoveObserver(self->observerStartTag);
    }
}
void PointCloud::loadEndCallback(
    vtkObject* caller,
    unsigned long eventId,
    void* clientData,
    void* callData
) {
    qDebug() << "\t\tPointCloud - loadCallback ended\n";
    if(clientData) {
        PointCloud* self = reinterpret_cast<PointCloud*>( clientData );
        self->widget->GetRenderWindow()->RemoveObserver(self->observerEndTag);
    }
}
void PointCloud::load(void) {
    vtkSmartPointer<vtkRenderWindow> renderWindow = visualizer->getRenderWindow();
    vtkSmartPointer<vtkCallbackCommand> startCallback = vtkSmartPointer<vtkCallbackCommand>::New();
    startCallback->SetCallback( loadStartCallback );
    startCallback->SetClientData(this);
    observerStartTag = renderWindow->AddObserver(vtkCommand::StartEvent, startCallback );
    vtkSmartPointer<vtkCallbackCommand> endCallback = vtkSmartPointer<vtkCallbackCommand>::New();
    endCallback->SetCallback( loadEndCallback );
    endCallback->SetClientData(this);
    observerEndTag = renderWindow->AddObserver(vtkCommand::EndEvent, endCallback );
    // more processing. local_cloud is already populated
    // and functional at this point
    widget->SetRenderWindow( renderWindow );
    visualizer->addPointCloud<pcl::PointXYZ>(local_cloud, "local_cloud");
    widget->show();
    widget->update();
}
This works well: once the cloud rendering starts, "PointCloud - loadCallback started" is printed, and when the rendering has ended and the cloud is shown, "PointCloud - loadCallback ended" is printed.
Now, besides printing the end message, I want to fire a Qt slot as well. I'm trying to use the vtkEventQtSlotConnect class for that, as it seems that is the right choice for connecting callbacks to slots:
New in PointCloud.h
private slots:
void test(void);
New in PointCloud.cpp
void PointCloud::test(void) { qDebug() << "\t\tThis is a test\n"; }
Added into PointCloud::load(), before calling visualizer->addPointCloud
vtkEventQtSlotConnect* vtk_qt_connector = vtkEventQtSlotConnect::New();
vtk_qt_connector->Connect(
    renderWindow,
    vtkCommand::EndEvent,
    this,
    SLOT(test(void)),
    0,
    1.0
);
// AFTER widget->update()
vtk_qt_connector->Disconnect(); // NO PARAM: disconnects ALL slots
vtk_qt_connector->Delete();
} // End of PointCloud::load()
With those additions, the messages in the callbacks are printed, but the message inside the test() slot is never shown.
Any idea of what I'm doing wrong?
EDIT
In the VTK examples for callbacks that I've seen, a vtkRenderWindowInteractor is used to manage the callbacks. However, if I add the callback observers to it, it is not as accurate as adding them to the render window directly.
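For illustration, attaching the same observers to the interactor instead would look roughly like this (a sketch only; it assumes the interactor is reached via QVTKWidget::GetInteractor() once the render window has been set, and reuses the callback commands from load() above):

// Sketch: observers on the interactor instead of the render window.
vtkRenderWindowInteractor* interactor = widget->GetInteractor();
observerStartTag = interactor->AddObserver(vtkCommand::StartEvent, startCallback);
observerEndTag = interactor->AddObserver(vtkCommand::EndEvent, endCallback);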

OK, I have checked the code again and found something new. A co-worker added a QThread in the load() method to smooth things out, but forgot to document or mention that there was a QThread there.
In PointCloud::load()
QThread* thread = new QThread;
ThreadedCloud* tcloud = new ThreadedCloud; // computes internal vars and more
tcloud->moveToThread(thread);
connect(thread, SIGNAL(started()), tcloud, SLOT(read()), Qt::QueuedConnection );
connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()), Qt::QueuedConnection );
connect(tcloud, SIGNAL(cloudIsLoaded()), this, SLOT(addCloudToViewer()), Qt::QueuedConnection );
connect(tcloud, SIGNAL(cloudIsLoaded()), thread, SLOT(quit()), Qt::QueuedConnection );
connect(tcloud, SIGNAL(cloudIsLoaded()), tcloud, SLOT(deleteLater()), Qt::QueuedConnection );
connect(tcloud, SIGNAL(cloudIsNotLoaded(std::string)), this, SLOT(errorLoadingCloud(std::string)), Qt::QueuedConnection );
thread->start();
cloudIsLoaded() is a signal that is emitted when the thread has finished whatever it has to do and we're ready to add the cloud to the PCLVisualizer and render it. That is done in addCloudToViewer().
The key factor here is that once the thread is started, the control flow exits the load() method, and because I'm disconnecting the callback/slot connection before the end of the method, by the time the cloud is being rendered that connection isn't there anymore!
So the solution was to move the vtk_qt_connector into the addCloudToViewer() method and make the callback/slot connection there.
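For completeness, a minimal sketch of how that can look. It assumes the connector is kept as a class member (here called vtk_qt_connector_, an illustrative name) so it outlives the method, and that local_cloud has already been populated by the worker thread:

// Sketch only: make the callback/slot connection where the cloud is actually
// added and rendered, and keep the connector alive as a member.
void PointCloud::addCloudToViewer(void) {
    vtkSmartPointer<vtkRenderWindow> renderWindow = visualizer->getRenderWindow();
    if (!vtk_qt_connector_) { // assumed member: vtkSmartPointer<vtkEventQtSlotConnect>
        vtk_qt_connector_ = vtkSmartPointer<vtkEventQtSlotConnect>::New();
    }
    vtk_qt_connector_->Connect(renderWindow, vtkCommand::EndEvent, this, SLOT(test()));
    widget->SetRenderWindow(renderWindow);
    visualizer->addPointCloud<pcl::PointXYZ>(local_cloud, "local_cloud"); // filled by the worker
    widget->show();
    widget->update();
}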

Related

STM32 FreeRTOS - UART Deferred Interrupt Problem

I am trying to read data of unknown size using the UART receive interrupt. In the callback function, I enable the Rx interrupt in order to read characters until '\n' is received. When '\n' is received, the higher-priority task, which is the deferred interrupt handler, is woken. The problem is that I tried reading the bytes one by one via the callback function and putting each character into a buffer, but unfortunately the buffer never gets any characters. Moreover, the deferred interrupt handler is never woken.
My STM32 board is STM32F767ZI, and my IDE is KEIL.
Some important notes before sharing the code:
1. rxIndex and gpsBuffer are declared as global variables.
2. The periodic function works without any problem.
Here is my code:
Periodic Function, Priority = 1
void vPeriodicTask(void *pvParameters)
{
const TickType_t xDelay500ms = pdMS_TO_TICKS(500UL);
while (1) {
vTaskDelay(xDelay500ms);
HAL_UART_Transmit(&huart3,(uint8_t*)"Imu\r\n",sizeof("Imu\r\n"),1000);
HAL_GPIO_TogglePin(GPIOB,GPIO_PIN_7);
}
}
Deferred Interrupt, Priority = 3
void vHandlerTask(void *pvParameters)
{
const TickType_t xMaxExpectedBlockTime = pdMS_TO_TICKS(1000);
while(1) {
if (xSemaphoreTake(xBinarySemaphore,xMaxExpectedBlockTime) == pdPASS) {
HAL_UART_Transmit(&huart3,(uint8_t*)"Semaphore Acquired\r\n",sizeof("Semaphore
Acquired\r\n"),1000);
// Some important processes will be added here
rxIndex = 0;
HAL_GPIO_TogglePin(GPIOB,GPIO_PIN_14);
}
}
}
Callback function:
void HAL_UART_RxCptlCallBack(UART_HandleTypeDef *huart)
{
gpsBuffer[rxIndex++] = rData;
if (rData == 0x0A) {
BaseType_t xHigherPriorityTaskWoken;
xSemaphoreGiveFromISR(xBinarySemaphore,&xHigherPriorityTaskWoken);
portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}
HAL_UART_Receive_IT(huart,(uint8_t*)&rData,1);
}
Main function
HAL_UART_Receive_IT(&huart3,&rData,1);
xBinarySemaphore = xSemaphoreCreateBinary();
if (xBinarySemaphore != NULL) {
//success
xTaskCreate(vHandlerTask,"Handler",128,NULL,1,&vHandlerTaskHandler);
xTaskCreate(vPeriodicTask,"Periodic",128,NULL,3,&vPeriodicTaskHandler);
vTaskStartScheduler();
}
Using HAL here is a good way to get into trouble. It uses HAL_Delay, which is SysTick-dependent, and you should rewrite that function to read the RTOS tick instead.
I use queues to pass the data (references to the data), but your approach should work too. There is always a big question mark when using the HAL functions.
void HAL_UART_RxCptlCallBack(UART_HandleTypeDef *huart)
{
BaseType_t xHigherPriorityTaskWoken = pdFALSE;
gpsBuffer[rxIndex++] = rData;
if (rData == 0x0A) {
if(xSemaphoreGiveFromISR(xBinarySemaphore,&xHigherPriorityTaskWoken) == pdFALSE)
{
/* some error handling */
}
}
HAL_UART_Receive_IT(huart,(uint8_t*)&rData,1);
portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}
In conclusion: if I use HAL together with an RTOS, I always modify the way HAL handles timeouts.
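To illustrate the queue-based variant mentioned above, here is a minimal sketch under a few assumptions: the queue handle uartRxQueue is new (it is not in the original code and would be created in main with xQueueCreate), rData is the single received byte from the question, and HAL's standard callback name HAL_UART_RxCpltCallback is used.

/* Sketch of the queue-based approach. uartRxQueue is an assumption; rData,
   gpsBuffer and rxIndex are the globals from the question. */
QueueHandle_t uartRxQueue;   /* created in main(): uartRxQueue = xQueueCreate(128, sizeof(uint8_t)); */

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    xQueueSendFromISR(uartRxQueue, &rData, &xHigherPriorityTaskWoken);
    HAL_UART_Receive_IT(huart, (uint8_t*)&rData, 1);   /* re-arm reception */
    portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}

void vHandlerTask(void *pvParameters)
{
    uint8_t byte;
    while (1) {
        if (xQueueReceive(uartRxQueue, &byte, pdMS_TO_TICKS(1000)) == pdPASS) {
            gpsBuffer[rxIndex++] = byte;
            if (byte == 0x0A) {
                /* a complete line is in gpsBuffer; process it, then reset the index */
                rxIndex = 0;
            }
        }
    }
}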

Eclipse Marking std namespace as 'could not be resolved'

I am having issues setting up Eclipse CDT to correctly do development in C++11. It's highlighting std::error_code and honestly everything in the std namespace, while the application compiles just fine on the command line.
Screen shot included.
The full project can be found on github.com/rbaindourov/OpenWeatherMapDSLink including the hidden files used by Eclipse.
I can host the OVA of the virtual machine as well if you like.
Thanks in advance to any industrious individual who is willing to take the time to help me configure Eclipse CDT correctly.
And apologies if my question is not to your liking.
The source for the main.cpp file originally came from:
https://github.com/CiscoDevNet/kinetic-efm-cpp-sdk/blob/master/efm-cpp-sdk-1.0.15-Ubuntu16.04-dslink-dev/examples/simple_responder/main.cpp
// #copyright_start
// Copyright (c) 2019 Cisco and/or its affiliates. All rights reserved.
// #copyright_end
#include <efm_link.h>
#include <efm_link_options.h>
#include <efm_logging.h>
#include "error_code.h"
#include <iostream>
#include <random>
#include <sstream>
/// #brief The simple responder link example demonstrates the EFM SDK API for responder implementations. Shows node,
/// action creation, and stream handling.
class SimpleResponderLink
{
public:
/// Constructs the responder link implementation.
/// #param link The link to work with.
SimpleResponderLink(cisco::efm_sdk::Link& link)
: link_(link)
, responder_(link.responder())
{
}
/// The initialize callback that will be called as soon as the initialization including serialization is complete.
/// Will create the first level node hierarchy. Only nodes not created by the deserialization will actually be
/// created.
/// #param link_name The name of the link.
/// #param ec The error code will be set to an error if the initialization failed.
void initialize(const std::string& link_name, const std::error_code& ec)
{
if (!ec) {
LOG_EFM_DEBUG(
"SimpleResponderLink", cisco::efm_sdk::DebugLevel::l1, "Responder link '" << link_name << "' initialized");
} else {
LOG_EFM_ERROR(ec, "could not initialize responder link");
}
cisco::efm_sdk::NodeBuilder builder{"/"};
builder.make_node("sdk version")
.display_name("SDK Version")
.type(cisco::efm_sdk::ValueType::String)
.value(link_.get_version_info());
builder.make_node("text")
.display_name("String")
.type(cisco::efm_sdk::ValueType::String)
.value("Hello, World!")
.writable(
cisco::efm_sdk::Writable::Write, std::bind(&::SimpleResponderLink::set_text, this, std::placeholders::_1))
.on_subscribe(std::bind(&::SimpleResponderLink::on_subscribe_text, this, std::placeholders::_1));
builder.make_node("set_text")
.display_name("Set Text")
.action(cisco::efm_sdk::Action(
cisco::efm_sdk::PermissionLevel::Read,
std::bind(
&SimpleResponderLink::set_text_called,
this,
std::placeholders::_1,
std::placeholders::_2,
std::placeholders::_3,
std::placeholders::_4))
.add_param(cisco::efm_sdk::ActionParameter{"String", cisco::efm_sdk::ValueType::String})
.add_column({"Success", cisco::efm_sdk::ValueType::Bool})
.add_column({"Message", cisco::efm_sdk::ValueType::String}));
responder_.add_node(
std::move(builder),
std::bind(&SimpleResponderLink::nodes_created, this, std::placeholders::_1, std::placeholders::_2));
}
/// Callback that will be called upon construction of the first level nodes.
/// #param paths The paths of the nodes that were actually created. If a path that was added to the NodeBuilder is
/// not part of the paths vector, it means that the node was already created. Normally, there is no need to check for
/// the presence of a path. If the error code signals no error, just continue with your work.
/// #param ec The error code will be set to an error if the node creation failed.
void nodes_created(const std::vector<cisco::efm_sdk::NodePath>& paths, const std::error_code& ec)
{
if (!ec) {
LOG_EFM_DEBUG("SimpleResponderLink", cisco::efm_sdk::DebugLevel::l1, "created nodes");
for (const auto& path : paths) {
LOG_EFM_DEBUG("SimpleResponderLink", cisco::efm_sdk::DebugLevel::l2, "created path - " << path);
}
}
}
/// Called every time the link connects to the broker.
/// Will set the value on the '/text' path.
/// #param ec The error code will be set to an error if the connect failed.
void connected(const std::error_code& ec)
{
if (!ec) {
disconnected_ = false;
LOG_EFM_INFO(responder_error_code::connected);
responder_.set_value(text_path_, cisco::efm_sdk::Variant{"Hello, World!"}, [](const std::error_code&) {});
}
}
/// Called every time the link is disconnected from the broker.
/// Will set a flag to signal the disconnected status.
/// #param ec The error code will be set to an error if the disconnect failed.
void disconnected(const std::error_code& ec)
{
LOG_EFM_INFO(responder_error_code::disconnected, ec.message());
disconnected_ = true;
}
/// Will be called when the node '/text' is set via an \#set action.
/// #param value The value that was set.
void set_text(const cisco::efm_sdk::Variant& value)
{
LOG_EFM_INFO(responder_error_code::set_text, value);
}
/// Action callback for the '/set_text' action. Will set the value of the path '/text' to the given one. It will also
/// echo back the set parameter.
/// The stream will be closed automatically by the link.
/// #param stream The stream to add a result to.
/// #param parent_path The path of the node the action was called for.
/// #param params The parameters set by the peer.
/// #param ec The error code will be set to an error if the action failed.
void set_text_called(
const cisco::efm_sdk::MutableActionResultStreamPtr& stream,
const cisco::efm_sdk::NodePath& parent_path,
const cisco::efm_sdk::Variant& params,
const std::error_code& ec)
{
(void)parent_path;
if (!ec) {
LOG_EFM_DEBUG("SimpleResponderLink", cisco::efm_sdk::DebugLevel::l3, "set_text_called");
const auto* input = params.get("String");
if (input) {
auto text = *input;
responder_.set_value(text_path_, text, [stream, text](const std::error_code& ec) {
if (!ec) {
stream->set_result(cisco::efm_sdk::UniqueActionResultPtr{new cisco::efm_sdk::ActionValuesResult{
cisco::efm_sdk::ActionValuesResult(cisco::efm_sdk::ActionSuccess).add_value(true).add_value(text)}});
} else {
stream->set_result(cisco::efm_sdk::UniqueActionResultPtr{
new cisco::efm_sdk::ActionValuesResult{cisco::efm_sdk::ActionValuesResult(cisco::efm_sdk::ActionError)
.add_value(false)
.add_value("Could not set value")}});
}
});
return;
}
}
stream->set_result(cisco::efm_sdk::UniqueActionResultPtr{
new cisco::efm_sdk::ActionValuesResult{cisco::efm_sdk::ActionValuesResult(cisco::efm_sdk::ActionError)
.add_value(false)
.add_value("Could not set value")}});
}
/// Will be called if a subscribe or unsubscribe is issued for the '/text' node.
/// #param subscribe True if a subscribe was done or false if an unsubscribe was done.
void on_subscribe_text(bool subscribe)
{
if (subscribe) {
LOG_EFM_INFO(responder_error_code::subscribed_text);
} else {
LOG_EFM_INFO(responder_error_code::unsubscribed_text);
}
}
private:
cisco::efm_sdk::Link& link_;
cisco::efm_sdk::Responder& responder_;
cisco::efm_sdk::NodePath text_path_{"/text"};
bool disconnected_{true};
};
int main(int argc, char* argv[])
{
cisco::efm_sdk::FileConfigLoader loader;
cisco::efm_sdk::LinkOptions options("Simple-Responder-Link", loader);
if (!options.parse(argc, argv, std::cerr)) {
return EXIT_FAILURE;
}
cisco::efm_sdk::Link link(std::move(options), cisco::efm_sdk::LinkType::Responder);
LOG_EFM_INFO(::responder_error_code::build_with_version, link.get_version_info());
SimpleResponderLink responder_link(link);
link.set_on_initialized_handler(
std::bind(&SimpleResponderLink::initialize, &responder_link, std::placeholders::_1, std::placeholders::_2));
link.set_on_connected_handler(std::bind(&SimpleResponderLink::connected, &responder_link, std::placeholders::_1));
link.set_on_disconnected_handler(
std::bind(&SimpleResponderLink::disconnected, &responder_link, std::placeholders::_1));
link.run();
return EXIT_SUCCESS;
}

Boost ASIO asynchronous socket with timeout

I am trying to find the proper / canonical way to implement the code below, which provides a synchronous wrapper around async Asio methods in order to have a timeout. The code appears to work, but none of the examples I have looked at use a boolean in the lambda to terminate the do/while loop running the I/O service, so I'm not sure whether this is the proper form or whether it will have unintended consequences down the road. Some do things like
while(IOService.run_one());
but that never terminates.
Edit:
I'm trying to follow this example:
http://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio/example/timeouts/blocking_tcp_client.cpp
But in this code they avoid needing the number of bytes read by using a \n terminator. I need the number of bytes read, hence the callback.
I have seen many other solutions that use Boost async futures as well as other methods, but they do not seem to compile with the gcc / Boost versions that are standard on Ubuntu 16.04, and I would like to stay with those versions.
ByteArray SessionInfo::Read(const boost::posix_time::time_duration &timeout)
{
Deadline.expires_from_now(timeout);
auto bytes_received = 0lu;
auto got_callback = false;
SessionSocket->async_receive(boost::asio::buffer(receive_buffer_,
1024),
[&bytes_received, &got_callback](const boost::system::error_code &error, std::size_t bytes_transferred) {
bytes_received = bytes_transferred;
got_callback = true;
});
do
{
IOService.run_one();
}while (!got_callback);
auto bytes = ByteArray(receive_buffer_, receive_buffer_ + bytes_received);
return bytes;
}
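For comparison, the pattern from the linked blocking_tcp_client example could be adapted to also capture the byte count without a separate boolean, roughly like this (a sketch only; it assumes the example's deadline handling, which closes or cancels the socket on expiry, is in place elsewhere):

// Sketch of the blocking_tcp_client pattern, extended to record bytes_received.
// would_block is used as the "not completed yet" sentinel, as in the example.
boost::system::error_code ec = boost::asio::error::would_block;
std::size_t bytes_received = 0;
SessionSocket->async_receive(
    boost::asio::buffer(receive_buffer_, 1024),
    [&](const boost::system::error_code &e, std::size_t n) { ec = e; bytes_received = n; });
do {
    IOService.run_one();
} while (ec == boost::asio::error::would_block);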
This is how I'd do it. The first event that fires causes io_service::run() to return.
ByteArray SessionInfo::Read(const boost::posix_time::time_duration &timeout)
{
Deadline.expires_from_now(timeout); // I assume this is a member of SessionInfo
auto got_callback{false};
auto result = ByteArray();
SessionSocket->async_receive( // idem for SessionSocket
boost::asio::buffer(receive_buffer_, 1024),
[&](const boost::system::error_code ec,
std::size_t bytes_received)
{
if (!ec)
{
result = ByteArray(receive_buffer_, bytes_received);
got_callback = true;
}
Deadline.cancel();
});
Deadline.async_wait([&](const boost::system::error_code ec)
{
if (!ec)
{
SessionSocket->cancel();
}
});
IOService.run();
return result;
}
Reading the conversation below M. Roy's answer, your goal is to make sure that IOService.run() returns. All points are valid: the instance of boost::asio::io_service should only be run once per thread of execution (meaning not simultaneously, though it could be run multiple times in series), so it is imperative to know how it is used. That said, to make the IOService stop, I would amend M. Roy's solution like so:
ByteArray SessionInfo::Read(const boost::posix_time::time_duration &timeout) {
Deadline.expires_from_now(timeout);
auto got_callback{false};
auto result = ByteArray();
SessionSocket->async_receive(
boost::asio::buffer(receive_buffer_, 1024),
[&](const boost::system::error_code ec,
std::size_t bytes_received) {
if (!ec) {
result = ByteArray(receive_buffer_, bytes_received);
got_callback = true;
}
Deadline.cancel();
});
Deadline.async_wait(
[&](const boost::system::error_code ec) {
if (!ec) {
SessionSocket->cancel();
IOService.stop();
}
});
IOService.run();
return result;
}
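One hedged follow-up on reusing this wrapper: since this version stops the io_service, the service has to be reset before the next call runs it again, otherwise run() would return immediately without doing any work. Something along these lines at the start of each Read() call (or right after run()) should cover it:

// Sketch: after stop()/run() have completed, reset the io_service so the
// next Read() call can run it again (newer Boost.Asio spells this restart()).
IOService.reset();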

Data is getting discarded in TCP/IP with boost::asio::read_some?

I have implemented a TCP server using boost::asio. This server uses the basic_stream_socket::read_some function to read data. I know that read_some does not guarantee that the supplied buffer will be full before it returns.
In my project I am sending strings separated by a delimiter (if that matters). At the client side I am using the WinSock::send() function to send data. Now my problem is that on the server side I am not able to get all the strings which were sent from the client side. My suspicion is that read_some is receiving some data and discarding the leftover data for some reason, and then in the next call it receives another string.
Is that really possible with TCP/IP?
I tried to use async_receive, but that eats up all my CPU; also, since the buffer has to be cleaned up by the callback function, it causes a serious memory leak in my program. (I am using io_service::poll() to call the handler. That handler is getting called at a very slow rate compared to the calling rate of async_read().)
I also tried to use the free function read, but that will not serve my purpose as it blocks for too long with the buffer size I am supplying.
My previous implementation of the server was with the WinSock API, where I was able to receive all data using WinSock::recv().
Please give me some leads so that I can receive complete data using boost::asio.
Here is my server-side thread loop:
void
TCPObject::receive()
{
if (!_asyncModeEnabled)
{
std::string recvString;
if ( !_tcpSocket->receiveData( _maxBufferSize, recvString ) )
{
LOG_ERROR("Error Occurred while receiving data on socket.");
}
else
_parseAndPopulateQueue ( recvString );
}
else
{
if ( !_tcpSocket->receiveDataAsync( _maxBufferSize ) )
{
LOG_ERROR("Error Occurred while receiving data on socket.");
}
}
}
receiveData() in TCPSocket
bool
TCPSocket::receiveData( unsigned int bufferSize, std::string& dataString )
{
boost::system::error_code error;
char *buf = new char[bufferSize + 1];
size_t len = _tcpSocket->read_some( boost::asio::buffer((void*)buf, bufferSize), error);
if(error)
{
LOG_ERROR("Error in receiving data.");
LOG_ERROR( error.message() );
_tcpSocket->close();
delete [] buf;
return false;
}
buf[len] ='\0';
dataString.insert( 0, buf );
delete [] buf;
return true;
}
receiveDataAsync in TCP Socket
bool
TCPSocket::receiveDataAsync( unsigned int bufferSize )
{
char *buf = new char[bufferSize + 1];
try
{
_tcpSocket->async_read_some( boost::asio::buffer( (void*)buf, bufferSize ),
boost::bind(&TCPSocket::_handleAsyncReceive,
this,
buf,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred) );
//! Asks io_service to execute callback
_ioService->poll();
}
catch (std::exception& e)
{
LOG_ERROR("Error Receiving Data Asynchronously");
LOG_ERROR( e.what() );
delete [] buf;
return false;
}
//we dont delete buf here as it will be deleted by callback _handleAsyncReceive
return true;
}
Asynch Receive handler
void
TCPSocket::_handleAsyncReceive(char *buf, const boost::system::error_code& ec, size_t size)
{
if(ec)
{
LOG_ERROR ("Error occurred while sending data Asynchronously.");
LOG_ERROR ( ec.message() );
}
else if ( size > 0 )
{
buf[size] = '\0';
emit _asyncDataReceivedSignal( QString::fromLocal8Bit( buf ) );
}
delete [] buf;
}
Client Side sendData function.
sendData(std::string data)
{
if(!_connected)
{
return;
}
const char *pBuffer = data.c_str();
int bytes = data.length() + 1;
int i = 0,j;
while (i < bytes)
{
j = send(_connectSocket, pBuffer+i, bytes-i, 0);
if(j == SOCKET_ERROR)
{
_connected = false;
if(!_bNetworkErrNotified)
{
_bNetworkErrNotified=true;
emit networkErrorSignal(j);
}
LOG_ERROR( "Unable to send Network Packet" );
break;
}
i += j;
}
}
Boost.Asio's TCP capabilities are pretty well used, so I would be hesitant to suspect it is the source of the problem. In most cases of data loss, the problem is the result of application code.
In this case, there is a problem in the receiver code. The sender is delimiting strings with \0. However, the receiver fails to properly handle the delimiter in cases where multiple strings are read in a single read operation, as string::insert() will truncate the char* when it reaches the first delimiter.
For example, the sender writes two strings "Test string\0" and "Another test string\0". In TCPSocket::receiveData(), the receiver reads "Test string\0Another test string\0" into buf. dataString is then populated with dataString.insert(0, buf). This particular overload will copy up to the delimiter, so dataString will contain "Test string". To resolve this, consider using the string::insert() overload that takes the number of characters to insert: dataString.insert(0, buf, len).
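A small, self-contained illustration of the difference between the two insert() overloads (a standalone sketch; the sample strings are just for demonstration):

// Sketch: why insert(0, buf) truncates at the first '\0' while
// insert(0, buf, len) keeps everything that was read.
#include <iostream>
#include <string>

int main() {
    const char buf[] = "Test string\0Another test string\0"; // as read_some might return it
    const std::size_t len = sizeof(buf) - 1;                 // 32 bytes, including both '\0'

    std::string truncated;
    truncated.insert(0, buf);        // stops at the first '\0' -> "Test string"

    std::string complete;
    complete.insert(0, buf, len);    // keeps all 32 bytes: both strings and delimiters

    std::cout << truncated.size() << " vs " << complete.size() << "\n"; // prints "11 vs 32"
    return 0;
}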
I have not used the poll function before. What I did is create a worker thread that is dedicated to processing ASIO handlers with the run function, which blocks. The Boost documentation says that each thread that is to be made available to process async event handlers must first call the io_service::run or io_service::poll method. I'm not sure what else you are doing with the thread that calls poll.
So, I would suggest dedicating at least one worker thread to the async ASIO event handlers and using run instead of poll. If you want that worker thread to continue to process all async messages without returning and exiting, then add a work object to the io_service object. See this link for an example.
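A minimal sketch of that setup, with the io_service and thread names chosen purely for illustration:

// Sketch: a dedicated worker thread runs the io_service; the work object
// keeps run() from returning even when no handlers are currently pending.
#include <boost/asio.hpp>
#include <thread>

int main() {
    boost::asio::io_service io_service;
    boost::asio::io_service::work work(io_service);   // keeps run() from returning

    std::thread worker([&io_service] { io_service.run(); });

    // ... set up sockets and issue async_read_some() calls here;
    // their handlers will be invoked on the worker thread ...

    io_service.stop();   // at shutdown
    worker.join();
    return 0;
}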

Test Event expiration in Drools Fusion CEP

Hi, I have tested this in several ways, but I'm still unable to verify the event expiration mechanism in Drools Fusion, so I'm looking for a little guidance, please.
I've read the manual and I'm interested in this feature:
In other words, once an event is inserted into the working memory, it is possible for the engine to find out when an event can no longer match other facts and automatically retract it, releasing its associated resources.
I'm using the Drools IDE in Eclipse, 5.4.0.Final, and I modified the template code created by the "New Drools Project" wizard to test and verify event expiration.
The code is below. The way I understood it, to make the "lifecycle" work correctly:
You must set up the KBase in STREAM mode - check
You must insert the Events in temporal order - check
You must define temporal constraints between Events - check, in my case it is the last Message()
However, when I inspect the EventFactHandles at the end, none of the Events has expired.
Thanks for your help.
Java:
public class DroolsTest {
public static final void main(String[] args) {
try {
KnowledgeBase kbase = readKnowledgeBase();
// I do want the pseudo clock
KnowledgeSessionConfiguration conf = KnowledgeBaseFactory.newKnowledgeSessionConfiguration();
conf.setOption(ClockTypeOption.get("pseudo"));
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession(conf, null);
SessionPseudoClock clock = ksession.getSessionClock();
KnowledgeRuntimeLogger logger = KnowledgeRuntimeLoggerFactory.newFileLogger(ksession, "test");
// Insert of 2 Event:
Message message = new Message();
message.setMessage("Message 1");
message.setStatus(Message.HELLO);
ksession.insert(message);
ksession.fireAllRules();
clock.advanceTime(1, TimeUnit.DAYS);
Message message2 = new Message();
message2.setMessage("Message 2");
message2.setStatus(Message.HELLO);
ksession.insert(message2);
ksession.fireAllRules();
clock.advanceTime(1, TimeUnit.DAYS);
ksession.fireAllRules();
// Now I check what I have in the working memory and whether each EventFactHandle is expired or not:
for (FactHandle f : ksession.getFactHandles()) {
if (f instanceof EventFactHandle) {
System.out.println(((EventFactHandle)f)+" "+((EventFactHandle)f).isExpired());
} else {
System.out.println("not an Event: "+f);
}
}
logger.close();
} catch (Throwable t) {
t.printStackTrace();
}
}
private static KnowledgeBase readKnowledgeBase() throws Exception {
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add(ResourceFactory.newClassPathResource("Sample.drl"), ResourceType.DRL);
KnowledgeBuilderErrors errors = kbuilder.getErrors();
if (errors.size() > 0) {
for (KnowledgeBuilderError error: errors) {
System.err.println(error);
}
throw new IllegalArgumentException("Could not parse knowledge.");
}
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
// following 2 lines is the template code modified for STREAM configuration
KnowledgeBaseConfiguration config = KnowledgeBaseFactory.newKnowledgeBaseConfiguration();
config.setOption( EventProcessingOption.STREAM );
return kbase;
}
/*
* This is OK from template, as from the doc:
* By default, the timestamp for a given event is read from the Session Clock and assigned to the event at the time the event is inserted into the working memory.
*/
public static class Message {
public static final int HELLO = 0;
public static final int GOODBYE = 1;
private String message;
private int status;
public String getMessage() {
return this.message;
}
public void setMessage(String message) {
this.message = message;
}
public int getStatus() {
return this.status;
}
public void setStatus(int status) {
this.status = status;
}
}
}
Drools:
package com.sample
import com.sample.DroolsTest.Message;
declare Message
@role( event )
end
declare window LastMessageWindow
Message() over window:length(1)
end
rule "Hello World"
when
accumulate( $m : Message(status==Message.HELLO) from window LastMessageWindow,
$messages : collectList( $m ) )
then
System.out.println( ((Message)$messages.get(0)).getMessage() );
end
Please note: even if I add an expiration of 1 second to the Message event, via
@expires(1s)
I still don't get the expected result: I would have expected the very first Message event inserted to be expired by now. Thanks for your help.
Found the solution! Obviously it was me being stupid, not realizing I was using Drools 5.4.0.Final while still referring to the old documentation for 5.2.0.Final. In the updated documentation for Drools Fusion 5.4.0.Final, this box was added under 2.6.2, Sliding Length Windows:
Please note that length based windows do not define temporal constraints for event expiration from the session, and the engine will not consider them. If events have no other rules defining temporal constraints and no explicit expiration policy, the engine will keep them in the session indefinitely.
Therefore the 3rd requirement I originally listed, "You must define temporal constraints between Events", is obviously NOT met, because I now understand that a sliding length window in Drools 5.4.0.Final:
Message() over window:length(1)
is indeed NOT a definition of a temporal constraint for event expiration from the session.
I'm updating this answer in the hope that somebody will find it helpful. Also, just so you know, my mistake was really relying on Google to reach the docs; sometimes you don't get redirected to the current release's documentation, it seems...