We are planning to use Drools/JBoss BRMS 6 for business rules management. Our plan is to write rules using the workbench, deploy the rules package in multiple Execution Servers and allow applications to access the Rules package by making calls to the REST API. We do not have any Java wrappers or custom classes in between the calling applications and the rules package.
I am trying to incorporate some logging into the rules engine. I understand that there are EventListener interfaces that can be implemented.
Please would you provide some information/guidance on how to implement listeners in our kind of setup? Where would I create and store the Java classes/methods that implement the event listeners?
How can a calling application insert an event listener into the session? Will it be part of the XML/JSON payload?
Thanks
1. Where to implement the listeners?
The listeners obviously have to be implemented in Java. One simple place I found to put those implementations is a separate Maven project. After all, a project in the kie-workbench is itself a Maven project. So you can create a separate project (outside the kie-workbench), implement the listeners you want, and then add this new project as a dependency of your kie-workbench project (check the documentation on how to do that).
The only problem I found with this approach is that once you define the dependency between your projects, the kie-workbench will scan every single class of the new project and of any other dependency it has. Check this link for more information.
So, if your listener project doesn't have too many dependencies, you should be good to go. Please note that, in theory, you could mark any kie/drools dependency in your listener project as <scope>provided</scope>.
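As an example, a minimal logging listener (the class name matches the one used in the configuration rule below; the package and the SLF4J logger are just assumptions) could extend the no-op adapter from the public KIE API:

package com.example.listeners; // hypothetical package of the separate Maven project

import org.kie.api.event.rule.AfterMatchFiredEvent;
import org.kie.api.event.rule.DefaultAgendaEventListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Logs every rule that fires. Only the callbacks you care about need to be overridden.
public class MyAgendaEventListener extends DefaultAgendaEventListener {

    private static final Logger LOG = LoggerFactory.getLogger(MyAgendaEventListener.class);

    @Override
    public void afterMatchFired(AfterMatchFiredEvent event) {
        LOG.info("Rule fired: {}", event.getMatch().getRule().getName());
    }
}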
2. How can I configure these listeners?
A trick that I always use is to have what I call a "configuration" rule to do this kind of job.
A "Configuration" rule is a rule without LHS (and, if you are distrustful, a high salience). This kind of rules are guaranteed to be executed only once. Just make sure that you call a fireAllRules() before the first interaction with the kie-server, or that the first interaction always starts with a fireAllRules command.
Your configuration rule could look like this:
/**
Configures the session's listeners.
**/
rule "[SUB-CONFIG] Listeners Configuration"
salience 1000
when
then
// In Drools/KIE 6 the running session is reachable directly from kcontext, so no cast to an
// internal implementation class is needed. The two listeners are your own classes implementing
// the org.kie.api.event.rule listener interfaces.
kcontext.getKieRuntime().addEventListener(new MyWorkingMemoryEventListener());
kcontext.getKieRuntime().addEventListener(new MyAgendaEventListener());
end
You can place this rule in your kie-server project.
Hope it helps,
Related
The Visual Studio project templates for Service Fabric services contain code that can be reused across multiple projects, for example ServiceEventSource.cs or ActorEventSource.cs.
My programmer instinct wants to move this code to a shared library so I don't have duplicate code. But maybe that isn't the way to go with microservices, since you want small, independent services, and introducing a library makes them more dependent on each other. Then again, they already depend on the EventSource class.
My solution would be to move some of the reusable code to a base class in a shared project and inherit from that class in my services. Is this the best approach?
I'm guessing all your services are going to be doing lots of different jobs, so once you pad out your EventSource classes they'll be completely different from each other, except perhaps for one method such as "service started".
As with any logging, there are many different approaches. One of the main ones I like is AOP, or interceptor proxies via IoC containers: this keeps your classes clean while still letting you reuse the ETW code and get a decent amount of logging to debug with later down the line.
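The thread is about .NET/ETW, but the interceptor idea itself is language-agnostic. A rough sketch using a JDK dynamic proxy (all names here are purely illustrative; in .NET you would typically use an IoC container's interception support instead):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Wraps any interface-based service in a proxy that logs each call,
// so the service classes themselves stay free of logging code.
public final class LoggingProxy {

    @SuppressWarnings("unchecked")
    public static <T> T wrap(final T target, final Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();
            try {
                return method.invoke(target, args); // delegate to the real service
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.printf("%s took %d ms%n", method.getName(), elapsedMs);
            }
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}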
I moved a lot of duplicate code to my own NuGet libraries, which is working quite well. It is an extra dependency, but always better than duplicated code. Now I'm planning to make my own SF templates in Visual Studio, so I don't have to remove and adjust some of the files.
I found a nice library (EventSourceProxy) which helps me managing the EventSource code for ETW: https://github.com/jonwagner/EventSourceProxy
There is already a UI to control rules and perform CRUD-style operations using DRL files, or even using a DSL to make writing DRL rules easier for non-technical people. So, is there another way to create our own web page to control such rules, for even easier usability?
Is there any way to edit the source code of the available workbench UI?
Drools is open source and you can modify the UI if you want. You can also treat Drools like a component in your architecture by wrapping it in a service and calling it through your own simplified API. You can then call your API from your own web front end.
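A minimal sketch of that wrapping idea (the session name "ksession-rules" and the method are placeholders; it assumes the rules are available on the classpath via a kmodule.xml):

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

// Thin facade over Drools; your own web front end (or REST layer) calls this
// instead of talking to the rules engine directly.
public class RuleService {

    private final KieContainer kieContainer =
            KieServices.Factory.get().getKieClasspathContainer();

    public int evaluate(Object fact) {
        KieSession session = kieContainer.newKieSession("ksession-rules"); // name from kmodule.xml
        try {
            session.insert(fact);
            return session.fireAllRules(); // number of rules fired
        } finally {
            session.dispose();
        }
    }
}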
I have the following problem:
1: An OSGI bundle A (equinox) is activated, and the activator parses an XML file
2: in the XML file, a declarative service is requested, which is present in another bundle (B)
3: bundle B is not activated yet, so the activator of bundle A needs to wait
I know how to approach this purely in DS, but the parsing needs to be carried out in the activator. Also, I do not want to fool around with start levels and the like. Ideally, I would want to be able to register the service provided by bundle B when needed.
Is there an elegant way to achieve this?
Thanks,
Kees
OSGi services are dynamic by nature, and therefore you should never depend on the availability of a service. You need to use some kind of service tracking via a ServiceTracker or, better, go for the pure DS solution, which does all the hard work for you.
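For the DS route, a minimal sketch using the DS annotations (SomeService and the class name are made up for illustration) could look like this; SCR only activates the component once the required service is available, so nothing has to wait in an activator:

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Hypothetical component replacing the activator-based parsing.
@Component
public class XmlConfigParser {

    private SomeService serviceFromBundleB;

    @Reference
    void setServiceFromBundleB(SomeService service) {
        this.serviceFromBundleB = service; // bound by SCR before activation
    }

    @Activate
    void activate() {
        // Parse the XML file here; the mandatory reference is guaranteed to be bound.
    }
}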
Since you indicate that you must parse the XML file, I guess you decided to use some kind of external configuration that lists the services to use. I would suggest reconsidering this type of architecture. It requires you to write a lot of code, while the same goals can often be reached with a combination of Configuration Admin and Declarative Services/Blueprint.
I am interested in creating rules for Drools Planner. I want a user to be able to create his own rules in a Java app before starting Drools Planner. Maybe a Drools rule file could be generated after the user has added his rules. Would this be possible, or do I have to create the rule file while developing the whole Java application?
Many thanks...
Yes it's possible.
The trick is to build your own RuleBase and set it in the Planner config.
See section "5.3.4.2.2. A RuleBase (possibly defined by Guvnor)" in the manual.
You can construct a RuleBase in several ways, depending on how you want your user to edit his/her rules:
From a DRL file (see the sketch after this list). This presumes the user knows DRL. See the Drools Expert manual.
From a DSL file. This allows you to use natural language.
From the Guvnor webapp. This allows you to use the Guvnor tooling, such as the guided rule editor, decision table spreadsheets, ... You can even use a changeset.
From Guvnor in Eclipse or a standalone app (under development and experimental). There's some work being done in this area, but it's still young.
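For the first option (a DRL file written or generated by your app), a sketch using the Drools 5-era knowledge API could look roughly like this; the file name is a placeholder, and the exact setter for plugging the result into the Planner config depends on your Planner version, so check the manual section mentioned above:

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;

// Compile the user's rules at runtime and build a knowledge base from them.
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add(ResourceFactory.newFileResource("userRules.drl"), ResourceType.DRL);
if (kbuilder.hasErrors()) {
    throw new IllegalStateException(kbuilder.getErrors().toString());
}
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
// ...then set this knowledge base (or its underlying RuleBase) on the Planner configuration.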
Hey guys. We're using OSGi services in an Eclipse RCP application. To track them, we're using the org.osgi.util.tracker.ServiceTracker class. Sample code from the application looks like this:
mailServiceTracker = new ServiceTracker(context, MailService.class.getName(), null);
mailServiceTracker.open();
MailService service = (MailService) mailServiceTracker.getService();
Now my problem is that the getService() method frequently returns null when I have created a new service. The code works very well for services that have existed in the application for a long time, but each time I create a new service, I have to do many things before the service is finally found and tracked. For example, I regularly try:
'Clean...' in Eclipse
'Refresh' all projects in Eclipse
Rebuild the project on the command line
Sometimes those things help, and sometimes they don't. Does anyone have experience with these trackers and can tell me how to avoid this behavior and get the services tracked immediately upon creation?
Thanks
The problem is that the services you want may not have been created yet (especially in a bundle activator, as some bundles may not yet have started). If you still want to use the service tracker, you will need to provide a ServiceTrackerCustomizer and keep track (sorry, no pun intended) of the services as they come and go.
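A rough sketch of that customizer approach (MailService and context come from the snippet in the question; the rest is illustrative):

import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTracker;
import org.osgi.util.tracker.ServiceTrackerCustomizer;

// React when the MailService actually appears instead of polling getService().
ServiceTrackerCustomizer customizer = new ServiceTrackerCustomizer() {
    public Object addingService(ServiceReference reference) {
        MailService service = (MailService) context.getService(reference);
        // The service is now available: wire it into your component here.
        return service;
    }
    public void modifiedService(ServiceReference reference, Object service) { }
    public void removedService(ServiceReference reference, Object service) {
        // The service went away: stop using it and release it.
        context.ungetService(reference);
    }
};
ServiceTracker mailServiceTracker =
        new ServiceTracker(context, MailService.class.getName(), customizer);
mailServiceTracker.open();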
Or, you could just switch over to Declarative Services that handle this for you.
There is nothing wrong with using ServiceTrackers other than the fact that they are a fairly low-level way of tracking services. Whilst I agree that declarative services are a nice mechanism, simply dismissing ServiceTrackers because of "all sorts of issues" sounds like bad advice.
Back to the question.
As soon as a service tracker is created and opened, it gives you access to all services that match the filter condition you specified upon creation. There is no delay there. The only thing I can think of is that somehow your bundles are not correctly resolved, so services registered from bundle A are simply not visible to bundle B using a ServiceTracker. To check this, first locate the bundle that exports the package containing the service interface, and then make sure both A and B are actually wired to it.
Explaining the update/refresh mechanism in OSGi a bit more:
Whenever you update something in OSGi, it's a two step process.
Let's assume you update a bundle that contains a new version of an exported package. Let's also assume there is some consumer that imports it. As long as you only update the bundle but do not explicitly refresh the wiring (which import links to which export), the consumer will still be wired to the old version of the package. As soon as you do a package refresh (something you can do in OSGi via the PackageAdmin service), your consumer will be resolved again and will be wired to the new version.
The reason this is decoupled is that you might want to do updates of several bundles and not "refresh" after each one but instead defer such a refresh until all of them are updated.
It's quite possible that this is the effect you're seeing. Initially you only do an update, and only after the refresh will the tracker actually see the new version of the service.
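If you want to trigger that second step explicitly from code, a small sketch using the PackageAdmin service (deprecated in later OSGi releases) could look like this, where context is your BundleContext:

import org.osgi.framework.ServiceReference;
import org.osgi.service.packageadmin.PackageAdmin;

// Ask the framework to recompute the wiring after bundle updates.
// Passing null refreshes everything that is pending since the last refresh.
ServiceReference ref = context.getServiceReference(PackageAdmin.class.getName());
PackageAdmin packageAdmin = (PackageAdmin) context.getService(ref);
packageAdmin.refreshPackages(null);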
Not being flippant at all: don't use service trackers. They appear to make your life simple, but there are all sorts of issues with them. I'd recommend that you look into using Declarative Services instead. The support for DS in Eclipse has been very good from 3.5 onward.
You might want to check out this book and the associated presentations for more information on why using Service Trackers is a bad idea.
http://equinoxosgi.org/