Runtime creation and persistence of executable model rules - drools

We need to create and persist rules at runtime. The goal is to create the rules, persist them, and then reload them at a later point in time. Using bits and pieces of code cobbled together from the Drools unit tests, I can successfully create rules from DRL strings and then persist them to a kjar. And using the new KieBuilder.buildAll overload, the kjar is (presumably) built using the new executable model. All of that seems to work, roughly as sketched below.
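That working DRL path looks roughly like this (a sketch based on the API names used in the Drools 7.x unit tests; the GAV and file names are made up):

KieServices ks = KieServices.Factory.get();
ReleaseId releaseId = ks.newReleaseId( "com.example", "dynamic-rules", "1.0.0" );

KieFileSystem kfs = ks.newKieFileSystem()
        .generateAndWritePomXML( releaseId )
        .write( "src/main/resources/rules.drl", drlString );

// the new buildAll overload that triggers an executable model build
KieBuilder kieBuilder = ks.newKieBuilder( kfs ).buildAll( ExecutableModelProject.class );

// InternalKieModule.getBytes() yields the kjar bytes, which I can write anywhere
byte[] kjarBytes = ( (InternalKieModule) kieBuilder.getKieModule() ).getBytes();
Files.write( Paths.get( "dynamic-rules-1.0.0.jar" ), kjarBytes );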
But what I really want to do is eliminate the DRL strings entirely and create my rules at runtime using the flow or pattern DSL. Again, using example code, I can create those rules at runtime and execute them in a session. What I can't seem to do is actually persist them as a kjar (or in any other form I can devise). The end result of building a rule with the flow or pattern DSL is a KieBase, and there seems to be no way to serialize or persist a KieBase. At some point in the process, I need to be able to call getBytes() in order to persist it.
For example, I can create the KieBase like this:
Rule rule = getRule(); // a rule authored with the flow/pattern DSL
ModelImpl model = new ModelImpl().addRule( rule );
KieBase kieBase = KieBaseBuilder.createKieBaseFromModel( model );
But I then need to be able to persist that newly created KieBase so it can be reloaded later, and there doesn't seem to be a workable way to do that.
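For contrast, reloading a persisted kjar is not the problem; that part works for me roughly like this (standard KIE API):

byte[] kjarBytes = Files.readAllBytes( Paths.get( "dynamic-rules-1.0.0.jar" ) );
KieServices ks = KieServices.Factory.get();
KieModule kieModule = ks.getRepository()
        .addKieModule( ks.getResources().newByteArrayResource( kjarBytes ) );
KieContainer kieContainer = ks.newKieContainer( kieModule.getReleaseId() );
KieBase reloaded = kieContainer.getKieBase();

So what I'm missing is only the step from a DSL-built KieBase (or Model) to those kjar bytes.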
Any suggestions? I’m using 7.7.0 for my testing.
UPDATE 2018-07-23
Let me clarify my original question with additional information. There are really two use cases where I’d like to use the new executable model to author rules in Java: 1) at design time; 2) at run time. Each use case has slightly different requirements, and so far I’ve been unsuccessful in getting either one to work completely.
For the 1st use case, at design time I need the ability to write rules in Java (using the new pattern DSL) and then save those rules to a kjar. Once there, they can be loaded into a KieServer instance and executed. Purportedly the Kie Maven Plugin can do this, and I’ve attempted to follow the instructions given in the drools doc (for example section 2.2.1.4 of the 7.8.0 doc). But those instructions appear to be incomplete, and there just aren’t any examples of how to accomplish this. What file or files need to be added to the resources\META-INF folder to identify the rules? How are the rules actually exposed in the Java code? Do they need to be in a particular type of class? Are the rules returned from public methods? How are those methods identified as having rules? Are any Java annotations needed to make this work?
All of those questions would be answered for me if there was just one simple end-to-end example that demonstrated how to author a rule in Java, AND create the kjar containing that rule.
For the 2nd use case (actually the more important of the two for me), I need the ability to dynamically create rules at runtime. Based on configuration data within our application, multiple rules need to be programmatically created and ultimately loaded into a KieServer instance. My assumption was that the process would be similar to use case #1 where a kjar could be programmatically created and then loaded into the KieServer. And remember that in this case, the Maven Plugin isn’t in the picture since this is all being done at runtime, not design time. Using the examples for the executable model (primarily the unit tests), I can author the rules in Java, and I can execute them. But I’ve found no way to actually build a kjar from them, or to directly load them into a KieServer.

To execute the rules, they have to be in a specific Java file, and the kjar needs to have a file in the META-INF folder stating where the rules actually are.
Take a look at what the Maven plugin is doing here:
https://github.com/kiegroup/droolsjbpm-integration/blob/master/kie-maven-plugin/src/main/java/org/kie/maven/plugin/GenerateModelMojo.java#L165
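From a quick read of that Mojo, the generated kjar appears to carry a descriptor under META-INF/kie/ listing the fully qualified names of the generated Model classes, along these lines (path and contents reconstructed from the plugin source, so treat them as assumptions rather than documentation):

META-INF/kie/drools-model:
    com.example.rules.GeneratedRulesModel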
There will probably be an easier way in the future, but I can't tell you when.
Thank you for using the bleeding edge features, and good luck with that.

Related

OCM or Nodes in JCR?

We are developing a CMS based on JCR/Sling/JSP/Felix/etc.
What I've found so far is that using Nodes is very straightforward and flexible. But my concern is that over time it could become too hard to maintain and manage.
So, is it wise to invest in using an OCM? Would it just be an extra layer of complexity? What's the real benefit of OCM, if there is any? Or is it better for us to stick to Nodes instead?
And lastly, is Jackrabbit OCM the best option for us if we are to go down that path?
Thank you.
In my personal experience, whether OCM is a useful tool for your project depends heavily on your situation.
The real problem with OCM (in my personal experience) arises when the definition of a class used for existing persisted data (objects) in the repository has changed. For example: you find it necessary to change some members and methods of a class to match functionality changes, so the class definition of the persisted data object in the repository no longer matches the definition of the actual class. When persisted data is saved to the JCR repository, it is usually saved in a format that Java understands in terms of serialization, which means that when the definition of the class changes, the saved data in the repository can no longer be correctly interpreted by Java. This issue tends to lead to complex deployments where you need to convert old persisted data objects to the new definition and save them again in the repository, to make sure that old but still required persisted data remains usable.
What does work (in my opinion) is using a framework that lets you map nodes and node properties to Java objects directly (for example via annotations) and the other way around (persist a Java object to the repository as a JCR node whose properties are the object's member fields). This way you stick to the data representation of JCR (nodes with properties) and can still map it to the members of a Java class.
I've used a framework like this before in a CMS called AEM (from Adobe), although I must mention that was in an OSGi context (but the principle still stands). The framework allowed maximum flexibility and persisted the Java object as a JCR node and the other way around. Because it mapped directly to the JCR definition, code changes in the class and its members meant just changing annotations, and old persisted data was still usable without much effort.
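To make the idea concrete, here is a minimal sketch using Apache Sling Models, one such annotation-based mapping framework (class and property names are made up):

import javax.inject.Inject;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.Optional;

@Model(adaptables = Resource.class)
public class ArticleModel {

    // maps to the "title" property of the underlying node
    @Inject
    private String title;

    // a field added later: marking it optional keeps old nodes
    // (which lack the property) adaptable without any migration
    @Inject
    @Optional
    private String subtitle;

    public String getTitle() { return title; }
    public String getSubtitle() { return subtitle; }
}

Reading is then just: ArticleModel article = resource.adaptTo(ArticleModel.class);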

How to manage test data for Hibernate Search integration tests

I have a Spring-based system that uses Hibernate Search 3.4 (on top of Hibernate 3.5.4). Integration tests are managed by Spring, with the @Transactional annotation. At the moment, test data (entities that are to be indexed) is loaded by a Liquibase script; we use its Spring integration. It's very inconvenient to manage.
My new solution is to have test data defined as Spring beans and wire them as Resources, by name. This part works.
I tried to have these beans persisted and indexed in the setUp method of my test cases (and in the test methods themselves), but I failed. They get into the DB fine but I can't get them indexed. I tried calling index() on FullTextEntityManager (with flushToIndexes), and I tried createIndexer().startAndWait().
What else can I do?
Or maybe there is some better option for testing HS?
Thank you in advance.
"My new solution is to have test data defined as Spring beans and wire them as Resources, by name. This part works."
That sounds like a strange setup for a unit test. To be honest, I am not quite sure how you do this.
In Hibernate Search itself, an in-memory database (H2) is used together with a Lucene RAM directory. The benefit of such a setup is that it is fast and makes it easy to avoid dependencies between tests.
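In practice that test setup boils down to a handful of configuration properties (Hibernate Search 3.x property names; the URL is illustrative):

# in-memory database, recreated for every test run
hibernate.connection.driver_class=org.h2.Driver
hibernate.connection.url=jdbc:h2:mem:testdb
hibernate.dialect=org.hibernate.dialect.H2Dialect
hibernate.hbm2ddl.auto=create-drop
# keep the Lucene index in memory as well
hibernate.search.default.directory_provider=ram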
"I tried to have these beans persisted and indexed in setUp method of my test cases (and in test methods themselves) but I failed. They get into DB fine but I can't get them indexed."
If automatic indexing is enabled and the persisting of the test data occurs within a transaction, it should work. A common mistake in combination with Spring is to use the wrong transaction manager. The Hibernate Search forum has a lot of threads around this, for example this one: https://forum.hibernate.org/viewtopic.php?f=9&t=998155. Since you are not giving any concrete configuration and code examples, it is hard to give more specific advice.
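To illustrate the pitfall, a Java-config sketch (the XML equivalent behaves the same; bean and class names are illustrative):

import javax.persistence.EntityManagerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class TestTxConfig {

    // Hibernate Search writes the index on transaction completion, so the
    // transaction manager must be the one actually driving the EntityManager.
    // A plain DataSourceTransactionManager here would commit the JDBC
    // transaction without ever notifying the Search event listeners.
    @Bean
    public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }
}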
"I tried createIndexer().startAndWait()"
That is also a good approach. I would recommend it if you want to insert not just a couple of test entities but a whole set of data. In that case it can make sense to use a framework like DbUnit to insert the test data and then index it manually; createIndexer().startAndWait() is the right tool for that. Extracting all this loading/persisting/indexing functionality into a common test base class is the way to go. The base class can also be responsible for all the Spring bootstrapping.
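A minimal sketch of that flow (Hibernate Search 3.4 API; the helper class and its use from a test base class are assumptions):

import javax.persistence.EntityManager;
import org.hibernate.search.jpa.FullTextEntityManager;
import org.hibernate.search.jpa.Search;

public final class IndexTestHelper {

    // Rebuilds the index for data inserted outside of automatic indexing
    // (e.g. loaded via DbUnit) and blocks until it is searchable.
    public static void reindex(EntityManager em) throws InterruptedException {
        FullTextEntityManager ftem = Search.getFullTextEntityManager(em);
        ftem.createIndexer().startAndWait();
    }
}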
Again, to give more specific feedback you have to refine your question.
I have a completely different approach. When I write any queries, I want to write a complete test suite, but data creation has always been a pain (special mention of when the test customer data gets corrupted and your whole test suite breaks).
To solve this I created Random-JPA. It's simple and easy to integrate. The whole idea is that you create fresh data and test.
You can find the full documentation here.

What's the best rules framework that can work in conjunction with Spring Batch (500k objects)?

I've used both Spring Batch and Drools on previous projects, separately. In my current project, I have a design where I need to process up to 500k XML objects, convert them to JAXB, apply a rule to each object (the rule itself is fairly simple: compare properties and update two flags in a 'notification' object), and finally send an event so that a Spring Web Flow view model (which can be a listener) will update itself. That's not a design requirement, but it's what I have implemented:
1) ItemReader (JAXB)
2) ItemProcessor: maps to a ksession (stateful) and fires rules based on a DRL file (sketched below)
3) ItemWriter: performs the necessary cleanup and raises the appropriate events
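Step 2 corresponds roughly to this (a simplified sketch against the Drools 6+ KIE API; the Notification type and the container wiring are assumptions):

import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.springframework.batch.item.ItemProcessor;

public class RuleItemProcessor implements ItemProcessor<Notification, Notification> {

    private final KieContainer kieContainer;

    public RuleItemProcessor(KieContainer kieContainer) {
        this.kieContainer = kieContainer;
    }

    @Override
    public Notification process(Notification item) {
        // one short-lived stateful session per item; disposing it keeps
        // facts from accumulating across 500k items
        KieSession session = kieContainer.newKieSession();
        try {
            session.insert(item);
            session.fireAllRules();
        } finally {
            session.dispose();
        }
        return item;
    }
}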
It seems to me that the logic itself is straightforward, but when I added all the glue code of the batch job (ItemReader, ItemProcessor, etc.), a simple rule didn't work. Also, after reading several forums, it seems the Rete algorithm isn't going to scale well in batch applications.
In summary, is Drools the best way to integrate a basic rules framework into Spring Batch, or are there any lightweight alternatives?
"the rule itself is fairly simple: compare properties and update two flags in a 'notification' object"
No need for any rules framework. That is what Spring Batch's ItemProcessor is for.
From the ItemProcessor JavaDocs:
"..an extension point which allows for the application of business logic in an item oriented processing scenario"
No need to complicate things with Drools or any other rules engine unless you really need it, e.g. you have dozens or hundreds of complex rules that are not trivial to code.
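For a rule of the kind described, the whole engine can collapse into one processor; a minimal sketch (the Notification type and its property/flag names are assumptions):

import org.springframework.batch.item.ItemProcessor;

public class NotificationFlagProcessor implements ItemProcessor<Notification, Notification> {

    // the entire "rule": compare properties, update two flags
    @Override
    public Notification process(Notification item) {
        boolean matches = item.getActualValue() != null
                && item.getActualValue().equals(item.getExpectedValue());
        item.setFlagA(matches);
        item.setFlagB(!matches);
        return item;
    }
}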
Usually the Rete algorithm is not a problem; it is a huge advantage. You need to design your solution with the assumption that it will be a batch process, and it will work fine. Take into account that the big overhead in your scenario is creating all 500k objects from the XML. Once you have the objects, if you design your business rules correctly, they will perform well.

Can an API in SOAP/WSDL be kept backwards compatible easily?

When using an IPC library, it is important that it allows client and server to communicate even when their versions of the API differ. As I'm considering SOAP for our client/server application, I wonder how well a SOAP/WSDL solution can deal with API changes.
For example:
Adding parameters to existing functions
Adding variables to existing structs that are used in existing functions
Removing functions
Removing parameters from existing functions
Removing variables from existing structs that are used in existing functions
Changing the type of a parameter used in an existing function
Changing the order of parameters in an existing function
Changing the order of composite parts in an existing struct
Renaming existing functions
Renaming parameters
Note: by "struct" I mean a composite type
As far as I know, there is no such provision in the SOAP/WSDL standard itself. But tools exist to cope with such issues. For instance, in GlassFish you can specify an XSL stylesheet to transform the request/response of a web service. Other solutions, such as the Oracle SOA Suite, offer much more elaborate tools to manage versioning of web services and the integration of components. Messages can be routed automatically to different versions of a web service and/or transformed. You will need to check what your target infrastructure offers.
EDIT:
XML and XSD are more flexible regarding evolution of the schema than types and serialization in object-oriented languages. Some changes can be made backward compatible simply by declaring the affected elements optional (see the XSD sketch after the list), e.g.
Adding parameters to existing functions - if the parameter is optional, you get a null value when the client doesn't send it
Adding variables to existing structs that are used in existing functions - if the value is optional, you get null when the client doesn't provide it
Removing functions - no magic here
Removing parameters from existing functions - parameters sent by the client will be superfluous according to the new definition and will be omitted
Removing variables from existing structs that are used in existing functions - I'm not sure about this case
Changing the type of a parameter used in an existing function - that depends on the change; for a simple type the serialization/deserialization may still work, e.g. String to int
Note that I'm not 100% sure about this list, but a few tests will show you what works and what doesn't. The point is that XML is sent over the wire, which gives you some flexibility.
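For illustration, optionality in XSD is expressed with minOccurs; a sketch with made-up element names:

<xs:complexType name="Order">
  <xs:sequence>
    <xs:element name="id" type="xs:string"/>
    <!-- added in a later version; minOccurs="0" keeps old messages valid -->
    <xs:element name="priority" type="xs:int" minOccurs="0"/>
  </xs:sequence>
</xs:complexType>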
It doesn't. You'll have to manage that manually somehow. Typically by creating a new interface as you introduce major/breaking changes.
More generally speaking, this is an architectural problem, rather than a technical one. Once an interface is published, you really need to think about how to handle changes.

Preventing Netbeans JAXB generation trashing classes

I'm developing a SOAP service using JAX-WS and JAXB under Netbeans 6.8, and getting a little frustrated with Netbeans trashing my work every time the XSD schema my JAXB bindings are based upon changes.
To elaborate, the IDE automatically generates classes bound to the schema, which can then be (un)marshalled from/to XML using JAXB. To these classes I've added extra methods to (for example) convert to and from separate classes designed to be persisted to a database with JPA. The problem is that whenever the schema changes and I rebuild, these classes are regenerated and all my custom methods are deleted. I can manually replace them by copy-pasting from a backup file, but that is rather time-consuming and tedious. As I'm using an iterative design approach, the schema changes rather frequently, and whenever it does I waste an awful lot of time simply reinstating my previous code.
While the IDE automatically regenerating the JAXB-bound classes is entirely reasonable and I don't mean to imply otherwise, I was wondering if anyone had any bright ideas as to how to prevent my extra work having to be manually reinstated every time my schema changes?
Making modifications to the XJC-generated source isn't really a good idea, for the reasons you've discovered. You should either use a binding customization or an XJC plugin to generate the additional code you need, or else move your additional code out of the XJC-generated classes and into separate source files.
If your additional code is there to convert between JAXB and JPA class models, then it can probably stand on its own as a distinct translation layer. It's not very OO that way, but it'll get around your problem.
Alternatively, there's an XJC plugin that is supposed to let you preserve code that's been manually added to the generated source, but it's poorly documented (and I haven't used it myself). You might have to dig around on http://jaxb.dev.java.net/ to find out how to use it.
Instead of modifying the generated classes to add methods, extend the generated classes and put your additional methods in classes derived from the generated ones.
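A minimal sketch of that idea (GeneratedCustomer stands in for whatever class XJC emits; the JPA entity is an assumption):

// lives in your own source tree, so schema regeneration never touches it
public class CustomerDto extends GeneratedCustomer {

    // the custom JAXB-to-JPA conversion, kept out of the generated code
    public CustomerEntity toEntity() {
        CustomerEntity entity = new CustomerEntity();
        entity.setName(getName()); // getName() is inherited from the generated class
        return entity;
    }
}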