Claim processing with policy variants using Drools and jBPM?

I'm trying to build a claim processing system. There will be multiple variations of insurance policies (based on negotiations with individual clients). The aim is to keep a base policy per provider and then apply variations to it per client, so that the top-level policies (e.g. whether damage due to fire is covered or not) remain easy to maintain. The policies should be easy for non-technical business users to create.
What is the best approach for this? I'm thinking along the lines of using Drools to define the basic rules and then creating a jBPM process per policy provider that consumes those rules, with Guvnor for authoring and maintaining the rules and processes.
Assuming there are no human tasks (it's just going to be a set of rules that need to be fired and results returned), is using jBPM going to be overkill? Are there better alternatives in the open source world?

Drools is already closely integrated with jBPM for use cases like this, so it definitely won't be overkill; they work very nicely together. jBPM is not only about human interactions; it can just as well be used for automated processing.
One remark: it might even be possible to avoid having one process per provider and instead have only one (or a small set of) process(es), using rules to handle the variations.
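As a rough sketch of that idea (not the asker's actual design): a single evaluation service could insert the claim plus the client's policy variations into one Drools session and let the rules decide coverage. The Claim/PolicyVariant/ClaimResult classes and the session name below are hypothetical; the rules themselves are assumed to be authored in Guvnor and packaged on the classpath.

```java
import java.util.List;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

// Hypothetical domain classes, just enough for the sketch to compile.
class Claim { String provider; String type; double amount; }
class PolicyVariant { String clientId; String clause; boolean covered; }
class ClaimResult { boolean approved; double payout; }

public class ClaimEvaluator {

    public ClaimResult evaluate(Claim claim, List<PolicyVariant> clientVariants) {
        KieServices ks = KieServices.Factory.get();
        // Assumes the rules are on the classpath (e.g. authored in Guvnor and
        // packaged as a KJar); the session name "claimSession" is made up here.
        KieContainer container = ks.getKieClasspathContainer();
        KieSession session = container.newKieSession("claimSession");
        try {
            ClaimResult result = new ClaimResult();
            session.setGlobal("result", result);      // assumes the DRL declares this global
            session.insert(claim);                    // the claim fact
            clientVariants.forEach(session::insert);  // per-client variations of the base policy
            session.fireAllRules();                   // base rules + variation rules decide coverage
            return result;
        } finally {
            session.dispose();
        }
    }
}
```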


Activiti and Drools ... is one enough?

I have been asked to start exploring the Activiti tool for a client demo.
The demo will also include JBoss Drools, with which Activiti will be integrated.
I am new to both of these tools and to the business process world, so excuse me if the question is dumb.
The question is: why do you need Drools? Isn't Activiti enough for the job?
Both of them have conditional elements, so why do you need Activiti on top of Drools?
This question doesn't quite fit the purpose of StackOverflow, so don't be surprised if you get a few flags. But I'll try to give a short answer.
Activiti is a workflow engine; Drools is a business rules engine. They serve two different purposes.
Workflow engines are useful when you have a flow of actions by different actors that needs to be controlled programmatically.
Rules engines are useful when you have business rules for executing some task automatically that you want to describe in a declarative way.
Both purposes are orthogonal to each other, meaning that the problem you have to solve may require none, just one, or both of them.
Imagine a workflow where a customer reports an incident, some experts have to work on it, and finally a bill gets produced, but no heavy algorithms are behind those tasks. That might be supported by a workflow engine without a rules engine.
Imagine a complex price model for a product, like cars having all sorts of special features that may be ordered. (Hifi speakers cost 400 €, except if the executive version of the car is ordered, where they only cost 200 € if ordered in combination with smartphone adapter...) Here a rules engine may be useful, although nobody talked about a workflow, so no workflow engine is needed.
Imagine the first example (incident workflow) together with a complex billing scheme. Here both tools may be used.
I wonder why these two types of tools are in some places described as perfectly fitting together. (Maybe this kind of claim motivated your question.) They serve two different purposes, and whether you need them both depends on the problem you have to solve.
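As a rough illustration of the pricing example above (my own sketch, not something from the question): the conditional pricing can be written as declarative rules. The code below assumes the Drools KieHelper utility from kie-internal to compile an inline DRL string; the Order class, feature names, and prices are invented for illustration.

```java
// --- Order.java (invented domain class) ---
package pricing;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class Order {
    private final String edition;
    private final Set<String> features;
    private int price;

    public Order(String edition, String... features) {
        this.edition = edition;
        this.features = new HashSet<>(Arrays.asList(features));
    }
    public String getEdition() { return edition; }
    public Set<String> getFeatures() { return features; }
    public void addToPrice(int delta) { price += delta; }
    public int getPrice() { return price; }
}

// --- PricingDemo.java ---
package pricing;

import org.kie.api.KieBase;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.internal.utils.KieHelper;

public class PricingDemo {

    // The pricing logic expressed as declarative rules (inline DRL for brevity).
    private static final String DRL =
        "package pricing\n" +
        "import pricing.Order\n" +
        "rule \"Hi-fi speakers base price\"\n" +
        "when  $o : Order( features contains \"HIFI_SPEAKERS\" )\n" +
        "then  $o.addToPrice(400);\n" +
        "end\n" +
        "rule \"Executive edition discount with smartphone adapter\"\n" +
        "when  $o : Order( edition == \"EXECUTIVE\",\n" +
        "                  features contains \"HIFI_SPEAKERS\",\n" +
        "                  features contains \"SMARTPHONE_ADAPTER\" )\n" +
        "then  $o.addToPrice(-200);\n" +
        "end\n";

    public static void main(String[] args) {
        // KieHelper (from kie-internal) compiles the DRL string into a KieBase.
        KieBase kieBase = new KieHelper().addContent(DRL, ResourceType.DRL).build();
        KieSession session = kieBase.newKieSession();

        Order order = new Order("EXECUTIVE", "HIFI_SPEAKERS", "SMARTPHONE_ADAPTER");
        session.insert(order);
        session.fireAllRules();
        session.dispose();

        System.out.println("Speaker surcharge: " + order.getPrice() + " EUR"); // 200 here
    }
}
```

No workflow engine appears anywhere in this sketch, which is the point: the pricing problem alone only calls for rules.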

Why is workflow/BPM needed?

I need to work on a customer onboarding application. The workflow between various users can be implemented using the JSF framework itself; with the help of faces-config.xml I can specify the flow between various users. But here BPM is used, with the help of the webMethods tool. Is BPM always required for implementing a workflow? What is its importance over a normal implementation using other technologies?
Sassi,
in JSF you control only the page flow between different UIs, which can be part of a single activity performed by one user, or part of many activities.
A business process typically involves multiple people (participants / roles) and systems. A WfMS / BPMS, for instance:
manages the task lists of the process participants
orchestrates the control flow between the different manual and system tasks
manages the process context information throughout the process (data, documents, persistence, versioning - ideally all out of the box without coding)
provides rollback and error compensation features
creates an audit trail which is important for compliance / processes that need to be auditable (QA, regulators)
provides dashboards for operational monitoring
and reports for analysis and reporting of KPIs, like average process execution times or volumes grouped by different business data
allows you to model your business process in a graphical way, preferably in a standard notation (BPMN), which is much more user-friendly and a good basis for the communication between business and IT. The business will find it much harder to read the faces-config.xml.
supports the evaluation of simple or complex business rules to determine process flow and work assignment with user-friendly means
allows versioning of the process definitions (as if you had multiple faces-config versions in the classpath)
...
Find more BPMS features and examples e.g. here http://www.eclipse.org/stardust/.
Eclipse Stardust is a mature and comprehensive open source BPMS which covers the aspects listed above and more.
There are lots of workflow solutions that are not BPM systems. However, a BPM system should always include a workflow solution, presumably implemented using a BPM notation standard and including KPI monitoring, business rules, simulation, user management, organization modeling and reporting. Although you could implement all those parts yourself in Java EE (with JSF), it would presumably take much more time.

Drools for Multi-Client Web Application feasible?

I am working on a multi-client web-based application that analyses sensor data and shall invoke actions based on this data with a rule engine.
Every client of this application has a set of environmental sensors (10s - 100s) and a set of rules to be evaluated every time the sensor values change (the sensor values are copied into a database).
A basic set of rules will often be reused by different clients but the rules are individually parameterized (e.g. time dependant) for each client and every client has a different amount of sensors and rules, which can be configured individually. Some rules might even be specific to single clients.
I believe that Drools might be a good choice for such an implementation, using Drools Guvnor to manage the rules for each client. Every client would have its own knowledge base and rule execution session.
I wonder if such an environment would scale and if there is a benchmark or real-world example where someone has used drools for such scenario.
Most benchmarks I could find assess different rule engines by their ability to perform rules on a growing number of facts. The amount of facts in my scenario would be relatively stable (per client) and scalability would rather be limited by the amount of clients and concurrent application of many knowledge bases and sessions.
Any comment about benchmarks or rule engine comparison regarding this scalability problem is welcome. I'd also be glad to hear about real-world implementations where every client has his own rules and dataset to work on.
The main problem with benchmarks is that they vary a lot depending on the specific rules that you write for your own domain. Most benchmarks are tweaked to perform better in the particular rule engine they are showcasing. If you have a session per client and a stable number of clients, you will face no problem. Once you get the initial version of your project working, you can fine-tune the engine to improve performance.
The most "difficult" thing in my opinion is to get the infrastructure right, with that I mean, when to create the sessions and how to select the rules for each of the clients. Because that's part of your specific domain, you will need to code it and manage all the sessions.
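As a rough sketch of that infrastructure (not a benchmark; the Maven coordinates, class names, and session handling below are all assumptions of mine): one KieContainer per client's rule package, with a short-lived session created for each evaluation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.kie.api.KieServices;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

/**
 * Sketch of per-client rule execution: each client gets its own KieContainer
 * (pointing at that client's rule package) and a fresh session per evaluation.
 */
public class ClientRuleService {

    private final KieServices ks = KieServices.Factory.get();
    private final Map<String, KieContainer> containersByClient = new ConcurrentHashMap<>();

    private KieContainer containerFor(String clientId) {
        return containersByClient.computeIfAbsent(clientId, id -> {
            // Each client's rules are assumed to be deployed as a separate KJar
            // (e.g. built/published from Guvnor); the coordinates are invented.
            ReleaseId releaseId = ks.newReleaseId("com.example.rules", "rules-" + id, "1.0.0");
            return ks.newKieContainer(releaseId);
        });
    }

    public void evaluate(String clientId, Iterable<SensorReading> readings) {
        KieSession session = containerFor(clientId).newKieSession();
        try {
            for (SensorReading r : readings) {
                session.insert(r);       // the client's current sensor values
            }
            session.fireAllRules();      // client-specific rules decide which actions to trigger
        } finally {
            session.dispose();           // short-lived session; facts are rebuilt per evaluation
        }
    }

    // Invented fact class, just enough to compile.
    public static class SensorReading {
        public String sensorId;
        public double value;
    }
}
```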
Hope it helps
Acting on sensor data is one of the examples given for "Complex Event Processing". The following link may give a deeper insight on this subject.
Drools Fusion is also capable of CEP.

How to manage multiple clients with slightly different business rules? [closed]

We have written a software package for a particular niche industry. This package has been pretty successful, to the extent that we have signed up several different clients in the industry, who use us as a hosted solution provider, and many others are knocking on our doors. If we achieve the kind of success that we're aiming for, we will have literally hundreds of clients, each with their own web site hosted on our servers.
Trouble is, each client comes in with their own little customizations and tweaks that they need for their own local circumstances and conditions, often (but not always) based on local state or even county legislation or bureaucracy. So while probably 90-95% of the system is the same across all clients, we're going to have to build and support these little customizations.
Moreover, the system is still very much a work in progress. There are enhancements and bug fixes happening continually on the core system that need to be applied across all clients.
We are writing code in .NET (ASP, C#), MS-SQL 2005 is our DB server, and we're using SourceGear Vault as our source control system. I have worked with branching in Vault before, and it's great if you only need to keep 2 or 3 branches synchronized - but we're looking at maintaining hundreds of branches, which is just unthinkable.
My question is: How do you recommend we manage all this?
I expect answers will be addressing things like object architecture, web server architecture, source control management, developer teams etc. I have a few ideas of my own, but I have no real experience in managing something like this, and I'd really appreciate hearing from people who have done this sort of thing before.
Thanks!
I would recommend against maintaining separate code branches per customer. Keeping working code in sync with your core quickly becomes a nightmare.
I do recommend you implement the Strategy pattern and cover your "customer customizations" with automated tests (e.g. unit and functional) whenever you are changing your core.
UPDATE:
I recommend that before you get too many customers, you need to establish a system of creating and updating each of their websites. How involved you get is going to be balanced by your current revenue stream of course, but you should have an end in mind.
For example, when you have just signed up Customer X (hopefully entirely via the web), their website should be created within XX minutes, and the customer should be sent an email stating it's ready.
You definitely want to setup a Continuous Integration (CI) environment. TeamCity is a great tool, and free.
With this in place, you'll be able to check your updates in a staging environment and can then apply those patches across your production instances.
Bottom Line: Once you get over a handful of customers, you need to start thinking about automating your operations and your deployment as yet another application to itself.
UPDATE: This post highlights the negative effects of branching per customer.
Our software has very similar requirements and I've picked up a few things over the years.
First of all, such customizations will cost you in both the short and the long term. If you have control over it, place some checks and balances such that sales & marketing do not over-zealously sell customizations.
I agree with the other posters that say NOT to use source control to manage this. It should be built into the project architecture wherever possible. When I first began working for my current employer, source control was being used for this and it quickly became a nightmare.
We use a separate database for each client, mainly because for many of our clients, the law or the client themselves require it due to privacy concerns, etc...
I would say that the business logic differences have probably been the least difficult part of the experience for us (your mileage may vary depending on the nature of the customizations required). For us, most variations in business logic can be broken down into a set of configuration values which we store in an XML file that is modified upon deployment (if machine-specific) or stored in a client-specific folder and kept in source control (explained below). The business logic obtains these values at runtime and adjusts its execution appropriately. You can use this in concert with various strategy and factory patterns as well; config fields can contain the names of strategies, etc. Also, unit testing can be used to verify that you haven't broken things for other clients when you make changes. Currently, adding most new clients to the system involves simply mixing/matching the appropriate config values (as far as business logic is concerned).
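As a rough, language-neutral sketch of that config-driven strategy approach (the question is .NET, but the idea is the same; all class and config names here are invented, not the poster's actual code): a config value names the strategy and a small factory/map resolves it at runtime.

```java
import java.util.Map;

// A business-logic variation point: how shipping cost is computed, for example.
interface ShippingCostStrategy {
    double costFor(double weightKg);
}

class FlatRateShipping implements ShippingCostStrategy {
    public double costFor(double weightKg) { return 10.0; }
}

class PerKgShipping implements ShippingCostStrategy {
    public double costFor(double weightKg) { return 2.5 * weightKg; }
}

public class ClientConfigExample {

    // In the real system this would come from the client-specific config file.
    private static final Map<String, String> clientConfig = Map.of(
            "shipping.strategy", "perKg"
    );

    private static final Map<String, ShippingCostStrategy> strategies = Map.of(
            "flat",  new FlatRateShipping(),
            "perKg", new PerKgShipping()
    );

    public static void main(String[] args) {
        // The config field names the strategy; the factory/map resolves it at runtime,
        // so onboarding a new client is mostly a matter of choosing config values.
        ShippingCostStrategy strategy =
                strategies.get(clientConfig.getOrDefault("shipping.strategy", "flat"));
        System.out.println("Shipping for 4 kg: " + strategy.costFor(4));
    }
}
```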
More of a problem for us is managing the content of the site itself including the pages/style sheets/text strings/images, all of which our clients often want customized. The current approach that I've taken for this is to create a folder tree for each client that mirrors the main site - this tree is rooted at a folder named "custom" that is located in the main site folder and deployed with the site. Content placed in the client-specific set of folders either overrides or merges with the default content (depending on file type). At runtime the correct file is chosen based on the current context (user, language, etc...). The site can be made to serve multiple clients this way. Efficiency may also be a concern - you can use caching, etc... to make it faster (I use a custom VirtualPathProvider). The largest problem we run into is the burden of visually testing all of these pages when we need to make changes. Basically, to be 100% sure you haven't broken something in a client's custom setup when you have changed a shared stylesheet, image, etc... you would have to visually inspect every single page after any significant design change. I've developed some "feel" over time as to what changes can be comfortably made without breaking things, but it's still not a foolproof system by any means.
In my case I also have no control (other than offering my opinion) over which visual/code customizations are sold, so many more of them than I would like have been sold and implemented.
This is not something that you want to solve with source control management, but within the architecture of your application.
I would come up with some sort of plugin-like architecture. Which plugins to use for which website would then become a configuration issue and not a source control issue.
This allows you to use branches, etc. for the stuff that they are intended for: parallel development of code between (or maybe even over) releases. Each plugin becomes a separate project (or subproject) within your source code system. This also allows you to combine all plugins and your main application into one Visual Studio solution to help with dependency analysis, etc.
Loosely coupling the various components in your application is the best way to go.
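As a loose sketch of what such a plugin seam might look like (the interface, method names, and the use of Java's built-in ServiceLoader are my own illustration, not a prescription): the core defines the extension point, each customization ships as its own artifact, and per-site configuration decides which plugins are active.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;
import java.util.Set;

// Extension point defined by the core application.
interface BillingPlugin {
    String id();                                  // matched against per-site configuration
    double adjustInvoiceTotal(double baseTotal);  // the customization hook
}

public class PluginHost {

    /** Loads only the plugins that this client's configuration enables. */
    public static List<BillingPlugin> loadFor(Set<String> enabledPluginIds) {
        List<BillingPlugin> active = new ArrayList<>();
        // Each plugin JAR declares its implementation in a
        // META-INF/services file named after the interface's fully qualified name.
        for (BillingPlugin plugin : ServiceLoader.load(BillingPlugin.class)) {
            if (enabledPluginIds.contains(plugin.id())) {
                active.add(plugin);
            }
        }
        return active;
    }

    public static double invoiceTotal(double baseTotal, Set<String> enabledPluginIds) {
        double total = baseTotal;
        for (BillingPlugin plugin : loadFor(enabledPluginIds)) {
            total = plugin.adjustInvoiceTotal(total);   // customizations applied in plugin order
        }
        return total;
    }
}
```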
As mentioned before, source control does not sound like a good solution for your problem. To me it sounds better to have a single code base using a multi-tenant architecture. This way you get a lot of benefits in terms of managing your application, load on the service, scalability, etc.
Our product uses this approach. What we have is some (actually a lot of) core functionality that is the same for all clients, plus custom modules that are used by one or more clients. At the core, the "customization" is a simple workflow engine that uses different workflows for different clients, so each client gets the core functionality, its own workflow(s), and an extended set of modules that are either client-specific or generalized for more than one client.
Here's something to get you started on multi-tenancy architecture:
Multi-Tenant Data Architecture
SaaS database tenancy patterns
Without more info, such as the types of client-specific customization, one can only guess how deep or superficial the changes are. Some simple/standard approaches to consider:
If you can keep a central config specifying the uniqueness from client to client
If you can centralize the business rules to one class or group of classes
If you can store the business rules in the database and pull out based on client
If the business rules can all be DB/SQL based (each client having their own DB)
Overall, hard-coding differences based on client name/ID is very problematic, and keeping different code bases per client is costly (think of the complete testing/retesting time required for the 90% that doesn't change)... I think more info is required to answer properly (give some specifics).
Layer the application. One of those layers contains customizations and should be able to be pulled out at any time without affecting the rest of the system. Application- and DB-level "triggers" (quoted because they may or may not employ actual DB triggers) that call customer-specific code or are parameterized with customer keys are very helpful.
Core should never be customized, but you must layer it in somewhere, even if it is simplistic web filtering.
What we have is a core database that has the functionality that all clients get. Then each client has a separate database that contains the customizations for that client. This is expensive in terms of maintenance. The other problem is that when two clients ask for similar functionality, it is often done differently by the two separate teams. There is currently little done to share customizations between clients and make common ones become part of the core application. Each client has their own application portal, so we don't have to worry about a change to one client affecting some other client.
Right now we are looking at changing to a process using a rules engine, but there is some concern that the performance won't be there for the number of records we need to be able to process. However, in your circumstances, this might be a viable alternative.
I've used some applications that offered the following customizations:
Web pages were configurable - we could drag fields out of view, position them where we wanted with our own name for the field label.
Add our own views or stored procedures and use them in: data grids (along with an update proc) and reports. Each client would need their own database.
Custom mapping of Excel files to import data into system.
Add our own calculated fields.
Ability to run custom scripts on forms during various events.
Identify our own custom fields.
If your clients are larger companies, you're almost certainly going to need your own SDK, APIs, etc.

When should you NOT use a Rules Engine? [closed]

I have a pretty decent list of the advantages of using a rules engine, as well as some reasons to use one. What I need is a list of the reasons why you should NOT use a rules engine.
The best I have so far is this:
Rules engines are not really intended to handle workflow or process executions, nor are workflow engines or process management tools designed to do rules.
Any other big reasons why you should not use them?
I will give two examples from personal experience where using a rules engine was a bad idea; maybe that will help:
On a past project, I noticed that the rules files (the project used Drools) contained a lot of Java code, including loops, functions, etc. They were essentially Java files masquerading as rules files. When I asked the architect about his reasoning for the design, I was told that the "rules were never intended to be maintained by business users".
Lesson: they are called "business rules" for a reason; do not use rules when you cannot design a system that can be easily maintained/understood by business users.
Another case: the project used rules because requirements were poorly defined/understood and changed often. The development team's solution was to use rules extensively to avoid frequent code deploys.
Lesson: requirements tend to change a lot around an initial release and do not warrant the use of rules. Use rules when your business changes often (not your requirements). E.g. software that does your taxes will change every year as taxation laws change, so using rules there is an excellent idea. Release 1.0 of a web app will change often as users identify new requirements, but it will stabilize over time. Do not use rules as an alternative to code deployment.
I get very nervous when I see people using very large rule sets (e.g., on the order of thousands of rules in a single rule set). This often happens when the rules engine is a singleton sitting in the center of the enterprise in the hope that keeping rules DRY will make them accessible to many apps that require them. I would defy anyone to tell me that a Rete rules engine with that many rules is well-understood. I'm not aware of any tools that can check to ensure that conflicts don't exist.
I think partitioning rule sets to keep them small is a better option. Aspects can be a way to share a common rule set among many objects.
I prefer a simpler, more data driven approach wherever possible.
The one point I've noticed to be "the double-edged sword" is:
placing the logic in the hands of non-technical staff
I've seen this work great when you have one or two multidisciplinary geniuses on the non-technical side, but I've also seen the lack of technical skill lead to bloat, more bugs, and in general 4x the development/maintenance cost.
Thus you need to consider your user-base seriously.
I'm a big fan of business rules engines, since they can make your life as a programmer much easier. One of the first experiences I had while working on a data warehouse project was finding stored procedures containing complicated CASE structures stretching over entire pages. It was a nightmare to debug, since it was very difficult to understand the logic applied in such long CASE structures and to determine whether a rule on page 1 of the code overlapped with another on page 5. Overall, we had more than 300 such rules embedded in the code.
When we received a new development requirement, for something called Accounting Destination, which involved handling more than 3,000 rules, I knew something had to change. Back then I was working on a prototype which later became the parent of what is now a custom business rule engine capable of handling all standard SQL operators. Initially we used Excel as an authoring tool and, later on, we created an ASP.NET application which allows the business users to define their own business rules without needing to write code. Now the system works fine, with very few bugs, and contains over 7,000 rules for calculating this Accounting Destination. I don't think such a scenario would have been possible by just hard-coding. And the users are pretty happy that they can define their own rules without IT becoming their bottleneck.
Still, there are limits to such an approach:
You need to have capable business users who have an excellent understanding of the company's business.
There is a significant amount of work involved in searching the entire system (in our case a data warehouse) to determine all the hard-coded conditions that make sense to translate into rules handled by a business rule engine. We also had to take good care that the initial templates were fully understandable by business users.
You need an application for rules authoring in which algorithms for detecting overlapping business rules are implemented; otherwise you'll end up with a big mess where no one understands the results they get anymore.
When you have a bug in a generic component like a custom business rule engine, it can be very difficult to debug, and fixing it involves extensive testing to make sure that things that worked before still work.
More details on this topic can be found on a post I've written: http://dwhbp.com/post/2011/10/30/Implementing-a-Business-Rule-Engine.aspx
Overall, the biggest advantage of using a business rule engine is that it allows the users to take back control over business rule definition and authoring, without having to go to the IT department each time they need to modify something. It also reduces the workload on IT development teams, which can now focus on building things with more added value.
Cheers,
Nicolae
Great article on when not to use a rules engine (as well as when to use one):
http://www.jessrules.com/guidelines.shtml
Another option, if you have a linear set of rules that are applied only once, in any order, to get an outcome, is to create a Groovy interface and have developers write and deploy these new rules. The advantage is that it is wickedly fast, because normally you would pass the Hibernate session or JDBC session as well as any parameters, so you have access to all your app's data, but in an efficient manner. With a fact list, there can be a lot of looping/matching that can really slow the system down. It's another way to avoid a rules engine while still being able to deploy dynamically (yes, our Groovy rules were deployed in a database, and we had no recursion; a rule either was met or it wasn't). It is just another option... Oh, and one more benefit: incoming developers don't have to learn rules syntax. They have to learn some Groovy, but that is very close to Java, so the learning curve is much better.
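As a rough sketch of that idea (shown in plain Java here rather than Groovy; the interface and names are invented, not the poster's actual code): each rule is applied at most once, in no particular order, and either meets its condition or it doesn't.

```java
import java.util.List;
import java.util.Map;

// Each dynamically deployed rule implements this; in the answer above the
// implementations were Groovy scripts stored in a database and loaded at runtime.
interface SimpleRule {
    /**
     * @param context app data the rule may need (e.g. entities loaded via the
     *                current Hibernate/JDBC session), keyed by name
     * @return true if the rule matched and its outcome was applied
     */
    boolean apply(Map<String, Object> context);
}

class RuleRunner {
    /** Runs every rule exactly once; no Rete network, no recursion. */
    static void runAll(List<SimpleRule> rules, Map<String, Object> context) {
        for (SimpleRule rule : rules) {
            rule.apply(context);   // each rule either meets its condition or it doesn't
        }
    }
}
```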
It really depends on your context. Rules engines have their place, and the above is just another option if you have rules on a project that you want to deploy dynamically, for very simple situations that don't require a rules engine.
Basically, do NOT use a rules engine if you have a simple rule set and can use a Groovy interface instead; it is just as dynamically deployable, and new developers joining your team can learn it faster than the Drools language (but that's my opinion).
In my experience, rules engines work best when the following are true:
Well-defined doctrine for your problem domain
High quality (preferably automated) data to help drive most of your inputs
Access to subject matter experts
Software developers with experience creating expert systems
If any of these four traits are missing, you still might find a rules engine works for you, but every time I've tried it with even 1 missing, I've run into trouble.
That's certainly a good start. The other thing with rules engines is that some things are well understood, deterministic, and straightforward. Payroll withholding is (or used to be) like that. You could express it as rules that would be resolved by a rules engine, but you could express the same rules as a fairly simple table of values.
So, workflow engines are good when you're expressing a longer-term process that will have persistent data. Rules engines can do a similar thing, but at the cost of a lot of added complexity.
Rules engines are good when you have complicated knowledge bases and need search. Rules engines can resolve complicated issues, and can be adapted quickly to changing situations, but impose a lot of complexity on the base implementation.
Many decision algorithms are simple enough to express as a simple table-driven program without the complexity implied by a real rules engine.
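For instance, a withholding-style calculation can often be just a lookup table plus a small loop rather than a rule base. A sketch (the bracket boundaries and rates below are made up, not real tax figures):

```java
// Table-driven alternative to a rules engine: the "rules" are just rows.
public class WithholdingTable {

    // {upper income bound, rate}; made-up brackets for illustration only.
    private static final double[][] BRACKETS = {
            { 10_000, 0.00 },
            { 40_000, 0.12 },
            { 90_000, 0.22 },
            { Double.MAX_VALUE, 0.32 },
    };

    /** Simple marginal calculation over the bracket table. */
    public static double withholding(double income) {
        double tax = 0;
        double lower = 0;
        for (double[] bracket : BRACKETS) {
            double upper = bracket[0];
            double rate = bracket[1];
            double taxable = Math.min(income, upper) - lower;
            if (taxable <= 0) break;
            tax += taxable * rate;
            lower = upper;
        }
        return tax;
    }

    public static void main(String[] args) {
        System.out.printf("Withholding on 50,000: %.2f%n", withholding(50_000));
    }
}
```

Changing the "rules" then means editing the table (or the database it is loaded from), with no rule language involved.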
I would strongly recommend business rules engines like Drools (open source) or commercial rules engines such as LiveRules.
When you have a lot of business policies which are volatile in nature, it is very hard to maintain that part of the logic in your core technology code.
A rules engine provides great flexibility within the framework and makes the rules easy to change and deploy.
Rules engines are not to be used everywhere, but they should be used when you have a lot of policies where changes are inevitable on a regular basis.
I don't really understand some of the points made here, such as:
a) business people need to understand the business very well, or
b) the disagreement over whether business people need to know the rules.
For me, as someone just starting to touch BREs, the stated benefit of a BRE is to let the system adapt to business change; hence it's focused on adapting to change.
Does it matter if the rules set up at time x are different from the rules set up at time y because:
a) business people don't understand the business, or
b) business people don't understand rules?