Why not to use a Rule Engine? Or: Overheads of a Rule Engine [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
I have read several posts where the use cases for rule engines are discussed. And many of them say that you should not use one for workflow management.
Posts I referred:
Pros and Cons of Rule Engine
When you should not use the Rule Engine
Guidelines on using Rule Engine
But I have not yet found a simple explanation of what overhead a rule engine actually adds to a system.
What if I use it for workflow management? Will it add memory overhead?
Can someone put some focus on this?
I will also put the scenario in which we will be using Rule Engine:
We have a bidding engine whose input changes frequently based on business analysts' forecasts. So in simple terms, we will be taking actions against provided values based on rules. For example: if a business analyst puts in a value of $2, then the rule engine will decide the bid value to be sent to the customers.

In short: a rule engine is used to make decisions; workflows are used to run processes.
You need a rules engine to replace some or all of the "IFs" and "ELSEs" in your compiled code with "soft" logic that can be changed without changing or recompiling your main code. You supply it a rule and some data (called a "fact object" or "source object") and the engine evaluates that data against that rule. That is the only purpose of a rules engine. Most engines can either return the output of a rule evaluation against your data as True or False, or invoke an "action" (a method in your code) to further process your data.
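To make that concrete, here is a minimal sketch in Python (not any particular engine's API; the $2 threshold and the markup are invented from the bidding scenario in the question):

```python
# A rule is a condition plus an optional action, evaluated against a
# "fact object" (the data you hand to the engine).
class Rule:
    def __init__(self, condition, action=None):
        self.condition = condition  # fact -> bool
        self.action = action        # fired when the condition matches

    def evaluate(self, fact):
        matched = self.condition(fact)
        if matched and self.action:
            self.action(fact)       # further process the data
        return matched

def set_bid(fact):
    # The "action": compute the bid sent to customers (markup is invented).
    fact["bid"] = fact["analyst_value"] * 1.10

rule = Rule(lambda f: f["analyst_value"] >= 2.00, set_bid)

fact = {"analyst_value": 2.00, "bid": None}   # the fact/source object
print(rule.evaluate(fact), fact["bid"])       # True 2.2
```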
You use a workflow process to run a factory, or a warehouse, or a military facility. Workflow allows you to stop the conveyor and wait for an event to happen, or to continue the approval process once a boss has signed off. Etc, etc. Typically, a workflow engine uses a rules engine internally, as part of its core, to decide what to do next.
Hope this clarifies things a bit :)

Because there is no standard rules engine and both the implementation and the consequences of that implementation are likely to be highly variable depending on the language and platform you are working with, it is impossible to answer your question directly. Nevertheless, I will endeavour to cast a little light for you.
What a Rules Engine does is to provide a way of implementing a set of conditions in your code or, more to the point, of allowing the conditions to be set outside your code and interpreted by it so that other stakeholders can alter the rules as needed.
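As a rough illustration of that externalization, the conditions can live in a document that other stakeholders edit, with the code only interpreting them (the field names and operators below are invented):

```python
import json
import operator

# Rules maintained outside the code, e.g. in a file edited by analysts.
RULES_JSON = """
[
  {"field": "age",    "op": ">=", "value": 18},
  {"field": "income", "op": ">",  "value": 30000}
]
"""

OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge, "==": operator.eq}

def passes_all(fact, rules):
    # The code interprets whatever conditions the stakeholders have set.
    return all(OPS[r["op"]](fact[r["field"]], r["value"]) for r in rules)

applicant = {"age": 34, "income": 45000}
print(passes_all(applicant, json.loads(RULES_JSON)))  # True
```

Changing a threshold is then an edit to the rule document, not a recompile.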
You need to look at the exact problem you are trying to solve, the platform you are trying to solve it on and then decide whether, in that specific circumstance, a rules engine is the best solution. The linked questions provide some good guidance in that regard.
Bear in mind that if you have a problem that could be solved this way, or could be solved another way, every solution will have some overheads: some may affect performance, others affect development time or maintainability. You need to decide what is important to your users about your system and allow that to guide you towards solutions.

In the interest of full disclosure, I work for InRule Technology, which is a business rule management system vendor.
A point was made in one of the linked posts about the fact that there will of course be overhead when you use any third party solution. A rule engine is no exception. However, the key is maximizing the efficiency of rule engine execution through careful planning and modeling of (1) the objects and data that will be passed into the rule engine and (2) the complexity of the rules themselves.
The overhead is no different than if you were doing this in code instead of a rule engine, though. On the lower end, where the objects/data and rules are of moderate complexity, we classify these as short-running decisions and, as expected, overhead is at a minimum. As the data and/or the rules increase in complexity, execution time will increase. However, the metrics we've seen raise no red flags as far as performance is concerned.


What makes a web-based framework scalable?

Thank you very much in advance.
First of all, I conceive scalability as the ability to design a system that does not have to change when the demand for its services, whatever they are, increases considerably. Do you need more hardware (vertically or horizontally)? Fine, add it at your leisure, because the system is prepared and has been designed to cope with it.
My question is simple to ask but presumably very complex to answer. I would like to know what I should look at in a framework to make sure it will scale accordingly, both in number of hits and in number of sessions running simultaneously.
This question is not about technology nor a particular framework at all, it is more a theoretical question.
I know that this depends very much on having a good database design and proper hardware behind it with replication, etc... Let's assume all of that exists; even so, my framework must meet some criteria. What are they?
Provide a memcache?
Ability to run across multiple machines (at the web server level) and use many replicated databases? But what is in the software that makes that possible?
etc...
Please, let's not relate the answers with any particular programming language or technology behind.
Thanks again,
D.
I think scalability depends most of all on the use case: if you expect huge amounts of data, you should focus on the database; if it's about traffic, focus on the server; if it's about adding new features, focus on your data model and the framework you are using...
Comparing a microposts-service like Twitter to a university website or a webservice like GoogleDocs you will find quite different requirements.
First of all, the common notion of scalability is the ability of a piece of software to improve in throughput or capacity when more hardware resources are added (CPUs, memory, bandwidth, etc.).
Software whose performance does not improve with increased resources is not scalable.
Getting out of the definitions, I think your question is related to evaluation of frameworks you are planning to introduce to your implementation that may affect your software's ability to scale.
IMHO the most important factor to evaluate when introducing a framework is whether there is hidden serialization in it (serialization that, in effect, transfers to and constrains your software).
So if a framework introduces serialization into your application, that can hurt your ability to scale.
How to evaluate?
Careful source code inspection (if it is open source)
Are there any performance guarantees offered by those who build the framework?
Do measurements yourself to see how introducing the framework affects your performance, and replace it if you are not satisfied; the sketch below shows the flavor of such a measurement.
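To make the measurement point concrete, here is a toy benchmark sketch (all names invented) showing how a hidden global lock inside a framework serializes otherwise concurrent work; this is Amdahl's law in miniature, where the serial fraction caps your scaling:

```python
import threading
import time

LOCK = threading.Lock()

def handle_serialized():
    with LOCK:            # hidden serialization inside the "framework"
        time.sleep(0.01)  # 10 ms of simulated I/O per request

def handle_concurrent():
    time.sleep(0.01)      # same work, no hidden lock

def run(handler, n=50):
    threads = [threading.Thread(target=handler) for _ in range(n)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

print("with hidden lock:    %.2fs" % run(handle_serialized))  # ~0.50s
print("without hidden lock: %.2fs" % run(handle_concurrent))  # ~0.01s
```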

Why is there no ActiveRecord REST adapter [closed]

Closed 10 years ago.
We've been working on various projects using ActiveResource for a couple years now. It seems that ActiveResource is great to use if you are using Rails on both the client and server sides.
Problems with ActiveResource
We're using Scala on the server side and constantly running up against "Oh, ActiveResource doesn't want the API to <do this standard thing>" or "ActiveResource does <this weird thing>" so we have to change the server to support the demands of the client. This has all been discussed before, I know.
Another problem is that many gems and libraries require ActiveRecord. I can't count the number of gems we've run into that "require" your model to use ActiveRecord even though they don't actually use any AR functionality. It seems this is mostly because that's the easy path for gem development: "I'm using ActiveRecord, and can't imagine anyone not using it, so I'll just require it rather than figure out the more general way." (Note: I've done this myself, so I'm not simply complaining.)
So, if we use ActiveResource, we have to break the server to make it work, and we can't use a large portion of what makes Rails great.
REST Adapter?
All of this brought us to ask the question "Why does ActiveResource exist at all?" I mean, why would you have this secondary data storage path? Why isn't ActiveResource just a REST adapter? With a REST adapter, you can have all the good things in all the gems, and don't have to fight with ActiveResource's finicky nature. You just build your model the same way you build any model.
So I started exploring building one. It actually doesn't seem difficult at all. A few hours work and you could have the basic functionality built up. There are examples elsewhere using REST and SOAP, so it's doable.
So the question comes back. If it's so easy, why the hell hasn't this been done before?
Not simply a datastore?
I've come up with what I wonder is the answer. While building up a scaffold for this, I quickly ran into an issue. REST is very good at doing two things: 1) Act on this object, and 2) Act on all objects. In fact, that's pretty much the limit of the REST standard.
And so I started to wonder if scope is the reason there's no REST adapter. The ActiveRecord subsystem seems to need more than just "get one" and "get all." It's based on querying the datastore as much as anything.
The Actual Question
So, the actual question: is there no ActiveRecord REST adapter simply because REST defines no standardized way to say "give me all of the cars where the car is in the same parking lot as these drivers and the drivers have a key"?
That's right.
Unlike databases, which have SQL as a standard way to express conditions, REST has no standard that supports all the ActiveRecord functions, such as joins, group, and having.
How many hours' work do you think it would take to do correlated queries or sub-queries?
I'm not being casually dismissive here. This concept touches on some personal projects I've considered, with some of the same issues I've been thinking through.
ActiveRecord supports all of SQL, which is way more powerful than most people use or need. Basically every part of an SQL statement has an ActiveRecord method which takes a string to fill in that section of the SQL.
You'd want to limit the client to the part of ActiveRecord people actually use. You'd still need IS NULL, and IS NOT NULL. You'd need comparisons such as less than and greater than. You'd want to support OR statements, for "field1 IS NULL OR field1 = ''".
To do all the comparison stuff, like where(["updated_at > ?", cutoff]), you would need a RESTful server more robust than existing web services. The server would have to use your gem, or be built following guidelines that implement all the functionality.
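For flavor, here is a sketch of the kind of convention such an adapter and server would have to agree on, since REST standardizes none of it (the "__gt" operator suffix is invented for illustration; Django's ORM spells filters similarly):

```python
from urllib.parse import urlencode

def rest_query(resource, **conditions):
    # where(["updated_at > ?", cutoff]) would have to become something
    # like this URL, and the server would have to understand it.
    return "/%s?%s" % (resource, urlencode(conditions))

print(rest_query("cars", updated_at__gt="2013-01-01", color="red"))
# /cars?updated_at__gt=2013-01-01&color=red
```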
So, in the end why? You're implementing a limited database API, going over the network with string URLs instead of binary packets, to a database engine that you are implementing.
On the other hand, if there was a standard for this, there might be good benefit to such an implementation.
If there were an implementation one could install on a RESTful web server which, while maybe not as powerful as SQL, could do indexed queries, post-index processing of simple non-indexable expressions to qualify the records returned, and sorting (even if it passed all the work to an SQL database), then one could enable this on a server, and products like Crystal Reports could be developed to use the standard as a report client.
Going through the web server API layer could provide a way to enforce restrictions on what database operations may be performed, giving more security than fully opening up database access. Also, logic could be added to the web service to audit and process events resulting from the CRUD operations (essentially triggers). Yes, database products supply security policies, triggers, and stored procedures to do these things, but with the product we are discussing, one could do it more easily in Ruby than using the database functions.
One could also have pseudo-data, calculated from Ruby code but acting like database records, alongside the general DB RESTful access. Sure, databases can do this with stored procedures, and some support writing stored procedures in Java, but this would be better because it would be easier to implement and could be written in Ruby.

Why to build own CMS? [closed]

Closed 10 years ago.
On my first job interview, I was asked why I had built my own CMS. Why not use one of the existing CMSes: Wordpress, Joomla, Drupal...? At first, I was stunned. I couldn't immediately recall all of my reasons for building my own CMS, but this was definitely one of the main ones: it's my code, and if I want to change something in that CMS (which I often have to do, because each website I build needs a CMS with different functions), it's not a big problem. For some time I used Wordpress, and one of the main things that put me off it was discovering bugs in code that wasn't written by me, and these bugs were frequent, especially if I made some changes to the CMS or added a plugin...
Here, I can find these 8 reasons why NOT to build own CMS:
It won’t meet users’ needs
It’s too much work
It won’t be a standard solution
It won’t be extendible fast enough
It won’t be tested well enough
It won’t be easily changeable
It won’t add any value
Create content, not functionality
Quote from the same page:
So the main question to ask yourself is: 'Why am I really trying to re-solve a problem that has already been solved before?'
Well, I definitely agree that it's hard to invent a CMS that hasn't already been invented, but on the other hand, I think every CMS is (or should be) individual... it may not have a million functions, it may have only three, but their usage will be clear (to a user) and cover everything that one site needs. I also think it is not good to give a client a CMS with a lot of functions that are never used, and it probably looks more professional when the website and CMS together look like one product.
I would also like to comment some quote parts:
"It’s too much work" - I agree, but when using existing CMS and customizing it to website needs and can sometimes be very hard job or mission impossible.
"It won’t be easily changeable" - I disagree with this one.
What is your opinion on this one, why did you develop or didn't develop your own CMS?
Ile
This is an interesting question that applies to most development, not just when building a CMS.
In general, I would say that it's a bad idea to reinvent the wheel (and most of your 8 arguments are correct in most cases), but there are exceptions. The first one that comes to mind is one from Joel Spolsky, In Defense of Not-Invented-Here Syndrome:
If it's a core business function -- do it yourself, no matter what.
The point is, if you're making your money directly from building content management systems, you should not take one from someone else and tweak it until it fits you. You'd rather be in full control over your own product.
Edit:
Also, don't forget that the urge to reinvent things stems (among other things) from a fundamental law of programming:
It's easier to write code than it is to read it
This does not mean that we should take the road that appears to be easier but it explains why we fall for it. Take the challenge and actually read some code, rather than write it, from time to time.
I would build a CMS because it can be fun and a great learning experience.
However, any open source CMS can be customized to any client's need. The biggest problem is that you have to understand how that CMS works in order to be able to change it well.
Either way you would be faced with quite a big task, but I must agree with those who say that you shouldn't start from scratch (unless you are doing it to learn some new technology) exactly for the reasons stated in your question... As they say, don't reinvent the wheel unless you want to learn about wheels.
I've found it works when the context of the project is larger than just a 'content site'. I've worked on a number of real estate sites where the bulk of the content comes in from data feeds, or already exists in databases whose structure was set up long before you were involved. Really, we only had a handful of BS 'content' pages making up the site, and they were rarely updated. What was really needed was a simple interface for data entry. It was far easier to build some one-off components than to try to shoehorn an existing system on top of an out-of-the-box CMS.
Like others mentioned though, you must consider the overall requirements. Is there workflow involved? Dynamic navigation? Then I'd start leaning more towards out-of-the-box CMSes. But many times people say they need a CMS when they really just need a WYSIWYG interface to a database. And sometimes not...
It seems to me that the biggest reason NOT to build your own CMS (besides security issues) is the lack of support and an upgrade path. I consider it a disservice to clients to put them on a custom CMS and then have them rely on you alone for support and updates. Even worse is having them pay for the development of the custom CMS - they are paying you to reinvent the wheel, no matter how simple the site requirements are.
There are plenty of CMS options out there that will allow you to add your own custom extensions if your requirements are beyond what is built in.
The best reason (possibly only) to build a custom CMS is to learn a language well. Building a CMS is a great way to learn web development, but it's not a great way to service your clients.
As a team leader that is always being pushed to do more with less, I too ask the question "why would you write your own?" There are more CMS packages out there than there are programming languages and I find it difficult to believe that you cannot find one that meets most (if not all) customer, business and cost requirements.
If you find that code changes are needed, opt for an open source solution, make your changes and share as needed or desired.
I do know that many times a CMS is NOT what is needed. Many customers need a Content Editing System. What I mean is that someone technical puts a site in place and the customer adds/edits/removes pages. The pages are already well designed and formatted. In these cases, I can see where it may be quicker to design and implement something from scratch than to chop down a CMS with access rights or by removing/hiding functionality.
Unless you're building one for the experience, there's only one real reason for building your own: It's cheaper and/or easier than using one on the market that meets your requirements.

Rules Engine - pros and cons

I'm auditing a project that uses what is called a Rules Engine. In short, it's a way to externalize business logic from application code.
This concept is entirely new to me and I'm pretty skeptical about it. After hearing people talk about Anemic Domain Models for the past few years, I'm questioning the Rules Engine Approach. To me they seem like a great way to WEAKEN a domain model. For example say I'm doing a java webapp interacting with a Rules Engine. Then I decide I want to have an Android app based on the same domain. Unless I want the Android app to interact with the Rules Engine as well, I'm going to have to miss out on whatever business logic was already written.
As I don't have any experience with them yet, just curiosity, I was interested to hear what the pros and cons are of using a rules engine. The only pro I can think of is that you don't need to rebuild your entire application just to change some business rule (but really, how many apps really have that many changes?). But using a rules engine to solve that problem sounds to me like putting a band-aid over a shotgun wound.
UPDATE - since writing this, the god himself, Martin Fowler, has blogged about using a Rules engine.
Most rule engines that I have seen are viewed as a black box by system code. If I were to build a domain model, I would probably want certain business rules to be intrinsic to the domain model, e.g. business rules that tell me when an object has invalid values. This allows multiple systems to share the domain model without duplicating business logic. I could have each system use the same rule service to validate my domain model, but this appears to weaken my domain model (as was pointed out in the question). Why? Because instead of consistently enforcing my business rules across all systems at all times, I am relying on system programmers to determine when the business rules should be enforced (by calling the rule service). This may not be a problem if the domain model comes to you completely populated, but can be problematic if you're dealing with a user interface or system that changes values in the domain model over its lifetime.
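A small sketch of that contrast (the Policy class and its invariant are hypothetical): a rule intrinsic to the domain model fires on every mutation, whereas an external rule service fires only when a programmer remembers to call it.

```python
class Policy:
    def __init__(self, premium):
        self.premium = premium   # routed through the setter below

    @property
    def premium(self):
        return self._premium

    @premium.setter
    def premium(self, value):
        # Business rule enforced by the model itself, at all times.
        if value < 0:
            raise ValueError("premium must be non-negative")
        self._premium = value

p = Policy(100.0)
p.premium = 120.0    # fine
# p.premium = -5.0   # raises ValueError at the moment of mutation,
#                    # not only when someone calls a rule service
```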
There is another class of business rules: decision making. For example, an insurance company may need to classify the risk of underwriting an applicant and arrive at a premium. You could place these types of business rules in your domain model, but a centralized decision for scenarios like this are usually desirable and, actually, fit quite well into a service-oriented architecture. This does beg the question of why a rule engine and not system code. The place where a rule engine may be a better choice is where business rules responsible for the decision change over time (as some other answers have pointed out).
Rule engines usually allow you to change rules without restarting your system or deploying new executable code (regardless of what promises you receive from a vendor, do make sure you test your changes in a non-production environment, because even if the rule engine is flawless, humans are still changing the rules). If you're thinking, "I can do that by using a database to store values that change", you're right. A rule engine is not a magical box that does something new. It is intended to be a tool that provides a higher level of abstraction so you can focus less on reinventing the wheel. Many vendors take this a step further by letting you create templates so that business users can fill in the blanks instead of learning a rule language.
One parting caution about templates: templates can never take less time than writing a rule without a template because the template must, at the bare minimum, describe the rule. Plan for a higher initial cost (the same as if you were to build a system that used a database to store values that change vs. writing the rules in directly in system code) - the ROI is because you save on future maintenance of system code.
I think your concerns about anemic domain models are valid.
I've seen two applications of a well-known commercial Rete rules engine running in production where I work. I'd consider one a success and the other a failure.
The successful application is a decision tree app, consisting of ~10 trees of ~30 branch points each. The rules engine has a UI that does allow business folks to maintain the rules.
The less successful application has ~3000 rules slammed into a rules database. No one has any idea if there are conflicting rules when a new one is added. There is little understanding of the Rete algorithm, and the expertise with the product has left the firm, so it's become a black box that's untouchable and unrefactorable. The deployment cycle is still affected by rules changes - a complete regression test must be done when rules are changed. Memory was an issue, too.
I'd tread lightly. When a rule set is modest in size, it's easy to understand changes, as in the simplistic e-mail example given in another answer. Once the number of rules climbs into the hundreds, I think you might have a problem.
I'd also worry about a rules engine becoming a singleton bottleneck in your application.
I see nothing wrong with using objects as a way to partition that rules engine space. Embedding behavior in objects that defer to a private rules engine seems okay to me. Problems will hit you when the rules engine requires state that isn't part of its object to fire properly. But that's just another example of design being difficult.
Rule Engines can offer a lot of value, in certain instances.
First, many rule engines work in a more declarative way. A very crude example would be AWK, where you can assign regexes to blocks of code. When the regex is seen by the file scanner, the block of code is executed.
You can see that in this case, if you had, say, a large AWK file and you wanted to add Yet Another "rule", you can readily go to the bottom of the file, add your regex and logic, and be done with it. Specifically, for many applications, you're not particularly concerned with what the other rules are doing, and the rules don't really interoperate with each other.
Thus the AWK file becomes more like a "rule soup". This "rule soup" nature lets folks focus very tightly on their domain with little concern for all of the other rules that may be in the system.
For example, Frank is interested in orders with a total of more than $1000, so he puts in to the rule system that he's interested. "IF order.total > 1000 THEN email Frank".
Meanwhile, Sally wants all orders from the west coast: "IF order.source == 'WEST_COAST' THEN email Sally".
So, you can see in this trivial, contrived case, that an order can satisfy both rules, yet both rules are independent of each other. A $1200 order from the West Coast notifies both Frank and Sally. When Frank is no longer concerned, he'll simply yank his rule out from the soup.
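A minimal sketch of that rule soup (notify() just prints here, standing in for e-mail): each rule is an independent (condition, action) pair, added or yanked without touching the others.

```python
def notify(who):
    return lambda order: print("email %s about order %s" % (who, order["id"]))

RULES = [
    (lambda o: o["total"] > 1000,           notify("Frank")),
    (lambda o: o["source"] == "WEST_COAST", notify("Sally")),
]

order = {"id": 42, "total": 1200, "source": "WEST_COAST"}
for condition, action in RULES:
    if condition(order):
        action(order)   # the $1200 West Coast order notifies both
```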
For many situations, this flexibility can be very powerful. It can also, like this case, be exposed to end users for simple rules. Using high level expressions and perhaps lightweight scripting.
Now, clearly, in a complicated system there are all sorts of interrelationships that can happen, and this is why the entire system is not "done with rules". Someone, somewhere is going to be in charge of keeping the rules from getting out of hand. But that doesn't necessarily lessen the value such a system can provide.
Mind, this doesn't even go into things like expert systems, where rules fire on data that rules can create; this is a simpler rules system.
Anyway, I hope this example shows how a rules system can help augment a larger application.
The biggest pro that I've seen for rules engines is that it lets the Business Rule owners implement the business rules, instead of putting the onus on programmers. Even if you have an agile process where you are constantly getting feedback from stakeholders and going through rapid iterations, it's still not going to achieve the level of efficiency that can be achieved by having the people making the business rules implement them as well.
Also, you can't over-emphasize the value of removing the recompile-retest-redeploy cycle that results from a simple rule change when the rules are embedded in code. There are often several teams involved in putting the blessing on a build, and using a rules engine can make much of that unnecessary.
I've written a rules engine for a client. The biggest win was including all the stakeholders. The engine could run (or replay) a query and explain what was happening in text. The business people could look at the text description and quickly point out nuances in rules, exceptions, and other special cases. Once the business side was involved, the validation got much better because it was easy to get their input. Additionally, the rules engine can live separately from other parts of an application code base so you can use it across applications.
The con is that some programmers don't like to learn too much. Rule engines and the rules you put into them, along with the stuff that implements them, can be a bit hairy. Although a good system can easily handle sick and twisted webs of logic (or illogic often ;), it's not as simple as coding a bunch of if statements (no matter what some of the simple-minded rule engines do). The rules engine gives you the tools to handle rule relationships, but you still have to be able to imagine all of that in your mind. Sometimes it's like living in the movie Brazil. :)
It (as everything else) depends on your application. For some applications (usually ones that never change, or whose rules are based on real-life constants that won't change noticeably in eons, for instance physical properties and formulae) it doesn't make sense to use a rule engine; it just introduces additional complexity and requires the developer to have a larger skill set.
For other applications it's really a good idea. Take for instance order processing (orders being anything from invoicing to processing currency transactions), every now and then there's a minute change to some relevant law or code (in the judicial sense) that requires you to fulfil a new requirement (for instance sales tax, a classic).
Rather than trying to force your old application into this new situation, where all of a sudden you have to think about sales tax whereas before you didn't, it is easier to adapt your rule set than to meddle about in a potentially large portion of your code.
Then the next amendment from your local government requires reporting of all sales within certain criteria, and again you just add a rule rather than having to go in and change code. Doing all of this in code instead, you'll end up with something very complex that proves pretty difficult to manage when you turn around and want to revert the effect of one of the rules without influencing all the others...
Everybody thus far has been very positive about rules engines, but I advise the reader to be wary. When a problem becomes a little bit more complicated, you may suddenly find that an entire rules engine has been rendered unsuitable, or much more complicated than in a more powerful language. Also, for many problems, rules engines will not be able to easily detect properties that greatly reduce the runtime and memory footprint of evaluating the condition. There are relatively few situations in which I would prefer a rule engine to a dependency injection framework or a more dynamic programming language.
"but really, how many apps really have that many changes?"
Honestly, every app I have worked on has gone through serious workflow and/or logic changes from concept until well after deployment. It's the number one reason for "maintenance" programming...
The reality is that you can't think of everything up front, hence the reason for Agile processes. Further, the BA's always seem to miss something vital until it's found in testing.
Rule Engines force you to truly separate business logic from presentation and storage. Further, if using the right engine, your BA's can add and remove logic as necessary. As Chris Marasti-Georg said, it puts the onus on the BA. But more than that, it allows the BA to get exactly what they are asking for.
A rules engine is a win in a configurable application where you don't want to do custom builds if it can be avoided. They are also good at centralising large bases of rules, and algorithms like Rete are efficient for quickly matching against large rule sets.
Lots of good answers already but wanted to add a couple of things:
in automating a decision of any complexity the critical thing rapidly becomes your ability to manage rather than execute the logic involved. A rules engine will not help with this - you need to think about the rule management capabilities that a business rules management system has. Most commercial and open source rules engines have evolved into rule management systems with repositories, reporting on rule usage, versioning etc. A repository of rules, structured into coherent rule sets that can be orchestrated to make business decisions is a lot easier to manage than either thousands of lines of code or a rule soup.
There are many ways to use a declarative, rules-based approach. Using rules to manage a UI or as part of defining a process can be very effective. The most valuable use of a rules approach, however, is to automate business decisions and to deliver this as loosely coupled decision services that take input, execute rules and return an answer - a decision. Think of these as services that answer questions for other services, like "is this customer a good credit risk?", "what discount should I give this customer for this order?" or "what's the best cross-sell for this customer at this time?". These decision services can be very effectively built using a rules management system and allow for easy integration of analytics over time, something many decisions benefit from.
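A toy sketch of such a decision service (the risk tiers and thresholds are invented): input in, rules evaluated, one answer out.

```python
def assess_credit_risk(applicant):
    # First matching rule wins; the last rule is the default outcome.
    rules = [
        (lambda a: a["defaults"] > 0,   "bad risk"),
        (lambda a: a["income"] < 20000, "marginal risk"),
        (lambda a: True,                "good risk"),
    ]
    for condition, decision in rules:
        if condition(applicant):
            return decision

# Other services would call this over the wire; here, in process:
print(assess_credit_risk({"defaults": 0, "income": 55000}))  # good risk
```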
I see rule, process and data engines (a.k.a. databases) as essentially being similar. But, for some reason, we never say that blackboxing the persistence subsystem is bad.
Secondly, from my POV, an anemic model is not one that is light in the implementation of behaviors; it is one that is light in behaviors itself. The actual method that describes an available behavior in a domain model object does not have to be done by the object itself.
The biggest complexities, from my experience with rule engines, are:
From an OOP POV, it's a real pain to refactor and test rules written in a declarative language while you are refactoring the code that affects them.
We often have to think about the execution order of rules, which turns into a mess when there are lots of them.
Some minor changes may trigger incorrect behaviour of rules, leading to production bugs. In practice it's not always possible to cover all cases with tests up front.
Rules mutating objects that other rules use also increase complexity, forcing developers to break them into stages.

Has a system that incorporated a rule engine ever been TRULY successful? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 7 years ago.
Our system (exotic commodity derivative trade capture and risk management) is being redeveloped shortly. One proposal that I have heard is that a rule engine will be incorporated to make it easier for the end-users (commodities traders, so fairly sophisticated) to make certain changes to the business logic.
I am a little skeptical of rules engines. The agilist in me wonders if they are just a technical solution to a process problem, i.e. it takes too long for our developers to respond to the business's need for change. The solution to that problem should be a more collaborative approach to development, better test coverage, and more agile practices all around.
Hearing about situations where a rule engine was truly a boon (especially in a trading environment) would be certainly helpful.
I've seen two applications that used the Blaze Rete engine from Fair Isaac.
One application slammed thousands of rules into a single knowledge base, had terrible memory problems, has become a black box that few understand. I would not call that a success, but it is running in production.
Another application used decision trees to represent on the order of hundreds of questions on a medical form to disposition clients. It was done so elegantly that business people can update the rules as needed, without having to involve a developer. (Still has to be deployed by one, though.) I'd call that a great success.
So it depends on how well focused the problem is, the size of the rule set, the knowledge of the developers. My prejudice is that simply making a rules engine a single point of failure and dumping rules into it probably isn't a good approach. I'd start with a data-driven or table-driven approach and grow that until a rules engine was needed. I'd also strive to encapsulate the rules engine as part of the behavior of an object. I'd hide the rules engine from users and try to partition the rules space into the domain model.
I don't know if I'd say they're ever truly a boon, but I think they can certainly be valuable. I worked on a system for a few years in the insurance industry, where a rules engine was employed quite successfully to allow the business users to create rules that determined what policies were legal, depending on the state.
For instance, if you had to have a copay in certain states, or certain combinations of deductible and copay were not allowed, either because of product considerations, or because it was simply illegal due to state law.
The number of states that the company operated in, along with the constant change in rules (quarterly) would make this a dizzying coding practice. More importantly, it's not in the expertise of a programmer. It adds extra pointless communication where the end user is describing the rule to be put in effect to a programmer who is not an insurance industry expert like they are.
Designed correctly, a rules engine can still enable a workflow system that allows for good testing. In this case, the rules were stored in a database, and there were QA and PROD databases. So the BA's could test their rules in QA, and then promote them to PROD.
As with anything, it's usually about the implementation, and not the actual technique.
Yes, Microsoft has a Business Rule Engine (BRE) in BizTalk that has been used successfully for years. I've heard that they've had clients buy BizTalk (very expensive) just for the BRE.
In my experience, the practicality of having a business user update the rules is slim to none. It usually takes a technical person to work the business rules editor.
A rule engine is little more than something that executes declarative statements. Rule engines come with two primary advantages (that I see):
Your business logic is maintained from a single place instead of being sprinkled throughout application code. Technically, a well-designed application should already do this with the architecture, regardless of a rule engine being present or not.
You need to worry [less] about dependencies between declarative statements. The rule engine should be smart enough to decide the order to run rules based on dependencies. You may find that some rule engines support a sequential ordering of rules within a ruleset or calling rulesets (groups of rules) in a particular order, but this isn't really in the spirit of declarative programming. Many rule engines use Rete (an algorithm) to decide when to schedule the execution of declarative statements.
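A toy forward-chaining loop gives the flavor of that dependency-driven scheduling (Rete achieves the same effect far more efficiently; the order facts and thresholds are invented): rules are listed in no particular order and fire whenever their conditions hold, until a full pass derives nothing new.

```python
facts = {"order_total": 1500}

def big_order(f):
    if f["order_total"] > 1000 and "discount" not in f:
        f["discount"] = 0.05
        return True
    return False

def final_total(f):
    if "discount" in f and "final_total" not in f:
        f["final_total"] = f["order_total"] * (1 - f["discount"])
        return True
    return False

rules = [final_total, big_order]   # deliberately listed "out of order"
while any(rule(facts) for rule in rules):
    pass  # keep firing until no rule produces a new fact

print(facts)  # big_order fires first, then final_total: 1425.0
```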
I suspect most, if not all, rule engines add more overhead than if you were to write the best possible program that doesn't use a rule engine. This is similar to how writing code in assembly is generally faster than a compiler (but you usually don't write assembly because it's more convenient and productive to use higher-level abstractions).
If you were to stop here, then you would probably use programmers to maintain rules and use a rule engine as a convenient way to build a business logic tier in your application. Some rule engines offer something called templates that let you define templates for rules. The advantage here is that non-technical users are supposed to be able to write their own rules and modify existing rules.
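A sketch of the template idea (all names invented): the developer writes the template once, and a business user only fills in the blanks.

```python
import operator

OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq}

def discount_rule(field, op, threshold, pct):
    """Template: IF order.<field> <op> <threshold> THEN discount <pct>."""
    def rule(order):
        if OPS[op](order[field], threshold):
            return order["total"] * (1 - pct)
        return order["total"]
    return rule

# The "blanks" a business user fills in; no new code is written.
rule = discount_rule("total", ">", 1000, 0.05)
print(rule({"total": 1200}))  # 1140.0
```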
A rule engine is one more tool in your tool chest that, when used properly, can be valuable.
The problem with many of these rule engines is the lack of speed, and the fact that replacing or augmenting rules can break existing working rules in subtle ways. So you still have to re-test the system thoroughly after each rule change. You're basically just exchanging one computer language for another one - one with a much smaller base of users. As another poster mentioned, I've yet to see a business analyst successfully use a rule engine. You need a programmer anyway.
I certainly have, but I can't publicly talk about them. It's likely you have interacted with one several times this year ;)
I see it in 2 camps: the logic programmers and the business users. Different tools target different sets, some both. The successful cases of business users have only worked when it was a subset of the logic, and they also had a way to define test cases and run them themselves (and they are prepared to think logically).
Logic programmers are rarer, but can often be found coming from non-imperative programming backgrounds (they are also the sort of people who find functional programming intuitive).
Keep in mind at the end of the day even with visual tools, if you are telling a computer to do something it is still programming.
I work with lots of vendors in the space and one of the great things about this is that I get to talk to lots of their customers. So, yes, hundreds of companies have got exactly the benefits they were promised - increased agility, better business/IT collaboration, easier regulatory compliance, better consistency of decision making, lower maintenance costs, faster times to market etc.
Over and over again, across all the major vendors and the open source players, I see that used correctly - to automate and improve high-volume operational decisions with many rules, rules that change a lot, rules that interact in complex ways or rules with a high business domain content - business rules management systems work.
Really.
My experience is limited to (i) not much and (ii) Prolog; but I can safely say that a rule engine can help you express propositional concepts much more cleanly than procedural code.
Rules engines are routinely used in the insurance business. I've worked on systems with hundreds (600ish) rules that were implemented in a rules engine. It worked very well.
Do you have a credit rating? A FICO score, perhaps? That's Fair Isaac COrporation, the developers of the Blaze rules engine.
For a while I worked for the PEATE distributed computing project which was developing a system for calculation of large scale, high volume atmospheric data. The system had three parts to it: the data manager, the scheduler, and the algorithm execution component. There could be any number of any of these components, all done through web services, but what it allowed for was for different researchers to execute arbitrary jobs against arbitrary data, and also allowed for different scheduling mechanisms to be plugged in as requirements changed.
I left the project before it got too far off the ground, but this seems like it could potentially fit the scenario, and serve as another example of a kind of rule engine. That being said, however, if the original developers are still going to be the ones writing the algorithms to run, I can't see too much benefit in having a rule engine unless it handles substantial overhead that each rule or algorithm would otherwise incur on its own.
This sounds like a bit more involved than a simple rule engine, but such an architecture could feasibly apply to a rule engine as well.