Why are the business rules written in C# (Code)? - csla

I am looking at the Lhotka CSLA.NET object library. It seems interesting, but one thing that does not make sense to me is that the business rules are written in C#. Shouldn't these be defined outside of code? Even if the rules live in a library decoupled from the main logic of the app, they can still change and require recompiling.
Thanks

No, that would be the inner platform anti-pattern.
If you make a system that is advanced enough to handle any business rule that you might possibly need, it will be much more complicated than it has to be.

CSLA business rules are just delegates that return true if the rule passes and false if it does not.
Since the business rules are just code, you are free to do whatever you like. You can create a rules engine and process the rules outside the objects if you wish.
CSLA also supports DataAnnotations if you prefer attribute-based rules.
Starting in Csla 4, the static rule methods are no longer supported. Instead you create a class which is a subclass of BusinessRule in the Csla.Rules namespace. This allows for better reuse and easier unit testing.
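For illustration, here is a sketch of what a CSLA 4-style rule class can look like. The rule itself (a minimum-value check) and the property names are invented, and exact signatures may vary between CSLA versions:

using System.Collections.Generic;
using Csla.Core;
using Csla.Rules;

public class MinValueRule : BusinessRule
{
    private readonly int _min;

    public MinValueRule(IPropertyInfo primaryProperty, int min)
        : base(primaryProperty)
    {
        _min = min;
        // Ask CSLA to pass the property's current value into Execute.
        InputProperties = new List<IPropertyInfo> { primaryProperty };
    }

    protected override void Execute(RuleContext context)
    {
        var value = (int)context.InputPropertyValues[PrimaryProperty];
        if (value < _min)
            context.AddErrorResult(
                PrimaryProperty.FriendlyName + " must be at least " + _min);
    }
}

A business object would then attach the rule in its AddBusinessRules() override, with something along the lines of BusinessRules.AddRule(new MinValueRule(QuantityProperty, 1)). Because the rule is an ordinary class, it can be unit tested and reused across business objects.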

Related

Castle Windsor and Dynamic Wiring

I've been a heavy Windsor user for the last several years. Prior to the Fluent Registration API, I would toggle between Xml Registration and huge piles of AddComponent() code. We've been happily using the Fluent Registration API, and Installers specifically, for quite some time now. I've gotten the impression from various writings like this:
http://docs.castleproject.org/Windsor.XML-Registration-Reference.ashx
That the Xml Registration approach has fallen out of favor and it wouldn't surprise me if it were marked for deprecation at some point in the near future.
Now, for my question: The Fluent Registration API and Installers work swimmingly for auto-wiring scenarios (i.e. when I want Windsor to just figure out how to construct my object graphs). Auto-wiring is the vast majority of IoC use cases out there, but what about when auto-wiring isn't possible? In other words I have multiple implementations of a service and I need to tell Windsor how to construct parts of my object graph. I've done this many times with the Xml Registration approach, but is there a more preferred approach these days? I'm hesitant to go the Xml Registration approach as its future seems uncertain, but I don't know how else to accomplish this with Windsor.
My use cases are:
* The system needs to be able to swap implementations at QA-test time (e.g. credit checks and fraud detection processing, where we want to test without a dependency on a credit bureau API).
* Provider patterns in our system, where we need to conditionally turn different implementations on and off at deploy time.
This all seems very well suited for IoC and we have all the building blocks in place, but want to make sure I'm taking the most future-proof approach with Windsor.
UPDATE:
While I like the feature toggle approach, I recently discovered a Windsor feature that is very useful on this front - Fallback Components. I'm leaving this edit here for anyone who might stumble across this later.
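For later readers, here is a minimal sketch of what Fallback Components look like. The type names are invented; IsFallback() is available in Windsor 3 and later:

using System;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface ICreditBureau { }
public class FakeCreditBureau : ICreditBureau { }
public class RealCreditBureau : ICreditBureau { }

public class FallbackDemo
{
    public static void Main()
    {
        var container = new WindsorContainer();

        // The fake is a fallback: Windsor uses it only when no
        // non-fallback registration for ICreditBureau exists.
        container.Register(
            Component.For<ICreditBureau>()
                     .ImplementedBy<FakeCreditBureau>()
                     .IsFallback());

        // Production wiring registers the real bureau, which wins:
        container.Register(
            Component.For<ICreditBureau>()
                     .ImplementedBy<RealCreditBureau>());

        Console.WriteLine(container.Resolve<ICreditBureau>().GetType().Name);
        // prints "RealCreditBureau"
    }
}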
Configuring your DI container completely through XML is error prone, verbose, and just too painful. The XML configuration possibilities are always a subset of what you can do with code based configuration; code is always more expressive.
Sometimes though your DI configuration depends on deploy-time configurations, but since the number of knobs you need are often fairly small, using a configuration flag is often a much better approach than polluting your configuration file with fully qualified type names.
Or let me put it differently: when you have large amounts of your DI configuration placed in your configuration file because you might want to change them at deploy time, please think again. Many of the changes need testing (by a developer) anyway, so there is no way you want someone from your operations team to fiddle around with that. And when you need a developer to look at it and verify it, what's the advantage of not having to recompile the project? Is this actually any quicker? A developer would still have to start the application anyway.
It is a false sense of flexibility and, in fact, poor interface design (XML is the interface for your maintenance and operations department). BTW, are you the person who needs to document how the configuration file should be changed?
Instead of describing, somewhere in the middle of the XML file, the list of fully qualified type names that are valid, wouldn't it be much easier if all you have to write is "place 'false' in this field to disable ..."?
Here is an example of how to use a configuration switch:
bool detectFraud =
    ConfigurationManager.AppSettings["DetectFraud"] != "false";

container.Register(
    Component.For(typeof(IFraudDetector)).ImplementedBy(
        detectFraud ? typeof(RealDetector) : typeof(FakeDetector)));
See how the configuration switch is now simply a boolean flag. This makes the configuration file much more maintainable, since the configuration is now a simple boolean switch instead of a complete type name (that can be misspelled).
Of course doing the ["DetectFraud"] != "false" isn't that nice by itself, but this can simply be solved by creating a strongly-typed configuration helper.
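Such a helper can be nothing more than this (a hypothetical sketch; the setting name is carried over from the example above):

using System.Configuration;

public static class AppConfig
{
    // Defaults to true unless the setting is explicitly "false".
    public static bool DetectFraud
    {
        get
        {
            return ConfigurationManager.AppSettings["DetectFraud"] != "false";
        }
    }
}

The registration code then reads AppConfig.DetectFraud instead of repeating the raw string comparison at every call site.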
This answer might help as well. It allows you to provide an implementation dynamically at runtime. Though it sounds like you don't need it to be that dynamic, and it's a little less obvious what's going on.
There are no plans to obsolete or remove the XML config support in Windsor.
Yes, you are right, it isn't a preferred approach due to its numerous drawbacks.
Anything you can do in XML can be done in code (note that the inverse is not true).
Also keep in mind XML is not all-or-nothing. There are many ways to achieve the scenarios you gave as examples without resorting to registration in XML.
Feature toggles
Conditional compilation
if/else in your installer based on an appSettings flag
others...
I've used each of them in different projects in the past.
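As an example of the third option, here is a sketch of an installer that branches on an appSettings flag (the flag name and component types are invented for illustration):

using System.Configuration;
using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

public interface ICreditCheck { }
public class BureauCreditCheck : ICreditCheck { }
public class StubCreditCheck : ICreditCheck { }

public class CreditCheckInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        // Deploy-time knob: a simple boolean flag, not a type name.
        bool useRealBureau =
            ConfigurationManager.AppSettings["UseRealCreditBureau"] == "true";

        if (useRealBureau)
            container.Register(
                Component.For<ICreditCheck>().ImplementedBy<BureauCreditCheck>());
        else
            container.Register(
                Component.For<ICreditCheck>().ImplementedBy<StubCreditCheck>());
    }
}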

PostSharp & Critical Code Parts

Supposing a critical part of an application's business rules are dependent on a given behavior, but that coding this behavior explicitly will clutter your code, would you rely on encapsulating it in an aspect with PostSharp (or with any other aspect framework for that matter)? Is that a "safe" and "wise" option, or is it always better to explicitly code the behavior no matter how it will clutter the code? What about the maintainability of such a solution?
I do not recommend putting business rules or business logic in aspects; it defeats the purpose of AOP. Business rules/logic are not cross-cutting concerns. If you consider part of your business logic to be clutter, then you should use common OOP practices to abstract it or extract it into its own method(s).
You can, of course, move it into an aspect, but what would the goal be? To reduce clutter? That isn't acceptable to me. Use aspects to remove clutter not related to business logic so that your business logic is clear. If your business logic is cluttered then you need to refactor.
PostSharp vs. other AOP frameworks: if you do end up putting "critical code" into an aspect, then PostSharp will be the best framework for performance. Runtime frameworks, such as IoC containers that do dynamic interception, will not perform as well as statically compiled code. Plus, PostSharp performs many optimizations based on the code you write that no other framework can match.
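To illustrate the kind of cross-cutting clutter that does belong in an aspect, here is a sketch of a PostSharp OnMethodBoundaryAspect for logging (the aspect name and logged messages are invented):

using System;
using PostSharp.Aspects;

[Serializable] // PostSharp aspects must be serializable
public class LogExecutionAspect : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine("Entering " + args.Method.Name);
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        Console.WriteLine("Leaving " + args.Method.Name);
    }
}

// Applied declaratively; PostSharp weaves the calls in at compile time:
// [LogExecutionAspect]
// public void ProcessOrder(Order order) { ... }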

Should I still code to the interface even if I am ONLY EVER going to have ONE implementation?

I think the title speaks for itself guys - why should I write an interface and then implement a concrete class if there is only ever going to be 1 concrete implementation of that interface?
I think you shouldn't ;)
There's no need to shadow all your classes with corresponding interfaces.
Even if you're going to make more implementations later, you can always extract the interface when it becomes necessary.
This is a question of granularity. You should not clutter your code with unnecessary interfaces, but they are useful at boundaries between layers.
Someday you may try to test a class that depends on this interface. Then it's nice that you can mock it.
I'm constantly creating and removing interfaces. Some were not worth the effort and some are really needed. My intuition is mostly right but some refactorings are necessary.
The question is, if there is only going to ever be one concrete implementation, should there be an interface?
YAGNI - You Ain't Gonna Need It (from Wikipedia)
According to those who advocate the YAGNI approach, the temptation to write code that is not necessary at the moment, but might be in the future, has the following disadvantages:
* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.
Two somewhat conflicting answers to your question:
You do not need to extract an interface from every single concrete class you construct, and
Most Java programmers don't build as many interfaces as they should.
Most systems (even "throwaway code") evolve and change far past what their original design intended for them. Interfaces help them to grow flexibly by reducing coupling. In general, here are the warning signs that you ought to be coding to an interface:
Do you even suspect that another concrete class might need the same interface (like, if you suspect your data access objects might need XML representation down the road -- something that I've experienced)?
Do you suspect that your code might need to live on the other side of a Web Services layer?
Does your code form a service layer for some outside client?
If you can honestly answer "no" to all these questions, then an interface might be overkill. Might. But again, unforeseen consequences are the name of the game in programming.
You need to decide what the programming interface is, by specifying the public functions. If you don't do a good job of that, the class would be difficult to use.
Therefore, if you decide later you need to create a formal interface, you should have the design ready to go.
So, you do need to design an interface, but you don't need to write it as an interface and then implement it.
I use a test driven approach to creating my code. This will often lead me to create interfaces where I want to supply a mock or dummy implementation as part of my test fixture.
I would not normally create any code unless it has some relevance to my tests, and since you cannot easily test an interface, only an implementation, that leads me to create interfaces if I need them when supplying dependencies for a test case.
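As a concrete (and entirely hypothetical) example of an interface extracted purely to serve a test:

public interface IExchangeRateProvider
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

// Hand-rolled fake supplied to the class under test instead of the
// real provider, which would call a live service.
public class FakeExchangeRateProvider : IExchangeRateProvider
{
    public decimal GetRate(string fromCurrency, string toCurrency)
    {
        return 1.25m; // canned rate, deterministic for assertions
    }
}

The production implementation stays the only "real" one; the interface exists so the test fixture can substitute the fake.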
I will also sometimes create interfaces when refactoring, to remove duplication or improve code readability.
You can always refactor your code to introduce an interface if you find out you need one later.
The only exception to this would be if I were designing an API for release to a third party - where the cost of making API changes is high. In this case I might try to predict the type of changes I might need to do in the future and work out ways of creating my API to minimise future incompatible changes.
One thing which no one has mentioned yet is that sometimes an interface is necessary to avoid dependency issues. You can have the interface in a common project with few dependencies and the implementation in a separate project with lots of dependencies.
"Only Ever going to have One implementation" == famous last words
It doesn't cost much to make an interface and then derive a concrete class from it. The process of doing it can make you rethink your design and often leads to a better end product. And once you've done it, if you ever find yourself eating those words - as frequently happens - you won't have to worry about it. You're already set. Whereas otherwise you have a pile of refactoring to do and it's gonna be a pain.
Edited to clarify: I'm working on the assumption that this class is going to be spread relatively far and wide. If it's a tiny utility class used by one or two other classes in a single package then yeah, don't worry about it. If it's a class that's going to be used in multiple packages by multiple other classes then my previous answer applies.
The question should be: "how can you ever be sure, that there is only going to ever be one concrete implementation?"
How can you be totally sure?
By the time you've thought this through, you could already have created the interface and been on your way, without relying on assumptions that might turn out to be wrong.
With today's coding tools (like Resharper), it really doesn't take much time at all to create and maintain interfaces alongside your classes, whereas discovering that now you need an extra implementation and to replace all concrete references can take a long time and is no fun at all - believe me.
A lot of this is taken from a Rainsberger talk on InfoQ: http://www.infoq.com/presentations/integration-tests-scam
There are 3 reasons to have a class:
It holds some Value
It helps Persist some entity
It performs some Service
The majority of services should have interfaces. An interface creates a boundary, hides the implementation, and you already have a second client: all of the tests that interact with that service.
Basically if you would ever want to Mock it out in a unit test it should have an interface.

Rules Engine - pros and cons

I'm auditing a project that uses what is called a Rules Engine. In short, it's a way to externalize business logic from application code.
This concept is entirely new to me and I'm pretty skeptical about it. After hearing people talk about Anemic Domain Models for the past few years, I'm questioning the Rules Engine Approach. To me they seem like a great way to WEAKEN a domain model. For example say I'm doing a java webapp interacting with a Rules Engine. Then I decide I want to have an Android app based on the same domain. Unless I want the Android app to interact with the Rules Engine as well, I'm going to have to miss out on whatever business logic was already written.
As I don't have any experience with them yet, just curiosity, I was interested to hear what the pros and cons are of using a Rules Engine. The only pro that I can think of is that you don't need to rebuild your entire application just to change some business rule (but really, how many apps really have that many changes?). But using a Rules Engine to solve that problem kind of sounds to me like putting a band-aid over a shotgun wound.
UPDATE - since writing this, the god himself, Martin Fowler, has blogged about using a Rules engine.
Most rule engines that I have seen are viewed as a black box by system code. If I were to build a domain model, I would probably want certain business rules to be intrinsic to the domain model, e.g. business rules that tell me when an object has invalid values. This allows multiple systems to share the domain model without duplicating business logic. I could have each system use the same rule service to validate my domain model, but this appears to weaken my domain model (as was pointed out in the question). Why? Because instead of consistently enforcing my business rules across all systems at all times, I am relying on system programmers to determine when the business rules should be enforced (by calling the rule service). This may not be a problem if the domain model comes to you completely populated, but can be problematic if you're dealing with a user interface or system that changes values in the domain model over its lifetime.
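A contrived sketch of the difference: a rule that is intrinsic to the domain model is enforced everywhere the model is used, rather than only when a programmer remembers to call the rule service. All names below are invented:

using System;

public class Policyholder
{
    public int Age { get; private set; }

    public Policyholder(int age)
    {
        // Intrinsic business rule: enforced by the model itself,
        // no matter which system constructs the object.
        if (age < 18)
            throw new ArgumentOutOfRangeException(
                "age", "A policyholder must be at least 18.");
        Age = age;
    }
}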
There is another class of business rules: decision making. For example, an insurance company may need to classify the risk of underwriting an applicant and arrive at a premium. You could place these types of business rules in your domain model, but a centralized decision point for scenarios like this is usually desirable and, actually, fits quite well into a service-oriented architecture. This does beg the question of why a rule engine and not system code. The place where a rule engine may be a better choice is where the business rules responsible for the decision change over time (as some other answers have pointed out).
Rule engines usually allow you to change rules without restarting your system or deploying new executable code (regardless of what promises you receive from a vendor, do make sure you test your changes in a non-production environment because, even if the rule engine is flawless, humans are still changing the rules). If you're thinking, "I can do that by using a database to store values that change", you're right. A rule engine is not a magical box that does something new. It is intended to be a tool that provides a higher level of abstraction so you can focus less on reinventing the wheel. Many vendors take this a step further by letting you create templates so that business users can fill in the blanks instead of learning a rule language.
One parting caution about templates: templates can never take less time than writing a rule without a template, because the template must, at the bare minimum, describe the rule. Plan for a higher initial cost (the same as if you were to build a system that used a database to store values that change vs. writing the rules directly in system code) - the ROI comes from savings on future maintenance of system code.
I think your concerns about anemic domain models are valid.
I've seen two applications of a well-known commercial Rete rules engine running in production where I work. I'd consider one a success and the other a failure.
The successful application is a decision tree app, consisting of ~10 trees of ~30 branch points each. The rules engine has a UI that does allow business folks to maintain the rules.
The less successful application has ~3000 rules slammed into a rules database. No one has any idea if there are conflicting rules when a new one is added. There is little understanding of the Rete algorithm, and the expertise with the product has left the firm, so it's become a black box that's untouchable and unrefactorable. The deployment cycle is still affected by rules changes - a complete regression test must be done when rules are changed. Memory was an issue, too.
I'd tread lightly. When a rule set is modest in size it's easy to understand changes, like the simplistic e-mail example given in another answer here. Once the number of rules climbs into the hundreds I think you might have a problem.
I'd also worry about a rules engine becoming a singleton bottleneck in your application.
I see nothing wrong with using objects as a way to partition that rules engine space. Embedding behavior in objects that defer to a private rules engine seems okay to me. Problems will hit you when the rules engine requires state that isn't part of its object to fire properly. But that's just another example of design being difficult.
Rule Engines can offer a lot of value, in certain instances.
First, many rule engines work in a more declarative way. A very crude example would be AWK, where you can assign regexes to blocks of code. When the regex is seen by the file scanner, the block of code is executed.
You can see that in this case, if you had, say, a large AWK file and you wanted to add Yet Another "rule", you can readily go to the bottom of the file, add your regex and logic, and be done with it. Specifically, for many applications, you're not particularly concerned with what the other rules are doing, and the rules don't really interoperate with each other.
Thus the AWK file becomes more like a "rule soup". This "rule soup" nature lets folks focus very tightly on their domain with little concern for all of the other rules that may be in the system.
For example, Frank is interested in orders with a total of more than $1000, so he puts into the rule system that he's interested: "IF order.total > 1000 THEN email Frank".
Meanwhile, Sally wants all orders from the west coast: "IF order.source == 'WEST_COAST' THEN email Sally".
So, you can see in this trivial, contrived case, that an order can satisfy both rules, yet both rules are independent of each other. A $1200 order from the West Coast notifies both Frank and Sally. When Frank is no longer concerned, he'll simply yank his rule out from the soup.
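A toy version of that rule soup in C# (all names and plumbing invented) might look like this, with each rule evaluated independently:

using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public decimal Total { get; set; }
    public string Source { get; set; }
}

public static class RuleSoup
{
    // Each rule pairs a condition with an action; none knows the others
    // exist, so Frank's rule can be yanked without touching Sally's.
    static readonly List<(Func<Order, bool> When, Action<Order> Then)> Rules =
        new List<(Func<Order, bool> When, Action<Order> Then)>
        {
            (o => o.Total > 1000m, o => Notify("Frank", o)),
            (o => o.Source == "WEST_COAST", o => Notify("Sally", o)),
        };

    static void Notify(string who, Order o)
    {
        Console.WriteLine("Email " + who + ": order " + o.Total + " from " + o.Source);
    }

    public static void Main()
    {
        var order = new Order { Total = 1200m, Source = "WEST_COAST" };
        // A $1200 West Coast order satisfies both rules independently.
        foreach (var rule in Rules.Where(r => r.When(order)))
            rule.Then(order);
    }
}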
For many situations, this flexibility can be very powerful. It can also, as in this case, be exposed to end users for simple rules, using high-level expressions and perhaps lightweight scripting.
Now, clearly, in a complicated system there are all sorts of interrelationships that can happen, and this is why the entire system is not "done with rules". Someone, somewhere is going to be in charge of keeping the rules from getting out of hand. But that doesn't necessarily lessen the value such a system can provide.
Mind, this doesn't even go into things like expert systems, where rules fire on data that rules can create; this is a simpler rules system.
Anyway, I hope this example shows how a rules system can help augment a larger application.
The biggest pro that I've seen for rules engines is that it lets the Business Rule owners implement the business rules, instead of putting the onus on programmers. Even if you have an agile process where you are constantly getting feedback from stakeholders and going through rapid iterations, it's still not going to achieve the level of efficiency that can be achieved by having the people making the business rules implement them as well.
Also, you shouldn't underestimate the value of removing the recompile-retest-redeploy cycle that can result from a simple rule change if the rules are embedded in code. There are often several teams involved in putting the blessing on a build, and using a Rules Engine can make much of that unnecessary.
I've written a rules engine for a client. The biggest win was including all the stakeholders. The engine could run (or replay) a query and explain what was happening in text. The business people could look at the text description and quickly point out nuances in rules, exceptions, and other special cases. Once the business side was involved, the validation got much better because it was easy to get their input. Additionally, the rules engine can live separately from other parts of an application code base so you can use it across applications.
The con is that some programmers don't like to learn too much. Rule engines and the rules you put into them, along with the stuff that implements them, can be a bit hairy. Although a good system can easily handle sick and twisted webs of logic (or illogic often ;), it's not as simple as coding a bunch of if statements (no matter what some of the simple-minded rule engines do). The rules engine gives you the tools to handle rule relationships, but you still have to be able to imagine all of that in your mind. Sometimes it's like living in the movie Brazil. :)
It (as everything else) depends on your application. For some applications (usually ones that never change, or whose rules are based on real-life constants that won't change noticeably in eons, for instance physical properties and formulae) it doesn't make sense to use a rule engine; it just introduces additional complexity and requires the developer to have a larger skill set.
For other applications it's really a good idea. Take for instance order processing (orders being anything from invoicing to processing currency transactions), every now and then there's a minute change to some relevant law or code (in the judicial sense) that requires you to fulfil a new requirement (for instance sales tax, a classic).
Rather than trying to force your old application into this new situation, where all of a sudden you have to think about sales tax whereas before you didn't, it is easier to adapt your rule set than to meddle about in a potentially large portion of your code.
Then the next amendment from your local government requires reporting of all sales matching certain criteria, and you have to go in and add that, too. In the end you'll end up with very complex code that will prove pretty difficult to manage when you turn around and want to revert the effect of one of the rules without influencing all the others...
Everybody thus far has been very positive about rules engines, but I advise the reader to be wary. When a problem becomes a little bit more complicated, you may suddenly find that an entire rules engine has been rendered unsuitable, or much more complicated than in a more powerful language. Also, for many problems, rules engines will not be able to easily detect properties that greatly reduce the runtime and memory footprint of evaluating the condition. There are relatively few situations in which I would prefer a rule engine to a dependency injection framework or a more dynamic programming language.
"but really, how many apps really have that many changes?"
Honestly, every app I have worked on has gone through serious workflow and/or logic changes from concept until well after deployment. It's the number one reason for "maintenance" programming...
The reality is that you can't think of everything up front, hence the reason for Agile processes. Further, the BAs always seem to miss something vital until it's found in testing.
Rule Engines force you to truly separate business logic from presentation and storage. Further, if using the right engine, your BA's can add and remove logic as necessary. As Chris Marasti-Georg said, it puts the onus on the BA. But more than that, it allows the BA to get exactly what they are asking for.
A rules engine is a win on a configurable application where you don't want to have to do custom builds if it can be avoided. Rules engines are also good at centralising large bases of rules, and algorithms like Rete are efficient for quickly matching against large rule sets.
Lots of good answers already but wanted to add a couple of things:
in automating a decision of any complexity the critical thing rapidly becomes your ability to manage rather than execute the logic involved. A rules engine will not help with this - you need to think about the rule management capabilities that a business rules management system has. Most commercial and open source rules engines have evolved into rule management systems with repositories, reporting on rule usage, versioning etc. A repository of rules, structured into coherent rule sets that can be orchestrated to make business decisions is a lot easier to manage than either thousands of lines of code or a rule soup.
There are many ways to use a declarative, rules-based approach. Using rules to manage a UI or as part of defining a process can be very effective. The most valuable use of a rules approach, however, is to automate business decisions and to deliver this as loosely coupled decision services that take input, execute rules and return an answer - a decision. Think of these as services that answer questions for other services, like "is this customer a good credit risk?", "what discount should I give this customer for this order?" or "what's the best cross-sell for this customer at this time?". These decision services can be very effectively built using a rules management system and allow for easy integration of analytics over time, something many decisions benefit from.
I see rule, process and data engines (a.k.a. databases) as essentially similar. But, for some reason, we never say that black-boxing the persistence subsystem is bad.
Secondly, from my POV, an anemic model is not one that is light on the implementation of behaviors; it is one that is light on the behaviors themselves. The actual method that implements an available behavior of a domain model object does not have to live on the object itself.
The biggest complexities, from my experience with rule engines, are:
* From an OOP point of view, it's a real pain to refactor and test rules written in a declarative language while you are refactoring the code that affects them.
* You must always think about the execution order of rules, which turns into a mess when there are lots of them.
* Some minor changes may trigger incorrect behaviour of rules, leading to production bugs. In practice it's not always possible to cover all cases with tests up front.
* Rules that mutate objects used by other rules also increase complexity, forcing developers to break rule processing into stages.

Where can I get a simple explanation of policy injection?

I'd like a dead simple explanation of policy injection for less-informed co-workers. Where is a good resource for this? I learned about policy injection from the entlib help files, which I'm sure aren't the best option.
The MSDN documentation for Policy Injection has a pretty clear explanation:
Applications include a mix of business logic and crosscutting concerns, and the two are typically intermingled—which can make the code harder to read and maintain. Each task or feature of an application is referred to as a "concern." Concerns that implement the features of an object within the application, such as the business logic, are core concerns. Crosscutting concerns are the necessary tasks, features, or processes that are common across different objects—for example, logging, authorization, validation, and instrumentation. The purpose of the Policy Injection Application Block is to separate the core concerns and crosscutting concerns.
Simply put, the PI block lets developers define a set of policies that specify the behavior of objects in the system. So your core business logic, such as the code that calculates profit per unit in a fiscal year (one concern), is separated from the logging of that execution of logic (another, but more often used, concern).
The same documentation says that the PI block is not AOP because:
It uses interception to enable only pre-processing handlers and post-processing handlers.
It does not insert code into methods.
It does not provide interception for class constructors.
So trying to look at PI from an AOP perspective can muddy the waters a bit.
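To make "pre-processing and post-processing handlers" concrete, here is a hand-rolled sketch of the interception idea. This is not the EntLib API; the service and its names are invented:

using System;

public interface IOrderService
{
    decimal ProfitPerUnit(int fiscalYear);
}

// Core concern: the business logic itself, free of logging clutter.
public class OrderService : IOrderService
{
    public decimal ProfitPerUnit(int fiscalYear) { return 42m; }
}

// Crosscutting concern applied as a wrapper, the way a policy's
// handler pipeline surrounds the real call.
public class LoggingPolicy : IOrderService
{
    private readonly IOrderService _inner;
    public LoggingPolicy(IOrderService inner) { _inner = inner; }

    public decimal ProfitPerUnit(int fiscalYear)
    {
        Console.WriteLine("pre: ProfitPerUnit(" + fiscalYear + ")");  // pre-processing handler
        decimal result = _inner.ProfitPerUnit(fiscalYear);
        Console.WriteLine("post: returned " + result);                // post-processing handler
        return result;
    }
}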
What the EntLib calls Policy Injection is really Aspect-Oriented Programming. I wrote a post introducing the concepts of AOP on my blog a while back; maybe it'll be helpful.