CQRS without Event Sourcing - what are the drawbacks?

Besides missing some of the benefits of Event Sourcing, are there any other drawbacks to adapting an existing architecture to CQRS without the Event Sourcing piece?
I'm working on a large application, and the developers should be able to handle separating the existing architecture into Commands and Queries over the next few months, but asking them to also add Event Sourcing at this stage would be a HUGE problem from a resourcing perspective. Am I committing sacrilege by not including Event Sourcing?

Event Sourcing is optional and in most cases complicates things more than it helps if introduced too early, especially when transitioning from a legacy architecture, and even more so when the team has no experience with CQRS.
Most of the advantages attributed to ES can be obtained by storing your events in a simple Event Log. You don't have to drop your state-based persistence (though in the long run you probably will, because at some point it will become the logical next step).
My recommendation: Simplicity is the key. Do one step at a time, especially when introducing such a dramatic paradigm shift. Start with simple CQRS, then introduce an Event Log when you (and your team) have become used to the new concepts. Then, if at all required, change your persistence to Event Sourcing and fire the DBA ;-)
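To make the Event Log idea concrete, here is a minimal C# sketch, with all names (AppDbContext, EventLogEntry, the schema) hypothetical: the command handler keeps the existing state-based persistence and simply appends each change to a log table in the same transaction.

using System;
using System.Text.Json;

public record RenameCustomer(Guid CustomerId, string NewName);

public class EventLogEntry
{
    public Guid AggregateId { get; set; }
    public string Type { get; set; }
    public string Payload { get; set; }
    public DateTime OccurredAt { get; set; }
}

public class RenameCustomerHandler
{
    private readonly AppDbContext _db; // hypothetical EF DbContext: the existing state-based store

    public RenameCustomerHandler(AppDbContext db) => _db = db;

    public void Handle(RenameCustomer cmd)
    {
        // State-based persistence stays exactly as it was.
        var customer = _db.Customers.Find(cmd.CustomerId);
        customer.Name = cmd.NewName;

        // Additionally, append the change to a plain event log table.
        _db.EventLog.Add(new EventLogEntry
        {
            AggregateId = cmd.CustomerId,
            Type = "CustomerRenamed",
            Payload = JsonSerializer.Serialize(new { cmd.NewName }),
            OccurredAt = DateTime.UtcNow
        });

        _db.SaveChanges(); // one transaction: state and log stay consistent
    }
}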

I completely agree with Dennis: ES is no precondition for CQRS. In fact, CQRS on its own is pretty easy to implement and has the potential to really simplify your design.
You can find a smooth introduction to it here
Secondly, what benefits does CQRS on its own bring to the table?
Simplifies your domain objects by sucking out all the query concerns
Makes code scalable; your queries are separated and can be easily tuned
As you iterate over your product design you can add/remove/change individual commands/queries, instead of dealing with larger structures as a whole (e.g. entities, aggregates, modules)
Commands and queries produce a well-known vocabulary to talk with domain experts; other architectural patterns (e.g. pipes and filters, actors) use terms and concepts that may be harder for non-programmers to grasp
Limits the use of an ORM (if you use one). I feel ORMs bring in unwarranted complexity if you try to use them for querying; the abstractions are leaky and heavy, and trying to tune them is a nightmare :). Using an ORM only on the command side makes things much easier. Plain old SQL is the best for queries, and a simple library to convert result sets into DTOs is probably the most you need (see the sketch below).
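As an illustration of that last point, here is a hedged sketch of a query-side class using plain SQL with Dapper as the "simple library" (schema, connection string and names are all hypothetical):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

public record CustomerSummaryDto(Guid Id, string Name, int OrderCount);

public class CustomerQueries
{
    private readonly string _connectionString;
    public CustomerQueries(string connectionString) => _connectionString = connectionString;

    public IEnumerable<CustomerSummaryDto> TopCustomers(int take)
    {
        using var conn = new SqlConnection(_connectionString);
        // Hand-tuned SQL, no ORM in the read path; Dapper maps rows straight to the DTO.
        return conn.Query<CustomerSummaryDto>(
            @"SELECT TOP (@take) c.Id, c.Name, COUNT(o.Id) AS OrderCount
              FROM Customers c
              LEFT JOIN Orders o ON o.CustomerId = c.Id
              GROUP BY c.Id, c.Name
              ORDER BY OrderCount DESC",
            new { take });
    }
}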
More on how CQRS benefits design can be found here
Also, do not forget about the intangible benefits of CQRS
If you still have your doubts, you may want to read this
We currently use CQRS for projects of medium complexity and have found it to be very suitable. We started out using custom bootstrap code and have now moved on to the Axon Framework to provide some of the infrastructure components.
Feel free to PM me in case you want to know anything more specific.

There is a fundamental problem with the Event Sourcing pattern: the events in the Event Store may no longer be compatible with your application's event handlers after a code change.
That being said, whenever you modify an event handler by adding new features, you need to think about backward compatibility. You have to make sure your code can always handle the same event in the different shapes created by different versions of your code.
When your application becomes more complex, you will find it a real pain in the ass to keep everything backward compatible.
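One common mitigation (not part of the answer above, so treat this sketch as an assumption) is to "upcast" old event versions into the newest shape at the deserialization boundary, so handlers only ever see the latest version:

using System;

public record UserRegisteredV1(Guid Id, string FullName);
public record UserRegisteredV2(Guid Id, string FirstName, string LastName);

public static class Upcaster
{
    // Convert a V1 event into the V2 shape so only one handler version is needed.
    public static UserRegisteredV2 Upcast(UserRegisteredV1 old)
    {
        // Best-effort split; a real migration would need domain knowledge.
        var parts = old.FullName.Split(' ', 2);
        return new UserRegisteredV2(old.Id, parts[0], parts.Length > 1 ? parts[1] : "");
    }
}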

I think Event Sourcing is what makes people afraid of CQRS. And that's for a reason. It's not natural: when you interact with something in the real world, you don't need its whole history.
"event sourcing is a completely orthogonal concept to CQRS" (source) - technically, if you don't use ES you lose nothing of CQRS's features.
I have no idea why Event Sourcing is considered the only foundation for solving "messaging"-related problems such as duplicated or missing messages, message reordering, and data collisions. It's not true that without Event Sourcing you can't build encapsulated means of solving such problems another way.
You can read here how I see alternative ways of implementing CQRS's messaging using another data-organizing principle.
I propose a "signed documents" approach, where you treat your data not as a composition of modification events but as a composition of immutable parts signed by responsible users. I'm sure there can be many other solutions for implementing message flow and data storage, and you need to take your business model into account when selecting the one you want to use.

In my opinion, the best CQRS-pattern-based framework is MediatR by Jimmy Bogard. If you don't need Event Sourcing at the beginning of your application's development, MediatR is the right choice. Here is the repository: https://github.com/jbogard/MediatR
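For flavor, a minimal MediatR command and handler might look like this (the domain types are hypothetical; IRequest and IRequestHandler are MediatR's own interfaces):

using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

public record CreateCustomer(string Name) : IRequest<Guid>;

public class CreateCustomerHandler : IRequestHandler<CreateCustomer, Guid>
{
    public Task<Guid> Handle(CreateCustomer request, CancellationToken cancellationToken)
    {
        var id = Guid.NewGuid();
        // ... persist the customer here (elided) ...
        return Task.FromResult(id);
    }
}

// Dispatched from a controller or service:
// var id = await _mediator.Send(new CreateCustomer("Alice"));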

In my opinion, you're making a big mistake by not using event sourcing with CQRS.
First up, you'll almost certainly have issues synchronising your Query model with the Command model. With an event store, if the query side ever gets out of sync, you simply replay your events to correct it. That's the theory, anyway!
But with Event Sourcing, you also get to store the complete history of all entity transactions. And this means you can decide to create new queries and views after implementation. These are very often views that would not be possible with non-Event Sourced CQRS. I've heard Greg Young give the example of querying items that have been added, and then removed, from a shopping cart. With Event Sourcing this is possible. Without ES it's not possible because you only store the final state of the cart.
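A rough sketch of that shopping-cart example (event types hypothetical): with the full event stream you can answer "which items were added and later removed?", a question the final cart state alone cannot answer.

using System.Collections.Generic;
using System.Linq;

public abstract record CartEvent(string Sku);
public record ItemAdded(string Sku) : CartEvent(Sku);
public record ItemRemoved(string Sku) : CartEvent(Sku);

public static class CartQueries
{
    // Assumes a valid stream where a removal is always preceded by an add;
    // every removed SKU is therefore an "added, then removed" item.
    public static IEnumerable<string> AddedThenRemoved(IEnumerable<CartEvent> stream) =>
        stream.OfType<ItemRemoved>().Select(e => e.Sku).Distinct();
}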

Related

Implementing SOA with RESTful service and application APIs?

At the moment we have one huge API which is used by our backoffice, our frontend, and also our public API.
This causes me a lot of headaches because when building new endpoints I find a lot of application specific logic in the code which I don't necessarily want to include in my endpoint. For example, the code to create a user might contain code to send a welcome email, but because that's not needed for the backoffice endpoint I will then need to add a new endpoint without that logic.
I was thinking about a large refactor to break our code base in to a number of smaller highly specific service APIs, then building a set of small application APIs on top of those.
So for example, an application endpoint to create a new user might do something like this after the refactor:
// the application API orchestrates; each call goes to a separate service API
customerService.createCustomer();
paymentService.chargeCard();
emailService.sendWelcomeEmail();
The application and service APIs will be entirely separate code bases (perhaps a separate code base per service); they may also be built using different languages. They will only interact through REST API calls. They will be on the same local network, so latency shouldn't be a huge issue.
Is this a bad idea? I've never seen/worked on a codebase which has separated the two before, so perhaps there is a better architecture to achieve the flexibility and maintainability I'm looking for?
Advice, links, or comments would all be appreciated.
Your idea of making multiple, well-defined services is sound, and it really is the best way to approach this. Going with a pure microservices approach, however trendy it might seem, proves to be overkill more often than not. This is why I'd just redesign the existing API/services properly and follow the solid SOA design principles below. Good resources can be found on both serviceorientation.com and soapatterns.org; I've always used them as references in my career.
Consider what types of services you need
(image from serviceorientation.com)
Entity services are generally your Client and Payment services - services centered around an entity in your domain. They should be business-agnostic and reusable in all scenarios. They can sometimes be called by clients directly, if that is sufficient for their needs, and they can be called by Task services.
Utility services contain logic you're likely to reuse in other services, but they are generally not called by clients directly; rather, they are called by Task and Entity services. An example might be a Transliteration service.
Task services combine and reuse Entity and Utility services into meaningful tasks. Most often they are not that agnostic, and they do implement specific business logic. They have meaningful business operations, and they are what clients mostly call.
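As a hedged sketch of how such a Task service might compose Entity and Utility services over REST (all endpoints and types hypothetical):

using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record CustomerDto(string Name, string Email);
public record PaymentDto(string CardToken, decimal Amount);
public record SignupRequest(CustomerDto Customer, PaymentDto Payment);

public class SignupTaskService
{
    private readonly HttpClient _http = new();

    public async Task SignUpAsync(SignupRequest request)
    {
        // Entity service: create the customer record.
        (await _http.PostAsJsonAsync("http://customer-service/customers", request.Customer))
            .EnsureSuccessStatusCode();

        // Entity service: charge the card.
        (await _http.PostAsJsonAsync("http://payment-service/charges", request.Payment))
            .EnsureSuccessStatusCode();

        // Utility service: send the welcome email.
        (await _http.PostAsJsonAsync("http://email-service/welcome", request.Customer))
            .EnsureSuccessStatusCode();
    }
}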
Principles to follow when redesigning
I strongly recommend going over this cheat sheet and making sure everything there is covered when you do your redesign. It's a great help.
In general, you should make sure that:
Each service has a common context and follows the separation-of-concerns principle, e.g. the Clients service is only for client-related operations, etc.
Each of the Entity and Utility services is business-agnostic and basic enough that it can be reused in multiple scenarios and contexts without being changed. The contract must be simple: CRUD and only the common operations that make sense in most usage scenarios.
Services follow a common data model - make sure all the data structures you use are used uniformly across all services, to prevent the need for integration efforts in the future and to promote combining services for clients to exploit. If you need to receive a customer that another service returns, this should happen without the need for transformation.
OK, but where to put the non-agnostic logic?
Now, you have multiple options for abstracting business logic whenever you need complex business functionality. Which one you choose depends on your scenario:
Leave logic to all clients. Let them combine your simplified services
If there is business logic that is commonly implemented in several of your applications and has the potential to be heavily reused, you can implement a composite service that reuses multiple existing underlying services and exposes that logic.
Service composability: concerns about the communication overhead of multiple API calls
Well, this is an age-old question - should you make multiple API calls when they will probably create some communication overhead? The answer is: it depends on how complex your scenario is, how much reuse you expect, and how flexible you want to be. Is speed critical, and to what extent? In Service-Oriented Architecture, though, this is a very common approach - reuse your existing services and combine them into new configurations as needed. Yes, it does add some overhead, but I've seen implementations in very complex environments, for example telecoms, where thanks to the use of ESB solutions, message queues, etc. the overhead is negligible compared to the benefits. Here is a common architecture approach (image from serviceorientation.com):
The mandatory legacy refactoring heads-up
More often than not, changing the established contract for multiple existing client systems is a messy business and could very well lead to lots of refactoring and hunting for needle-in-a-haystack functionality that's somewhere deep in the (possibly) legacy code. Business logic might be dispersed everywhere. So make sure you're ready and have the controls, time and will to lead this battle.
Hope this helps
Is this a bad idea?
No, but this is too big an overall question to give very specific advice on.
I'd like to separate this into 3 areas:
Approach
Design
Technology
Working backwards, the Technology is the final and most specific part; it totally depends on what your current environment is (platforms, skills), and (hopefully) it will be reasonably self-evident to you once the other things are in progress.
The Design that you outlined above seems like a good end-state - having multiple, specific, focused APIs, each with their own responsibility. Again, the details of the design will depend on the skills of you and your organization, and the existing platforms that you have. E.g. if you are already using TIBCO (for example) and have a lot invested (licenses, platforms, tools, people) then leveraging some of their published patterns/designs/templates makes sense; but (probably) not if you don't already have TIBCO exposure.
In the abstract, REST API services seem like a good starting point - there are a lot of tools and platforms at all levels of the system for security, deployment, monitoring, scalability, etc. If you are NGINX users, they have a lot of (platform-independent) thoughts on how to do this on the NGINX blog, including some smart thinking on scalability and performance. If you are more adventurous and have a smart, eager team, take a look at event-driven architecture - see this
Approach (or Process) is the key thing here. Ultimately, this is a refactoring, though your description of "a large refactor" does scare me a little - put that way, it sounds like you are talking about a big-bang change and calling it refactoring. Perhaps it is just language, but what's in my mind would be "an evolution of the 'one huge API' into multiple, specific, focused APIs (by refactoring the architecture)". One place to start is Martin Fowler; while his book is about refactoring software, the principles and approach are the same, just at a higher level. Indeed, he talks about just this here
IBM talk about refactoring to microservices and make it sound easy to do in one step, but it never is (outside the lab).
You have an existing API, serving multiple internal and external clients. I will suggest that you'll want to keep this interface solid for these clients - separate your refactoring of the implementation from the additional concerns of liaising with and coordinating external systems/groups. My high-level starting approach would be:
identify a small (3-7) number of related methods on the API; ideally a significant, limited-scope change is needed with these methods anyway - business value comes with the code change
design/specify a new stand-alone API specifically for these methods; at first, clone the existing model/naming/style
code a new service just for these, with proper automated CI/CD testing and deployment practices, and with associated monitoring
modify the existing API so that calls to these methods redirect to the new service, perhaps with a run-time switch to flip between the old and new implementations (see the sketch after this list)
remove the old implementation from the codebase
capture issues, assumptions and problems along the way; the first pass will involve a lot of learning about what works and what doesn't
then repeat the process over & over, incorporating improvements each time
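Here is a minimal sketch of that redirect step with a run-time switch (all names hypothetical; the switch would normally come from configuration):

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record User(Guid Id, string Name);

public class LegacyUserLogic
{
    public User GetUser(Guid id) => new(id, "from the old in-process code");
}

public class UserEndpoint
{
    private readonly bool _useNewService;   // run-time switch, e.g. read from config
    private readonly LegacyUserLogic _legacy = new();
    private readonly HttpClient _newService = new() { BaseAddress = new Uri("http://user-service") };

    public UserEndpoint(bool useNewService) => _useNewService = useNewService;

    public async Task<User> GetUser(Guid id)
    {
        // The published contract stays the same; only the implementation moves.
        if (_useNewService)
            return await _newService.GetFromJsonAsync<User>($"/users/{id}");
        return _legacy.GetUser(id); // fall back to the old implementation
    }
}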
At some point in the future, when appropriate due to other business-driven needs, the API published to the back-end, front-end and/or public clients can change, but that is a whole different project.
As you can see, if the API is huge (1,000 methods => 140 releases) this is a many-months process, and having a reasonably frequent release schedule is important. And there may be no value improving code that works reliably and never changes, so a (potentially) large portion of the existing API may remain, just wrapped by a new API.
Other considerations:
public API? Maybe a new version (significant changes) will be needed sooner than the internal APIs
focus on the methods/services used by it
what parts/services change the most (have the most enhancement requests approved)
these are the bits most likely to change, and could benefit most from a better process/architecture
what are future plans for change and where would the API be impacted
e.g. change to user management, change to payment processors, change to fulfilment systems
e.g. new business plans (new products/services)
consider affected methods in the API
Also see:
Using Microservices for Legacy System Modernization
Migrating From a Monolith to APIs and Microservices
Break the Monolith! Loosely Coupled Architecture Brings DevOps Success
From the CEO’s Desk: Application Modernization – Assess, Strategize, Modernize!
Microservices Architecture As A Large-Scale Refactoring Tool
Probably the biggest 4 pieces of advice that I can give are:
think refactoring: small changes that don't affect function
think agile: small increments that are valuable, testable, achievable
think continuous: have a vision for where you will (eventually) get to, then work the process continuously
script & automate the processes - code, documentation, testing, deployment, monitoring... - improving them every time!
you have an application/API that works - keep it working!
That is always the first priority (you just need to work to carve-out time/budget for maintenance)
Not a bad idea at all.
What you are looking at is a microservices architecture, and with that comes the question of how you break your system into well-defined services.
We use Domain-Driven Design to break our system into microservices, together with the Lagom framework, which allows every service to live in a different code base and gives us an event-driven architecture between microservices.
Now let's look at your problem at a lower level: you said one endpoint contains code for creating a user and sending an email, another just creates the user, and there might be other code as well.
First we need to understand the types of code you are writing:
Domain object logic (e.g. the User object) - which parameters are valid, and so on. This should be independent of the service endpoint and encapsulated in one class, such as a User class; in Domain-Driven Design terms we call it an Aggregate.
Business reactions - like sending an email on user creation. In an event-driven architecture this kind of logic is separated into process managers or sagas, which in most cases can act conditionally (e.g. send one kind of mail for a user created externally and another for a user created internally) by carrying extra data in the event.
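A minimal sketch of such a process manager (all names hypothetical):

using System;
using System.Threading.Tasks;

public record UserCreated(Guid UserId, string Email, bool CreatedExternally);

public interface IEmailService
{
    Task SendWelcomeAsync(string to);
}

public class UserOnboardingProcessManager
{
    private readonly IEmailService _email;
    public UserOnboardingProcessManager(IEmailService email) => _email = email;

    // Subscribed to UserCreated events; the reaction lives outside the User aggregate.
    public async Task On(UserCreated evt)
    {
        // Conditional behavior driven by extra data carried in the event.
        if (evt.CreatedExternally)
            await _email.SendWelcomeAsync(evt.Email);
    }
}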
Also, with the current way you are doing it, how are you handling transactions across services?

Command Query Responsibility Segregation (CQRS) / Event Sourcing (ES): Why use it? How to address consistency issues?

I first heard about CQRS/ES in a ScalaDays San Francisco presentation some months ago. Since then, I have noticed occasional mentions of the terms in technical writing, most recently Fiverr's blog post on the topic.
It's seemed very interesting so far, but I have encountered a number of related issues, and I am wondering if there are any common solutions:
How do you deal with eventual consistency and potentially inconsistent views in the short term?
How do you deal with business logic which touches multiple resources a.k.a. atomic reads/writes a.k.a. transactions?
How do you deal with use cases where the user interface requires an immediate result to the request?
Moreover, what are some advantages of adopting this pattern, and how do they exceed the cost of the extra layer of complexity?
Thanks in advance!
How do you deal with eventual consistency and potentially inconsistent views in the short term?
How do you deal with use cases where the user interface requires an immediate result to the request?
It is a common misconception that using Event Sourcing automatically implies an eventually consistent read model. Many ES practitioners (up to and including Udi Dahan and Greg Young) recommend starting with a synchronous, immediately consistent model and moving toward eventual consistency only when it is needed.
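As a sketch of what "synchronous, immediately consistent" can mean in practice (schema and types hypothetical): update the read model in the same database transaction that records the event, and move to an asynchronous projection only if this becomes a bottleneck.

using System;
using System.Data;

public record AccountCredited(Guid AccountId, decimal Amount);

public class SynchronousProjector
{
    private readonly IDbConnection _connection;
    public SynchronousProjector(IDbConnection connection) => _connection = connection;

    public void Handle(AccountCredited evt)
    {
        using var tx = _connection.BeginTransaction();

        // 1. Append the event to the store (statement elided for brevity).

        // 2. Update the read model in the same transaction, so a query issued
        //    immediately after the command already sees the new balance.
        using (var cmd = _connection.CreateCommand())
        {
            cmd.Transaction = tx;
            cmd.CommandText =
                "UPDATE AccountBalance SET Balance = Balance + @amount WHERE AccountId = @id";
            var amount = cmd.CreateParameter();
            amount.ParameterName = "@amount";
            amount.Value = evt.Amount;
            cmd.Parameters.Add(amount);
            var id = cmd.CreateParameter();
            id.ParameterName = "@id";
            id.Value = evt.AccountId;
            cmd.Parameters.Add(id);
            cmd.ExecuteNonQuery();
        }

        tx.Commit();
    }
}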
How do you deal with business logic which touches multiple resources a.k.a. atomic reads/writes a.k.a. transactions?
This is a complex question that doesn't have a single answer, and may be more appropriate for a discussion forum vs a Q/A forum like SO. I would recommend poking around (and posting) in the DDD/CQRS discussion group.
Additionally, here are some slides I recently found on the topic. They cover a lot more of the "why"/"why not" aspects from a team which used CQRS/ES in practice.
http://ookami86.github.io/event-sourcing-in-practice/

Entity Framework, application layers and separation of concerns

I'm using the Entity Framework 4.1 and ASP.Net MVC 3 for my application. MVC provides the presentation layer, an intermediate library provides the business logic and the Entity Framework sort of acts as the data layer I guess?
I could separate the Entity Framework code into a set of repository classes, or an appropriate variation thereof, whatever constitutes a worthwhile data layer, but I'm having trouble resolving a design problem I have.
If the multi-layered approach exists to help me keep concerns separated, then it stands to reason that my choice of data persistence should also not be a concern of the presentation layer. The problem is that by using the Entity Framework, I'm basically tightly coupling my application to the notion that entity changes are tracked and persisted automatically.
As such, let's say in a hypothetical world I found a reason not to use the Entity Framework and wanted to swap it out. A well-designed solution should allow me to do this at the appropriate layer and not have dependent layers affected, but because all code is being written with the knowledge that the data layer tracks object changes, I would only be able to swap out the Entity Framework for something that works in a similar fashion, for example nHibernate.
How do I get to use the Entity Framework but not need to write my code in a way that assumes that entity changes are being tracked by the data layer?
UPDATE for those still wondering about this issue in their own scenarios:
Ayende Rahien wrote a great article shooting down this whole argument:
http://ayende.com/blog/4567/the-false-myth-of-encapsulating-data-access-in-the-dal
If you want to continue this way you should give up your programming job and go study philosophy. Entity Framework is an abstraction of persistence, and the Law of Leaky Abstractions says that any non-trivial abstraction is to some degree leaky.
Agile methodologies come with a really interesting phenomenon: do not prepare for hypothetical situations. Most of the time it is just gold plating. Each change has its cost. Changing the persistence layer later in the project is costly, but it is also very rare. From the customer's perspective there is no reason to pay part of that cost up front in the majority of projects where this change is never needed. If we discuss the customer's perspective more deeply, we can say that he should not pay for it at all, because choosing a bad API that has to be replaced later is a failure of the developers/architects. Refactor your code regularly, but only to the point needed for adding the new features the customer wants; otherwise you can hardly be competitive in the market. This of course has some exceptions:
The customer wants such an abstraction (or the architecture demands it for some reason and the customer agrees). In that case you must plan for it and define an architecture open to such changes.
It is a hobby or open-source project where you can do what you want, because it is not constrained by resources.
Now to your problem. If you want such a high level of abstraction, you should not expose entities to your controller. Expose DTOs from the business layer (or even from repositories) and add fields like IsNew, IsModified, IsDeleted to those DTOs. Now your UI is completely separated from the persistence, but your architecture is much more complex and there is probably no reason for such complexity - it is over-architected. Another way is simply turning off tracking (add AsNoTracking() to each query) and proxy creation on your entities (context.Configuration.ProxyCreationEnabled) - lazy loading will not work either. That's like throwing away most of the features persistence frameworks offer you.
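For reference, the two switches mentioned above look roughly like this with the EF 4.1 DbContext API (the Post entity is hypothetical):

using System.Data.Entity; // EF 4.1 DbContext API
using System.Linq;

public class Post
{
    public int Id { get; set; }
    public bool Published { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Post> Posts { get; set; }

    public BlogContext()
    {
        // Turn off dynamic proxy creation: no change-tracking proxies, no lazy loading.
        Configuration.ProxyCreationEnabled = false;
    }
}

// Per-query opt-out of change tracking:
// var posts = context.Posts.AsNoTracking().Where(p => p.Published).ToList();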
There are also other points of view. I recommend reading Ayende's recent posts about the repository pattern and his comments on Sharp Architecture.
Short answer? You don't. You could turn off EF's tracking and then not worry about it, but that's about it.
If you're going to write your presentation layer with the expectation that changes are being tracked and persisted automatically, then whatever you replace EF with has to do that too. You can't swap it out for something that doesn't track and persist changes automatically and just expect things to keep working. That'd be like taking a system that relies on a TCP/IP connection for duplex communication, swapping it to an HTTP connection (which by the nature of HTTP isn't really duplex), and expecting things to work the same way. It doesn't.
If you want to be able to swap out your persistence layer for something else and not have to change anything else, then you need to wrap EF (or whatever) in your own custom code to provide the functionality you want. Then you have to provide implementations for anything not provided by whatever you swap to.
This is doable, but it's going to be an awful lot of work for a problem that very rarely actually happens. It's also going to add extra complexity to the project. Ladislav is bang on: it's not worth abstracting this far.
You should implement the repository pattern and plain POCOs if you are concerned about potentially swapping out EF.
There is a great project on Codeplex that goes over Domain Driven design including documentation. Take a look at that.
http://microsoftnlayerapp.codeplex.com/
After reading about the Microsoft n-layer project, please read Ayende's weblog.
Ayende posted a series of posts about the advantages and disadvantages of the Microsoft n-layer project.

Do I need to use DDD, Unit of Work, Repositories or something similar for simple web apps?

I'm working on a simple eCart system using .net4 (c#). I've been doing a lot of reading about Unit of Work Pattern, Repository Pattern, and Persistence Ignorance. I think I have a grasp on the strategy and benefits to building my layers this way, but for my simple app I'm wondering if it's necessary and if anyone can point me towards good architecture for my scope.
Please correct me if I'm wrong - the main benefits of using repositories are to make fewer trips to the DB and to separate application architecture from DB architecture. That is, what's good for DB performance isn't always good for application design, so it's best to design what's best for each and then create an interface between the two.
So here's the question - I want any business transaction that occurs to be saved to the DB as soon as it occurs, so there doesn't seem to be a point in queuing data in repositories and then saving it immediately. Why not just save it directly?
Are there other benefits of DDD that I'm missing or would it be over engineering to build out such a robust architecture for every simple project that comes along? Thanks for any help.
Do you need to use [insert pattern here]? Nope. When it comes right down to it, the best practice is always the one that gets your application done and meets the time, monetary, and technical requirements.
But if you take the "lets just get it done" approach, then be aware of the Technical Debt you might be incurring.
Also, there are a lot of reasons to use some of these patterns (and they don't always have to do with performance), particularly the Unit of Work pattern. It has more to do with the requirements and restrictions that often come with ORMs and such. These issues can be a bit complex, but I suspect that as you begin to implement some of these things you'll start to realize what those issues are and come to understand why these patterns are useful.
Agree with CodingGorilla. Patterns are great unless they conflict with YAGNI.
If every transaction needs to be written immediately (that is, if you have potential contention between the actions of two users), then you will need a queuing mechanism, or you can use the underlying transactional mechanism of whatever data repository you might be using (e.g. SQL) - see the sketch below.
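For instance, a hedged sketch using System.Transactions, where each write hits the database immediately but both commit or roll back together (the writer interfaces and domain types are hypothetical):

using System.Transactions;

public record Order();
public record Payment();

public interface IOrderWriter { void Insert(Order order); }
public interface IPaymentWriter { void Insert(Payment payment); }

public class OrderProcessor
{
    private readonly IOrderWriter _orders;
    private readonly IPaymentWriter _payments;

    public OrderProcessor(IOrderWriter orders, IPaymentWriter payments)
    {
        _orders = orders;
        _payments = payments;
    }

    public void PlaceOrder(Order order, Payment payment)
    {
        // Each write goes to the database immediately, but the ambient
        // transaction makes them commit or roll back as one unit.
        using var scope = new TransactionScope();
        _orders.Insert(order);
        _payments.Insert(payment);
        scope.Complete();
    }
}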

What is CSLA Framework and Its use?

What is the CSLA Framework, and what is it used for?
My Opinions From My Experience w/ a 1.7M LOC code base:
CSLA is intended for a distributed application/database environment. This is why the basic business object is and does everything, for example handling its own data persistence. An object (and everything remotely associated with its state) is intended to be serialized, sent to a different application and/or data server, and work there.
If the above is not a problem you need to solve, CSLA is overkill, big time. Our development team regrets having committed to CSLA.
Juggling all the CSLA balls in a complex windowed UI is tough. We have multi-tabbed screens (which may in turn open sub-screens) that, unless you follow the "left to right, top to bottom" flow of data entry and click save often, end up putting and/or fetching incomplete data to/from the database, or dropping data you just entered altogether. Yes, our original coders are at fault, but so is CSLA... It just seems that there are so many moving parts to enable, control, and coordinate CSLA features. It's like having to deal with all the dials & switches of a fighter jet when all you really need is something more like a Cessna 152.
You will write lots of custom code to enable the CSLA features. For example, CSLA will never be confused with object-relational mapper (ORM) tools like Hibernate and Entity Framework. Our SAVE() methods are non-trivial; so are the trivial ones.
Encouraging the use of code generators compounds the problems. We used CodeSmith to generate classes from data tables, so we ended up with code that has a 1-1 correspondence of table to C# class, and you must write all the code to map the data store to your "real" objects.
Data store and fetch are very inefficient with CSLA. Because of the behemoth, monolithic BusinessObject-does-all-and-knows-all paradigm, objects end up doing a one-object-at-a-time data fetch and instantiation. Collections of composite objects significantly compound the problem. A single "get this object" always results in a cascade of separate data fetches (one or more for each individual object) to instantiate the entire inheritance & composite relationship chains. It's known as the "N+1 query problem." Oh, and fetching data ALWAYS results in a new object being created, even if we're only updating an existing one. No wonder our more complex screens are FUBAR.
It allows you to architect your application with solid object-oriented principles and a good separation of concerns.
Yes and no. Mostly no.
The BusinessObject handles its own data storage. That is the opposite of separation of concerns.
"It allows you..." - well, yeah, so does a blank text editor screen, but it does not force or encourage you the way the ASP.NET MVC framework does, for example. IMHO, CSLA provides absolutely zero benefit for ensuring that the code you develop with it follows "solid OO principles". In fact, coders with weak OO skills (the majority, in my experience) will really stand out when using CSLA! Woe betide the maintenance programmer.
CSLA is the poster child for why the solid object-oriented principle "favor composition over inheritance" exists. CSLA code is untestable. Because an inherited framework BusinessObject is, does, and needs everything, all at once and every time, it's not likely that you will be able to get much test coverage. You can't get at the pieces because everything is tightly coupled. The framework is not amenable to dependency injection. It is an iron curtain of code.
Your code will be difficult to debug. Call stacks get very deep, and as you get near the center of the sun, so to speak, everything turns into reflection - "what *&^# methods just got called???" - and you simply get lost. Period.
EDIT 7 Mar 2016
What more insight can I add after the original post? Two things, perhaps:
First, it feels like CSLA has some promise, if only we knew how to juggle all those moving parts together. But CSLA is so enigmatic that even things we have done right are corrupted over time. IMHO, without very strong team-wide CSLA wherewithal, any implementation is doomed. Without a vibrant "open source" body of technical references, training, and community it's hopeless. In almost a decade, our CSLA code, in my final analysis, is just compounding technical debt.
Second, here is a recent comment I made, below:
Our complexity often does not seem to fit in the CSLA infrastructure, so we write outside of the framework. This, plus cheap labor, results in rampant SRP violations and has me hitting brick walls managing dynamic rule application, for example. Then, the CSLA parent/child infrastructure propagates composite object validation, but we don't always want parent/child relationships, so we write more validation and store code. So today our CSLA implementation is inconsistent & confusing. Refactoring to more-better CSLA will have profound domino effects. So after that initial injection, CSLA is essentially abandoned.
end Edit
CSLA is a business object framework that allows you to easily create business objects on top of a data layer. It allows you to architect your application with solid object-oriented principles and a good separation of concerns.
I would highly recommend you read the CSLA book by Rocky Lhotka called Expert C# 2008 Business Objects. It will not only teach you about the framework but also teach good software architecture principles.
You can grab the book here on Amazon
I suggest reading the What is CSLA? page, and browse through the CSLA .NET FAQ site.
For the latest published information check out the Using CSLA 4 ebook series.
In reply to #radarbob https://stackoverflow.com/a/10922373/261363 - I hope I won't regret this and start a flame war.
Our team has been developing a couple of LOB applications with CSLA. From my experience writing greenfield apps with CSLA and maintaining existing code, here are my replies to your points.
The BO is not supposed to do its own data persistence; you will have a Factory that handles all data persistence, for example using an ORM to map to models that are then saved.
Sorry to hear that; I make sure I study the framework documentation and write at least one toy application before committing to an existing code base. Furthermore, you can even download and browse the CSLA code.
You have BO -> Portal -> Factories, which should not be very complicated; the existing CSLA examples go a long way toward explaining what happens at each level.
CSLA should never be confused with an ORM.
As you should; business objects are rarely mapped to one table and thus require a bit of work when saving. In the case where they do map to one table, you can use something like AutoMapper to map your BO to your POCO in one line.
Look into CSLA Commands; also, nothing stops you from keeping your BOs as small or as big as you want, as long as you keep in mind that they are not the same as the POCOs you will persist.
In a project we worked on, we were able to easily test BOs to ensure that the business logic was correct. Because of the nice separation of concerns, we tested our Factories in isolation to make sure that business objects would be persisted accordingly.
At one point I was able to easily persist part of my BOs in MongoDB, so the application ran on a hybrid MSSQL-and-MongoDB database without changing even one line of code in my business objects; all I had to do was update the factories to use Mongo instead of the current ORM.
Hope this addresses all your points in a fair manner,
Regards
CSLA: Component-based Scalable Logical Architecture
The paragraph from the website that described CSLA in a nutshell for me was this:
CSLA .NET enables you to create an object-oriented business layer that abstracts and encapsulates your business logic and data. The framework ensures your business objects work seamlessly with all .NET interface technologies, including WinRT XAML, WPF, ASP.NET MVC, ASP.NET Web Forms, WCF, asmx services, Windows Phone 7, Silverlight, Windows Workflow and Windows Forms.
Why you might use it:
Business rule management. Once you learn the business rule system, it provides a way to enforce business logic in a tidy package. If you have an object with child objects that need to report validation up to the parent-most level, there is a way to handle that. (See http://www.lhotka.net/weblog/CSLA4BusinessRulesSubsystem.aspx for more information on the rule system.)
You have business objects that need to support N-level undo? A CSLA BusinessBase has a baked-in property management system (like dependency properties) for functionality like N-level undo (it is actually completely implemented). This also ties into business rule management: you can fire validation when a primary property changes, or change a property based on the value of another property.
Data portal management. This one was an interesting concept. If I need to execute data operations locally, CSLA is configured for this out of the box. I can also stand up a WCF service that references my business object libraries and use a few lines of configuration to make a WCF endpoint manage the data operations. The WCF service is part of the CSLA framework. It was neat to see this in action. Other scenarios? Sure! Your business object library doesn't need to change; the DataPortal class determines whether it needs to execute remotely or locally, according to your configuration.
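To give a feel for the property and rule system described above, here is a minimal sketch based on the CSLA 4 API (the Customer class itself is hypothetical):

using System;
using Csla;

[Serializable]
public class Customer : BusinessBase<Customer>
{
    // Managed property: participates in change tracking, N-level undo and rules.
    public static readonly PropertyInfo<string> NameProperty =
        RegisterProperty<string>(c => c.Name);

    public string Name
    {
        get { return GetProperty(NameProperty); }
        set { SetProperty(NameProperty, value); }
    }

    protected override void AddBusinessRules()
    {
        base.AddBusinessRules();
        // Validation fires whenever the property changes.
        BusinessRules.AddRule(new Csla.Rules.CommonRules.Required(NameProperty));
    }
}

// The data portal decides, from configuration, whether this runs locally
// or against a remote endpoint:
// var customer = DataPortal.Fetch<Customer>(customerId);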
CSLA does force you to use a few mechanisms that may not feel natural at first. I think that is somewhat true of any pattern you choose to implement. However, when it comes to the challenges of implementing Service Oriented Architecture, CSLA offers a lot. Yes, you are going to have an architect level developer authoring some of your libraries. However, if you're building an enterprise class application, shouldn't you be doing that already?
CSLA, when architected correctly, is testable. We use the repository pattern to swap the actual DAL out for a mock layer, and we test both by specification (using NUnit/SpecFlow) and in a unit fashion where appropriate.
As far as support goes, there is a community of contributors, including Rocky himself, who ensure that things like CSLA .NET for Xamarin become a reality. There are consultancies that know CSLA and use it regularly, depending on the scope of work (and no, not just the consultancy I work for).
All things considered, CSLA may not be for you. Like others have indicated, read the site and the books (especially Expert C# 2008 Business Objects.) Ask questions, as the CSLA community tends to give quality advice.
CSLA is described in detail here. The new book is a great starting point. As a great complement to the book, I would recommend checking out our CSLA 3.8 templates. Rocky recommends using a code generator, and we have the leading set of templates, which will get you up and running in no time.
Thanks
-Blake Niemyjski (Author of the CodeSmith CSLA Templates)