Undoable sets of changes - Perl

I'm looking for a way to manage consistent sets of changes across several data sources, including, but not limited to, a database, some network control tools, and probably other SOAP-based services.
If one change fails for some reason (e.g. real-world app says "no", or a database insert fails), I want the whole set to be undone. So that's like transactions, just not limited to a DB.
I came up with a module that stacks up "change" objects, each of which has its own init, commit, and rollback methods. When the set is DESTROYed, it rolls uncommitted changes back. This kinda works.
Still, I can't shake the feeling that I'm reinventing the wheel. Is there a standard CPAN module, or a well-described common method, for performing such a task? (At least GoF's "Command" pattern and the RAII principle come to mind...)

There are a couple of approaches to executing a Distributed transaction (which is what you're describing):
The standard pattern is called "Two-phase commit protocol".
At the moment I'm not aware of any Perl module which implements Two-phase commit, which is kind of surprising and may well be due to a lapse in my Googling. The only thing I found was Env::Transaction, but I have no clue how stable/good/functional it is.
For certain cases, a solution involving rollback via "Compensating transactions" is possible.
This is basically a special case of general rollback where, when generating a task list A designed to change the target system state from S1 to S2, you at the same time generate a "compensating" task list A-neg designed to change the target system state from S2 back to S1. This is obviously only possible for certain systems, and moreover only a small subset of those are commutative (meaning that you can execute a transaction and its compensating transaction non-contiguously, e.g. the result of A + B + A-neg + B-neg is an invariant state).
Please notice that the compensating transaction does NOT always have to be engineered as a "transaction" - one clever approach (again, only possible in certain subject domains) involves storing your data with a special "finalized" flag; you then periodically harvest and destroy data whose "finalized" flag is false if the data's "originating transaction timestamp" is older than some threshold.
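For what it's worth, the "stack of change objects" you describe is essentially the Command pattern with compensation, and there is nothing wrong with rolling your own. A minimal sketch of the idea, shown here in Java purely for illustration (all names are made up, and each rollback is only best-effort):

import java.util.ArrayDeque;
import java.util.Deque;

// One unit of work against some external system, paired with its compensating action.
interface Change {
    void commit() throws Exception;   // e.g. a DB insert, SOAP call, or network change
    void rollback();                  // best-effort inverse of commit()
}

// Applies changes in order; if any step fails, undoes the already-committed ones in LIFO order.
class ChangeSet {
    private final Deque<Change> committed = new ArrayDeque<>();

    void apply(Iterable<Change> changes) throws Exception {
        try {
            for (Change c : changes) {
                c.commit();
                committed.push(c);
            }
        } catch (Exception e) {
            while (!committed.isEmpty()) {
                committed.pop().rollback();
            }
            throw e;
        }
    }
}

True two-phase commit additionally splits commit() into a prepare/vote step and a final commit step coordinated across all participants, which is considerably more work to get right.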

Related

Composite unique constraint on business fields with Axon

We leverage the AxonIQ Framework in our system. We've faced a problem implementing a composite unique constraint based on aggregate business fields.
Consider the following Aggregate:
@Aggregate
public class PersonnelCardAggregate {
    @AggregateIdentifier
    private UUID personnelCardId;
    private String personnelNumber;
    private Boolean archived;
}
We want to avoid personnelNumber duplicates in the scope of NOT-archived (archived == false) records. At the same time personnelNumber duplicates may exist in the scope of archived records.
A Query Side check seems NOT to be an option. Given the eventually consistent nature of our system, more than one creation request with the same personnelNumber may exist at the same time, and the Query Side may be behind.
What would the solution be?
What you're asking about is an issue that can occur as soon as you start implementing an application along the CQRS paradigm and DDD modeling techniques.
The PersonnelCardAggregate in your scenario maintains the consistency boundary of a single "Personnel Card". You are, however, looking to expand this scope to achieve a uniqueness constraint among all Personnel Cards in your system.
I feel that this blog explains the problem of "Set Based Consistency Validation" you are encountering quite nicely.
I will not reiterate the entire blog post, but he sums it up as having four options for resolving the problem:
1. Introduce locking, transactions and database constraints for your Personnel Card
2. Use a hybrid locking field prior to issuing your command
3. Rely on the eventually consistent Query Model
4. Re-examine the domain model
To be fair, option 1 won't do if you're using the Event-Driven approach to updating your Command and Query Model.
Option 3 has already been ruled out by you in the original question.
Option 4 is something I cannot deduce for you, given that I am not a domain expert, but I am guessing that the PersonnelCardAggregate does not belong to a larger encapsulating Aggregate Root. Maybe the business constraint you've stated, i.e. the option to reuse personnelNumbers, could be dropped or adjusted? Like I said though, I cannot state this as a factual answer for you, as I am not the domain expert.
That leaves option 2, which in my eyes would be the most pragmatic approach too.
I feel this would require a cache on your command-dispatching side to deal with quick successions of commands and thus resolve the eventual consistency issue. To catch the case where a duplicate still comes through accidentally, I'd introduce some form of Event Handler that (1) knows the entire set of "PersonnelCards" from a personnelNumber/archived point of view and (2) can react to a faulty introduction by dispatching a compensating action.
You'd thus introduce some business logic on the event-handling side of your application, which I'd strongly recommend segregating from the part of the application that updates your query models (as the use cases are entirely different).
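As a very rough illustration of option 2 (this is not Axon API; every type and name below is hypothetical): a small, strongly consistent "claims" table of in-use personnel numbers is consulted, and updated atomically, before the creation command is dispatched.

// Hypothetical claim store guarding the set-based invariant; in practice this would be
// a small table with a unique index on personnelNumber for non-archived cards.
interface PersonnelNumberClaims {
    boolean tryClaim(String personnelNumber);   // true if the number was free and is now reserved
    void release(String personnelNumber);       // called when a card is archived
}

interface CommandDispatcher {                    // hypothetical wrapper around your command bus
    void dispatch(Object command);
}

record CreatePersonnelCardCommand(String personnelNumber) {}

class CreatePersonnelCardService {
    private final PersonnelNumberClaims claims;
    private final CommandDispatcher dispatcher;

    CreatePersonnelCardService(PersonnelNumberClaims claims, CommandDispatcher dispatcher) {
        this.claims = claims;
        this.dispatcher = dispatcher;
    }

    void create(String personnelNumber) {
        // The claim is the lock: two concurrent requests with the same number cannot both succeed.
        if (!claims.tryClaim(personnelNumber)) {
            throw new IllegalStateException("personnelNumber already in use: " + personnelNumber);
        }
        dispatcher.dispatch(new CreatePersonnelCardCommand(personnelNumber));
    }
}

The compensating event handler mentioned above would then be the safety net for anything that still slips past this check.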
Concluding though, this is a difficult topic with several ways around it.
It's not so much an Axon-specific problem, by the way, but more a consequence of modeling your application through DDD and CQRS.

How to handle application death and other mid-operation faults with Mongo DB

Since Mongo doesn't have transactions that can be used to ensure that nothing is committed to the database unless it's consistent (non-corrupt) data, if my application dies between making a write to one document and making a related write to another document, what techniques can I use to remove the corrupt data and/or recover in some way?
The greater idea behind NoSQL was to use a carefully modeled data structure for a specific problem, instead of hitting every problem with a hammer. That is also true for transactions, which should be referred to as 'short-lived transactions', because the typical RDBMS transaction hardly helps with 'real', long-lived transactions.
The kind of transaction supported by RDBMSs is often required only because the limited data model forces you to store the data across several tables, instead of using embedded arrays (think of the typical invoice / invoice items examples).
In MongoDB, try to use write-heavy, de-normalized data structures and keep data in a single document, which improves read speed and data locality and ensures consistency. Such a data model is also easier to scale, because a single read only hits a single server instead of having to collect data from multiple sources.
However, there are cases where the data must be read in a variety of contexts and de-normalization becomes unfeasible. In that case, you might want to take a look at Two-Phase Commits or choose a completely different concurrency approach, such as MVCC (in a sentence, that's what the likes of svn, git, etc. do). The latter, however, is hardly a drop-in replacement for RDBMSs, but exposes a completely different kind of concurrency to a higher level of the application, if not the user.
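To make the embedded-array point concrete, here is a small sketch using the MongoDB Java driver (database, collection, and field names are just examples): the invoice and all of its line items live in a single document, so the write is atomic and a single read returns everything.

import java.util.Arrays;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class InvoiceExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> invoices =
                    client.getDatabase("shop").getCollection("invoices");   // example names

            // One document holds the invoice header and all its items; insertOne is atomic.
            Document invoice = new Document("invoiceNo", "2014-0042")
                    .append("customer", "ACME Corp.")
                    .append("items", Arrays.asList(
                            new Document("sku", "A-1").append("qty", 3).append("price", 9.99),
                            new Document("sku", "B-7").append("qty", 1).append("price", 24.50)));
            invoices.insertOne(invoice);
        }
    }
}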
Thinking about this myself, I want to identify some categories of effects:
1. Your operation has only one database save (saving data into one document)
2. Your operation has two database saves (updates, inserts, or deletions), A and B:
2.1. They are independent
2.2. B is required for A to be valid
2.3. They are interdependent (A is required for B to be valid, and B is required for A to be valid)
3. Your operation has more than two database saves
I think this is a full list of the general possibilities. In case 1, you have no problem - one database save is atomic. In case 2.1, it's the same thing: if they're independent, they might as well be two separate operations.
For case 2.2, if you do A first then B, at worst you will have some extra data (B data) that will take up space in your system, but otherwise be harmless. In case 2.3, you'll likely have some corrupt data in the event of a catastrophic failure. And case 3 is just a composition of case 2s.
Some examples for the different cases:
1.0. You change a car document's color to 'blue'
2.1. You change the car document's color to 'red' and the driver's hair color to 'red'
2.2. You create a new engine document and add its ID to the car document
2.3.a. You change your car's 'gasType' to 'diesel', which requires changing your engine to a 'diesel' type engine.
2.3.b. Another example of 2.3: You hitch car document A to another car document B, A getting the "towedBy" property set to B's ID, and B getting the "towing" property set to A's ID
3.0. I'll leave examples of this to your imagination
In many cases, it's possible to turn a 2.3 scenario into a 2.2 scenario. In the 2.3.a example, the car document and the engine are separate documents. Let's ignore the possibility of putting the engine inside the car document for this example. It's invalid both to have a diesel engine with non-diesel gas and to have a non-diesel engine with diesel gas, so they both have to change. But it may be valid to have no engine at all and still have diesel gas. So you could add a step that keeps the whole thing valid at every point: first remove the engine, then replace the gas, then change the type of the engine, and lastly add the engine back onto the car.
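A rough sketch of that ordering with the MongoDB Java driver (collection and field names are hypothetical): every intermediate state is valid on its own, so a crash between steps never leaves a contradictory car/engine pair.

import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;
import org.bson.types.ObjectId;

public class DieselSwap {
    static void swapToDiesel(MongoDatabase db, ObjectId carId, ObjectId engineId) {
        MongoCollection<Document> cars = db.getCollection("cars");        // hypothetical collections
        MongoCollection<Document> engines = db.getCollection("engines");

        cars.updateOne(Filters.eq("_id", carId), Updates.unset("engineId"));           // 1. remove the engine
        cars.updateOne(Filters.eq("_id", carId), Updates.set("gasType", "diesel"));    // 2. replace the gas
        engines.updateOne(Filters.eq("_id", engineId), Updates.set("type", "diesel")); // 3. change the engine type
        cars.updateOne(Filters.eq("_id", carId), Updates.set("engineId", engineId));   // 4. add the engine back
    }
}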
If you can get corrupt data from a 2.3 scenario, you'll want a way to detect the corruption. In example 2.3.b, things might break if one document has the "towing" property but the other document doesn't have a corresponding "towedBy" property. So this is something to check after a catastrophic failure: find all documents that have "towing" but where the document with the ID in that property doesn't have its "towedBy" set to the right ID. The choices there would be to delete the "towing" property or to set the appropriate "towedBy" property. They both seem equally valid, but it might depend on your application.
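A rough sketch of that consistency scan with the MongoDB Java driver, assuming a single cars collection where "towing" and "towedBy" each hold the other car's _id (names are illustrative):

import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;
import org.bson.types.ObjectId;

public class TowingAudit {
    // Reports cars whose "towing" reference is not mirrored by the other car's "towedBy".
    static void findDanglingTows(MongoCollection<Document> cars) {
        for (Document tower : cars.find(Filters.exists("towing"))) {
            ObjectId towedId = tower.getObjectId("towing");
            Document towed = cars.find(Filters.eq("_id", towedId)).first();
            boolean mirrored = towed != null
                    && tower.getObjectId("_id").equals(towed.getObjectId("towedBy"));
            if (!mirrored) {
                // Repair policy is application-specific: either unset "towing" here,
                // or set the missing "towedBy" on the referenced document.
                System.out.println("Dangling tow reference on car " + tower.getObjectId("_id"));
            }
        }
    }
}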
In some situations, you might be able to find corrupt data like this, but you won't know what the data was before those things were set. In those cases, setting a default is probably better than nothing. Some types of corruption are better than others (particularly the kind that will cause errors in your application rather than simply incorrect display data).
If the above kind of code analysis or corruption repair becomes unfeasible, or if you want to avoid any data corruption at all, your last resort would be to take mnemosyn's suggestion and implement Two-Phase Commits, MVCC, or something similar that allows you to identify and roll back changes in an indeterminate state.

Expert/Rule Engine that updates facts atomically?

"Atomically" might not be the right word. When modelling cellular automata or neural networks, you usually have two copies of the system state: the current state, and the next-step state that you are updating. This ensures that the state of the system as a whole remains unchanged while running all of the rules that determine the next step. For example, if you run the rules for one cell/neuron to determine its state for the next step, and then run the rules for the next cell (its neighbor), you want the input for those rules to be the current state of the neighbor cell, not its updated state.
This may seem inefficient because each step requires you to copy all of the current-step states to the next-step states before updating them; however, it is important to do this to accurately simulate the system as if all cells/neurons were actually processed simultaneously, so that all inputs for rules/firing functions are the current states.
Something that has bothered me when designing rules for expert systems is how one rule can run and update some facts that should trigger other rules to run; you might have 100 rules queued up in response, and salience is used as a fragile way to ensure the really important ones run first. As these rules run, the system changes further. The state of the facts is constantly changing, so by the time you get to processing the 100th rule, the state of the system has changed significantly since the time that rule was added to the queue in response to the first fact change. It might have changed so drastically that the rule no longer has a chance to react to the original state of the system when it really should have. Usually as a workaround you carefully adjust its salience, but then that moves other rules down the list and you run into a chicken-or-egg problem. Other workarounds involve adding "processing flag" facts that serve as a locking mechanism to suppress certain rules until other rules have processed. These all feel like hacks and cause rules to include criteria beyond just the core domain model.
If you were building a really sophisticated system that modeled a problem accurately, you would really want the changes to the facts to be staged in a separate "updates" queue that doesn't affect the current facts until the rules queue is empty. So let's say you make a fact change that fills the queue of rules to run with 100 rules. All of these rules would run, but none of them would update facts in the current fact list; any change they make gets queued to a change list, and that ensures no other rules get activated while the current batch is processing. Once all rules are processed, the fact changes get applied to the current fact list, all at once, and that triggers more rules to be activated. Rinse, repeat. So it becomes much like how neural networks or cellular automata are processed: run all rules against an unchanging current state, queue changes, and after running all rules apply the changes to the current state.
Is this mode of operation a concept that exist in the academic world of expert systems? I'm wondering if there is a term for it.
Does Drools have the capability to run in a way that allows all rules to run without affecting the current facts, and queue fact changes separately until all rules have run? If so, how? I don't expect you to write the code for me, but just some keywords of what it's called or keywords in the API, some starting point to help me search.
Do any other expert/rule engines have this capability?
Note that in such a case, the order rules run in no longer matters, because all of the rules queued to run will all be seeing only the current state. Thus as the queue of rules is run and cleared, none of the rules see any of the changes the other rules are making, because they are all being run against the current set of facts. Thus the order becomes irrelevant and the complexities of managing rule execution order go away. All fact changes are pending and not applied to the current state until all rules have been cleared from the queue. Then all of those changes are applied at once, and thus cause relevant rules to queue again. So my goal is not to have more control over the order that rules run in, but to avoid the issue of rule execution order entirely by using an engine that simulates simultaneous rule execution.
If I understand what you describe:
You have one fact that is managed by many rules
Each rule should apply to the initial value of your fact and has no right to modify the fact value (so as not to affect other rules' executions)
You then batch all the updates made by the rules on your fact
Other rules apply to this new fact value in a similar manner, 'simultaneously'
It seems to me that this is the Unit of Work design pattern, just as Hibernate (and many ORMs, in fact) implements it: http://www.codeproject.com/Articles/581487/Unit-of-Work-Design-Pattern
Basically you store all the changes in memory (in a 'technical' fact, for instance) and then, once all the rules based on the initial value have been fired, execute a 'transaction' that updates the fact value, and so on. Hibernate does that with its session (you modify your attached object, then when required it executes the update query on the database; not every modification of the Java object produces a query on your database).
Still, you will have trouble if updates conflict (the same fact field modified with different values - which value do you choose? It's the same as a conflict in source version control). You will have to define a deterministic way to order updates, but it only has to be defined once, it is available to all rules, and for other changes it will work seamlessly.
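A minimal, framework-agnostic sketch of that Unit of Work idea in Java (all names are illustrative): rule actions record their changes against a staging buffer instead of mutating the fact directly, and the buffer is flushed only once the current batch of rules has finished.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Staging buffer: rules queue changes here instead of touching the fact directly.
class FactUnitOfWork<F> {
    private final List<Consumer<F>> pending = new ArrayList<>();

    void stage(Consumer<F> change) {          // called from rule actions
        pending.add(change);
    }

    boolean hasPending() {
        return !pending.isEmpty();
    }

    // Applied only after every rule has seen the same, unchanged state.
    void flush(F fact) {
        for (Consumer<F> change : pending) {
            change.accept(fact);
        }
        pending.clear();
    }
}

The engine loop then becomes: evaluate all rules against the current fact, letting them call stage(); when the agenda is empty, call flush() and re-evaluate; repeat until nothing is staged. Conflict resolution between staged changes still has to be defined, as noted above.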
This workaround may or may not work, given your rather vague description. If you really are concerned about rules triggering further activations, why not queue the intermediate state yourself, and once the current evaluation is complete, insert those new facts into the working memory?
You would have to invoke fireAllRules() after inserting each fact, though, which could be quite expensive. Then, in the rules, rather than inserting the facts directly, push them into a queue. Once the above call returns, walk through the queue doing the same (or do so after inserting the original facts completely...).
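A rough sketch of that loop against the Drools (KIE) API, assuming the rules add follow-up facts to a pendingFacts global (declared in the DRL as global java.util.Queue pendingFacts) instead of calling insert() themselves; the session name is illustrative:

import java.util.ArrayDeque;
import java.util.Queue;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;

public class BatchedEvaluation {
    static void evaluateInBatches(Iterable<Object> initialFacts) {
        KieSession session = KieServices.Factory.get()
                .getKieClasspathContainer()
                .newKieSession("rulesSession");               // illustrative session name
        try {
            Queue<Object> pendingFacts = new ArrayDeque<>();
            session.setGlobal("pendingFacts", pendingFacts);  // rules enqueue here instead of insert()

            for (Object fact : initialFacts) {
                session.insert(fact);
            }
            session.fireAllRules();                           // first pass sees only the original state

            // Drain the staged facts; each re-fire again sees a stable working memory.
            while (!pendingFacts.isEmpty()) {
                session.insert(pendingFacts.poll());
                session.fireAllRules();
            }
        } finally {
            session.dispose();
        }
    }
}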
I would imagine that this will be quite slow; to speed things up, you could have multiple parallel working memories with the same rules and evaluate multiple facts in one go into several queues, etc. But things get pretty hairy...
Anyway, just an idea that's too long for the comments...

J Oliver EventStore V2.0 questions

I am embarking upon an implementation of a project using CQRS and intend to use the J Oliver EventStore V2.0 as my persistence engine for events.
1) In the documentation, ExampleUsage.cs uses 3 serializers in "BuildSerializer". I presume this is just to show the flexibility of the deserialization process?
2) In the "Restart after failure" case, where some events were not dispatched, I believe I need startup code that invokes GetUndispatchedCommits() and then dispatches them - correct?
3) Again, in "ExampleUsage.cs" it would be useful if "TakeSnapshot" added the third event to the event store and then "LoadFromSnapShotForward" not only retrieved the most recent snapshot but also retrieved the events recorded after the snapshot, to simulate the rebuild of an aggregate.
4) I'm failing to see the use of retaining older snapshots. Can you give a use case where they would be useful?
5) If I have a service that handles receipt of commands and generation of events, what is a suggested strategy for keeping track of the number of events since the last snapshot for a given aggregate? I certainly don't want to invoke "GetStreamsToSnapshot" too often.
6) In the SqlPersistence.SqlDialects namespace the sql statement name is "GetStreamsRequiringSnaphots" rather than "GetStreamsRequiringSnapShots"
1) There are a few "base" serializers, such as the Binary, JSON, and BSON serializers. The other two in the example, the GZip/compression and encryption serializers, are wrapping serializers and are only meant to modify what's already been serialized into a byte stream. For the example, I'm just showing flexibility. You don't have to encrypt if you don't want to. In fact, I've got stuff running in production that uses simple JSON, which makes debugging very easy because everything is text.
2) The SynchronousDispatcher and AsynchronousDispatcher implementations are both configured to query and find any undispatched commits. You shouldn't have to do anything special.
3) Greg Young talked about how he used to "inline" his snapshots with the main event stream, but there were a number of optimistic concurrency and race conditions in high-performance systems that came up. He therefore decided to move them "out of band". I have followed this decision for many of the same reasons.
In addition, snapshots are really a performance consideration when you have extremely low SLAs. If you have a stream with a few thousand events on it and you don't have low SLAs, why not just take the minimal performance hit instead of adding additional complexity to your system? In other words, snapshots are an "ancillary" concept. They're in the EventStore API, but they're an optional concept that should be considered only for certain use cases.
4) Let's suppose you had an aggregate with tens of millions of events and you wanted to run a "what if" scenario from before your most recent snapshot. It's a lot cheaper to go from another snapshot forward. The really nice thing about snapshots being a secondary concept is that if you wanted to drop older snapshots you could and it wouldn't affect your system at all.
5) There is a method on each implementation of IPersistStreams called GetStreamsRequiringSnapshots. You provide a threshold, 50 for example, and it finds all streams having 50 or more events since their last snapshot. This can (and probably should) be done asynchronously from your normal processing.
6) "Snapshots" is the correct casing for that word. Much like "website" used to be "Web site" but because of common usage it became "website".

Recreate a graph that changes over time

I have an entity in my domain that represents a city's electrical network. Currently my model is an entity with a List that contains breakers, transformers, and lines.
The network changes every time a breaker is opened/closed, users can change connections, etc...
In all examples of CQRS the EventStore is queried with Version and aggregateId.
Do you think I have to implement events only for the "network" aggregate or also for every "Connectable" item?
In this case, when I have to replay all events to get the "actual" status (based on a date), I can have nearly 10,000-20,000 events to process.
Should an Event modify one property, or do I need an Event that modifies an object (containing all properties of the object)?
There's always an exception to the rule, but I think you need to have an event for every command handled in your domain. You can get around the problem of processing so many events by making use of Snapshots.
http://thinkbeforecoding.com/post/2010/02/25/Event-Sourcing-and-CQRS-Snapshots
I assume you mean that currently your "connectable items" are part of the "network" aggregate and you are asking whether they should be their own aggregates? That really depends on the nature of your system and problem, and is more of a DDD issue than simply a CQRS one. However, if the nature of your changes is typically to operate on the items independently of one another, then they should probably be aggregate roots themselves. Regardless, in order to answer that question we would need to know much more about the system you are modeling.
As for the challenge of replaying thousands of events, you certainly do not have to replay all your events for each command. Sure, snapshotting is an option, but even better is caching the aggregate root objects in memory after they are first loaded, to ensure that you do not have to source from events with each command (unless the system crashes, in which case you can rely on snapshots for quicker recovery, though you may not need them with caching since you only pay the loading penalty once).
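A rough, framework-agnostic sketch of that loading strategy (every type below is hypothetical, not part of any particular event store): check the in-memory cache first, otherwise rebuild from the latest snapshot plus only the events recorded after it.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical building blocks, just to keep the sketch self-contained.
interface EventStore {
    Snapshot latestSnapshot(String aggregateId);                 // null if none has been taken yet
    List<Object> eventsSince(String aggregateId, long version);  // events recorded after that version
}
interface Snapshot { long version(); NetworkAggregate state(); }
interface NetworkAggregate { void apply(Object event); }

class AggregateRepository {
    private final EventStore store;
    private final Supplier<NetworkAggregate> emptyAggregate;
    private final Map<String, NetworkAggregate> cache = new ConcurrentHashMap<>();

    AggregateRepository(EventStore store, Supplier<NetworkAggregate> emptyAggregate) {
        this.store = store;
        this.emptyAggregate = emptyAggregate;
    }

    NetworkAggregate load(String aggregateId) {
        // After the first load the aggregate stays in memory, so later commands pay no replay
        // cost at all; after a crash the snapshot keeps the rebuild short.
        return cache.computeIfAbsent(aggregateId, id -> {
            Snapshot snap = store.latestSnapshot(id);
            NetworkAggregate aggregate = (snap != null) ? snap.state() : emptyAggregate.get();
            long from = (snap != null) ? snap.version() : 0L;
            for (Object event : store.eventsSince(id, from)) {
                aggregate.apply(event);
            }
            return aggregate;
        });
    }
}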
Now if you are distributing this system across multiple hosts or threads there are some other issues to consider but I think that discussion is best left for another question or the forums.
Finally, you asked (I think) whether an event can modify more than one property of the state of an object. Yes, if that is what makes sense based on what that event represents. The idea of an event is simply that it represents a state change in the aggregate; however, these events should also represent concepts that make sense to the business.
I hope that helps.