We are using the Axon Framework to implement the Saga Pattern in Java. Axon uses two tables (ASSOCIATION_VALUE_ENTRY and SAGA_ENTRY) to store all the necessary information after each step of the saga, and at the end of the process (if everything is correct, or, in case of error, once all the compensations have been executed) it deletes the records.
If for any reason the compensations cannot be executed after an error, we are able to resume the execution at the point where it failed, based on the stored information. Up to here, everything is OK.
The issue arose when we wanted to improve the resilience of the process and checked what happened if the service died during the execution of a saga. According to the above, we expected the execution information to be persisted in the tables, but they were empty: the information only appeared when the process couldn't continue due to an error in a compensation (so no final delete action was executed).
Analyzing the source code of Axon's JpaSagaStore implementation, the interactions with the database (insert, update and delete) are persisted with a flush instead of a commit. The global commit is managed in the AbstractUnitOfWork class (as far as we understand). And this is where our doubts arise:
According to the literature, a flush writes to the database, but the record remains uncommitted. The only way to see it from another connection would be to activate the READ_UNCOMMITTED isolation level, with the problem of 'dirty reads', right? Would there be any additional considerations/issues to take into account?
Does Axon have an alternative to ensure the persistence of the saga records? Mainly in case we cannot activate READ_UNCOMMITTED (due to internal policies).
EDIT:
Summarizing a lot, it all starts with this method:
public void startSaga(SagaWorkflow sagaWorkflow, Serializable sagaInput) {
    StartSagaEvt startSagaEvt = StartSagaEvt.builder().sagaWorkflow(sagaWorkflow).sagaInput(sagaInput).build();
    eventBus.publish(GenericEventMessage.asEventMessage(startSagaEvt));
}
Where:
eventBus is Axon's internal one
sagaInput is simply a Serializable with some input values
SagaWorkflow is a Serializable that models the whole saga flow, whose main attribute is a LinkedList of nodes (the different steps of the saga, each of which can have different logic)
StartSagaEvt is just the POJO that models the event sent to the bus
After this, Axon performs all its 'magic' and finally arrives at the internal code:
AnnotatedSagaRepository.doCreateInstance --> AnnotatedSagaRepository.storeSaga --> [...] --> JpaSagaStore.insertSaga
public void insertSaga(Class<?> sagaType, String sagaIdentifier, Object saga, Set<AssociationValue> associationValues) {
    EntityManager entityManager = entityManagerProvider.getEntityManager();
    AbstractSagaEntry<?> entry = createSagaEntry(saga, sagaIdentifier, serializer);
    entityManager.persist(entry);
    for (AssociationValue associationValue : associationValues) {
        storeAssociationValue(entityManager, sagaType, sagaIdentifier, associationValue);
    }
    if (logger.isDebugEnabled()) {
        logger.debug("Storing saga id {} as {}", sagaIdentifier, serializedSagaAsString(entry));
    }
    if (useExplicitFlush) {
        entityManager.flush();
    }
}
The same applies to the update and delete phases. As far as I know, all the commit/rollback handling is performed in the AbstractUnitOfWork class, which intervenes only at the end of the complete saga flow.
This leads me to the following considerations/questions:
What is the point of keeping the transaction open during the whole process instead of committing after each step? If for any reason the process fails, goes down, or the database becomes inaccessible, all the saved information is lost.
There must be a design reason for this behavior, but I'm not able to see it. Or maybe there is a configuration option to change it (hopefully, although I doubt it).
Thanks in advance for any comment!
EDIT 2
Effectively, we are using it as a kind of state machine, where the saga flow is a sequence of steps, each one with an action and a compensation, and we jump from one to another until we reach an "END" status.
@Saga
class GenericSaga {
    private EventBus eventBus;
    private CustomCommandGateway commandGateway;
    [...]
    @StartSaga
    @SagaEventHandler(associationProperty = "sagaId")
    public void startStep(StartSagaEvt startSagaEvt) {
        // Initializes the GenericSaga and associates several properties via SagaLifecycle.associateWith(key, value);
        [...]
        // Transit to the next (first) step
        eventBus.publish(GenericEventMessage.asEventMessage(new StepSagaEvt(startSagaEvt)));
    }

    @SagaEventHandler(associationProperty = "sagaId")
    public void nextStep(StepSagaEvt stepSagaEvt) {
        // Identifies the next step in the defined flow, considering whether it should be executed sequentially or concurrently, or whether it is the end of the flow, in which case it calls SagaLifecycle.end()
        [...]
        // Also checks if it has to execute the compensation logic of the step
        [...]
        // Execute
        Serializable actionOutput = commandGateway.sendAndWaitEx(stepAction.getActionInput());
    }

    @SagaEventHandler(associationProperty = "sagaId")
    public void resumeSaga(ResumeSagaEvt resumeSagaEvt) {
        // Recover information from the execution that we want to resume
        [...]
        // Transit to the next step
        eventBus.publish(GenericEventMessage.asEventMessage(new StepSagaEvt(resumeSagaEvt)));
    }
}
As you can see, we don't have an @EndSaga annotation, and maybe that's the problem. But in our current situation we have kicked the can down the road and defined our own custom implementation of JpaSagaStore, in order to force a local transaction in the insertSaga and updateSaga methods.
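For reference, this is roughly the shape of that custom store. It is only a hedged sketch, not Axon's API: the class name and the Spring TransactionTemplate wiring are ours, the constructor follows the Axon 3.x signatures, and it assumes a transaction-scoped EntityManager; adapt it to your version. Note the trade-off: an immediately committed saga entry survives even if the surrounding unit of work later rolls back.

import java.util.Set;

import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.eventhandling.saga.AssociationValue;
import org.axonframework.eventhandling.saga.repository.jpa.JpaSagaStore;
import org.axonframework.serialization.Serializer;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

public class LocallyCommittingSagaStore extends JpaSagaStore {

    private final TransactionTemplate txTemplate;

    public LocallyCommittingSagaStore(Serializer serializer,
                                      EntityManagerProvider entityManagerProvider,
                                      PlatformTransactionManager txManager) {
        super(serializer, entityManagerProvider);
        this.txTemplate = new TransactionTemplate(txManager);
        // each saga write commits on its own, independently of the Unit of Work
        this.txTemplate.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    }

    @Override
    public void insertSaga(Class<?> sagaType, String sagaIdentifier,
                           Object saga, Set<AssociationValue> associationValues) {
        // run the insert in its own transaction so it is durable immediately
        txTemplate.execute(status -> {
            super.insertSaga(sagaType, sagaIdentifier, saga, associationValues);
            return null;
        });
    }

    // updateSaga(...) would be wrapped the same way
}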
Based on my understanding, I think you are somehow misusing the Saga component of Axon Framework. I assume from your question that you are trying to build a form of 'state machine' using your own SagaWorkflow object. If that is the case, I have to say this is not how Axon intends Sagas to be used.
To add to that, let me give you a pseudo-sample of what a Saga should look like.
@Saga
class SagaWorkflow {
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "yourProperty")
    public void on(SagaInputEvent event) {
        // validate, associate with another property and fire a command
        SagaLifecycle.associateWith("associationPropertyKey", "associationPropertyValue");
        commandGateway.send(new GivenCommand());
    }

    @SagaEventHandler(associationProperty = "associationPropertyValue")
    public void on(AnotherEvent event) {
        // validate and fire a command or finish the saga
        SagaLifecycle.end();
    }

    @EndSaga
    @SagaEventHandler(associationProperty = "anyProperty")
    public void on(FinishSagaEvent event) {
        // check if you need to fire extra commands to tell others it's finished or just do it silently
    }
}
The @Saga annotation will make sure Axon Framework handles the whole Saga process for you, storing (serializing) it to the database each time a (Saga)EventHandler is executed.
@SagaEventHandler will make sure the 'event handling method' reacts to a given Event, but only if the Event contains the associationProperty (to understand it better, I will share our docs link).
@EndSaga will tell Axon Framework to finalize the Saga after the execution of the method (finalizing means deleting it from the database).
SagaLifecycle provides several utility methods to interact with the Saga's lifecycle and associations.
In the example, I made the CommandGateway transient because the Saga is serialized and stored in the database. You would not want Axon to serialize any external component, like the gateway, either.
Of course, there is more to it.
You can check Axon's docs for that. But I hope this gives you enough material and ideas to use Sagas within Axon Framework better!
KR
The IStartable.Start() method of a component is invoked before RegisterBuildCallback.
Is it a bug or a feature?
According to the docs:
Startable Components: A startable component is one that is activated by the container when the container is initially built
and
Container Build Callbacks: You can register any arbitrary action to happen at container build time by registering a build callback. A build callback is an Action and will get the built container prior to that container being returned from ContainerBuilder.Build.
So the order of operations doesn't seem to be defined, but in my opinion container build callbacks are part of the "container build process", whereas starting components should happen only when everything else is already built.
Repro:
using System;
using Autofac;

namespace AutofacBuildOrderRepro
{
    class Program
    {
        static void Main(string[] args)
        {
            var builder = new ContainerBuilder();
            builder.RegisterBuildCallback(ctx => StaticClass.ObjectProvider = () => new object());
            //this fails:
            builder.RegisterType<StartableClass>().As<IStartable>().AsSelf().SingleInstance();
            builder.Build();
            //this works:
            //builder.RegisterType<StartableClass>().AsSelf().SingleInstance();
            //var container = builder.Build();
            //container.Resolve<StartableClass>().Start();
        }

        class StartableClass : IStartable
        {
            public void Start()
            {
                StaticClass.Run();
            }
        }

        public static class StaticClass
        {
            public static Func<object> ObjectProvider { private get; set; }

            public static void Run()
            {
                if (ObjectProvider == null)
                {
                    throw new InvalidOperationException("ObjectProvider is null");
                }
                Console.WriteLine("Success");
            }
        }
    }
}
Exception call stack:
Unhandled Exception: Autofac.Core.DependencyResolutionException: An exception was thrown while executing a resolve operation. See the InnerException for details. ---> ObjectProvider is null (See inner exception for details.) ---> System.InvalidOperationException: ObjectProvider is null
at AutofacBuildOrderRepro.Program.StaticClass.Run() in c:\path\AutofacBuildOrderRepro\Program.cs:line 39
at AutofacBuildOrderRepro.Program.StartableClass.Start() in c:\path\AutofacBuildOrderRepro\Program.cs:line 27
at Autofac.Core.Resolving.InstanceLookup.StartStartableComponent(Object instance)
at Autofac.Core.Resolving.InstanceLookup.Execute()
at Autofac.Core.Resolving.ResolveOperation.GetOrCreateInstance(ISharingLifetimeScope currentOperationScope, IComponentRegistration registration, IEnumerable`1 parameters)
at Autofac.Core.Resolving.ResolveOperation.Execute(IComponentRegistration registration, IEnumerable`1 parameters)
--- End of inner exception stack trace ---
at Autofac.Core.Resolving.ResolveOperation.Execute(IComponentRegistration registration, IEnumerable`1 parameters)
at Autofac.Core.Lifetime.LifetimeScope.ResolveComponent(IComponentRegistration registration, IEnumerable`1 parameters)
at Autofac.Core.Container.ResolveComponent(IComponentRegistration registration, IEnumerable`1 parameters)
at Autofac.Builder.StartableManager.StartStartableComponents(IComponentContext componentContext)
at Autofac.ContainerBuilder.Build(ContainerBuildOptions options)
at AutofacBuildOrderRepro.Program.Main(String[] args) in c:\path\AutofacBuildOrderRepro\Program.cs:line 15
I tried it with Autofac ver. 4.8.1 and 3.0.4.
The short version: Use IStartable.Start(), AutoActivate, and build callbacks sparingly. If you need to control order, instead of using a combination of callbacks and startables, execute the operations you need in the appropriate order in your application code after building the container. Use a build callback to run your specifically ordered logic rather than trying to ensure a particular order across all three of these things.
The long version: In general the current logic is to run IStartable, then AutoActivate, then build callbacks.
That is the logic today; it is not guaranteed that it will be the logic tomorrow.
The reason this isn't necessarily guaranteed:
If one IStartable depends on another IStartable, they get run in dependency order (the dependency gets started before the thing consuming the dependency).
If an IStartable or AutoActivate tries to create a child lifetime scope during Start or activation and starts resolving things, that will throw off the order. (Yes, this was a recent issue we had filed. People do this.)
The notions of IStartable and AutoActivate somewhat fit with the notion of build callbacks, so the logic for these and/or other startup "on container build" logic may be refactored/moved to become build callbacks, which may affect ordering.
Personally, I don't use any of these things. They are convenient mechanisms for bolting application startup logic together with the container creation process, but it somewhat breaks the single responsibility principle by co-opting a dependency setup mechanism with unrelated logic. That may not be what's happening here, but it happens a lot "in the wild." Gotchas like "I need to control the order things run in except there is also a dependency order things need to run in" really get people into sticky situations.
Anyway, if it's not working or it's running in an order you're not expecting, consider things like OnActivating in conjunction with SingleInstance so it happens more lazily; or move some of that initialization logic out of container build and into specific logic for your app where you can control that order manually.
When implementing an MVC project, I usually add a service layer to perform the actual work. But sometimes one web request needs to be handled by several AppService methods. Then the location of the Unit of Work (UoW) affects how the code has to be handled.
Whether in C# EF or Java Spring, there are transaction annotations for service-layer methods, so the transaction is per service method (i.e. UoW on the service layer). Let's take the Java version as an example here:
@Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.READ_COMMITTED)
public class UserAppService {
    public UserDTO createUser() {
        // Do sth to create a new user
        userRepository.save(userBean);
        // Convert userBean to userDTO
        return userDTO;
    }

    public xxx doSth() {
        // Break the operation here
        throw new Exception("Whatever");
        // (never executed)
        sthRepository.save(someBean);
    }
}
Then in the controller:
public class SomeController : Controller {
    public xxx DoSth() {
        UserAppService service = new UserAppService();
        service.createUser(); // DB committed
        service.doSth();      // Exception thrown
    }
}
With this structure, if an exception is thrown in the second service method call, the first service method has still committed the user to the DB. If I want "all-or-nothing" handling, this structure doesn't work unless I wrap those service method calls in another wrapper service method with a single transaction. But that is extra work.
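For illustration, a minimal sketch of that wrapper in the Java flavour (the class and method names are hypothetical): because the inner calls use the default Propagation.REQUIRED, they join the wrapper's transaction and commit or roll back together.

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserWorkflowAppService {

    private final UserAppService userAppService;

    public UserWorkflowAppService(UserAppService userAppService) {
        this.userAppService = userAppService;
    }

    @Transactional // one transaction spans the whole use case
    public void createUserAndDoSth() {
        userAppService.createUser(); // joins this transaction (REQUIRED)
        userAppService.doSth();      // an exception here rolls back the user insert too
    }
}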
Another version is to use a transaction at the controller action level instead (i.e. UoW on the controller action). Let's take C# code as an example:
Remark: the AppService in code version 2 here uses the DbContext (something like a transaction) defined in the controller, and doesn't commit anything itself.
public class SomeController : Controller {
    public ActionResult DoSth() {
        using (var db = new DbContext()) {
            var userAppService = new UserAppService(db);
            var userEntity = userAppService.GetUser(userId);
            userAppService.DoSth(userEntity);

            var anotherAppService = new AnotherAppService(db);
            anotherAppService.DoSthElse(userEntity);

            // Throw exception here
            throw new Exception("Whatever");

            db.Save(); // commit (never reached in this example)
        }
    }
}
In this example, there will be no partial commit to the DB.
Is applying UoW on service-layer really better?
Is applying UoW on service-layer really better?
IMO no, and you've just figured out why. If the service methods are discrete and re-usable, they are also not suitable for being atomic transactions.
In .NET the controller should control the transaction lifecycle, and enlist service methods in the transaction.
Note that this also implies that the service methods should be local method calls, not remote or web service calls.
It is better because you're following a main principle of object-oriented programming: separation of concerns. What if you made another controller and wanted to do more database processing using the same object? You don't want to instantiate a controller in which you're doing something completely different. By the way, check out the service facade pattern (http://soapatterns.org/design_patterns/service_facade); it may help you understand why it's so attractive. Basically, you wrap your database access objects with transactions at the service layer, so that a customerService object can wrap 1, 2, ... n operations and either all fail or all succeed.
I have watched Julie Lerman's videos about using EF in an enterprise application. Now I am developing a website using "Bounded Contexts" and the other things she teaches in that series.
The problem is that I do not know how to use bounded contexts (BCs) from within my business layer. To make it clearer: how should the BL know which specific BC it should use?
Suppose the UI requests a list of products from the business layer. In the BL I have a method that returns a list of products: GetAll(). This method does not know which part of the UI (site admin, moderator or public user) has requested the list of products. Since each user/scenario has its own bounded context, the list needs to be pulled using the related context. How should the BL choose the appropriate BC?
Moreover, I do not want the UI layer to interact with the data layer.
How can this be done?
If by business layer you mean a place where all your business rules are defined, then that is a bounded context.
A bounded context looks at your system from a certain angle so that business rules can be implemented in a compartmentalised fashion (with the goal of making it easier to handle the overall problem by splitting it into smaller chunks).
http://martinfowler.com/bliki/BoundedContext.html
Front-end
So, assuming you have an ASP.NET MVC front end, the controllers are the things that will call your use cases/user stories, which are presented from the domain to be called via a standard, known interface.
public class UserController : Controller
{
    ICommandHandler<ChangeNameCommand> handler;

    public UserController(ICommandHandler<ChangeNameCommand> handler)
    {
        this.handler = handler;
    }

    public ActionResult ChangeUserName(string id, string name)
    {
        try
        {
            var command = new ChangeNameCommand(id, name);
            handler.Handle(command);
        }
        catch(Exception e)
        {
            // add error logging and display info
            ViewBag.Error = e.Message;
        }
        // everything went OK, let the user know
        return View("Index");
    }
}
Domain Application (Use Cases)
Next, you would have a domain application entry point that implements the use case (this would be a command or query handler).
You may call this directly and have the code run in-process with your front-end application, or you may have a WebAPI or WCF service in front of it presenting the domain application services. It doesn't really matter; how the system is distributed depends on the system requirements (it is often simpler from an infrastructure perspective not to distribute if not needed).
The domain application layer then orchestrates the user story - it will new up repositories, fetch entities, perform an operation on them, and then write back to the repository. The code here should not be complex or contain business logic.
public class NewUserHandler : ICommandHandler<ChangeNameCommand>
{
    private readonly IRepository repository;

    public NewUserHandler(IRepository repository)
    {
        this.repository = repository;
    }

    public void Handle(ChangeNameCommand command)
    {
        var userId = new UserId(command.UserId);
        var user = this.repository.GetById<User>(userId);
        user.ChangeName(command.NewName);
        this.repository.Save(user);
    }
}
Domain Model
The entities themselves implement their own business logic in the domain model. You may also have domain services for logic which doesn't naturally fit nicely inside an individual entity.
public class User
{
    protected string Name;
    protected DateTime NameLastChangedOn;

    public void ChangeName(string newName)
    {
        // not the best of business rules, just an example...
        if ((DateTime.UtcNow - NameLastChangedOn).Days < 30)
        {
            throw new DomainException("Cannot change name more than once every 30 days");
        }
        this.Name = newName;
        this.NameLastChangedOn = DateTime.UtcNow;
    }
}
Infrastructure
You would have infrastructure that implements the code to fetch and store entities in your backing store. For you this is Entity Framework and the DbContext (my example code above is not using EF, but you can substitute it).
Answer to your question - Which bounded context should the front end application call?
Not to make the answer complex or long, but I included the above code to set the background and, hopefully, to make it easier to understand, as I think the terms you are using are getting a little mixed up.
With the above code, as you start implementing more command and query handlers, which bounded context is called from your front-end application depends on which specific user story the user wishes to perform.
User stories will generally be clustered across different bounded contexts, so you would just select the command or query handler of the bounded context that implements the required functionality - don't worry about making it anything more complicated than that.
Let the problem you are trying to solve dictate the mapping, and don't be afraid that this mapping will possibly change as your insight into the problem improves.
Sidenote
As a side note, to mention things I found useful (I started my DDD journey with EF): with Entity Framework there are ORM concepts that are often required, such as defining mapping relationships and navigation properties between entities, and what happens with cascading deletes and updates. For me, this started to influence how I designed my entities, rather than the problem dictating how the entities should be designed. You may find this interesting: http://mehdi.me/ambient-dbcontext-in-ef6/
You may also want to look at http://geteventstore.com and event sourcing, which takes away the headaches of ORM mapping (but comes with added complexity and workarounds needed to get acceptable performance). What is best to use depends on the situation, but it's always good to know all the options.
I also use SimpleInjector to wire up my classes and inject into the MVC controller (as a prebuilt command or query handler); more info here: https://cuttingedge.it/blogs/steven/pivot/entry.php?id=91.
Using an IoC container is a personal preference only and not set in stone.
This book is also awesome: https://vaughnvernon.co/?page_id=168
I mention the above because I started my DDD journey with EF and had the exact same question you have.
I've been creating a prototype for a modern MUD engine. A MUD is a simple form of simulation and provides a good way to test a concept I'm working on. This has led me to a couple of places in my code where things are a bit unclear and the design is coming into question (probably because it is flawed). I'm using model first (I may need to change this) and I've designed a top-down architecture of game objects. I may be doing this completely wrong.
What I've done is create a MUDObject entity. This entity is effectively a base for all of my other logical constructs, such as characters, their items, races, etc. I've also created a set of three meta classes which are used for logical purposes as well: Attributes, Events, and Flags. They are fairly straightforward, and all inherit from MUDObject.
The MUDObject class is designed to provide default data behavior for all of the objects; this includes deletion of dead objects, the automatic clearing of floors, etc. It is also designed to facilitate this logic virtually if needed, for example, checking a room to see if an effect has ended and deleting the effect (removing the flag).
public partial class MUDObject
{
    public virtual void Update()
    {
        if (this.LifeTime.Value.CompareTo(DateTime.Now) > 0)
        {
            using (var context = new ReduxDataContext())
            {
                context.MUDObjects.DeleteObject(this);
            }
        }
    }

    public virtual void Pause()
    {
    }

    public virtual void Resume()
    {
    }

    public virtual void Stop()
    {
    }
}
I've also got a class World. It is derived from MUDObject and contains the areas and rooms (which in turn contain the game objects), and it handles the timer used to run the updates. (This will probably be moved; it is here because, if it works, it would limit the updates to only the objects in-world at the time.)
public partial class World
{
    private Timer ticker;

    public void Start()
    {
        this.ticker = new Timer(3000.0);
        this.ticker.Elapsed += ticker_Elapsed;
        this.ticker.Start();
    }

    private void ticker_Elapsed(object sender, ElapsedEventArgs e)
    {
        this.Update();
    }

    public override void Update()
    {
        this.CurrentTime += 3;
        // update contents
        base.Update();
    }

    public override void Pause()
    {
        this.ticker.Enabled = false;
        // update contents
        base.Pause();
    }

    public override void Resume()
    {
        this.ticker.Enabled = true;
        // update contents
        base.Resume();
    }

    public override void Stop()
    {
        this.ticker.Stop();
        // update contents
        base.Stop();
    }
}
I'm curious about a few things.
1. Is there a way to recode the context so that it has separate ObjectSets for each type derived from MUDObject, i.e. context.MUDObjects.Flags or context.Flags? If not, how can I query a child type specifically?
2. Does the Update/Pause/Resume/Stop architecture I'm using work properly when placed into the EF entities directly, given that it's for data purposes only?
Will locking be an issue?
Does the partial class automatically commit changes when they are made?
Would I be better off using a flat repository and doing this in the game engine directly?
1) Is there a way to recode the context so that it has separate ObjectSets for each type derived from MUDObject?
Yes, there is. If you decide that you want a base class for all your entities, it is common to have an abstract base class that is not part of the Entity Framework model. The model only contains the derived types, and the context contains DbSets of the derived types (if it is a DbContext), like:
public DbSet<Flag> Flags { get; set; }
If appropriate you can implement inheritance between classes, but that would be to express polymorphism, not to implement common persistence-related behaviour.
2) Does the Update/Pause/Resume/Stop architecture I'm using work properly when placed into the EF entities directly?
No. Entities are not supposed to know anything about persistence. The context is responsible for creating them, tracking their changes, and updating/deleting them. I think that also answers your question about automatically committing changes: no.
Elaboration:
I think it's good to bring up the single responsibility principle here. A general pattern would be to:
let a context populate objects from a store
let the objects act according to their responsibilities (the simulation)
let a context store their state whenever necessary
I think Pause/Resume/Stop could be responsibilities of MUD objects. Update is an altogether different kind of action and responsibility.
Now I have to speculate, but take your World class. You should be able to express its responsibility in a short phrase, maybe something like "harbour other objects" or "define boundaries". I don't think it should do the timing. I think the timing should be the responsibility of some core utility which signals that a time interval has elapsed. Other objects know how to respond to that (e.g. make some state change, or, for the context or repository, save changes).
Well, this is only an example of how to think about it, probably far from correct.
One other thing: I think saving changes should be done far less often than the state changes of the objects that carry out the simulation; otherwise it would probably slow down the process dramatically. Maybe it should be done at longer intervals or on a user action.
First thing to say: if you are using EF 4.1 (as the question is tagged) you should really consider moving to version 5.0 (you will need to create a .NET 4.5 project for this).
Along with several performance improvements, you can benefit from other features as well. The code I will show you works for 5.0 (I don't know whether it will work for version 4.1).
Now, let's go through your questions:
Is there a way to recode the context so that it has separate ObjectSets for each type derived from MUDObject, i.e. context.MUDObjects.Flags or context.Flags? If not, how can I query a child type specifically?
Yes, you can, but the call is a little different. You will not have Context.Worlds; you will only have the base class set. If you want to get the set of Worlds (which inherit from MUDObject), you will call:
var worlds = context.MUDObjects.OfType<World>();
Or you can do it directly by using generics:
var worlds = context.Set<World>();
If you define your inheritance the right way, you should have an abstract class called MUDObject and all the others should inherit from that class. EF can work perfectly with this; you just need to set it up correctly.
Does the Update/Pause/Resume/Stop architecture I'm using work properly when placed into the EF entities directly, given that it's for data purposes only?
In this case I think you should consider using a design pattern called the Strategy pattern. Do some research; it will fit your objects, as sketched below.
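For what it's worth, a minimal sketch of that idea (written in Java here, with hypothetical names; the shape is the same in C#): the lifecycle behaviour lives in swappable strategy objects rather than in the entity itself.

// The lifecycle behaviour is a strategy the entity delegates to.
interface UpdateStrategy {
    void update(GameObject target);
}

// One concrete strategy: expire the object when its lifetime has passed.
class ExpireAfterLifetime implements UpdateStrategy {
    @Override
    public void update(GameObject target) {
        if (target.lifetimeElapsed()) {
            target.markForDeletion(); // persistence is handled elsewhere
        }
    }
}

class GameObject {
    private UpdateStrategy updateStrategy = new ExpireAfterLifetime();
    private boolean deleted;

    void setUpdateStrategy(UpdateStrategy strategy) {
        this.updateStrategy = strategy; // swap behaviour without touching the entity
    }

    void update() {
        updateStrategy.update(this);
    }

    boolean lifetimeElapsed() { return false; /* lifetime check elided */ }
    void markForDeletion() { this.deleted = true; }
}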
Will locking be an issue?
That depends on how you develop the system...
Does the partial class automatically commit changes when they are made?
I did not understand that question... Partial classes are just like regular classes; they are simply split across different files, but when compiled (or even at design time, because of the vshost.exe) they are in fact just one class.
Would I be better off using a flat repository and doing this in the game engine directly?
Hard to answer; it all depends on the requirements of the game, the deployment strategy...
I'm using GWTP, adding a Contract layer to abstract the knowledge between Presenter and View, and I'm pretty satisfied with the result.
I'm testing my presenters with Mockito.
But as time passed, I found it hard to keep a presenter and its tests clean.
There is some refactoring I did to improve that, but I was still not satisfied.
I found the following to be the heart of the matter:
my presenters often need asynchronous calls, or generally calls to object methods with a callback to continue the presenter flow (and they are usually nested).
For example:
this.populationManager.populate(new PopulationCallback()
{
    public void onPopulate()
    {
        doSomeStufWithTheView(populationManager.get());
    }
});
In my tests, I ended up verifying the populate() call on the mocked PopulationManager object, then creating another test for the doSomeStufWithTheView() method.
But I discovered rather quickly that this was bad design: any change or refactoring ended up breaking a lot of my tests and forced me to create others from scratch, even though the presenter's functionality did not change!
Plus, I didn't test that the callback was effectively what I wanted.
So I tried to use Mockito's doAnswer method so as not to break my presenter testing flow:
doAnswer(new Answer(){
    public Object answer(InvocationOnMock invocation) throws Throwable
    {
        Object[] args = invocation.getArguments();
        ((PopulationCallback)args[0]).onPopulate();
        return null;
    }
}).when(this.populationManager).populate(any(PopulationCallback.class));
I factored out the code to make it less verbose (and internally less dependent on the argument position):
doAnswer(new PopulationCallbackAnswer())
    .when(this.populationManager).populate(any(PopulationCallback.class));
So while mocking the populationManager, I could still test the flow of my presenter, basically like this:
@Test
public void testSomeStuffAppends()
{
    // Given
    doAnswer(new PopulationCallbackAnswer())
        .when(this.populationManager).populate(any(PopulationCallback.class));

    // When
    this.myPresenter.onReset();

    // Then
    verify(populationManager).populate(any(PopulationCallback.class)); // That was before
    verify(this.myView).displaySomething(); // Now I can do that.
}
I am wondering whether this is a good use of the doAnswer method, or whether it is a code smell and a better design could be used.
Usually, my presenters tend to just use other objects (like some Mediator pattern) and interact with the view. I have some presenters with several hundred (~400) lines of code.
Again, is that a sign of bad design, or is it normal for a presenter to be verbose (because it is using other objects)?
Has anyone heard of some project which uses GWTP and tests its presenters cleanly?
I hope I explained it in a comprehensible way.
Thank you in advance.
PS: I'm pretty new to Stack Overflow, and my English is still lacking; if my question needs improvement, please tell me.
You could use ArgumentCaptor:
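For example, a sketch reusing the question's types: the captor grabs the callback passed to the mock, so the test itself decides when the "response" arrives.

// Capture the callback handed to the mocked PopulationManager
ArgumentCaptor<PopulationCallback> callbackCaptor =
        ArgumentCaptor.forClass(PopulationCallback.class);

// When
this.myPresenter.onReset();

// Then
verify(populationManager).populate(callbackCaptor.capture());
callbackCaptor.getValue().onPopulate(); // fire the callback explicitly
verify(myView).displaySomething();      // and assert the presenter's reaction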
Check out this blog post for more details.
If I understood correctly, you are asking about design/architecture.
This shouldn't be counted as an answer; it's just my thoughts.
If I have the following code:
public void loadEmoticonPacks() {
    executor.execute(new Runnable() {
        public void run() {
            pack = loadFromServer();
            savePackForUsageAfter();
        }
    });
}
I usually don't count on the executor and just check that the methods do the concrete job of loading and saving. So the executor here is just an instrument to prevent long operations on the UI thread.
If I have something like:
accountManager.setListener(this);
....
public void onAccountEvent(AccountEvent event) {
....
}
I will first check that we subscribe for events (and unsubscribe on destruction), and I will also check that onAccountEvent does the expected scenarios.
UPD1. Probably, in example 1, it would be better to extract a method loadFromServerAndSave and check that it's not executed on the UI thread, as well as check that it does everything as expected.
UPD2. It's better to use a framework like Guava's EventBus for event processing; a rough sketch follows.
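To illustrate (a hedged sketch using Guava's com.google.common.eventbus API and the question's AccountEvent type; the subscriber and field names are ours):

import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;

class AccountEventSubscriber {

    private final EventBus bus;

    AccountEventSubscriber(EventBus bus) {
        this.bus = bus;
        bus.register(this); // replaces accountManager.setListener(this)
    }

    @Subscribe
    public void onAccountEvent(AccountEvent event) {
        // react to the event; delivery is handled by the bus
    }

    void destroy() {
        bus.unregister(this); // the unsubscribe step worth testing
    }
}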
We are using this doAnswer pattern in our presenter tests as well, and usually it works just fine. One caveat though: if you test it like this, you are effectively removing the asynchronous nature of the call; that is, the callback is executed immediately after the server call is initiated.
This can lead to undiscovered race conditions. To check for those, you could make this a two-step process: when the server is called, the answer method only saves the callback. Then, when it is appropriate in your test, you call something like flush() or onSuccess() on your answer (I would suggest making a utility class for this that can be reused in other circumstances), so that you can control when the callback for the result is really called.
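A minimal sketch of such a utility, reusing the question's PopulationCallback (the class and method names are ours): the answer only stores the callback, and the test releases it later via flush().

import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

public class DeferredPopulationAnswer implements Answer<Object> {

    private PopulationCallback callback;

    @Override
    public Object answer(InvocationOnMock invocation) throws Throwable {
        // only remember the callback; keep the call effectively asynchronous
        this.callback = (PopulationCallback) invocation.getArguments()[0];
        return null;
    }

    // invoked by the test at the moment the simulated response should arrive
    public void flush() {
        callback.onPopulate();
    }
}

In a test you would stub with doAnswer(deferred), trigger the presenter, perform any intermediate assertions, and only then call deferred.flush() to simulate the server response.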