Autofac `IStartable.Start()` invoked before `RegisterBuildCallback`

The IStartable.Start() method of a component is invoked before callbacks registered with RegisterBuildCallback.
Is it a bug or a feature?
According to the docs:
Startable Components: A startable component is one that is activated by the container when the container is initially built
and
Container Build Callbacks: You can register any arbitrary action to happen at container build time by registering a build callback. A build callback is an Action and will get the built container prior to that container being returned from ContainerBuilder.Build.
So the order of operations doesn't seem to be defined, but in my opinion container build callbacks are part of the "container build process", whereas starting components should happen only once everything else is already built.
Repro:
using System;
using Autofac;

namespace AutofacBuildOrderRepro
{
    class Program
    {
        static void Main(string[] args)
        {
            var builder = new ContainerBuilder();
            builder.RegisterBuildCallback(ctx => StaticClass.ObjectProvider = () => new object());

            // this fails:
            builder.RegisterType<StartableClass>().As<IStartable>().AsSelf().SingleInstance();
            builder.Build();

            // this works:
            //builder.RegisterType<StartableClass>().AsSelf().SingleInstance();
            //var container = builder.Build();
            //container.Resolve<StartableClass>().Start();
        }

        class StartableClass : IStartable
        {
            public void Start()
            {
                StaticClass.Run();
            }
        }

        public static class StaticClass
        {
            public static Func<object> ObjectProvider { private get; set; }

            public static void Run()
            {
                if (ObjectProvider == null)
                {
                    throw new InvalidOperationException("ObjectProvider is null");
                }
                Console.WriteLine("Success");
            }
        }
    }
}
Exception call stack:
Unhandled Exception: Autofac.Core.DependencyResolutionException: An exception was thrown while executing a resolve operation. See the InnerException for details. ---> ObjectProvider is null (See inner exception for details.) ---> System.InvalidOperationException: ObjectProvider is null
at AutofacBuildOrderRepro.Program.StaticClass.Run() in c:\path\AutofacBuildOrderRepro\Program.cs:line 39
at AutofacBuildOrderRepro.Program.StartableClass.Start() in c:\path\AutofacBuildOrderRepro\Program.cs:line 27
at Autofac.Core.Resolving.InstanceLookup.StartStartableComponent(Object instance)
at Autofac.Core.Resolving.InstanceLookup.Execute()
at Autofac.Core.Resolving.ResolveOperation.GetOrCreateInstance(ISharingLifetimeScope currentOperationScope, IComponentRegistration registration, IEnumerable`1 parameters)
at Autofac.Core.Resolving.ResolveOperation.Execute(IComponentRegistration registration, IEnumerable`1 parameters)
--- End of inner exception stack trace ---
at Autofac.Core.Resolving.ResolveOperation.Execute(IComponentRegistration registration, IEnumerable`1 parameters)
at Autofac.Core.Lifetime.LifetimeScope.ResolveComponent(IComponentRegistration registration, IEnumerable`1 parameters)
at Autofac.Core.Container.ResolveComponent(IComponentRegistration registration, IEnumerable`1 parameters)
at Autofac.Builder.StartableManager.StartStartableComponents(IComponentContext componentContext)
at Autofac.ContainerBuilder.Build(ContainerBuildOptions options)
at AutofacBuildOrderRepro.Program.Main(String[] args) in c:\path\AutofacBuildOrderRepro\Program.cs:line 15
I tried it with Autofac ver. 4.8.1 and 3.0.4.

The short version: use IStartable.Start(), AutoActivate, and build callbacks sparingly. If you need to control order, don't rely on a combination of callbacks and startables; execute the operations you need, in the appropriate order, in your application code after building the container. Use a single build callback to run your specifically ordered logic rather than trying to guarantee a particular order across all three of these mechanisms.
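As an illustration, consolidating ordered startup into one callback might look like this minimal sketch (FirstService, SecondService and their Initialize methods are hypothetical names, not from the original answer):

var builder = new ContainerBuilder();
builder.RegisterType<FirstService>().AsSelf().SingleInstance();
builder.RegisterType<SecondService>().AsSelf().SingleInstance();

// One callback, one explicit order - instead of mixing IStartable, AutoActivate and callbacks.
builder.RegisterBuildCallback(c =>
{
    c.Resolve<FirstService>().Initialize();   // hypothetical startup step 1
    c.Resolve<SecondService>().Initialize();  // hypothetical startup step 2
});

var container = builder.Build();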
The long version: in general, the current logic is to run IStartable, then AutoActivate, then build callbacks.
That is the logic today; it is not guaranteed to be the logic tomorrow.
The reasons this isn't necessarily guaranteed:
If one IStartable depends on another IStartable, they get run in dependency order (the dependency gets started before the thing consuming it).
If an IStartable or AutoActivate component tries to create a child lifetime scope during Start or activation and starts resolving things, that will throw off the order. (Yes, this was a recent issue we had filed. People do this.)
The notions of IStartable and AutoActivate overlap somewhat with build callbacks, so this and/or other "on container build" startup logic may be refactored/moved to become build callbacks, which may affect ordering.
Personally, I don't use any of these things. They are convenient mechanisms for bolting application startup logic onto the container creation process, but that somewhat breaks the single responsibility principle by co-opting a dependency setup mechanism for unrelated logic. That may not be what's happening here, but it happens a lot "in the wild." Gotchas like "I need to control the order things run in, except there is also a dependency order things need to run in" really get people into sticky situations.
Anyway, if it's not working or it's running in an order you're not expecting, consider things like OnActivating in conjunction with SingleInstance so initialization happens more lazily; or move some of that initialization logic out of container build and into specific logic for your app where you can control the order manually.
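A minimal sketch of the lazy approach (ExpensiveService and its Initialize method are hypothetical; for a SingleInstance registration, OnActivating fires once, on first resolve rather than at Build time):

builder.RegisterType<ExpensiveService>()
       .AsSelf()
       .SingleInstance()
       .OnActivating(e => e.Instance.Initialize()); // hypothetical init; deferred until first resolve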

Related

Persistence of execution information in Axon Saga

We are using the Axon Framework to implement the saga pattern in Java. Axon uses two tables (ASSOCIATION_VALUE_ENTRY and SAGA_ENTRY) to store all the necessary information after each step of the saga, and at the end of the process (when it completes correctly or, in case of error, after all the compensations have been executed) it deletes the records.
If for any reason, after an error, the compensations cannot be executed, we are able to resume the execution at the point where it failed, based on the stored information. Up to this point, everything is fine.
The issue came when we wanted to improve the resilience of the process and checked what happened if the service died during the execution of a saga. Given the above, we expected the execution information to be persisted in the tables, but they were empty: the information only appeared when the process couldn't continue due to an error in a compensation (so no final delete was executed).
Analyzing the source code of Axon's JpaSagaStore implementation, the interactions with the database (insert, update and delete) are written with a flush instead of a commit. The global commit is managed in the AbstractUnitOfWork class (as far as we understand). And here is where we have doubts:
According to the literature, a flush writes to the database but the records remain uncommitted. The only way to see them would be to activate the READ_UNCOMMITTED isolation level, with the attendant problem of dirty reads, right? Are there any additional considerations/issues to take into account?
Does Axon have an alternative to ensure the persistence of the saga records, mainly if we can't activate READ_UNCOMMITTED mode (due to internal policies)?
EDIT:
Summarizing a lot, it all starts with this method:
public void startSaga(SagaWorkflow sagaWorkflow, Serializable sagaInput) {
    StartSagaEvt startSagaEvt = StartSagaEvt.builder().sagaWorkflow(sagaWorkflow).sagaInput(sagaInput).build();
    eventBus.publish(GenericEventMessage.asEventMessage(startSagaEvt));
}
Where:
eventBus is Axon's internal one
sagaInput is simply a Serializable with some input values
SagaWorkflow is a Serializable that models the whole saga flow, whose main attribute is a LinkedList of nodes (the different steps of the saga, each of which can have different logic)
StartSagaEvt is just the POJO that models the event sent to the bus
After this, Axon performs all its 'magic' and finally arrives at the internal code:
AnnotatedSagaRepository.doCreateInstance --> AnnotatedSagaRepository.storeSaga --> [...] --> JpaSagaStore.insertSaga
public void insertSaga(Class<?> sagaType, String sagaIdentifier, Object saga, Set<AssociationValue> associationValues) {
    EntityManager entityManager = entityManagerProvider.getEntityManager();
    AbstractSagaEntry<?> entry = createSagaEntry(saga, sagaIdentifier, serializer);
    entityManager.persist(entry);
    for (AssociationValue associationValue : associationValues) {
        storeAssociationValue(entityManager, sagaType, sagaIdentifier, associationValue);
    }
    if (logger.isDebugEnabled()) {
        logger.debug("Storing saga id {} as {}", sagaIdentifier, serializedSagaAsString(entry));
    }
    if (useExplicitFlush) {
        entityManager.flush();
    }
}
The same applies to the update and delete phases. As far as I know, all the commit/rollback handling is performed in the AbstractUnitOfWork class, which intervenes only at the end of the complete saga flow.
This leads me to the following considerations/questions:
What is the point of keeping the transaction open during the whole process instead of committing after each step? If for any reason the process fails, goes down, or the database becomes inaccessible, all the saved information is lost.
There must be a design reason for this behavior, but I'm not able to see it. Or maybe there is a configuration to change it (hopefully, although I doubt it).
Thanks in advance for any comment!
EDIT 2
Indeed, we are using it as a kind of state machine, where the saga flow is a sequence of steps, each with an action and a compensation, and we jump from one to the next until we reach an "END" status.
@Saga
class GenericSaga {

    private EventBus eventBus;
    private CustomCommandGateway commandGateway;
    [...]

    @StartSaga
    @SagaEventHandler(associationProperty = "sagaId")
    public void startStep(StartSagaEvt startSagaEvt) {
        // Initializes the GenericSaga and associates several properties with SagaLifecycle.associateWith(key, value);
        [...]
        // Transit to the next (first) step
        eventBus.publish(GenericEventMessage.asEventMessage(new StepSagaEvt(startSagaEvt)));
    }

    @SagaEventHandler(associationProperty = "sagaId")
    public void nextStep(StepSagaEvt stepSagaEvt) {
        // Identifies the next step in the defined flow, considering whether it should be executed sequentially or concurrently, or whether it is the end of the flow (and then calls SagaLifecycle.end())
        [...]
        // Also checks whether it has to execute the compensation logic of the step
        [...]
        // Execute
        Serializable actionOutput = commandGateway.sendAndWaitEx(stepAction.getActionInput());
    }

    @SagaEventHandler(associationProperty = "sagaId")
    public void resumeSaga(ResumeSagaEvt resumeSagaEvt) {
        // Recover information from the execution that we want to resume
        [...]
        // Transit to the next step
        eventBus.publish(GenericEventMessage.asEventMessage(new StepSagaEvt(resumeSagaEvt)));
    }
}
As you can see, we don't have an @EndSaga annotation, and maybe that's the problem. But in our current situation we have pushed forward and defined our own implementation of JpaSagaStore, in order to force a local transaction in the insertSaga and updateSaga methods.
Based on my understanding, I think you are somewhat misusing the Saga component of Axon Framework. I assume from your question that you are trying to build a form of 'state machine' using your own SagaWorkflow object. If that is the case, I have to say this is not how Axon intends Sagas to be used.
To add to that, let me give you a pseudo-sample of what a Saga should look like.
@Saga
class SagaWorkflow {

    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "yourProperty")
    public void on(SagaInputEvent event) {
        // validate, associate with another property and fire a command
        SagaLifecycle.associateWith("associationPropertyKey", "associationPropertyValue");
        commandGateway.send(new GivenCommand());
    }

    @SagaEventHandler(associationProperty = "associationPropertyValue")
    public void on(AnotherEvent event) {
        // validate and fire a command or finish the saga
        SagaLifecycle.end();
    }

    @EndSaga
    @SagaEventHandler(associationProperty = "anyProperty")
    public void on(FinishSagaEvent event) {
        // check if you need to fire extra commands to tell others it's finished or just do it silently
    }
}
The @Saga annotation will make sure Axon Framework handles the whole saga process for you, storing (serializing) it to the database each time a @SagaEventHandler is executed
@SagaEventHandler will make sure the 'event handling method' reacts to a given event, but only if the event contains the associationProperty (to understand this better, check Axon's docs)
@EndSaga will tell Axon Framework to finalize the saga after the execution of the method (finalizing means deleting it from the database)
SagaLifecycle provides several utility methods to interact with the saga's lifecycle and its associations
In the example, I made the CommandGateway transient because the saga is serialized and stored in the database; you would not want Axon to serialize any external component, like the gateway
Of course, there is more to it.
You can check Axon's docs for that. But I hope this gives you enough material and ideas to use Sagas within Axon Framework better!
KR

Limit instance count with Autofac?

I have a console app that will create an instance of a class and execute a method on it, and that's really all it does (but this method may do a lot of things). The class is determined at runtime based on command line args, and this is registered to Autofac so it can be correctly resolved, supplying class-specific constructor parameters extracted from the command line. All this works.
Now, I need to impose a system-wide limit on the number of instances per class that can be running at any one time. I will probably use a simple SQL database to keep track of the number of allowed and running instances per class, and I have no problem with the SQL side of things.
But how do I actually impose this limit in a nice manner using Autofac?
I am thinking that I would have some "slot service" that would do something like this:
Try to reserve a new instance "slot".
If no more slots, log a message and terminate the process.
If slot successfully reserved, create instance and return it.
My idea is also to free the instance's slot in the class' Dispose method, preferably by using another method on the slot service.
How would I fit this into Autofac?
One possibility would be to register the class I want to instantiate with a lambda/delegate that does the above steps. But in that case, how do I "terminate"? Throw an exception? That would require some code to catch the exception and either log it or simply ignore it before terminating the process. I don't like it. I'd like the entire slot reservation stuff inside the delegate, lambda or service.
Another solution might be to do the slot reservation outside of Autofac, but that also seems somewhat messy.
I would prefer a solution where the "slot service" itself can be nicely unit tested, i.e. non-static and with an interface, and preferably resolved with Autofac.
I'm sure I'm missing something obvious here... Any suggestions?
This is my "best bet" so far:
static void Main(string[] args)
{
    ReadCommandLine(args, out Type itemClass, out Type paramsClass, out Type paramsInterface, out object parameters);
    BuildContainer(itemClass, paramsClass, paramsInterface, parameters);

    IInstanceHandler ih = Container.Resolve<IInstanceHandler>();
    if (ih.RegisterInstance(itemClass, out long instanceid))
    {
        try
        {
            Container.Resolve<IItem>().Execute();
        }
        finally
        {
            ih.UnregisterInstance(itemClass, instanceid);
        }
    }
}
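If you want the reservation logic entirely out of Main, one option might be a wrapping IItem that owns the slot handling. A rough sketch reusing the question's hypothetical IInstanceHandler/IItem types (SlotGuardedItem and the registration are made up for illustration):

public class SlotGuardedItem : IItem
{
    private readonly IInstanceHandler _handler;
    private readonly Func<IItem> _innerFactory; // resolves the real item lazily
    private readonly Type _itemClass;

    public SlotGuardedItem(IInstanceHandler handler, Func<IItem> innerFactory, Type itemClass)
    {
        _handler = handler;
        _innerFactory = innerFactory;
        _itemClass = itemClass;
    }

    public void Execute()
    {
        // Reserve a slot before the real item is even resolved.
        if (!_handler.RegisterInstance(_itemClass, out long instanceId))
        {
            Console.WriteLine($"No free slot for {_itemClass.Name}; exiting.");
            return;
        }
        try
        {
            _innerFactory().Execute(); // resolve and run the real item only once a slot is held
        }
        finally
        {
            _handler.UnregisterInstance(_itemClass, instanceId);
        }
    }
}

// Registration sketch (resolve IComponentContext first so the deferred lambda
// doesn't capture the temporary 'c'):
// builder.RegisterType(itemClass).AsSelf();
// builder.Register(c =>
// {
//     var ctx = c.Resolve<IComponentContext>();
//     return new SlotGuardedItem(ctx.Resolve<IInstanceHandler>(), () => (IItem)ctx.Resolve(itemClass), itemClass);
// }).As<IItem>();

Main would then shrink to Container.Resolve<IItem>().Execute(), with reservation and release kept inside a service-backed wrapper, which also keeps the slot service non-static, interface-based and unit-testable as requested.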

Benefit of applying Unit-of-Work on Service-layer instead of Controller

When implementing an MVC project, I usually add a service layer to perform the actual work. But sometimes a single web request has to be handled by several AppService methods, and then the location of the unit of work (UoW) affects how the code is organized.
In both C# EF and Java Spring there is a Transaction annotation for service-layer methods, so the transaction is per-service-method (i.e. UoW on the service layer). Let's take the Java version as an example here:
@Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.READ_COMMITTED)
public class UserAppService {

    public UserDTO createUser() {
        // Do sth to create a new user
        userRepository.save(userBean);
        // Convert userBean to userDTO
        return userDTO;
    }

    public xxx doSth() {
        // Break the operation here
        throw new Exception("Whatever");
        // (never executed)
        sthRepository.save(someBean);
    }
}
Then in Controller:
public class SomeController : Controller {
    public xxx DoSth() {
        UserAppService service = new UserAppService();
        service.CreateUser(); // DB committed
        service.DoSth();      // Exception thrown
    }
}
With this structure, if any exception is thrown by the second service-method call, the first service method has still committed the user to the DB. If I want all-or-nothing handling, this structure doesn't work unless I wrap those service-method calls in another wrapper service call with a single transaction, which is extra work.
Another version is to use a transaction at the controller-action level instead (i.e. UoW on the controller action). Let's take C# code as an example:
Remarks: the AppService in version 2 here uses the DbContext (something like a transaction) defined in the controller, and doesn't commit anything itself.
public class SomeController : Controller {
    public ActionResult DoSth() {
        using (var db = new DbContext()) {
            var userAppService = new UserAppService(db);
            var userEntity = userAppService.GetUser(userId);
            userAppService.DoSth(userEntity);

            var anotherAppService = new AnotherAppService(db);
            anotherAppService.DoSthElse(userEntity);

            // Throw exception here
            throw new Exception("Whatever");

            db.Save(); // commit (never reached)
        }
    }
}
In this example, there will be no partial commit to the DB.
Is applying UoW on service-layer really better?
Is applying UoW on service-layer really better?
IMO no, and you've just figured out why: if the service methods are discrete and reusable, they are also not suitable for being atomic transactions.
In .NET the controller should control the transaction lifecycle, and enlist service methods in the transaction.
Note that this also implies that the service methods should be local method calls, not remote or web service calls.
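For instance, a minimal EF6-style sketch of the controller owning the transaction (AppDbContext, the services and their methods are hypothetical placeholders modeled on the question's version 2):

public class SomeController : Controller
{
    public ActionResult DoSth(int userId)
    {
        using (var db = new AppDbContext())                   // hypothetical EF6 DbContext
        using (var tx = db.Database.BeginTransaction())
        {
            try
            {
                var userService = new UserAppService(db);     // services share the controller's context
                var anotherService = new AnotherAppService(db);

                var user = userService.GetUser(userId);
                anotherService.DoSthElse(user);

                db.SaveChanges();
                tx.Commit();   // all-or-nothing: nothing is visible until here
                return View();
            }
            catch
            {
                tx.Rollback(); // optional; disposing an uncommitted transaction also rolls back
                throw;
            }
        }
    }
}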
It is better because you're following a main principle of object-oriented programming: separation of concerns. What if you made another controller and wanted to do more database processing using the same object? You don't want to instantiate a controller in which you're doing something completely different. By the way, check out the facade service pattern (http://soapatterns.org/design_patterns/service_facade); it may help you understand why it's so attractive. Basically, you wrap your database access objects with transactions at the service layer, so a customerService object can wrap 1, 2, ... n transactions that either all fail or all succeed.

DI and inheritance

Another question has come up during my migration from an E3 application to a pure E4 one.
I have a structure that uses inheritance, as in the following picture.
There is an invocation sequence going from the AbstractRootEditor to the FormRootEditor to the SashCompositeSubView to the TableSubView.
There I want to use my EMenuService, but it is null because it can't be injected.
The AbstractRootEditor is the only class connected to the application model (as an MPart created from an MPartDescriptor).
I'd like to inject the EMenuService in the AbstractSubView anyway; otherwise I would have to carry the service through all of my classes. But I don't have an IEclipseContext there, because my AbstractSubView is not connected to the application model (or is it?).
Is there any chance to get the service injected in the AbstractSubView?
EDIT:
I noticed that injecting this in my AbstractSubView isn't possible (?), so I'm trying to get it into my TableSubView.
After Greg's comment I want to show some code:
in the AbstractRootEditor:
@PostConstruct
public final void createPartControl(Composite parent, @Active MPart mPart) {
    ...
    ContextInjectionFactory.make(TableSubView.class, mPart.getContext());
At first I got an exception saying that my TableSubView class had an invalid constructor, so the constructor there is now:
public TableSubView() {
    this.tableInputController = null;
}
as well as my field injection:
@Inject EMenuService eMenuService;
This is kind of not working; eMenuService is still null.
If you create your objects using ContextInjectionFactory they will be injected. Use:
MyClass myClass = ContextInjectionFactory.make(MyClass.class, context);
where context is an IEclipseContext (so you have to do this for every class, starting from one that is injected by Eclipse).
There is also a second version of ContextInjectionFactory.make which lets you provide two contexts, the second one being a temporary context that can contain additional values.

Autofac and Quartz.Net Integration

Does anyone have any experience integrating Autofac and Quartz.Net? If so, where is it best to control lifetime management: the IJobFactory, within the Execute of the IJob, or through event listeners?
Right now, I'm using a custom Autofac IJobFactory to create the IJob instances, but I don't have an easy way to plug an ILifetimeScope into the IJobFactory to ensure that any expensive resources injected into the IJob are cleaned up. The job factory just creates an instance of a job and returns it. Here are my current ideas (hopefully there are better ones...):
It looks like most Autofac integrations somehow wrap an ILifetimeScope around the unit of work they create. The obvious brute-force way seems to be to pass an ILifetimeScope into the IJob and have the Execute method create a child ILifetimeScope and instantiate any dependencies there (see the sketch after these ideas). This seems a little too close to a service locator pattern, which in turn seems to go against the spirit of Autofac, but it might be the most obvious way to ensure proper handling of a scope.
I could plug into some of the Quartz events to handle the different phases of the job execution stack, and handle lifetime management there. That would probably be a lot more work, but possibly worth it if it gives cleaner separation of concerns.
Ensure that an IJob is a simple wrapper around an IServiceComponent type, which would do all the work, and request it as Owned<T> or Func<Owned<T>>. I like how this seems to vibe more with Autofac, but I don't like that it's not strictly enforceable for all implementors of IJob.
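To make the first idea concrete, a minimal sketch (assuming Quartz.NET 2.x-style signatures; IExpensiveWorker is a hypothetical dependency, and the parent scope would have to be handed to the job by the custom job factory):

public class ScopedJob : IJob
{
    private readonly ILifetimeScope _parentScope; // provided by the custom IJobFactory

    public ScopedJob(ILifetimeScope parentScope)
    {
        _parentScope = parentScope;
    }

    public void Execute(IJobExecutionContext context)
    {
        // One child scope per execution; expensive dependencies resolved from it
        // are disposed when the scope is disposed.
        using (var jobScope = _parentScope.BeginLifetimeScope())
        {
            var worker = jobScope.Resolve<IExpensiveWorker>(); // hypothetical dependency
            worker.DoWork();
        }
    }
}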
Without knowing too much about Quartz.Net and IJobs, I'll venture a suggestion still.
Consider the following Job wrapper:
public class JobWrapper<T> : IJob where T : IJob
{
    private readonly Func<Owned<T>> _jobFactory;

    public JobWrapper(Func<Owned<T>> jobFactory)
    {
        _jobFactory = jobFactory;
    }

    void IJob.Execute(IJobExecutionContext context)
    {
        // Owned<T> carries its own lifetime scope, disposed when the wrapper is done.
        using (var ownedJob = _jobFactory())
        {
            var theJob = ownedJob.Value;
            theJob.Execute(context);
        }
    }
}
Given the following registrations:
builder.RegisterGeneric(typeof(JobWrapper<>));
builder.RegisterType<SomeJob>();
A job factory could now resolve this wrapper:
var job = _container.Resolve<JobWrapper<SomeJob>>();
Note: a lifetime scope will be created as part of the ownedJob instance, which in this case is of type Owned<SomeJob>. Any dependencies required by SomeJob that are InstancePerLifetimeScope or InstancePerDependency will be created and destroyed along with the Owned instance.
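For completeness, the job factory that resolves this wrapper might look roughly like this (a sketch assuming the Quartz.NET 2.x IJobFactory shape, not part of the original answer):

public class AutofacJobFactory : IJobFactory
{
    private readonly IContainer _container;

    public AutofacJobFactory(IContainer container)
    {
        _container = container;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        // Resolve JobWrapper<TConcreteJob>; the Owned<T> inside the wrapper
        // scopes the real job's dependencies per execution.
        var wrapperType = typeof(JobWrapper<>).MakeGenericType(bundle.JobDetail.JobType);
        return (IJob)_container.Resolve(wrapperType);
    }

    public void ReturnJob(IJob job)
    {
        // Nothing to do: disposal happens via Owned<T> in JobWrapper<T>.Execute.
    }
}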
Take a look at https://github.com/alphacloud/Autofac.Extras.Quartz. It is also available as a NuGet package: https://www.nuget.org/packages/Autofac.Extras.Quartz/
I know it's a bit late :)