What is the difference between factory and pipeline design patterns?

What is the difference between factory and pipeline design patterns?
I am asking because I need to make classes, each of which has a method that will transform textual data in a certain way.
I have other classes whose data needs to be transformed. However, the order and selection of the transformations depends on (and only on) the base class from which these classes inherit.
Is this somehow related to the pipeline and/or factory pattern?

Factory creates objects without exposing the instantiation logic to the client and refers to the newly created object through a common interface. So the goal is to make the client completely unaware of what concrete type of product it uses and how that instance is created.
public interface IFactory // used by clients
{
    IProduct CreateProduct();
}

public class FooFactory : IFactory
{
    public IProduct CreateProduct()
    {
        // create new instance of FooProduct
        // setup something
        // setup something else
        // return it
    }
}
All creation details are encapsulated. You can create the instance via a new() call, or clone some existing sample FooProduct. You can skip the setup, or read some data from a database first. Anything.
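For instance, here is a minimal sketch of the cloning variant (FooProduct and its Clone() method are assumptions for the example, not part of the code above):

// Illustrative sketch: a factory that clones a preconfigured prototype
// instead of constructing from scratch. FooProduct.Clone() is assumed.
public class CloningFooFactory : IFactory
{
    private readonly FooProduct _sample;

    public CloningFooFactory(FooProduct sample)
    {
        _sample = sample; // a fully configured prototype
    }

    public IProduct CreateProduct()
    {
        return _sample.Clone(); // the client never learns how the copy is made
    }
}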
Here we go to Pipeline. The purpose of a pipeline is to divide a larger processing task into a sequence of smaller, independent processing steps (filters). If the creation of your objects is a large task AND the setup steps are independent, you can use a pipeline for the setup inside the factory. But the instantiation step is definitely not independent in this case: it must occur prior to the other steps.
So, you can provide filters (i.e. a pipeline) to set up your product:
public class BarFilter : IFilter
{
    private IFilter _next;

    public IProduct Setup(IProduct product)
    {
        // do Bar setup
        if (_next == null)
            return product;
        return _next.Setup(product);
    }
}
public abstract class ProductFactory : IProductFactory
{
    protected IFilter _filter;

    public IProduct CreateProduct()
    {
        IProduct product = InstantiateProduct();
        if (_filter == null)
            return product;
        return _filter.Setup(product);
    }

    protected abstract IProduct InstantiateProduct();
}
And in concrete factories you can set up a custom chain of filters for your setup pipeline.
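For illustration, a concrete factory might wire its pipeline like this (a sketch; the chaining constructor BarFilter(IFilter next) and the BazFilter class are assumptions):

public class FooProductFactory : ProductFactory
{
    public FooProductFactory()
    {
        // Order matters in a linear setup pipeline: Bar runs before Baz.
        _filter = new BarFilter(new BazFilter(null));
    }

    protected override IProduct InstantiateProduct()
    {
        return new FooProduct(); // instantiation always happens first
    }
}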

Factory is responsible for creating objects:
ICar volvo = CarFactory.BuildVolvo();
ICar bmw = CarFactory.BuildBMW();
IBook pdfBook = BookFactory.CreatePDFBook();
IBook htmlBook = BookFactory.CreateHTMLBook();
Pipeline will help you to separate processing into smaller tasks:
var searchQuery = new SearchQuery();
searchQuery.FilterByCategories(categoryCriteria);
searchQuery.FilterByDate(dateCriteria);
searchQuery.FilterByAuthor(authorCriteria);
There are also linear and non-linear pipelines. A linear pipeline would require us to filter by category, then by date, and then by author. A non-linear pipeline would allow us to run these simultaneously or in any order.
This article explains it quite well:
http://www.cise.ufl.edu/research/ParallelPatterns/PatternLanguage/AlgorithmStructure/Pipeline.htm
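As a rough sketch of the linear case (hypothetical types, not from the article), each step feeds the next in a fixed order:

// Minimal linear pipeline sketch: each step consumes the previous step's
// output, so the order of the steps is fixed.
public interface IStep<T>
{
    T Process(T input);
}

public class LinearPipeline<T>
{
    private readonly List<IStep<T>> _steps = new List<IStep<T>>();

    public LinearPipeline<T> Add(IStep<T> step)
    {
        _steps.Add(step);
        return this; // allows fluent chaining: pipeline.Add(a).Add(b)
    }

    public T Run(T input)
    {
        foreach (var step in _steps)
            input = step.Process(input);
        return input;
    }
}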

Related

Wicket: how to combine CompoundPropertyModel and LoadableDetachableModel

I want to achieve two goals:
I want my model to be loaded from the DB every time it is in a life-cycle (for every request there will be just one query to the DB)
I want my model to be attached dynamically to the page, so that Wicket does all the property binding for me
In order to achieve these two goals I came to a conclusion that I need to use both CompoundPropertyModel and LoadableDetachableModel.
Does anyone know if this is a good approach?
Should I do new CompoundPropertyModel(myLoadableDetachableModel)?
Yes, you are right, it is possible to use
new CompoundPropertyModel<T>(new LoadableDetachableModel<T> { ... })
or use static creation (it does the same):
CompoundPropertyModel.of(new LoadableDetachableModel<T> { ... })
which has both the features of a compound model and of a lazy detachable model. Detaching also works correctly: when the CompoundPropertyModel is detached, it also proxies the detach call to the inner model that is used as the model object in this case.
I use it in many cases and it works fine.
EXPLANATION:
See how the CompoundPropertyModel class looks (I'm speaking about Wicket 1.6 right now):
public class CompoundPropertyModel<T> extends ChainingModel<T>
This means CompoundPropertyModel adds the property expression behavior to ChainingModel.
ChainingModel has the following field 'target' and a constructor that sets it:
private Object target;

public ChainingModel(final Object modelObject)
{
    ...
    target = modelObject;
}
This stores in 'target' a reference to the object or model.
When you call getObject(), it checks the target and proxies the call if the target is an instance of IModel:
public T getObject()
{
    if (target instanceof IModel)
    {
        return ((IModel<T>)target).getObject();
    }
    return (T)target;
}
Similar functionality is implemented for setObject(T), which sets the target, or proxies the call if the target is an instance of IModel:
public void setObject(T object)
{
    if (target instanceof IModel)
    {
        ((IModel<T>)target).setObject(object);
    }
    else
    {
        target = object;
    }
}
The same approach is used to detach the object; however, it first checks whether the target (model object) is detachable, in other words whether the target is an instance of IDetachable, which any IModel really is:
public void detach()
{
    // Detach nested object if it's a detachable
    if (target instanceof IDetachable)
    {
        ((IDetachable)target).detach();
    }
}

Resolving dependency based on custom criteria

My app relies on multiple event bus objects, which implement a basic publish/subscribe notification model (http://caliburn.codeplex.com/wikipage?title=The%20Event%20Aggregator).
What I want to do is share certain instances of aggregators with groups of components. Say I have a single event bus that's shared between components A, B, and C, and then another event bus that's shared between D, E, and F.
I essentially want to declare the event buses as singletons and inject them based on some criteria. I'd rather avoid subtyping the event buses just for the purposes of distinguishing resolution.
I've used the Google Guice IoC container in Java, which allows metadata-based resolution for a parameter. In other words, in Java it allowed me to do something equivalent to this.
Example:
public A([SpecialUseAggregator]IEventAggregator something)
public B([SpecialUseAggregator]IEventAggregator something)
public E([AnotherUseAggregator]IEventAggregator something)
public F([AnotherUseAggregator]IEventAggregator something)
Any suggestions?
Autofac does not have/use attributes for registration. One solution is to use the Named/Keyed registration feature.
So you need to register your two IEventAggregator instances with different names/keys, and when registering your consumer types A, B, etc. you can use WithParameter to tell Autofac which IEventAggregator it should use for the given instance:
var containerBuilder = new ContainerBuilder();
containerBuilder.Register(c => CreateAndConfigureSpecialEventAggregator())
    .Named<IEventAggregator>("SpecialUseAggregator");
containerBuilder.Register(c => CreateAndConfigureAnotherUseAggregator())
    .Named<IEventAggregator>("AnotherUseAggregator");

containerBuilder.RegisterType<A>().AsSelf()
    .WithParameter(ResolvedParameter
        .ForNamed<IEventAggregator>("SpecialUseAggregator"));
containerBuilder.RegisterType<B>().AsSelf()
    .WithParameter(ResolvedParameter
        .ForNamed<IEventAggregator>("SpecialUseAggregator"));
containerBuilder.RegisterType<C>().AsSelf()
    .WithParameter(ResolvedParameter
        .ForNamed<IEventAggregator>("AnotherUseAggregator"));
containerBuilder.RegisterType<D>().AsSelf()
    .WithParameter(ResolvedParameter
        .ForNamed<IEventAggregator>("AnotherUseAggregator"));

var container = containerBuilder.Build();
If you still would like to use attributes, you can do that with Autofac because it has all the required extension points; it just requires some more code to teach Autofac about your attribute and use it correctly.
If you are registering your types with scanning, you cannot easily use the WithParameter registration; however, you can use the metadata facility in Autofac:
Just create an attribute which will hold your EventAggreator key:
public class EventAggrAttribute : Attribute
{
    public string Key { get; set; }

    public EventAggrAttribute(string key)
    {
        Key = key;
    }
}
And attribute your classes:
[EventAggrAttribute("SpecialUseAggregator")]
public class AViewModel
{
    public AViewModel(IEventAggregator eventAggregator)
    {
    }
}
Then when you do the scanning you need to use the WithMetadataFrom to register the metadata:
containerBuilder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
    .Where(t => t.Name.EndsWith("ViewModel"))
    .OnPreparing(Method)
    .WithMetadataFrom<EventAggrAttribute>();
And finally you need the OnPreparing event where you do the metadata based resolution:
private void Method(PreparingEventArgs obj)
{
    // Metadata["Key"] is coming from EventAggrAttribute.Key
    var key = obj.Component.Metadata["Key"].ToString();
    ResolvedParameter resolvedParameter =
        ResolvedParameter.ForNamed<IEventAggregator>(key);
    obj.Parameters = new List<Parameter>() { resolvedParameter };
}
Here is a gist of a working unit test.
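For reference, a hedged sketch of what such a test might look like using NUnit and Moq (this is not the linked gist; it reuses the names from the examples above and assumes AViewModel exposes the injected aggregator as a public property):

[Test]
public void AViewModel_gets_the_special_aggregator()
{
    var builder = new ContainerBuilder();
    var special = new Mock<IEventAggregator>().Object;
    var another = new Mock<IEventAggregator>().Object;

    builder.Register(c => special).Named<IEventAggregator>("SpecialUseAggregator");
    builder.Register(c => another).Named<IEventAggregator>("AnotherUseAggregator");
    builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
        .Where(t => t.Name.EndsWith("ViewModel"))
        .OnPreparing(Method) // the OnPreparing handler shown above
        .WithMetadataFrom<EventAggrAttribute>();

    var container = builder.Build();
    var viewModel = container.Resolve<AViewModel>();

    // Assumes AViewModel stores the injected aggregator in a public property.
    Assert.AreSame(special, viewModel.EventAggregator);
}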

How do you refactor a God class?

Does anyone know the best way to refactor a God-object?
It's not as simple as breaking it into a number of smaller classes, because there is high method coupling. If I pull out one method, I usually end up pulling every other method out.
It's like Jenga. You will need patience and a steady hand, otherwise you have to recreate everything from scratch. Which is not bad, per se - sometimes one needs to throw away code.
Other advice:
Think before pulling out methods: on what data does this method operate? What responsibility does it have?
Try to maintain the interface of the god class at first and delegate calls to the newly extracted classes. In the end the god class should be a pure facade without its own logic; then you can keep it for convenience or throw it away and start using the new classes only (see the sketch after this list)
Unit tests help: write tests for each method before extracting it, to ensure you don't break functionality
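A minimal C# sketch of that facade stage, with illustrative names (none of them from any real code base):

// The god class keeps its public surface but delegates all work to the
// extracted classes; old callers keep compiling while the logic moves out.
public class GodClass
{
    private readonly BillingService _billing = new BillingService();
    private readonly ReportService _reports = new ReportService();

    public void ChargeCustomer(int customerId, decimal amount)
    {
        _billing.Charge(customerId, amount);
    }

    public string BuildMonthlyReport(int customerId)
    {
        return _reports.BuildMonthly(customerId);
    }
}

public class BillingService
{
    public void Charge(int customerId, decimal amount) { /* extracted logic */ }
}

public class ReportService
{
    public string BuildMonthly(int customerId) { return "..."; /* extracted logic */ }
}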
I assume "God Object" means a huge class (measured in lines of code).
The basic idea is to extract parts of its functions into other classes.
In order to find those you can look for
fields/parameters that often get used together; they might move together into a new class
methods (or parts of methods) that use only a small subset of the fields in the class; they might move into a class containing just those fields
primitive types (int, String, boolean); they are often really value objects waiting to come out, and once they are value objects, they often attract methods
the usage of the god object: are there different methods used by different clients? Those might go into separate interfaces, and those interfaces might in turn get separate implementations.
For actually doing these changes you should have some infrastructure and tools at your command:
Tests: have a (possibly generated) exhaustive set of tests ready that you can run often. Be extremely careful with changes you make without tests. I do those, but limit them to things like extract method, which I can do completely with a single IDE action.
Version control: you want a version control system that allows you to commit every 2 minutes without really slowing you down. SVN doesn't really work; Git does.
Mikado Method: the idea of the Mikado Method is to try a change. If it works, great. If not, take note of what is breaking, add those as dependencies of the change you started with, and roll back your changes. In the resulting graph, repeat the process with a node that has no dependencies yet. http://mikadomethod.wordpress.com/book/
According to the book "Object Oriented Metrics in Practice" by Lanza and Marinescu, the God Class design flaw refers to classes that tend to centralize the intelligence of the system. A God Class performs too much work on its own, delegating only minor details to a set of trivial classes and using the data from other classes.
The detection of a God Class is based on three main characteristics:
They heavily access data of other, simpler classes, either directly or via accessor methods.
They are large and complex.
They have a lot of non-communicative behavior, i.e., there is low cohesion between the methods belonging to the class.
Refactoring a God Class is a complex task, as this disharmony is often a cumulative effect of other disharmonies that occur at the method level. Therefore, performing such a refactoring requires additional and more fine-grained information about the methods of the class, and sometimes even about its inheritance context. A first approach is to identify clusters of methods and attributes that are tied together and to extract these islands into separate classes.
"Split Up God Class", a method from the book "Object-Oriented Reengineering Patterns", proposes to incrementally redistribute the responsibilities of the God Class either to its collaborating classes or to new classes that are pulled out of the God Class.
The book "Working Effectively with Legacy Code" presents some techniques such as Sprout Method, Sprout Class, Wrap Method to be able to test the legacy systems that can be used to support the refactoring of God Classes.
What I would do is sub-group the methods in the God Class which utilize the same class properties as inputs or outputs. After that, I would split the class into sub-classes, where each sub-class holds the methods of one sub-group together with the properties those methods utilize.
That way, each new class will be smaller and more cohesive (meaning that all its methods work on similar class properties). Moreover, there will be fewer dependencies for each new class we generate. After that, we can further reduce those dependencies, since we can now understand the code better.
In general, I would say that there are a couple of different methods depending on the situation at hand. As an example, let's say that you have a god class named "LoginManager" that validates user information, updates "OnlineUserService" so the user is added to the online user list, and returns login-specific data (such as the Welcome screen and one-time offers) to the client.
So your class will look something like this:
import java.util.ArrayList;
import java.util.List;

public class LoginManager {
    public void handleLogin(String hashedUserId, String hashedUserPassword) {
        String userId = decryptHashedString(hashedUserId);
        String userPassword = decryptHashedString(hashedUserPassword);

        if (!validateUser(userId, userPassword)) { return; }

        updateOnlineUserService(userId);
        sendCustomizedLoginMessage(userId);
        sendOneTimeOffer(userId);
    }

    public String decryptHashedString(String hashedString) {
        String userId = "";
        //TODO Decrypt hashed string for 150 lines of code...
        return userId;
    }

    public boolean validateUser(String userId, String userPassword) {
        //validate for 100 lines of code...
        List<String> userIdList = getUserIdList();
        if (!isUserIdValid(userId, userIdList)) { return false; }
        if (!isPasswordCorrect(userId, userPassword)) { return false; }
        return true;
    }

    private List<String> getUserIdList() {
        List<String> userIdList = new ArrayList<>();
        //TODO: Add implementation details
        return userIdList;
    }

    private boolean isPasswordCorrect(String userId, String userPassword) {
        boolean isValidated = false;
        //TODO: Add implementation details
        return isValidated;
    }

    private boolean isUserIdValid(String userId, List<String> userIdList) {
        boolean isValidated = false;
        //TODO: Add implementation details
        return isValidated;
    }

    public void updateOnlineUserService(String userId) {
        //TODO updateOnlineUserService for 100 lines of code...
    }

    public void sendCustomizedLoginMessage(String userId) {
        //TODO sendCustomizedLoginMessage for 50 lines of code...
    }

    public void sendOneTimeOffer(String userId) {
        //TODO sendOneTimeOffer for 100 lines of code...
    }
}
Now we can see that this class will be huge and complex. It is not a God class by the book definition yet, since it mostly works on its own data rather than reaching into other classes. But for the sake of argument, we can treat it as a God class and start refactoring.
One of the solutions is to create separate small classes which are used as members in the main class. Another is to separate the different behaviors into different interfaces and their respective classes, hiding the implementation details by making the helper methods private, and letting the main class use those interfaces to do its bidding.
So at the end, RefactoredLoginManager will look like this:
public class RefactoredLoginManager {
    IDecryptHandler decryptHandler;
    IValidateHandler validateHandler;
    IOnlineUserServiceNotifier onlineUserServiceNotifier;
    IClientDataSender clientDataSender;

    public void handleLogin(String hashedUserId, String hashedUserPassword) {
        String userId = decryptHandler.decryptHashedString(hashedUserId);
        String userPassword = decryptHandler.decryptHashedString(hashedUserPassword);

        if (!validateHandler.validateUser(userId, userPassword)) { return; }

        onlineUserServiceNotifier.updateOnlineUserService(userId);
        clientDataSender.sendCustomizedLoginMessage(userId);
        clientDataSender.sendOneTimeOffer(userId);
    }
}
DecryptHandler:
public class DecryptHandler implements IDecryptHandler {
    public String decryptHashedString(String hashedString) {
        String userId = "";
        //TODO Decrypt hashed string for 150 lines of code...
        return userId;
    }
}

public interface IDecryptHandler {
    String decryptHashedString(String hashedString);
}
ValidateHandler:
public class ValidateHandler implements IValidateHandler {
    public boolean validateUser(String userId, String userPassword) {
        //validate for 100 lines of code...
        List<String> userIdList = getUserIdList();
        if (!isUserIdValid(userId, userIdList)) { return false; }
        if (!isPasswordCorrect(userId, userPassword)) { return false; }
        return true;
    }

    private List<String> getUserIdList() {
        List<String> userIdList = new ArrayList<>();
        //TODO: Add implementation details
        return userIdList;
    }

    private boolean isPasswordCorrect(String userId, String userPassword) {
        boolean isValidated = false;
        //TODO: Add implementation details
        return isValidated;
    }

    private boolean isUserIdValid(String userId, List<String> userIdList) {
        boolean isValidated = false;
        //TODO: Add implementation details
        return isValidated;
    }
}
An important thing to note here is that the interfaces only have to include the methods used by other classes. So IValidateHandler looks as simple as this:
public interface IValidateHandler {
    boolean validateUser(String userId, String userPassword);
}
OnlineUserServiceNotifier:
public class OnlineUserServiceNotifier implements IOnlineUserServiceNotifier {
    public void updateOnlineUserService(String userId) {
        //TODO updateOnlineUserService for 100 lines of code...
    }
}

public interface IOnlineUserServiceNotifier {
    void updateOnlineUserService(String userId);
}
ClientDataSender:
public class ClientDataSender implements IClientDataSender {
    public void sendCustomizedLoginMessage(String userId) {
        //TODO sendCustomizedLoginMessage for 50 lines of code...
    }

    public void sendOneTimeOffer(String userId) {
        //TODO sendOneTimeOffer for 100 lines of code...
    }
}
Since both methods are accessed in RefactoredLoginManager, the interface has to include both methods:
public interface IClientDataSender {
    void sendCustomizedLoginMessage(String userId);
    void sendOneTimeOffer(String userId);
}
There are really two topics here:
Given a God class, how should its members be rationally partitioned into subsets? The fundamental idea is to group elements by conceptual coherence (often indicated by frequent co-usage in client modules) and by forced dependencies. Obviously the details of this are specific to the system being refactored. The outcome is a desired partition (set of groups) of the God class's elements.
Given a desired partition, actually making the change. This is difficult if the code base has any scale. Doing this manually, you are almost forced to retain the God class while you modify its accessors to instead call the new classes formed from the partitions. And of course you need to test, test, test, because it is easy to make a mistake when making these changes by hand. When all accesses to the God class are gone, you can finally remove it. This sounds great in theory, but it takes a long time in practice if you are facing thousands of compilation units, and you have to get the team members to stop adding accesses to the God interface while you do this.
One can, however, apply automated refactoring tools to implement this: with such a tool you specify the partition, and the tool then modifies the code base in a reliable way. Our DMS can implement this (see "Refactoring C++ God Classes") and has been used to make such changes across systems with 3,000 compilation units.

Decouple EF queries from BL - Extension Methods VS Class-Per-Query

I have read dozens of posts about the pros and cons of trying to mock / fake EF in the business logic.
I have not yet decided what to do, but one thing I know is that I have to separate the queries from the business logic.
In this post I saw that Ladislav has answered that there are 2 good ways:
Let them be where they are and use custom extension methods, query views, mapped database views or custom defining queries to define reusable parts.
Expose every single query as a method on some separate class. The method mustn't expose IQueryable and mustn't accept Expression as a parameter, i.e. the whole query logic must be wrapped in the method. But this will make your class covering related methods much like a repository (the only one which can be mocked or faked). This implementation is close to the implementation used with stored procedures.
Which method do you think is better, and why?
Are there ANY downsides to putting the queries in their own place? (Maybe losing some functionality from EF, or something like that?)
Do I have to encapsulate even the simplest queries, like:
using (MyDbContext entities = new MyDbContext())
{
    User user = entities.Users.Find(userId); // ENCAPSULATE THIS ?

    // Some BL Code here
}
So I guess your main point is testability of your code, isn't it? In that case you should start by counting the responsibilities of the method you want to test and then refactor your code following the single responsibility principle.
Your example code has at least three responsibilities:
Creating an object is a responsibility - the context is an object. Moreover, it is an object you don't want to use in your unit test, so you must move its creation elsewhere.
Executing a query is a responsibility. Moreover, it is a responsibility you would like to avoid in your unit test.
Doing some business logic is a responsibility.
To simplify testing you should refactor your code and divide those responsibilities to separate methods.
public class MyBLClass
{
    public void MyBLMethod(int userId)
    {
        using (IMyContext entities = GetContext())
        {
            User user = GetUserFromDb(entities, userId);

            // Some BL Code here
        }
    }

    protected virtual IMyContext GetContext()
    {
        return new MyDbContext();
    }

    protected virtual User GetUserFromDb(IMyContext entities, int userId)
    {
        return entities.Users.Find(userId);
    }
}
Now unit testing the business logic should be a piece of cake, because your unit test can inherit your class, fake the context factory method and the query execution method, and become fully independent of EF.
// NUnit unit test
[TestFixture]
public class MyBLClassTest : MyBLClass
{
    private class FakeContext : IMyContext
    {
        // Create just empty implementation of context interface
    }

    private User _testUser;

    [Test]
    public void MyBLMethod_DoSomething()
    {
        // Test setup
        int id = 10;
        _testUser = new User
        {
            Id = id,
            // rest is your expected test data - that is what faking is about
            // faked method returns simply data your test method expects
        };

        // Execution of method under test
        MyBLMethod(id);

        // Test validation
        // Assert something you expect to happen on _testUser instance
        // inside MyBLMethod
    }

    protected override IMyContext GetContext()
    {
        return new FakeContext();
    }

    protected override User GetUserFromDb(IMyContext context, int userId)
    {
        return _testUser.Id == userId ? _testUser : null;
    }
}
As you add more methods and your application grows, you will refactor those query execution methods and the context factory method into separate classes to follow the single responsibility principle on classes as well: you will get a context factory and either some query provider or, in some cases, a repository (but that repository will never return IQueryable or take an Expression as a parameter in any of its methods). This will also allow you to follow the DRY principle, where your context creation and the most commonly used queries are defined only once, in one central place.
So at the end you can have something like this:
public class MyBLClass
{
    private IContextFactory _contextFactory;
    private IUserQueryProvider _userProvider;

    public MyBLClass(IContextFactory contextFactory, IUserQueryProvider userProvider)
    {
        _contextFactory = contextFactory;
        _userProvider = userProvider;
    }

    public void MyBLMethod(int userId)
    {
        using (IMyContext entities = _contextFactory.GetContext())
        {
            User user = _userProvider.GetUser(entities, userId);

            // Some BL Code here
        }
    }
}
Where those interfaces will look like:
public interface IContextFactory
{
    IMyContext GetContext();
}

public class MyContextFactory : IContextFactory
{
    public IMyContext GetContext()
    {
        // Here belongs any logic necessary to create context.
        // If you for example want to cache context per HTTP request,
        // you can implement that logic here.
        return new MyDbContext();
    }
}
and
public interface IUserQueryProvider
{
    User GetUser(IMyContext context, int userId);

    // Any other reusable queries for user entities.
    // None of the queries returns IQueryable or accepts Expression as a parameter.
    // For example: IEnumerable<User> GetActiveUsers(IMyContext context);
}

public class MyUserQueryProvider : IUserQueryProvider
{
    public User GetUser(IMyContext context, int userId)
    {
        return context.Users.Find(userId);
    }

    // Implementation of other queries.
    // Only inside query implementations can you use extension methods on IQueryable.
}
Your test will now only use fakes for context factory and query provider.
// NUnit + Moq unit test
[TestFixture]
public class MyBLClassTest
{
    private class FakeContext : IMyContext
    {
        // Create just empty implementation of context interface
    }

    [Test]
    public void MyBLMethod_DoSomething()
    {
        // Test setup
        int id = 10;
        var user = new User
        {
            Id = id,
            // rest is your expected test data - that is what faking is about
            // faked method returns simply data your test method expects
        };

        var contextFactory = new Mock<IContextFactory>();
        contextFactory.Setup(f => f.GetContext()).Returns(new FakeContext());

        var queryProvider = new Mock<IUserQueryProvider>();
        queryProvider.Setup(f => f.GetUser(It.IsAny<IMyContext>(), id)).Returns(user);

        // Execution of method under test
        var myBLClass = new MyBLClass(contextFactory.Object, queryProvider.Object);
        myBLClass.MyBLMethod(id);

        // Test validation
        // Assert something you expect to happen on user instance
        // inside MyBLMethod
    }
}
It would be a little bit different in the case of a repository, which should have a reference to the context passed to its constructor prior to being injected into your business class.
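A minimal sketch of that repository variant (illustrative names, following the interfaces above):

// The context comes in through the constructor, so the repository must be
// created per unit of work rather than shared as a singleton.
public class UserRepository
{
    private readonly IMyContext _context;

    public UserRepository(IMyContext context)
    {
        _context = context; // injected per unit of work, not created here
    }

    public User Find(int userId)
    {
        return _context.Users.Find(userId);
    }
}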
Your business class can still define some queries which are never used in any other class - those queries are most probably part of its logic. You can also use extension methods to define reusable parts of queries, but you must always use those extension methods outside of the core business logic you want to unit test (either in the query execution methods or in the query provider / repository). That will make it easy to fake the query provider or the query execution methods; a sketch follows.
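For example (a sketch; the IsActive flag is a hypothetical property of User):

// A reusable query part as an extension method. It is only ever called
// inside the query provider, so IQueryable never leaks into the BL.
public static class UserQueryExtensions
{
    public static IQueryable<User> Active(this IQueryable<User> users)
    {
        return users.Where(u => u.IsActive);
    }
}

// Inside the query provider implementation:
public IEnumerable<User> GetActiveUsers(IMyContext context)
{
    return context.Users.Active().ToList(); // materialized, no IQueryable escapes
}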
I saw your previous question and thought about writing a blog post about that topic but the core of my opinion about testing with EF is in this answer.
Edit:
A repository is a different topic which doesn't relate to your original question. A specific repository is still a valid pattern. We are not against repositories; we are against generic repositories, because they don't provide any additional features and don't solve any problem.
The problem is that a repository alone doesn't solve anything. There are three patterns which have to be used together to form a proper abstraction: Repository, Unit of Work and Specification. All three are already available in EF: DbSet / ObjectSet as repositories, DbContext / ObjectContext as units of work, and LINQ to Entities as specifications. The main problem with the custom implementations of generic repositories mentioned everywhere is that they replace only the repository and the unit of work with custom implementations but still depend on the original specifications => the abstraction is incomplete, and it leaks in tests where the faked repository behaves in exactly the same way as a faked set / context.
The main disadvantage of my query provider is an explicit method for every query you need to execute. With a repository you will not have such methods; you will have just a few methods accepting a specification (but again, those specifications should be defined following the DRY principle) which will build the query's filtering conditions, ordering, etc.:
public interface IUserRepository
{
    User Find(int userId);
    IEnumerable<User> FindAll(ISpecification spec);
}
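A specification could be shaped roughly like this (a hypothetical sketch, just to make the idea concrete):

// The specification encapsulates the filtering condition; the repository
// applies it internally, so neither IQueryable nor Expression leaks out.
public interface ISpecification
{
    IQueryable<User> SatisfyingElementsFrom(IQueryable<User> candidates);
}

public class ActiveUsersSpecification : ISpecification
{
    public IQueryable<User> SatisfyingElementsFrom(IQueryable<User> candidates)
    {
        return candidates.Where(u => u.IsActive); // IsActive assumed for the example
    }
}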
The discussion of this topic is far beyond the scope of this question and it requires you to do some self study.
Btw. mocking and faking have different purposes: you fake a call if you need to get test data from a method in the dependency, and you mock the call if you need to assert that a method on the dependency was called with the expected arguments.
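In Moq terms, using the test above (a sketch):

// Faking: supply the data the method under test needs.
queryProvider.Setup(p => p.GetUser(It.IsAny<IMyContext>(), id)).Returns(user);

// Mocking: assert that the dependency was called with the expected arguments.
queryProvider.Verify(p => p.GetUser(It.IsAny<IMyContext>(), id), Times.Once());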

Replace registration in Autofac

I have an application which does data processing. There is
class Pipeline {
    IEnumerable<IFilter> Filters { get; set; }
I register filter implementations as
builder.RegisterType<DiversityFilter>().As<IFilter>();
builder.RegisterType<OverflowFilter>().As<IFilter>();
...
So far so good. Now, for experimentation and fine-tuning, I want to be able to override any filter implementation in a config file with a program (script) which would read data from stdin, process it, and send the data to stdout. I've implemented a module with "fileName", "args" and "insteadOf" custom properties, described the module in XML, and got it called.
In the module I register my "ExecutableFilter", but how do I make it run "instead of" the desired service? If I try to do it like this:
builder.RegisterType<ExecutableFilter>().As<DiversityFilter>()
then I get an exception: "The type 'ExecutableFilter' is not assignable to service 'DiversityFilter'." OK, this is logical. But what are my options then?
Once you've overridden the registration for IFilter "After" with your wire-tap, you won't be able to resolve it from the container, as the new registration will be activated instead, hence the circular lookup.
Instead, create and register a module that hooks into the filter's creation, and replaces the instance with the 'wire tapped' one:
class WiretapModule : Module
{
    protected override void AttachToComponentRegistration(
        IComponentRegistry registry,
        IComponentRegistration registration)
    {
        if (registration.Services.OfType<KeyedService>().Any(
            s => s.ServiceKey == After && s.ServiceType == typeof(IFilter)))
        {
            registration.Activating += (s, e) => {
                e.Instance = new WireTap((IFilter)e.Instance, new ExecuteProvider(fileName, args));
            };
        }
    }
}
(Cross-posted to the Autofac group: https://groups.google.com/forum/#!topic/autofac/yLbTeuCObrU)
What you describe is part container work, part business logic. The challenge is to keep separation of concerns here. IMO, the container should do what it is supposed to do, that is, building and serving up instances or collections thereof. It should not do the "instead of" logic in this case. I would rather "enrich" the services with enough information so that the pipeline can make the decision.
The "enrichment" can be accomplished by making the ExecutableFilter implement a more distinct interface.
interface IInsteadOfFilter : IFilter { }
...
builder.RegisterType<ExecutableFilter>().As<IFilter>();
...
class Pipeline
{
    IEnumerable<IFilter> Filters { get; set; }

    public void DoTheFiltering()
    {
        IEnumerable<IFilter> filters = Filters.OfType<IInsteadOfFilter>();
        if (!filters.Any())
            filters = Filters;
        foreach (var filter in filters)
        {
            ...
        }
    }
}
You could also solve this using the metadata infrastructure, which gives us an even more expressive way of differentiating services.
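A rough sketch of that metadata-based variant (the "InsteadOf" key and the wiring are assumptions, not an established convention):

// Each filter can carry an optional "InsteadOf" marker; the pipeline then
// skips any filter that another filter declares itself a replacement for.
builder.RegisterType<ExecutableFilter>()
    .As<IFilter>()
    .WithMetadata("InsteadOf", typeof(DiversityFilter));

class Pipeline
{
    private readonly IEnumerable<Meta<IFilter>> _filters;

    public Pipeline(IEnumerable<Meta<IFilter>> filters)
    {
        _filters = filters; // Autofac's Meta<T> pairs each instance with its metadata
    }

    public void Run()
    {
        var replaced = new HashSet<Type>(_filters
            .Where(f => f.Metadata.ContainsKey("InsteadOf"))
            .Select(f => (Type)f.Metadata["InsteadOf"]));

        foreach (var filter in _filters.Where(f => !replaced.Contains(f.Value.GetType())))
        {
            // filter.Value.Filter(...);
        }
    }
}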