Can an Autofac Decorator be registered to override existing registrations?

Given multiple types are registered with the same interface, Autofac uses the last type registered when resolving a component.
An exception to this behavior seems to be Autofac's decorator feature. If a type is registered as IDoSomething either before or after it is decorated using the standard process (registering a keyed type and then registering a decorator with the same key), the instance returned appears to be the non-decorated type.
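For reference, that standard keyed-decorator process looks roughly like this; a minimal sketch, with the concrete type names invented for illustration:

var builder = new ContainerBuilder();

// Register the implementation under a key...
builder.RegisterType<DoSomething>().Named<IDoSomething>("implementation");

// ...then register the decorator against that key.
builder.RegisterDecorator<IDoSomething>(
    (ctx, inner) => new DoSomethingDecorator(inner),
    fromKey: "implementation");

// A later (or earlier) non-keyed registration appears to win when
// IDoSomething is resolved, bypassing the decorator:
builder.RegisterType<SpecialDoSomething>().As<IDoSomething>();

var container = builder.Build();
var instance = container.Resolve<IDoSomething>(); // non-decorated type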
I'm running into this situation with a library I've written that decorates instances of IConsumer. Existing clients of this library currently have modules that first register everything by convention, and then specific modules that override registrations with special needs (e.g. a different lifetime scope, custom factory logic, etc.). The library works fine if the convention-based module is modified to exclude types implementing IConsumer, but I'd rather not require that exception: it's inconsistent with how other registrations are handled, and it's a hassle to debug if you forget (or don't know) to exclude those types before wiring up the decorators.
Is there a better solution for this?

Related

Scala: Dependency Injection via Reader vs parameter list

There are several options for injecting dependencies in FP. Here I want to compare only these two:
1. Injection via parameter list
2. Injection via Reader
The second case is more composable and less verbose when I invoke several methods with dependencies: it lets me pass a dependency once. But I still don't have a firm feel, or exact rules, for when it is better to pass dependencies via Reader and when it isn't.
For instance, passing an external dependency to a service is more convenient via Reader. But a Map instance or some DTO/case-class object can be considered just a holder of input data attributes, and seems more natural as a plain parameter.
In my experience so far, I've found that when Readers with different sets of dependencies are used, transforming them into one another gets verbose, and sometimes the code does not look clear.
I know such questions are not a great fit for Stack Overflow, but I believe this issue is not subjective and that concrete arguments can be given for choosing the right option.
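To make the comparison concrete, here is a minimal sketch of the two styles. The Reader is hand-rolled to keep the example self-contained (cats and scalaz ship full-featured versions), and Db/find/greet are illustrative names:

// Minimal Reader: a wrapped function from the environment E to a result A.
case class Reader[E, A](run: E => A) {
  def map[B](f: A => B): Reader[E, B] = Reader(e => f(run(e)))
  def flatMap[B](f: A => Reader[E, B]): Reader[E, B] =
    Reader(e => f(run(e)).run(e))
}

trait Db { def find(id: Int): String }

// 1. Injection via parameter list: every caller must thread db through.
def findUser(id: Int)(db: Db): String  = db.find(id)
def greetUser(id: Int)(db: Db): String = s"Hello, ${findUser(id)(db)}"

// 2. Injection via Reader: composition via map/flatMap, dependency passed once.
def findUserR(id: Int): Reader[Db, String] = Reader(_.find(id))
def greetUserR(id: Int): Reader[Db, String] =
  findUserR(id).map(name => s"Hello, $name")

val db = new Db { def find(id: Int) = s"user$id" }
greetUserR(42).run(db) // the dependency is supplied exactly once, here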

Method contract without Traits in Scala

I'm trying to add some reusability to a Java library which has some common methods across classes, but whose methods are not part of a common hierarchy. I'm pretty certain I've previously seen that Scala allows non-trait-based contracts for parameter classes, but for the life of me I cannot find this information anywhere at the moment.
Does my memory serve me correctly? Would anybody be able to point me in the right direction for documentation on said language feature (if I am not mistaken)?
For some added context, I'm trying to reduce duplicate code when using some Google Java libraries, where methods like getNextPageToken() and setPageToken() are common between many classes but are not implemented further up the hierarchy, so I don't have the option of specifying a common parent class as the parameter type. Essentially, I'd like to enforce that these methods exist and offload the duplicate request and pagination code to a common function using such method contracts.
You probably want to use structural types:
example:
def method(param: { def getNextPageToken(): Unit }): Unit =
  param.getNextPageToken()
param will be required to have a getNextPageToken method taking no parameters and returning Unit. The call is handled using reflection, so in Scala 2.10+ you also need to enable the scala.language.reflectiveCalls feature.
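For a self-contained illustration, here is a hypothetical class with the right shape being passed in:

import scala.language.reflectiveCalls // structural-type calls use reflection

// Hypothetical class: shares no trait or superclass with anything else.
class ListFilesRequest {
  def getNextPageToken(): Unit = println("fetched next page token")
}

def method(param: { def getNextPageToken(): Unit }): Unit =
  param.getNextPageToken()

method(new ListFilesRequest()) // accepted purely because the shape matches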

Creating Self-Documenting Actors in Scala

I'm looking at implementing a JSON-RPC based web service in Scala using Finagle. I'm trying to work out how best to structure the RPC invocation code (i.e. taking the deserialized request and invoking the appropriate method).
The service needs to be able to spit out a help page listing all the requests it accepts and their parameters. In Java, I would simply use annotations (to both expose and document functions) and then have the RPC service reflect on the appropriate classes, detect all exposed methods, and use the reflected MethodInfos to invoke the functions where appropriate.
What is the idiomatic Scala way to achieve something similar? Should I use a message-passing approach (i.e. just pass a request object into an actor and have it determine whether it can invoke it)?
We had success with something similar to the approach suggested by @Jan above. More specifically, we defined a parent class for all request objects, which takes the expected return type as a type parameter. Going one step further, we generate our protocol IDL and serialization bindings by reflecting on API objects (little more than sets of requests).
In the future, the experimental typed channels feature in Akka may help with some of the mechanics.
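As a hedged sketch of that pattern (the names, fields, and doc strings are illustrative, not the actual API):

// Parent class for all requests, carrying the expected response type
// as a type parameter plus metadata for the generated help page.
abstract class Request[Resp](val name: String, val doc: String)

case class GetUser(id: Long)
  extends Request[String]("getUser", "Returns the user's display name")

case class AddNumbers(a: Int, b: Int)
  extends Request[Int]("addNumbers", "Returns a + b")

// The help page falls out of the request metadata:
val help = Seq(GetUser(0), AddNumbers(0, 0))
  .map(r => s"${r.name}: ${r.doc}")
  .mkString("\n")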

Is there a way around a GWT compiler/serializer/linker issue?

Let's say I have a class...
com.mycom.app.AbstractMessage
There is another class,
com.mycom.model.QueryResponse
QueryResponse extends AbstractMessage; notice they are in different packages.
com.mycom.model is a GWT module, set up accordingly in its module XML.
When I compile model there are no errors. However, when I try to use QueryResponse in another GWT module, I get a runtime error:
"No source code is available for type com.mycom.app.AbstractMessage; did you forget to inherit a required module"
This leads me to believe that AbstractMessage was not compiled, or not compiled correctly, to begin with; understandably so, because I DO NOT WANT the "app" package to be a GWT module.
In other words, I only want to compile the classes in "model" and not any superclasses. How can I tell the GWT compiler/RPC/linker/serializer not to do so?
That is, is there a way to tell GWT not to walk beyond certain classes when serializing/compiling?
I am doing this in a source environment where we have a lot of packages, most of which depend on MODEL only, and I DO NOT want to make a GWT module out of every package just so it compiles.
Thoughts anyone?
I did a little research on this one. You are right: GWT will look for all implementations of an abstract class, if and only if the AbstractClass is referenced in a GWT-RPC (Async) service interface, even though some of them are in non-GWT packages.
Let's say an object of type AbstractClass comes in over the network, and the GWT deserializer is now tasked with converting the network data into a specific instance. It needs to know about all implementations of AbstractClass to find out which one is coming over the network right now! To accomplish this it generates, at compile time, a .rpc file for each GWT service interface, listing all possible concrete types that the service methods can return.
Ray Ryan (a Google employee) once mentioned that it is a bad idea to use interfaces as arguments or return types in any RPC interface, because it makes it difficult for the deserializer to know the exact type.
You can hand-edit the generated .rpc file and remove the offending types, or mark the other implementations as non-serializable by not implementing Serializable in those implementations in other packages.
A better way could be this: I suspect you wrote "implements java.io.Serializable" at the top level (for the AbstractClass itself); maybe it's now time to move it to each implementation.
Now the GWT RPC deserializer's task is clear and straightforward: it knows that only certain implementations of the AbstractClass (the serializable ones) will come over the network, and it will reach and compile only those. It will not compile the other, non-serializable subclasses of your AbstractClass, since it knows they aren't serializable.
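A minimal sketch of that move (names are illustrative, and in real code each public class lives in its own file):

// Before, "implements java.io.Serializable" sat on the abstract base,
// pulling every discoverable subclass into the serialization policy.
public abstract class AbstractMessage {
    // common state and behaviour, but no Serializable marker
}

// After: only the subclasses that actually cross the wire opt in.
public class QueryResponse extends AbstractMessage
        implements java.io.Serializable {
    // fields sent over GWT-RPC
}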
There is one more option. If, as I suspect, you are using the command pattern: I have seen that all the abstract interfaces and superclasses for Command, Response, etc. always go in the client-side packages, i.e. those that are GWT-compiled. They are referenceable, usable, and instantiable from the server end of the application, so these source files are compiled twice: once by GWT into JavaScript for browser usage, and once by javac into bytecode to allow references from the server side. Thus in all GWT modules, including gwt-user.jar, if you open them with 7-Zip or WinZip you will see source and class files JARed together.
I recommend moving AbstractMessage into the model package, as it is the superclass of the model's QueryResponse.
Also, inheritance in models is only a good idea if you have zero fields and only methods (behaviour) in the superclass.
Lastly, if GWT is to turn your QueryResponse into JavaScript, it needs ALL types mentioned in the source file in order to compile properly. So do not mention any server-only classes in a source file meant to become JavaScript.
Have one region with all the server-side Java classes that will run in a JVM on the server, and another region full of source files that will be compiled into JavaScript by the GWT compiler. The server-side code/classes CAN refer to client-region code/classes, but definitely NOT vice versa. Make sure that no code that's going to become JavaScript refers (even via an unused import statement) to a server-side class.
The GWT compiler works with source files only; however, you need to compile the client code into .class files so your server-side classes can refer to them.

Creating A Single Generic Handler For Agatha?

I'm using the Agatha request/response library (and StructureMap, as utilized by Agatha 1.0.5.0) for a service layer that I'm prototyping, and one thing I've noticed is the large number of handlers that need to be created. It generally makes sense that any request/response type pair would need its own handler. However, as this scales to a large enterprise environment, that's going to be A LOT of handlers.
What I've started doing is dividing the enterprise domain into logical processor classes (dozens of processors instead of many hundreds, or possibly eventually thousands, of handlers). The convention is that each request/response type pair (all of which inherit from a domain base request/response pair, which in turn inherit from Agatha's) gets exactly one function in a processor somewhere.
The generic handler (which inherits from Agatha's RequestHandler) then uses reflection in the Handle method to find the method for the given TREQUEST/TRESPONSE and invoke it. If it can't find one or if it finds more than one, it returns a TRESPONSE containing an error message (messages are standardized in the domain's base response class).
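A hedged sketch of what that reflective Handle method can look like; it assumes Agatha's RequestHandler<TRequest, TResponse> exposes an overridable Handle(TRequest), and the injected processor collection and the error-message convention stand in for the real domain types:

using System.Linq;

public class GenericHandler<TRequest, TResponse> : RequestHandler<TRequest, TResponse>
    where TRequest : Request
    where TResponse : Response, new()
{
    private readonly object[] processors;

    public GenericHandler(object[] processors)
    {
        this.processors = processors;
    }

    public override Response Handle(TRequest request)
    {
        // Find the single processor method taking TRequest and returning TResponse.
        var matches = processors
            .SelectMany(p => p.GetType().GetMethods(),
                        (p, m) => new { Processor = p, Method = m })
            .Where(x => x.Method.ReturnType == typeof(TResponse)
                     && x.Method.GetParameters().Length == 1
                     && x.Method.GetParameters()[0].ParameterType == typeof(TRequest))
            .ToList();

        if (matches.Count != 1)
        {
            var error = new TResponse();
            // populate the domain's standardized error message here
            return error;
        }

        return (TResponse)matches[0].Method.Invoke(
            matches[0].Processor, new object[] { request });
    }
}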
The goal here is to allow developers across the enterprise to just concern themselves with writing their request/response types and processor functions in the domain and not have to spend additional overhead creating handler classes which would all do exactly the same thing (pass control to a processor function).
However, it seems that I still need to have defined a handler class (albeit empty, since the base handler takes care of everything) for each request/response type pair. Otherwise, the following exception is thrown when dispatching a request to the service:
StructureMap Exception Code: 202
No Default Instance defined for PluginFamily Agatha.ServiceLayer.IRequestHandler`1[[TSFG.Domain.DTO.Actions.HelloWorldRequest, TSFG.Domain.DTO, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], Agatha.ServiceLayer, Version=1.0.5.0, Culture=neutral, PublicKeyToken=6f21cf452a4ffa13
Is there a way that I'm not seeing to tell StructureMap and/or Agatha to always use the base handler class for all request/response type pairs? Or maybe to use Reflection.Emit to generate empty handlers in memory at application start just to satisfy the requirement?
I'm not 100% familiar with these libraries and am learning as I go along, but so far my attempts at both those possible approaches have been unsuccessful. Can anybody offer some advice on solving this, or perhaps offer another approach entirely?
I'm not familiar with Agatha. But if you want all requests for IRequestHandler<T> to be fulfilled by BaseHandler<T>, you can use the following StructureMap registration:
For(typeof(IRequestHandler<>)).Use(typeof(BaseHandler<>));
When something asks for an IRequestHandler<Foo>, it should get a BaseHandler<Foo>.
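For completeness, here is roughly how that plays out end to end (StructureMap 2.x syntax; BaseHandler<> stands for the generic handler from the question, and exactly where this registration hooks into Agatha's own container bootstrapping may vary):

// Open-generic fallback: any IRequestHandler<T> without an explicit
// registration is satisfied by BaseHandler<T>.
ObjectFactory.Initialize(x =>
{
    x.For(typeof(IRequestHandler<>)).Use(typeof(BaseHandler<>));
});

// Resolving the handler for the request type from the exception above
// now succeeds without an empty per-request subclass:
var handler = ObjectFactory.GetInstance<IRequestHandler<HelloWorldRequest>>();
// handler is a BaseHandler<HelloWorldRequest>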