Thread safety and the MEF CompositionContainer

I'm planning on using MEF within ASP.NET and am looking for some insight into the thread safety of the CompositionContainer.
My first approach associated a distinct CompositionContainer with each request, but I'm worried that's going to be expensive and not scale very well; on the other hand, the CompositionContainer supports thread-safe operations through a simple flag in the constructor.
I've also considered the hybrid approach where I might use a thread-safe static CompositionContainer and one which is tied to each request.
Besides the thread-safety argument, I'm relying heavily on ExportFactory to be able to construct objects as needed. I'm still bugged by this ExportLifetimeContext thing, though, and I'm uncertain about the resource requirements of this approach.
Anyone got some insight into this?

Creating CompositionContainers is cheap so it should be fine to create one for each request. Creating the catalog is not as cheap, but the catalogs are thread-safe so you should be able to create a global one on startup and use that for each request.
One thing to be aware of is that even "thread-safe" composition containers aren't thread-safe for operations which can cause recomposition, such as changes to the catalogs, or calling the Compose methods on the container.
As for the hybrid approach, you would go that route if you have some parts (which should be thread-safe) that you want to be shared between requests and some parts that you want to be request-specific. In that case only the shared container would need to be created as thread-safe.
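For illustration, here's a minimal sketch of that setup; the class names (PartsCatalog, Composition) and the single-assembly catalog are assumptions, not something from the original question:

    using System.ComponentModel.Composition.Hosting;
    using System.Reflection;

    public static class PartsCatalog
    {
        // Building the catalog is the expensive part, but catalogs are thread-safe,
        // so build one at application startup and reuse it for every request.
        public static readonly AggregateCatalog Catalog = new AggregateCatalog(
            new AssemblyCatalog(Assembly.GetExecutingAssembly()));
    }

    public static class Composition
    {
        // Containers are cheap, so one per request is fine; this one does not
        // need the thread-safe flag because it never leaves its request.
        public static CompositionContainer CreatePerRequestContainer()
        {
            return new CompositionContainer(PartsCatalog.Catalog, isThreadSafe: false);
        }

        // A container shared across requests (the "hybrid" approach) must be
        // created with the thread-safe flag.
        public static CompositionContainer CreateSharedContainer()
        {
            return new CompositionContainer(PartsCatalog.Catalog, isThreadSafe: true);
        }
    }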


Why using AndroidViewModel?

I read a lot of ViewModels derived from AndroidViewModel, which of course then requires an application reference.
class SomeViewModel(application: Application) : AndroidViewModel(application)
But why would one do this? It hurts me to see application handed over to ViewModel. What would be an acceptable use case for this?
If there is any reason to use AndroidViewModel, can one not derive from ViewModel and use dagger2 to inject the application?
Not all codebases are created equal. AndroidViewModel can be a useful tool for incremental refactoring in "legacy" codebases that don't have many abstractions or layering in place (read: Activity/Fragment god objects).
As a bridge from a "legacy" codebase, it makes sense to use it in this situation.
But why would one do this? It hurts me to see application handed over to ViewModel. What would be an acceptable use case for this?
The use case for AndroidViewModel is for accessing the Application. In a "legacy" codebase, it's relatively safe to move "Context/Application-dependent code" out of Activities and Fragments without requiring a risky refactor. Accessing Application in the view model will be necessary in that scenario.
If there is any reason to use AndroidViewModel, can one not derive from ViewModel and use dagger2 to inject the application?
If you're not injecting anything else, then at best it's a convenient way to get an Application reference without having to type cast or use any DI at all.
If you're injecting other members, be it with a DI framework or ViewModelFactory, it's a matter of preference.
If you're injecting a ViewModel directly into your Activity/Fragment, you're losing the benefits the platform is providing you with. You'll have to manually scope the lifecycle of your VM and manually clear your VM for your UI's lifecycle unless you also mess around with ViewModelStores or whatever other components are involved in retention. At that point, it's a view model by name only.
Because it requires an application reference, it can provide a Context, which can be useful (e.g. for a system service).
See - AndroidViewModel vs ViewModel

Encapsulating an external data source in a repository pattern

I am creating the high level design for a new service. The complexity of the service warrants using DDD (I think). So I did the conventional thing and created domain services, aggregates, repositories, etc. My repositories encapsulate the data source. So a query can look for an object in the cache, failing that look in the db, failing that make a REST call to an external service to fetch the required information. This is fairly standard.
Now the argument put forward by my colleagues is that abstracting the data source this way is dangerous because the developer using the repository will not be aware of the time required to execute the api and consequently not be able to calculate the execution time for any apis he writes above it. Maybe he would want to set up his component's behaviour differently if he knew that his call would result in a REST call. They are suggesting I move the REST call outside of the repository and maybe even the caching strategy along with it.
I can see their point, but the whole idea behind the repository pattern is precisely to hide this kind of information and not have each component deal with caching strategies and data access. My question is, is there a pattern or model which addresses this concern?
They are suggesting I move the REST call outside of the repository
Then you won't have a repository. The repository means we don't know persistence details, not that we don't know there is persistence. Every time we're using a repository, regardless of its implementation (from an in-memory list to a REST call), we expect 'slowness', because it's common knowledge that persistence is usually the bottleneck.
Someone who will use a certain repository implementation (like a REST-based one) will know it has to deal with latency and transient errors. A service having just an IRepository dependency still knows it deals with persistence.
About caching strategies, you can have some service-level (more generic) caching and repository-level (persistence-specific) caching. These should probably stay implementation details.
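A rough sketch of repository-level caching as a decorator; the Order/IOrderRepository names are made up for illustration, and callers only ever see the interface:

    using System;
    using System.Collections.Concurrent;

    public class Order
    {
        public Guid Id { get; set; }
    }

    public interface IOrderRepository
    {
        Order GetById(Guid id);
    }

    // Persistence-specific caching kept as an implementation detail: the inner
    // repository might be DB- or REST-backed, and callers never know the cache exists.
    public class CachingOrderRepository : IOrderRepository
    {
        private readonly IOrderRepository _inner;
        private readonly ConcurrentDictionary<Guid, Order> _cache =
            new ConcurrentDictionary<Guid, Order>();

        public CachingOrderRepository(IOrderRepository inner)
        {
            _inner = inner;
        }

        public Order GetById(Guid id)
        {
            return _cache.GetOrAdd(id, key => _inner.GetById(key));
        }
    }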
Now the argument put forward by my colleagues is that abstracting the data source this way is dangerous because the developer using the repository will not be aware of the time required to execute the api and consequently not be able to calculate the execution time for any apis he writes above it. Maybe he would want to set up his component's behaviour differently if he knew that his call would result in a REST call.
This is wasting time trying to complicate your life. The whole point of an abstraction is to hide the dirty details. What they suggest is basically: let's make the user aware of some implementation detail, so that the user can couple their code to it.
The point is, a developer should be aware of the api they're using. If a component is using an external service (db, web service), this should be known. Once you know there's data to be fetched, you know you'll have to wait for it.
If you go the DDD route then you have bounded contexts (BC). Making a model dependent on another BC is a very bad idea. Each BC should publish domain events, and each interested BC should subscribe and keep its very own model based on those events. This means the queries will be 'local', but you'll still be hitting a db.
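For example, a handler in the interested BC might keep its own local copy up to date from the publishing BC's events; everything here (CustomerRenamed, ICustomerReadStore) is a hypothetical sketch:

    using System;

    // Event published by the Customers bounded context.
    public class CustomerRenamed
    {
        public Guid CustomerId { get; set; }
        public string NewName { get; set; }
    }

    // Local read store owned by the subscribing BC (its "very own model").
    public interface ICustomerReadStore
    {
        void UpdateName(Guid customerId, string newName);
    }

    // Subscriber in the interested BC: queries stay local because the data
    // was copied in when the event arrived.
    public class CustomerProjection
    {
        private readonly ICustomerReadStore _store;

        public CustomerProjection(ICustomerReadStore store)
        {
            _store = store;
        }

        public void Handle(CustomerRenamed e)
        {
            _store.UpdateName(e.CustomerId, e.NewName);
        }
    }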
The Repository pattern aims to reduce coupling with the persistence layer. In my opinion, I wouldn't risk making a repository so full of responsibility.
You could use an Anti-Corruption Layer to guard against changes in the external service and a Proxy to hide the caching-related concerns.
Then, in the application layer, I would code the fallback strategy.
I think it all depends on where you think the fetching/fallback strategy belongs: in the Service layer or in the Infrastructure layer (the latter sounds more legitimate to me).
It could also be a mix of the two -- the Service is passed an ordered series of Repositories to use one after the other in case of failure. Construction of the series of Repos could be placed in the Infrastructure layer or somewhere else. Fallback logic in one place, fallback configuration in another.
As a side note, asynchrony seems like a good way to signal the users that something is potentially slow and would be blocking if you waited for it. Better than hiding everything behind a vanilla, inconspicuous Repository name and better than adding some big threatening "this could be slow" prefix to your type, IMO.
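A rough sketch of that mix, with hypothetical Product/IProductRepository types: the ordered fallback lives in one place, and the async signature is what signals the potential cost to callers:

    using System.Collections.Generic;
    using System.Threading.Tasks;

    public class Product
    {
        public string Sku { get; set; }
    }

    public interface IProductRepository
    {
        // Task-returning signature hints that the lookup may be slow (DB, REST, ...).
        Task<Product> FindAsync(string sku);
    }

    // Tries each source in order, e.g. cache, then database, then the external REST service.
    public class FallbackProductRepository : IProductRepository
    {
        private readonly IReadOnlyList<IProductRepository> _sources;

        public FallbackProductRepository(params IProductRepository[] sources)
        {
            _sources = sources;
        }

        public async Task<Product> FindAsync(string sku)
        {
            foreach (var source in _sources)
            {
                var product = await source.FindAsync(sku);
                if (product != null)
                    return product;
            }
            return null;
        }
    }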

Entity Framework and ObjectContext references

I am playing around with Entity Framework to see how it can be used in a new project I am working on. I put my edmx file in a class library so the Entities (and database access) can be used in multiple places. Currently I have a web project and a console project both referencing the class library.
One of my Entities has a Partial class defined with a static method. The purpose of the method is to accept some parameters and create one or more instances of the specific class. My first version of the method created an ObjectContext instance, created the Entity class (or classes), and returned the Entities to the calling method. The calling method then updated some properties and tried to save the Entities using a new ObjectContext instance. Obviously this did not work because the Entities were bound (correct term ??) to the Context created within the static method.
After some research, I modified the static method to also accept an ObjectContext reference to ensure that all the Entities were created and then later manipulated and saved using the same Context. This works fine, but the design just feels wrong.
Assuming that my one static method may grow into many more, or that my app (especially the web app) would probably benefit from additional layers (DAL or even a Service Layer), does it make sense for all these classes to require an ObjectContext parameter?
I have read in many postings that creating an ObjectContext via a Singleton pattern is a bad idea because "many clients would use the same object". My problem with that is I do not see how that is possible. In a local console app there is only a single user running the app. In a web app there would only be a single user on each request. Where is the user sharing problem? Not a single article/posting mentioned it... but were they referring to a Singleton pattern storing the object instance in the Application context?
I have also seen postings focused on web usage that store the object instance in the user's Session object via the HttpContext object. This makes sense but does not seem to address non-web usage.
I think that whatever solution is appropriate (static method, Factory object, etc.) would most likely be implemented in my class library, so obviously it needs to support both web and non-web solutions. Maybe check for HttpContext to determine what environment it is running in.
I was hoping http://www.west-wind.com/weblog/posts/2008/Feb/05/Linq-to-SQL-DataContext-Lifetime-Management would be informative but I am having a hard time wrapping my head around the post and the example code seems like overkill for instantiating and sharing a simple object. (Although I am sure I am just not getting it...)
Any thoughts are appreciated.
Thanks.
The issue is not that “many clients would use the same object.” The issue is that the ObjectContext is intended to be a single unit of work. If you use it for many different units of work, you will find that there are a number of problems.
Memory usage will grow and grow.
Your application will become slower as object fixup has to do increasing amounts of work.
Multithreading won't work.
The solution is to use the ObjectContext in the manner it is intended, namely, as a single unit of work.
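A minimal sketch of that usage, where MyEntities stands in for whatever your generated ObjectContext is called and Orders/Status are illustrative:

    using System.Linq;

    public class OrderShippingService
    {
        public void MarkShipped(int orderId)
        {
            // One ObjectContext per unit of work: create it, make the changes,
            // save, dispose. Nothing keeps accumulating in a long-lived context.
            using (var context = new MyEntities())
            {
                var order = context.Orders.First(o => o.Id == orderId);
                order.Status = "Shipped";
                context.SaveChanges();
            } // disposal releases the tracked entities and the underlying connection
        }
    }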

Arguments against Inversion of Control containers

Seems like everyone is moving towards IoC containers. I've tried to "grok" it for a while, and as much as I don't want to be the one driver to go the wrong way on the highway, it still doesn't pass the test of common sense to me. Let me explain, and please correct/enlighten me if my arguments are flawed:
My understanding: IoC containers are supposed to make your life easier when combining different components. This is done through either a) constructor injection, b) setter injection and c) interface injection. These are then "wired up" programmatically or in a file that's read by the container. Components then get summoned by name and then cast manually whenever needed.
What I don't get:
EDIT: (Better phrasing)
Why use an opaque container that's not idiomatic to the language, when you can "wire up" the application in (imho) a much clearer way if the components were properly designed (using IoC patterns, loose coupling)? How does this "managed code" gain non-trivial functionality? (I've heard some mentions of life-cycle management, but I don't necessarily understand how this is any better/faster than do-it-yourself.)
ORIGINAL:
Why go to all the lengths of storing the components in a container, "wiring them up" in ways that aren't idiomatic to the language, using things equivalent to "goto labels" when you call up components by name, and then losing many of the safety benefits of a statically-typed language by manual casting, when you'd get the equivalent functionality by not doing it, and instead using all the cool features of abstraction given by modern OO languages, e.g. programming to an interface? I mean, the parts that actually need to use the component at hand have to know they are using it in any case, and here you'd be doing the "wiring" using the most natural, idiomatic way - programming!
There are certainly people who think that DI Containers add no benefit, and the question is valid. If you look at it purely from an object composition angle, the benefit of a container may seem negligible. Any third party can connect loosely coupled components.
However, once you move beyond toy scenarios you should realize that the third party that connects collaborators must take on more than the simple responsibility of composition. There may also be decommissioning concerns to prevent resource leaks. As the composer is the only party that knows whether a given instance was shared or private, it must also take on the role of lifetime management.
When you start combining various instance scopes, using a combination of shared and private services, and perhaps even scoping some services to a particular context (such as a web request), things become complex. It's certainly possible to write all that code with poor man's DI, but it doesn't add any business value - it's pure infrastructure.
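To make that concrete, here's a sketch of a hand-rolled composition root once shared and per-call lifetimes enter the picture; all the type names are hypothetical:

    using System;

    public interface IClock { DateTime Now { get; } }
    public class SystemClock : IClock { public DateTime Now { get { return DateTime.UtcNow; } } }

    public interface IGreetingService { string Greet(string name); }
    public class GreetingService : IGreetingService
    {
        private readonly IClock _clock;
        public GreetingService(IClock clock) { _clock = clock; }
        public string Greet(string name) { return "Hello " + name + ", it is " + _clock.Now; }
    }

    // Poor man's DI: the composition root wires collaborators by hand and is the
    // only place that knows which instances are shared, so it also owns their lifetime.
    public class CompositionRoot
    {
        private readonly IClock _sharedClock = new SystemClock(); // shared across the app

        public IGreetingService CreateGreetingService()
        {
            // a private, per-call instance composed from the shared clock
            return new GreetingService(_sharedClock);
        }
    }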
Such infrastructure code constitutes a Generic Subdomain, so it's very natural to create a reusable library to address such concerns. That's exactly what a DI Container is.
BTW, most containers I know don't use names to wire themselves - they use Auto-wiring, which combines the static information from Constructor Injection with the container's configuration of mappings from interfaces to concrete classes. In short, containers natively understand those patterns.
A DI Container is not required for DI - it's just damned helpful.
A more detailed treatment can be found in the article When to use a DI Container.
I'm sure there's a lot to be said on the subject, and hopefully I'll edit this answer to add more later (and hopefully more people will add more answers and insights), but just a couple quick points to your post...
Using an IoC container is a subset of inversion of control, not the whole thing. You can use inversion of control as a design construct without relying on an IoC container framework. At its simplest, inversion of control can be stated in this context as "supply, don't instantiate." As long as your objects aren't internally depending on implementations of other objects, and are instead requiring that instantiated implementations be supplied to them, then you're using inversion of control. Even if you're not using an IoC container framework.
To your point on programming to an interface... I'm not sure what your experience with IoC containers has been (my personal favorite is StructureMap), but you definitely program to an interface with IoC. The whole idea, at least in how I've used it, is that you separate your interfaces (your types) from your implementations (your injected classes). The code which relies on the interfaces is programmed only to those, and the implementations of those interfaces are injected when needed.
For example, you can have an IFooRepository which returns from a data store instances of type Foo. All of your code which needs those instances gets them from a supplied object of type IFooRepository. Elsewhere, you create an implementation of FooRepository and configure your IoC to supply that anywhere an IFooRepository is needed. This implementation can get them from a database, from an XML file, from an external service, etc. Doesn't matter where. That control has been inverted. Your code which uses objects of type Foo doesn't care where they come from.
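Sketched out (the StructureMap registration syntax here is from memory, so treat it as approximate; the types are made up):

    using System.Collections.Generic;
    using StructureMap;

    public class Foo
    {
        public int Id { get; set; }
    }

    public interface IFooRepository
    {
        IEnumerable<Foo> GetAll();
    }

    // One possible implementation; it could just as well read from XML or a service.
    public class SqlFooRepository : IFooRepository
    {
        public IEnumerable<Foo> GetAll()
        {
            // ... fetch Foo instances from the database ...
            return new List<Foo>();
        }
    }

    // Programmed only against the interface; the implementation is injected.
    public class FooConsumer
    {
        private readonly IFooRepository _repository;

        public FooConsumer(IFooRepository repository)
        {
            _repository = repository;
        }
    }

    public static class Bootstrapper
    {
        public static FooConsumer Build()
        {
            var container = new Container(x => x.For<IFooRepository>().Use<SqlFooRepository>());
            return container.GetInstance<FooConsumer>(); // auto-wiring supplies IFooRepository
        }
    }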
The obvious benefit is that you can swap out that implementation any time you want. You can replace it with a test version, change versions based on environment, etc. But keep in mind that you also don't need to have such a 1-to-1 ratio of interfaces to implementations at any given time.
For example, I once used a code generating tool at a previous job which spit out tons and tons of DAL code into a single class. Breaking it apart would have been a pain, but what wasn't much of a pain was to configure it to spit it all out in specific method/property names. So I wrote a bunch of interfaces for my repositories and generated this one class which implemented all of them. For that generated class, it was ugly. But the rest of my application didn't care because it saw each interface as its own type. The IoC container just supplied that same class for each one.
We were able to get up and running quickly with this and nobody was waiting on the DAL development. While we continued to work in the domain code which used the interfaces, a junior dev was tasked with creating better implementations. Those implementations were later swapped in, all was well.
As I mentioned earlier, this can all be accomplished without an IoC container framework. It's the pattern itself that's important, really.
First of all, what is IoC? It means that the responsibility of creating the dependent object is taken away from the main object and delegated to a third-party framework. I always use Spring as my IoC framework, and it brings tons of benefits to the table.
Promotes coding to interfaces and decoupling - The key benefit is that IoC promotes and makes decoupling very easy. You can always inject an interface in your main object and then use the interface methods to perform tasks. The main object does not need to know which dependent object is assigned to the interface. When you want to use a different class as the dependency, all you need is to swap the old class with a new one in the config file, without a single line of code change. Now you can argue that this can be done in the code using various interface design patterns. But an IoC framework makes it a walk in the park. So even as a newbie you become expert at leveraging various interface design patterns like bridge, factory, etc.
Clean code - As most object creation and object life-cycle operations are delegated to the IoC container, you are saved from writing boilerplate repetitive code. So you have cleaner, smaller and more understandable code.
Unit testing - IoC makes unit testing easy. Since you are left with decoupled code, you can easily test it in isolation. You can also easily inject dependencies in your test cases and see how different components interact.
Property configurators - Almost all applications have some properties file where they store application-specific static properties. Now, to access those properties, developers need to write wrappers which will read and parse the properties file and store the properties in a format the application can access. All the IoC frameworks provide a way of injecting static properties/values into a specific class. So this again becomes a walk in the park.
These are some of the points I can think of right away; I am sure there are more.

EF4: ObjectContext Lifetime?

I am developing a WPF desktop app that uses Entity Framework 4 and SQL Compact 4. I have seen two distinct styles of Repository classes:
The Repository instantiates an ObjectContext, which is disposed of when the Repository is garbage-collected. The lifetime of the ObjectContext is the same as the lifetime of the Repository.
A separate DataStoreManager class creates and holds an ObjectContext for the life of the application. When a repository is needed, a command gets an ObjectContext reference from the DataStoreManager and passes it to the constructor for the New Repository. The lifetime of the ObjectContext is the lifetime of the application.
Is either approach considered a bad practice? Does either present any absolute advantages over the other? Is either approach considered best practice? Is either more widely accepted or used by developers than the other? Thanks for your help.
I would have thought that holding an ObjectContext open over multiple accesses would be bad practice. As soon as it becomes corrupted, you would need to recycle it and handle the corruption.
The Repository pattern is more for abstraction of data access but doesn't necessarily map to lifetime of context.
The unit of work pattern is more about encapsulation of one or more database/repository accesses. For example, a use case may have you adding a new Blog and then adding the first default post; this may require calling two repositories, and at this point you might want to share the context and encapsulate these two commands in a transaction. Adding a second post might be done hours later and be a new context/unit of work.
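A rough sketch of that blog-plus-first-post use case; BloggingEntities stands in for the generated context and the two repository types are hypothetical:

    public class BlogPostingService
    {
        public void CreateBlogWithDefaultPost(string blogName)
        {
            // Both repositories share one context so the blog and its first post
            // are committed together as a single unit of work.
            using (var context = new BloggingEntities())
            {
                var blogs = new BlogRepository(context);
                var posts = new PostRepository(context);

                var blog = blogs.Add(new Blog { Name = blogName });
                posts.Add(new Post { Blog = blog, Title = "Hello world" });

                context.SaveChanges(); // one commit covering both changes
            }
        }
    }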
DJ is right in mentioning Context lifetimes which you'd set generally at an application level.
The best practice depends on how your users are going to use the application and how your application is structured.
If there is only one user using your application at one time, you can even create your entity context as a static instance.
The context can be used per request, per thread, or per form.
read more: http://blogs.microsoft.co.il/blogs/gilf/archive/2010/02/07/entity-framework-context-lifetime-best-practices.aspx
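As one example of the per-request option in a web app, the context can be stashed in HttpContext.Items so it lives exactly as long as the request; MyEntities and the key name are assumptions:

    using System.Web;

    public static class RequestObjectContext
    {
        private const string Key = "__objectContext";

        // One ObjectContext per HTTP request, created lazily on first use.
        public static MyEntities Current
        {
            get
            {
                var items = HttpContext.Current.Items;
                if (items[Key] == null)
                    items[Key] = new MyEntities();
                return (MyEntities)items[Key];
            }
        }

        // Call from Application_EndRequest in Global.asax to dispose it.
        public static void DisposeCurrent()
        {
            var context = HttpContext.Current.Items[Key] as MyEntities;
            if (context != null)
                context.Dispose();
        }
    }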