Where in your solution do you typically put your StructureMap Registry classes? - inversion-of-control

Here's the current layout:
Solution:
Core
Domain
Interfaces
DataAccess
Providers
Session
Service
UI
UnitTests
IntegrationTests
I typically try to keep my core domain entities / POCOs as light as possible, without very many external dependencies, so I was thinking it might make sense to put it in the Service layer, as it typically has a project reference to all of the layers.
I have noticed that in CodeCampServer they have actually created a separate project called DependencyResolution for their IoC configuration:
http://code.google.com/p/codecampserver/source/browse/trunk#trunk/src/DependencyResolution
Thoughts?

IoC configuration should be off to the side. It doesn't necessarily need to be in a separate project, but it needs to be away from the application code. We put it in another project in CodeCampServer to make 'off to the side' more real. But in a current production app, we keep it in a separate namespace in our main project. We consolidated projects to reduce compile times.
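For illustration only, a minimal sketch of what 'off to the side' can look like with a newer StructureMap version: a Registry class in a dedicated DependencyResolution namespace that only the application's entry point references. ISessionProvider/SessionProvider and the namespace names are placeholders, not types from the actual solution.

    using StructureMap;

    namespace MyApp.DependencyResolution
    {
        // Placeholder abstractions standing in for real application types.
        public interface ISessionProvider { }
        public class SessionProvider : ISessionProvider { }

        // All container configuration lives here, away from the application code.
        public class CoreRegistry : Registry
        {
            public CoreRegistry()
            {
                // Explicit mapping for a hand-picked service.
                For<ISessionProvider>().Use<SessionProvider>();

                // Convention-based scanning for the rest (IFoo -> Foo).
                Scan(scan =>
                {
                    scan.TheCallingAssembly();
                    scan.WithDefaultConventions();
                });
            }
        }

        // The composition root at the entry point is the only place
        // that needs to know about the registry.
        public static class Bootstrapper
        {
            public static IContainer Build()
            {
                return new Container(c => c.AddRegistry<CoreRegistry>());
            }
        }
    }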

Related

Dependency Injection, EF Core + web api 2 architecture

My layout
project.web (.net core 2.1 web api)
Some binding models (for POST/PUT requests) and resource models for GET requests
Controllers.
I only call interfaces from (x.api) which are resolved to x.core services.
No validation or anything; this happens inside the core layer.
I've set up a few things like AutoMapper and Swagger that aren't relevant to my question.
project.api (class lib)
only contains interfaces for .core and .store projects (services, repositories and domain models)
project.core (class lib)
Two kinds of services:
1) Services that call the repository services (through their interfaces), but validate the data before calling the repo service.
2) Services that have to execute long-running work (e.g. scanning folders, handling file information, ...). I actually created HostedServices for these, as a folder could easily contain thousands of files.
project.store (class lib)
Wrapper services for my storage (Only contains helper methods so I don't have to write the same queries a hundred times.)
Problem / question
At this time I have registered all of my services and repositories as singletons in public void ConfigureServices(IServiceCollection services),
because I was using different storage (NoSQL, LiteDB) before refactoring the code to EF (SQLite).
Now the problem is that I want to register my DbContext as scoped (the default).
But my repositories (singletons) depend on the DbContext, which means I will have to make these scoped as well. I'm OK with that, as they are only wrapper services, so I don't have to write the same queries all the time.
But some other services that need access to my data are singletons, and I cannot register those as scoped: they hold data that needs to be the same for every request, as well as some collections and long-running jobs.
I can think of two solutions
The first solution is to take a dependency on IServiceScopeFactory in my repository and use something like using (var scope = ServiceScopeFactory.CreateScope()) { scope.ServiceProvider.GetService(typeof(MyDbContext))... }
This way I can remove the direct DbContext dependency from my repository wrapper, but it doesn't sound clean to me.
The other solution is to register all of my services that only handle database stuff as scoped (e.g. customerService in core only does validation and calls customerRepository), and remove those dependencies from my remaining singleton services.
In those singletons, instead of depending on the customerService, I could use a REST call with RestSharp or something similar,
just like how I would consume them from my Windows client applications and web client apps.
I don't actually like either option, but perhaps someone can give me some advice or thoughts?
Well, the two options you laid out are in fact your only two options. The first is the service locator antipattern, which, as the name implies, is something you should avoid. However, when you are dealing with singleton-scoped objects needing access to objects in other scopes, there is no other way.
The only other option is to reduce the scope of your services from singletons, such that you can then inject the context directly. Not everything necessarily needs to be a singleton. Generally, if you need to utilize something like DbContext, there's a strong argument to be made that your object should not be singleton-scope in the first place. If you need it to be singleton-scoped, that's most likely an indication that the class is either doing too much or is otherwise brittle.
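For the cases where a service really must stay a singleton (the hosted services scanning folders, for instance), a minimal sketch of the scope-factory approach might look like the following. FolderScanService is a made-up name and MyDbContext just stands in for the real context:

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    public class MyDbContext : DbContext { } // stand-in for the real scoped context

    public class FolderScanService : BackgroundService
    {
        private readonly IServiceScopeFactory _scopeFactory;

        // The singleton depends only on the scope factory, never on the scoped DbContext.
        public FolderScanService(IServiceScopeFactory scopeFactory)
        {
            _scopeFactory = scopeFactory;
        }

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                // One short-lived scope per unit of work; the scoped context is resolved from it.
                using (var scope = _scopeFactory.CreateScope())
                {
                    var db = scope.ServiceProvider.GetRequiredService<MyDbContext>();
                    // ... query/update through db for this batch of work ...
                }

                await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
            }
        }
    }

The context itself stays scoped (services.AddDbContext<MyDbContext>(...)) while the singleton is registered with services.AddHostedService<FolderScanService>().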

Creating new services in Service Fabric will cause duplicated code

The Visual Studio project templates for Service Fabric services contain code that can be reused across multiple projects, for example ServiceEventSource.cs or ActorEventSource.cs.
My programmer instinct wants to move this code to a shared library so I don't have duplicate code. But maybe this isn't the way to go with microservices, since you want to have small, independent services. Introducing a library will make them more dependent, but they are already dependent on the EventSource class.
My solution will be to move some reusable code to a base class in a shared project and inherit that class in my services. Is this the best approach?
I'm guessing all your services are going to be doing lots of different jobs, so once you pad out your EventSource classes they'll be completely different from each other except for one method, which would be 'service started'?
As with any logging there are many different approaches. One of the main ones I like is using AOP or interceptor proxies via IoC containers; this keeps your classes clean but still allows reuse of the ETW code and a decent amount of logging to be able to debug later down the line.
I moved a lot of duplicate code to my own NuGet libraries, which is working quite well. It is an extra dependency, but always better than duplicate code. Now I'm planning to make my own SF templates in Visual Studio, so I don't have to remove and adjust some files.
I found a nice library (EventSourceProxy) which helps me managing the EventSource code for ETW: https://github.com/jonwagner/EventSourceProxy
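As a rough illustration (names are made up, not taken from the project above), the shared piece can be as small as a single EventSource class in a common library that every service references, instead of the per-project ServiceEventSource.cs copy:

    using System.Diagnostics.Tracing;

    // Hypothetical shared library, e.g. MyCompany.ServiceFabric.Logging.
    [EventSource(Name = "MyCompany-ServiceFabric-Services")]
    public sealed class SharedServiceEventSource : EventSource
    {
        public static readonly SharedServiceEventSource Current = new SharedServiceEventSource();

        [Event(1, Level = EventLevel.Informational, Message = "{0}")]
        public void Message(string message)
        {
            if (IsEnabled())
            {
                WriteEvent(1, message);
            }
        }

        [Event(2, Level = EventLevel.Informational, Message = "Service started: {0}")]
        public void ServiceStarted(string serviceTypeName)
        {
            if (IsEnabled())
            {
                WriteEvent(2, serviceTypeName);
            }
        }
    }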

Caliburn.Micro, MVVM and Business layer

I'm building a WPF application using the MVVM pattern, and Caliburn.Micro is the framework of choice to speed up development.
Different from a conventional MVVM-based application, I add a business layer (BL) below the ViewModel (VM) layer to handle logic for specific business cases. The VM is left with data binding and simple conversion/presentation logic. Below the BL is an extra Data Access Layer (DAL) that encapsulates the Data Model (DM) underneath, built with Entity Framework.
I'm pretty new to both WPF and MVVM and, of course, know almost nothing about Caliburn. I have read plenty of questions and answers about Caliburn usage and am now trying to use what I've learnt so far in my application.
My questions are:
Does the above layered architecture sound okay?
In the application bootstrapper, is it correct that we can register all services that will later be used (like the EventAggregator (EA), WindowManager or extra security and validation services), and also all the ViewModels concerned? These should be injected into VM instances via constructors or so (suppose I'll be using SimpleContainer). So from any VM that is properly designed and instantiated, we can have these services ready to be used. If I understand correctly, Caliburn and its IoC maintain a kind of global state so that different VMs can use and share it.
Navigation: I know this topic has been discussed many times, but just to be sure I'm doing it the right way: there'd be a ShellViewModel acting as the main window for the whole application, with different VMs (or screens) loaded dynamically. Each VM can inherit from either Screen, ViewModelBase or NotifyChangedBase. When I'm in, let's say, VM A and want to switch to VM B, I'd send a message from inside VM A (using the EA) to the ShellViewModel, saying that I want to change to B. The ShellViewModel receives the message and reloads its CurrentViewModel property. What would be a proper data structure to maintain the list of VMs to be loaded? How do things like Conductor or WindowManager come into play?
Can/should Caliburn in one way or another support access to the database (via EF), or should this access be exposed to the VM and/or BL only?
Thanks a lot!
Different from a conventional MVVM-based application, I add a business layer (BL) below the ViewModel (VM) layer
That's the standard case. ViewModels can't/shouldn't contain business logic; that is considered part of the Model (the Model in MVVM is a layer, not an object or data structure). The ViewModel is for presentation logic only.
Yes, as long as your Business (Domain) Layer has no dependency on the DAL (no reference to its assembly). Repository interfaces should be defined in the Business Layer, their implementations in the Data Access Layer.
Yes, the Bootstrapper is where you build your object graph (configure the IoC container).
Registering ViewModels: depends on the IoC framework. Some frameworks let you resolve unregistered types as long as they are not abstract or interfaces (e.g. Unity). Not sure about Caliburn, I haven't used it. If the IoC container supports it, you don't need to register them.
That's one possible way to do it. I prefer navigation services though; they work better for passing parameters around to views and windows that are not yet instantiated, and you always know there is exactly one object handling the navigation.
With messages, there could be 0, 1 or many objects listening to them. Up to you.
What do you mean by support access to the database? You can use it to inject your repositories and/or services into your ViewModels; other than that there isn't much DB-related stuff to it.
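To make the bootstrapper point concrete, here is a minimal sketch with Caliburn.Micro's SimpleContainer; ShellViewModel and the repository types are placeholders for whatever the application really contains:

    using System;
    using System.Collections.Generic;
    using System.Windows;
    using Caliburn.Micro;

    // Placeholder application types.
    public interface ICustomerRepository { }
    public class CustomerRepository : ICustomerRepository { }
    public class ShellViewModel : Conductor<object> { }

    public class AppBootstrapper : BootstrapperBase
    {
        private readonly SimpleContainer _container = new SimpleContainer();

        public AppBootstrapper()
        {
            Initialize();
        }

        protected override void Configure()
        {
            // Framework services shared by all ViewModels.
            _container.Singleton<IWindowManager, WindowManager>();
            _container.Singleton<IEventAggregator, EventAggregator>();

            // Application services / repositories (placeholder names).
            _container.Singleton<ICustomerRepository, CustomerRepository>();

            // ViewModels: PerRequest yields a fresh instance each time one is resolved.
            _container.PerRequest<ShellViewModel>();
        }

        protected override object GetInstance(Type service, string key)
        {
            return _container.GetInstance(service, key);
        }

        protected override IEnumerable<object> GetAllInstances(Type service)
        {
            return _container.GetAllInstances(service);
        }

        protected override void BuildUp(object instance)
        {
            _container.BuildUp(instance);
        }

        protected override void OnStartup(object sender, StartupEventArgs e)
        {
            DisplayRootViewFor<ShellViewModel>();
        }
    }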

EF + WCF in three-layered application with complex object graphs. Which pattern to use?

I have an architectural question about EF and WCF.
We are developing a three-tier application using Entity Framework (with an Oracle database), and a GUI based on WPF. The GUI communicates with the server through WCF.
Our data model is quite complex (more than a hundred tables), with lots of relations. We are currently using the default EF code generation template, and we are having a lot of trouble with tracking the state of our entities.
The user interfaces on the client are also fairly complex; sometimes an object graph with more than 50 objects is sent down to a single user interface, with several layers of aggregation between the entities. It is an important goal to be able to easily decide in the BLL layer which of the objects have been modified on the client, and which objects have been newly created.
What would be the clearest approach to manage entities and entity states between the two layers? Self-tracking entities (STEs)? What are the most common pitfalls in this scenario?
Could those who have used STEs in a real production environment tell their experiences?
STEs are supposed to solve this scenario, but they are not a silver bullet. I have never used them in a real project (I don't like them) but I spent some time playing with them. The main pitfalls I found are:
Coupling your data layer with your client application - you must share the entity assembly between projects (it also means it is a .NET-only solution, but that should not be a problem in your case)
Large data transfers - you pass 50 entities to the client, the client changes a single entity, and you pass 50 entities back. It will require some fighting with STEs to avoid passing unnecessary data
Unnecessary updates to the database - normally when EF works with attached entities it tracks changes at the property level, but with STEs it tracks changes at the entity level. So if the user modifies a single property in an entity with 100 properties, it will generate an update that sets all of them. Avoiding this requires modifying the template and adding property-level change tracking.
The client application should use STEs directly (binding STEs to the UI) to get the most out of their self-tracking ability. Otherwise you will have to implement code that moves data from the UI back to the self-tracking entity and modifies its state.
They are not proxied = they don't support lazy loading (in the case of a WCF service that is good behavior)
I described today a way to solve this without STEs. There is also a related question about tracking changes over web services (check @Richard's answer and the provided links).
We have developed a layered application with STEs: a user interface layer with ASP.NET and Model-View-Presenter, a business layer, a WCF service layer and the data layer with Entity Framework.
When I first read about STEs, the documentation said that they are easier than using custom DTOs. They were supposed to be the 'quick and easy way', and only on really big projects should you use hand-written DTOs.
But we've run into a lot of problems using STEs. One of the main problems is that if your entities come from multiple service calls (for example in a master-detail view), and thus from different contexts, you will run into problems when composing the graphs on the server and trying to save them. So our server functions still have to check manually which data has changed and then recompose the object graph on the server. A lot has been written about this topic, but it's still not easy to fix.
Another problem we ran into was that STEs don't work without WCF: the change tracking is only activated when the entities are serialized. We had originally designed an architecture where WCF could be disabled and the service calls would just be in-process (this was a requirement for our unit tests, which would run a lot faster without WCF and be easier to set up). It turned out that STEs are not the right choice for this.
I've also noticed that developers sometimes included a lot of data in their query and just sent it to the client instead of really thinking about which data they needed.
After this project we decided to use custom DTOs with AutoMapper from server to client and to use the POCO template in our data layer in a new project.
So since you already state that your project is big, I would opt for custom DTOs and service functions that are specifically created for one goal, instead of 'Update(Person person)' functions that send a lot of data.
Hope this helps :)
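For what it's worth, a minimal sketch of the purpose-built DTO approach with AutoMapper (using the current MapperConfiguration API; Person/PersonSummaryDto and their properties are invented for illustration):

    using AutoMapper;

    // Entity as persisted by EF (trimmed down, hypothetical).
    public class Person
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        // ... dozens of other properties and navigation properties ...
    }

    // Purpose-built DTO: only what this particular screen / service call needs.
    public class PersonSummaryDto
    {
        public int Id { get; set; }
        public string FullName { get; set; }
    }

    public static class PersonMappings
    {
        public static IMapper CreateMapper()
        {
            var config = new MapperConfiguration(cfg =>
                cfg.CreateMap<Person, PersonSummaryDto>()
                   .ForMember(d => d.FullName,
                              o => o.MapFrom(s => s.FirstName + " " + s.LastName)));
            return config.CreateMapper();
        }
    }

A service operation then returns mapper.Map<PersonSummaryDto>(person) instead of the whole tracked entity graph.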

Impacts of configuring IoC Container from code

I currently have a home-made IoC container that I will soon replace with a new one. My home-made IoC container is configured using config files. From what I have read on the net, the ability to "configure from code" seems to be a very popular feature.
I don't like the idea of having a class that knows every other class in the system in order to set up the IoC container. Such a class would have to be in an assembly that depends on the 80 other assemblies of my project.
Are there best practices on how to organize the code that configures the container?
I have read this post. Using conventions and auto-wiring is good when there are patterns in the types to be registered. But I have hundreds of types that are in different assemblies and that don’t have anything in common. How should I organize the code for those?
Update: I chose an approach where the code that configures the container is decentralized. Each assembly in my system is given a chance to configure the container. The methods at the entry points of my system (many .exe apps, the web app, the web services app and the unit test fixtures are all entry points) are responsible for calling into each assembly to let it set up the container. I'm currently implementing that; I'm not sure yet whether it will be satisfactory. I will post another update soon.
Depending on your programming language (I use C#) you might want to look at something like Autofac modules: http://code.google.com/p/autofac/wiki/StructuringWithModules
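In C# terms that usually ends up as one Autofac module per assembly, which also matches the decentralized approach from the update above. A minimal sketch (module and type names are hypothetical):

    using Autofac;

    // Placeholder types standing in for real application code.
    public interface ICustomerRepository { }
    public class CustomerRepository : ICustomerRepository { }

    // Lives inside the DataAccess assembly itself, so registrations stay close to the types.
    public class DataAccessModule : Module
    {
        protected override void Load(ContainerBuilder builder)
        {
            builder.RegisterType<CustomerRepository>()
                   .As<ICustomerRepository>()
                   .InstancePerLifetimeScope();
        }
    }

    // At an entry point (each .exe, the web app, the test fixtures):
    public static class CompositionRoot
    {
        public static IContainer Build()
        {
            var builder = new ContainerBuilder();

            // Each assembly contributes its own module(s); the entry point only
            // needs to know which assemblies to pull modules from.
            builder.RegisterAssemblyModules(typeof(DataAccessModule).Assembly);

            return builder.Build();
        }
    }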