A self-contained service DLL with Entity Framework

I am looking for some best-practice advice on building a self-contained service, that is, a DLL with all of the domain logic and the data layer. I would like to use an off-the-shelf CMS, such as Orchard, and have it talk to the service to carry out CRUD operations. The service should have its own IoC container and ORM; in this case I am using Ninject and Entity Framework. In this design I will have a database separate from the CMS's, and can port the service to other CMS systems when required.
The CMS should start the service and pass it a connection string or file name. Orchard uses different ORM and IoC frameworks, which leads me to want to keep Ninject and Entity Framework inside the service.
I have set up an experiment where the DbContext and domain are in the service DLL, and I call it from a console app. This only works if I have Entity Framework referenced in the console application, even though I don't use it there. Here is the error message when EF is not referenced by the console app:
No Entity Framework provider found for 'System.Data.SqlClient' ADO.NET provider.
Why is this, and what is the best way to solve my design problem?

If your library (DLL) depends on Entity Framework, it's perfectly normal that you need to reference both the library and EF in your application (whether it's a console app, a web app, or anything else). You always need to reference all dependencies.
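As an aside, with EF6 this particular error usually means EntityFramework.SqlServer.dll was not copied to the console app's output folder. A common workaround, sketched below under that assumption, is to force a hard reference to the provider from inside the service DLL so the build copies it along:

    // Inside the service DLL. Referencing a type from EntityFramework.SqlServer
    // gives the compiler a real dependency on that assembly, so the build copies
    // it to the output folder of any project that references the service DLL.
    using System;
    using System.Data.Entity.SqlServer;

    internal static class ForceEfProviderReference
    {
        // Never used at runtime; its only job is to pin the provider assembly.
        private static readonly Type ProviderType = typeof(SqlProviderServices);
    }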
Wiring your custom library into Orchard would be fairly simple. The only thing you'd need to do on the Orchard side is register the services coming from your library with Autofac, in order to have them available for dependency injection. This post describes a similar scenario to yours.
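For illustration, an Autofac module along these lines could live in an Orchard module project (IProductService/ProductService are hypothetical names standing in for your library's services):

    using Autofac;

    public class ExternalServicesModule : Module
    {
        protected override void Load(ContainerBuilder builder)
        {
            // Make the external library's service resolvable by Orchard's container.
            builder.RegisterType<ProductService>()
                   .As<IProductService>()
                   .InstancePerLifetimeScope();
        }
    }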
Please bear in mind that using multiple database connections is a bit troublesome in Orchard <= 1.6 because of its use of TransactionScope - you need to run all your custom database code in a suppressed scope, otherwise you'll get transaction errors and/or MSDTC-related problems. This will be a non-issue from Orchard 1.7, which is going to arrive in about a week, so I'd strongly recommend waiting for the new version. You can also fetch the pre-release code from the 1.x branch.
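Until then, the suppressed scope looks roughly like this (MyServiceContext and Product are placeholders for the service's EF context and entities):

    using System.Transactions;

    public void SaveOutsideOrchardTransaction(string connectionString)
    {
        // Suppress the ambient transaction so the custom EF context does not
        // enlist in Orchard's TransactionScope (relevant for Orchard <= 1.6).
        using (var scope = new TransactionScope(TransactionScopeOption.Suppress))
        {
            using (var context = new MyServiceContext(connectionString))
            {
                context.Products.Add(new Product { Name = "Sample" });
                context.SaveChanges();
            }
            scope.Complete();
        }
    }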

Related

.NET Framework and .NET 6 compatibility (Entity Framework)

I have a solution with several projects that use .NET Framework 4.7.2. One of the projects contains all of my Entity Framework models and the DbContext (EF v6.4.4). It also uses the Microsoft.AspNet.Identity.EntityFramework (v2.2.3) extensions.
The solution also contains an API layer, a common layer (shared DTO models, helper functions, etc.), a WebJob project, and a unit test project.
I need to add an Azure Function project to process messages from a Service Bus queue (which receives messages from our ERP system) and update our front-facing e-commerce web app.
I need to get with the times and become more familiar with the .NET Core / Standard project types. My issue is that the Azure Function project uses .NET 6.0, and I want to reference my Entity Framework project.
I have been reading that with later versions of .NET Core / Standard, there is "more" compatibility between those and .NET Framework projects.
I was able to reference everything in my Azure Function project and get it to compile without much issue. However, when I try to do anything with Entity Framework (read from or write to the DB), I get an exception like this:
System.Private.CoreLib: Exception while executing function: InboundFunctions. EntityFramework: The type initializer for 'MethodCallTranslator' threw an exception. System.Private.CoreLib: Value cannot be null. (Parameter 'key').
After digging some more, I seem to have an issue with the Microsoft.AspNet.Identity.EntityFramework package. I had to include it in my Azure Function project to satisfy the implementation of the IdentityDbContext. According to VS, this is the library throwing a warning about compatibility issues.
At the moment, I am not sure how to resolve this. Would it be easier to make a new DbContext for just the .NET Core side (sketched below), or is there a better approach? I don't need any of the Identity stuff in my Azure Functions project - it's really only used in my API layer.
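For what it's worth, the separate-context idea would look something like this (the entity name is made up; EF 6.4.x targets .NET Standard 2.1, so it is loadable from .NET 6):

    using System.Data.Entity;

    public class Order
    {
        public int Id { get; set; }
        public string Status { get; set; }
    }

    // A slimmed-down context for the Azure Functions project: it maps the same
    // tables but derives from plain DbContext instead of IdentityDbContext, so
    // Microsoft.AspNet.Identity.EntityFramework is not needed here.
    public class FunctionsDbContext : DbContext
    {
        public FunctionsDbContext(string connectionString)
            : base(connectionString)
        {
        }

        public DbSet<Order> Orders { get; set; }
    }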
Thank you for your help!

Is it better to use POCO objects or detached Entity Framework objects to expose a database via WCF?

I created a WCF service in charge of exposing my database's data, since I don't want the database to be directly accessed by my application (for security reasons) and I need to be able to share data with third-party applications.
My solution is structured this way: WPF application -> WCFService library -> DataAccessLayer library. (Arrows denote assembly dependencies: 'depends on'.)
To implement the WCF service, I considered simply returning detached Entity Framework objects from the service, but that forces the main application to have a dependency on the DataAccessLayer library.
The only way I can get around that is to generate POCO objects and send them over the wire, but then I have to map values back and forth between the POCOs and Entity Framework.
At the moment I'm generating the POCOs dynamically via a T4 template, and I'm using AutoMapper to map values to and from Entity Framework.
The WCF service will just have to implement the repository pattern to expose data.
Is this a good solution? Are there any other options?
Are there any shortcomings I should be aware of?
Depending on your constraints, I would have to agree with this solution.
I created an almost identical solution, although our motivations were slightly different. Our client was Delphi Win32, which at the time didn't have good support for JSON, so we had to use SOAP.
The client also didn't support nullable primitives, so the POCOs removed all unsupported types and performed other changes to ensure interoperability; we then used AutoMapper custom mappings to handle the two-way conversions.
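As a rough sketch of the kind of custom mapping involved (the type and member names here are invented):

    using AutoMapper;

    public class CustomerMappingProfile : Profile
    {
        public CustomerMappingProfile()
        {
            // Entity -> DTO: collapse the nullable int for a client
            // that cannot handle nullable primitives.
            CreateMap<CustomerEntity, CustomerDto>()
                .ForMember(d => d.Age, o => o.MapFrom(s => s.Age ?? 0));

            // DTO -> Entity: widen it back on the way in.
            CreateMap<CustomerDto, CustomerEntity>()
                .ForMember(d => d.Age, o => o.MapFrom(s => (int?)s.Age));
        }
    }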
All the WCF services (contracts and implementations) were also generated by T4 templates, using a generic repository. With T4 templates, I was able to generate a separate WCF service per table for CRUD operations, and then manually created WCF services that were business-specific.
Finally, I was also able to use T4 templates to generate the Delphi repositories that interacted with the SOAP services.
Or
You could just as easily move the POCOs (and their code generation) to a separate project, change your DataAccessLayer library to reference the POCOs library, and have it contain only the DbContext (made up of DbSets of your POCOs) and the data access logic, but no entities (which are now the POCOs). Your clients will then not need a dependency on the DataAccessLayer library; a sketch of the split follows.
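For illustration, the split might look like this, with the POCO in its own assembly and only the context in the data access assembly (all names are placeholders):

    using System.Data.Entity;

    // Pocos.dll - plain classes with no Entity Framework dependency,
    // safe for WCF clients to reference.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // DataAccessLayer.dll - references Pocos.dll and EF, and contains
    // only the context and data access logic.
    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }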
So... a good solution, depending on your constraints.

Is MEF a Service Locator?

I'm trying to design the architecture for a new LOB MVVM project utilising Caliburn.Micro and NHibernate, and am now at the point of looking into DI and IoC.
A lot of the examples for bootstrapping Caliburn.Micro use MEF as the DI/IoC mechanism.
What I'm struggling with is that MEF seems to be reasonably popular, but the idea of the MEF [Import] attributes smells to me like another flavour of Service Locator.
Am I missing something about MEF whereby almost all the examples I've seen are not using it correctly, or have I completely misunderstood something about how it's used whereby it sidesteps the whole Service Locator issue?
MEF is not a Service Locator on its own. It can be used to implement a Service Locator (CompositionInitializer in the Silverlight version is effectively a Service Locator built into MEF), but it can also do dependency injection directly.
While the attributes may "smell" to you, they alone don't make this a Service Locator, as you can use [ImportingConstructor] to inject dependencies at creation time.
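A quick sketch of what constructor injection looks like with the MEF attributes (the types here are invented):

    using System.ComponentModel.Composition;

    public interface IMessageService
    {
        void Send(string text);
    }

    [Export(typeof(IMessageService))]
    public class EmailService : IMessageService
    {
        public void Send(string text) { /* ... */ }
    }

    [Export]
    public class ShellViewModel
    {
        private readonly IMessageService _messages;

        // The container supplies the dependency when it creates the part,
        // so the class never asks a locator for anything.
        [ImportingConstructor]
        public ShellViewModel(IMessageService messages)
        {
            _messages = messages;
        }
    }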
Note that the attributes are actually not the only way to use MEF - it can also work via direct registration or convention-based registration instead (which is supported in the CodePlex drops and .NET 4.5).
I suppose if you were to just new up parts that had property imports and try to use them, then you could run into some of the same problems described here: Service Locator is an Anti-Pattern
But in practice you get your parts from the container, and if you use [Import] without the additional AllowDefault property, then the import is required and the container will blow up on you if you ask for the part doing the importing. It will blow up at runtime, yes, but unlike a run-of-the-mill Service Locator, it's fairly straightforward to do static analysis of your MEF container using a test framework. I've written about it a couple of times here and here.
It hasn't been a problem for me in practice, for a couple of reasons:
I get my parts from the container.
I use composition tests (see the sketch below).
I'm writing applications, not framework code.
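A composition test in this spirit might look like the following, reusing the ShellViewModel part from the sketch above (MSTest is assumed; the idea is to build the same catalog the application uses and let the test fail if any required import is unsatisfied):

    using System.ComponentModel.Composition.Hosting;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class CompositionTests
    {
        [TestMethod]
        public void Container_satisfies_all_imports()
        {
            var catalog = new AssemblyCatalog(typeof(ShellViewModel).Assembly);
            using (var container = new CompositionContainer(catalog))
            {
                // Throws CompositionException at test time rather than at
                // application runtime if a required [Import] has no [Export].
                container.GetExportedValue<ShellViewModel>();
            }
        }
    }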

Should CSLA be used with a dependency injection framework?

My development team is evaluating the various frameworks available for .NET to simplify our programming, one of which is CSLA. I have to admit to being a bit confused as to whether or not CSLA would benefit from being used in conjunction with a dependency injection framework, such as Spring.NET or Windsor. If we combined one of those two DI frameworks with, say, Entity Framework to handle ORM duties, would that negate the need for, or benefit of, using CSLA altogether?
I have various levels of understanding of all these frameworks, and I'm trying to get a big picture of what will best benefit our enterprise architecture and object design.
Thank you!
CSLA is a framework for creating business entities, so it has separate concerns from an IoC container or an ORM. In an enterprise application you should consider the benefits of all three.
In particular, you should consider CSLA if you want data binding built into your models, dirty checking, N-level undo, validation and business rules, as well as the data portal implementation, which allows easy configuration for n-tier deployments.
Short answer: Yes.
Long answer: It requires a bit of grunt work and some experimentation to set up, but it can be done without fundamentally breaking CSLA. I put together a working prototype using StructureMap and the repository pattern, and used the BuildUp method for setter injection within CSLA. I used a method similar to the one found here to ensure that my business objects are re-injected when they are deserialized.
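A sketch of that setter-injection approach, assuming StructureMap 2.x's ObjectFactory.BuildUp and CSLA's OnDeserialized hook (the repository type is invented, and its setter injection is assumed to be configured in a Registry, e.g. via SetAllProperties):

    using System;
    using System.Runtime.Serialization;
    using Csla;
    using StructureMap;

    public interface ICustomerRepository { }

    [Serializable]
    public class Customer : BusinessBase<Customer>
    {
        // Filled by StructureMap's BuildUp rather than by a constructor,
        // since CSLA controls object creation.
        public ICustomerRepository Repository { get; set; }

        protected override void OnDeserialized(StreamingContext context)
        {
            base.OnDeserialized(context);
            // Re-inject dependencies lost when the object crossed
            // the data portal via serialization.
            ObjectFactory.BuildUp(this);
        }
    }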
I also use StructureMap's Registry base class to separate my configuration into presentation, CSLA client, CSLA server, and CSLA global settings. This way I can use the linked-file feature of Visual Studio to include the CSLA server and CSLA global configuration files within the server-side data portal, and the configuration will always be the same in both places. This ensures I can still change the data portal configuration in CSLA from 2-tier to 3-tier without breaking anything.
Anyway, I am still weighing the potential benefits against the drawbacks of using DI, but so far I am leaning toward using it because testing will be much easier, although I am skeptical of the more advanced DI features such as interception. I recommend reading the book Dependency Injection in .NET by Mark Seemann to understand the right and wrong ways to use DI, because there is a lot of misinformation on the Internet.

What's wrong with this ASP.NET MVC system design?

I have an ASP.NET MVC 3 photo gallery, which is designed this way:
Data Repositories (IImageRepository, ITagRepository, etc.)
|
Services (IGalleryService, IWebService, etc.)
|
Web Application
I use Ninject to inject the required services and repositories into the web application.
Before using an actual database, I used a simple ArrayList (with JSON serialization) as my persistence logic (JsonImageRepository/JsonTagRepository), which worked perfectly fine. But later on, I moved to EF4 CTP5 (Code First), and many problems appeared. Basically, I injected those repositories and services as singletons (declared in Global.asax.cs), but when several threads access the repositories, it says:
Data Connection is closed.
I changed to something like thread scope or request scope in Ninject, but various exceptions were raised (regarding multiple instances of the context), so I thought singleton was the only option.
Is there anything wrong with the design? Or how should I configure those components?
Normally, repository access should be in request scope (at least for the repositories that change data). I recommend looking at Bob Cravens' blog posts about a repository pattern implementation using Ninject and NHibernate; it should be pretty much the same for EF4:
http://blog.bobcravens.com/2010/06/the-repository-pattern-with-linq-to-fluent-nhibernate-and-mysql/
http://blog.bobcravens.com/2010/07/using-nhibernate-in-asp-net-mvc/
http://blog.bobcravens.com/2010/09/the-repository-pattern-part-2/
I plan to add this to the sample application in the near future.
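As a rough illustration of request-scoped bindings with Ninject (the interface names follow the question, the concrete types and GalleryContext are invented, and InRequestScope comes from the Ninject.Web.Common package):

    using Ninject;
    using Ninject.Web.Common;

    public static class NinjectBindings
    {
        public static void Register(IKernel kernel)
        {
            // One EF context per web request, shared by the repositories
            // bound in the same scope - never a singleton.
            kernel.Bind<GalleryContext>().ToSelf().InRequestScope();
            kernel.Bind<IImageRepository>().To<ImageRepository>().InRequestScope();
            kernel.Bind<ITagRepository>().To<TagRepository>().InRequestScope();
            kernel.Bind<IGalleryService>().To<GalleryService>();
        }
    }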