Is it possible to map a table column to a class field instead of to a class property, and if so, how?
YOU CAN DO IT :)
Follow this link: http://weblogs.asp.net/ricardoperes/archive/2013/08/22/mapping-non-public-members-with-entity-framework-code-first.aspx
This is a common request, and it really makes sense; we need to use LINQ expressions and a bit of reflection magic. First, a helper function for returning an expression that points to a member:
using System;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

public static class ExpressionHelper
{
    public static Expression<Func<TEntity, TResult>> GetMember<TEntity, TResult>(String memberName)
    {
        ParameterExpression parameter = Expression.Parameter(typeof(TEntity), "p");
        MemberExpression member = Expression.MakeMemberAccess(
            parameter,
            typeof(TEntity).GetMember(memberName, BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance).Single());
        Expression<Func<TEntity, TResult>> expression = Expression.Lambda<Func<TEntity, TResult>>(member, parameter);
        return (expression);
    }
}
Then we call it in the DbContext.OnModelCreating method, passing the result as a parameter to StructuralTypeConfiguration.Property:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Project>().Property(ExpressionHelper.GetMember<Project, Decimal>("Budget")).IsRequired();

    base.OnModelCreating(modelBuilder);
}
Entity Framework (Code First or not) does not support mapping to a field; only to properties.
UPDATE
As pointed out in the comments, these documents are a bit dated but might still help any beginner along:
Entity Framework Code first development Resources and Documentation
For the sake of completeness, here's a link to what's included in EF 4.1 RC: EF 4.1 Release Candidate Available
Changes since CTP5 (From the link above):
- Rename of ‘DbDatabase’ to ‘Database’. This class has also moved to the ‘System.Data.Entity’ namespace, along with the database initializer classes.
- Rename of ‘ModelBuilder’ to ‘DbModelBuilder’, to align with the other core classes.
- Validation in Model First and Database First. The new validation feature was only supported in Code First in CTP5. In RC the validation feature will work with all three development workflows (Model First, Database First, and Code First).
- Complete Intellisense docs. Feature CTPs were not extensively documented because the API surface was changing significantly between each release. This release includes complete documentation.
- Removal of Code First Pluggable Conventions. Pluggable Conventions were previewed in Feature CTP5 but were not at go-live quality for this release. This release still supports the removal of default conventions.
- Consolidation of IsIndependent in the Code First relationship API. When configuring relationships in Feature CTP5 the IsIndependent method was used to identify that the relationship did not have a foreign key property exposed in the object model. This is now done by calling the Map method. HasForeignKey is still used for relationships where the foreign key property is exposed in the object model.
I really want to be able to use NodaTime in my Entity Framework Code First database projects but haven't found a "clean" way to do it. What I really want to do is this:
public class Photoshoot
{
public Guid PhotoshootId{get; set;}
public LocalDate ShootDate{get; set;} //ef ignores this property
}
Is there any supported or recommended approach to using NodaTime with EF Code First?
Until custom primitive type persistence is natively supported in Entity Framework, a common workaround is to use buddy properties.
For each custom primitive within your domain model, you create an associated mapped primitive to hold the value in a format supported by Entity Framework. The custom primitive properties are then calculated from the value of their corresponding buddy property.
For example:
public class Photoshoot
{
    // mapped
    public Guid PhotoshootId { get; set; }

    // mapped buddy property for ShootDate
    public DateTime ShootDateValue { get; set; }

    // non-mapped domain property
    public LocalDate ShootDate
    {
        // calculate from the buddy property (one possible NodaTime conversion)
        get { return LocalDateTime.FromDateTime(ShootDateValue).Date; }
        // set the buddy property
        set { ShootDateValue = value.AtMidnight().ToDateTimeUnspecified(); }
    }
}
We use NodaTime in our code first POCOs using exactly this approach.
Obviously this leaves you with a single type acting as both a code first POCO and a domain type. This can be improved, at the expense of complexity, by separating the two responsibilities into two types and mapping between them. A half-way alternative is to push the domain properties into a subtype and make all mapped buddy properties protected; with a certain amount of wrangling, Entity Framework can be made to map to protected properties, as sketched below.
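A rough sketch of that half-way alternative, with hypothetical type names (the protected buddy property still has to be mapped with a trick such as the non-public-member mapping shown earlier in this thread):

using System;
using NodaTime;

// The base type holds the protected buddy property that EF is coaxed into mapping.
public class PhotoshootRecord
{
    public Guid PhotoshootId { get; set; }
    protected DateTime ShootDateValue { get; set; } // mapped buddy property
}

// The subtype exposes only the domain-facing NodaTime property.
public class Photoshoot : PhotoshootRecord
{
    public LocalDate ShootDate
    {
        get { return LocalDateTime.FromDateTime(ShootDateValue).Date; }
        set { ShootDateValue = value.AtMidnight().ToDateTimeUnspecified(); }
    }
}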
This rather splendid blog post evaluates Entity Framework support for various domain modelling constructs, including encapsulated primitives. It is where I initially found the concept of buddy properties when setting up our POCOs:
http://lostechies.com/jimmybogard/2014/04/29/domain-modeling-with-entity-framework-scorecard/
A further blog post in that series discusses mapping to protected properties: http://lostechies.com/jimmybogard/2014/05/09/missing-ef-feature-workarounds-encapsulated-collections/
EF Core 2.1 has a new feature, Value Conversions, which is exactly for this scenario.
//OnModelCreating
builder.Entity<MyEntity>()
    .Property(e => e.SomeInstant)
    .HasConversion(
        v => v.ToDateTimeOffset(),
        v => Instant.FromDateTimeOffset(v));
.HasConversion has some other overloads to make this logic reusable; for example, you can define your own ValueConverter.
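A minimal sketch of that reusable variant, assuming NodaTime's Instant and EF Core 2.1+ (the converter class name is just illustrative):

using System;
using Microsoft.EntityFrameworkCore.Storage.ValueConversion;
using NodaTime;

// A reusable converter that can be passed to HasConversion for any Instant property.
public class InstantToDateTimeOffsetConverter : ValueConverter<Instant, DateTimeOffset>
{
    public InstantToDateTimeOffsetConverter()
        : base(
            instant => instant.ToDateTimeOffset(),        // model type -> provider type
            value => Instant.FromDateTimeOffset(value))   // provider type -> model type
    {
    }
}

It can then be registered per property with .HasConversion(new InstantToDateTimeOffsetConverter()).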
No "clean" way that I'm aware of because EF, as of this writing, doesn't have a mechanism for simple type conversion like you see in NHibernate (IUserType). A real limitation in EF as an ORM which causes me to change my domain to suit my ORM.
There is a provider specific way that works with Postgres (Npgsql).
Install the library
dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL.NodaTime
Then, while configuring the DbContext, use this:
services.AddDbContext<PhotoshootDbContext>(opt => opt.UseNpgsql(Configuration.GetConnectionString("ConnectionString"), o => o.UseNodaTime()));
There are some third party libraries for other providers too.
I am about to implement an Entity Framework 6 design with a repository and unit of work.
There are so many articles around and I'm not sure what the best advice is. For example, I really like the pattern implemented here, for the reasons suggested in the article here
However, Tom Dykstra (Senior Programming Writer on Microsoft's Web Platform & Tools Content Team) suggests it should be done in another article: here
I subscribe to Pluralsight, and it is implemented in a slightly different way pretty much every time it is used in a course so choosing a design is difficult.
Some people seem to suggest that unit of work is already implemented by DbContext as in this post, so we shouldn't need to implement it at all.
I realise that this type of question has been asked before and this may be subjective but my question is direct:
I like the approach in the first (Code Fizzle) article and wanted to know if it is perhaps more maintainable and as easily testable as other approaches and safe to go ahead with?
Any other views are more than welcome.
@Chris Hardie is correct, EF implements UoW out of the box. However, many people overlook the fact that EF also implements a generic repository pattern out of the box too:
var repos1 = _dbContext.Set<Widget1>();
var repos2 = _dbContext.Set<Widget2>();
var reposN = _dbContext.Set<WidgetN>();
...and this is a pretty good generic repository implementation that is built into the tool itself.
Why go through the trouble of creating a ton of other interfaces and properties, when DbContext gives you everything you need? If you want to abstract the DbContext behind application-level interfaces, and you want to apply command query segregation, you could do something as simple as this:
public interface IReadEntities
{
    IQueryable<TEntity> Query<TEntity>() where TEntity : class;
}

public interface IWriteEntities : IReadEntities, IUnitOfWork
{
    IQueryable<TEntity> Load<TEntity>() where TEntity : class;
    void Create<TEntity>(TEntity entity) where TEntity : class;
    void Update<TEntity>(TEntity entity) where TEntity : class;
    void Delete<TEntity>(TEntity entity) where TEntity : class;
}

public interface IUnitOfWork
{
    int SaveChanges();
}
You could use these 3 interfaces for all of your entity access, and not have to worry about injecting 3 or more different repositories into business code that works with 3 or more entity sets. Of course you would still use IoC to ensure that there is only 1 DbContext instance per web request, but all 3 of your interfaces are implemented by the same class, which makes it easier.
public class MyDbContext : DbContext, IWriteEntities
{
    // the class constraint mirrors the interfaces above; DbContext.Set<TEntity>() requires it
    public IQueryable<TEntity> Query<TEntity>() where TEntity : class
    {
        return Set<TEntity>().AsNoTracking(); // detach results from context
    }

    public IQueryable<TEntity> Load<TEntity>() where TEntity : class
    {
        return Set<TEntity>();
    }

    public void Create<TEntity>(TEntity entity) where TEntity : class
    {
        if (Entry(entity).State == EntityState.Detached)
            Set<TEntity>().Add(entity);
    }

    // ...etc
}
You now only need to inject a single interface into your dependency, regardless of how many different entities it needs to work with:
// NOTE: In reality I would never inject IWriteEntities into an MVC Controller.
// Instead I would inject my CQRS business layer, which consumes IWriteEntities.
// See @MikeSW's answer for more info as to why you shouldn't consume a
// generic repository like this directly by your web application layer.
// See http://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=91 and
// http://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=92 for more info
// on what a CQRS business layer that consumes IWriteEntities / IReadEntities
// (and is consumed by an MVC Controller) might look like.
public class RecipeController : Controller
{
    private readonly IWriteEntities _entities;

    //Using Dependency Injection
    public RecipeController(IWriteEntities entities)
    {
        _entities = entities;
    }

    [HttpPost]
    public ActionResult Create(CreateEditRecipeViewModel model)
    {
        Mapper.CreateMap<CreateEditRecipeViewModel, Recipe>()
            .ForMember(r => r.IngredientAmounts, opt => opt.Ignore());
        Recipe recipe = Mapper.Map<CreateEditRecipeViewModel, Recipe>(model);
        _entities.Create(recipe);
        foreach (Tag tag in model.Tags)
        {
            _entities.Create(tag);
        }
        _entities.SaveChanges();
        return RedirectToAction("CreateRecipeSuccess");
    }
}
One of my favorite things about this design is that it minimizes the entity storage dependencies on the consumer. In this example the RecipeController is the consumer, but in a real application the consumer would be a command handler. (For a query handler, you would typically consume IReadEntities only because you just want to return data, not mutate any state.) But for this example, let's just use RecipeController as the consumer to examine the dependency implications:
Say you have a set of unit tests written for the above action. In each of these unit tests, you new up the Controller, passing a mock into the constructor. Then, say your customer decides they want to allow people to create a new Cookbook or add to an existing one when creating a new recipe.
With a repository-per-entity or repository-per-aggregate interface pattern, you would have to inject a new repository instance, IRepository<Cookbook>, into your controller constructor (or, using @Chris Hardie's answer, write code to attach yet another repository to the UoW instance). This would immediately break all of your other unit tests; you would have to go back and modify the construction code in all of them, passing in yet another mock instance and widening your dependency array. With the approach above, however, all of your other unit tests will still at least compile. All you have to do is write additional test(s) to cover the new cookbook functionality.
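For example, a unit test for the Create action above only ever needs one mock, regardless of how many entity types the action touches. A rough sketch, assuming Moq and xUnit (the view model and Tag types are the ones from the example above):

using System.Collections.Generic;
using Moq;
using Xunit;

public class RecipeControllerTests
{
    [Fact]
    public void Create_AddsRecipeAndSaves()
    {
        // one mock covers every entity set the action works with
        var entities = new Mock<IWriteEntities>();
        var controller = new RecipeController(entities.Object);

        controller.Create(new CreateEditRecipeViewModel { Tags = new List<Tag>() });

        entities.Verify(e => e.Create(It.IsAny<Recipe>()), Times.Once());
        entities.Verify(e => e.SaveChanges(), Times.Once());
    }
}

Adding a Cookbook later only means adding new tests; none of the existing constructor calls change.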
I'm (not) sorry to say that the Code Fizzle article, Dykstra's article and the previous answers are wrong, for the simple fact that they use the EF entities as domain (business) objects, which is a big WTF.
Update: For a less technical explanation (in plain words) read Repository Pattern for Dummies
In a nutshell, ANY repository interface should not be coupled to ANY persistence (ORM) detail. The repo interface deals ONLY with objects that make sense for the rest of the app (domain, maybe UI as in presentation). A LOT of people (with MS leading the pack, with intent I suspect) make the mistake of believing that they can reuse their EF entities, or that business objects can be built on top of them.
While it can happen, it's quite rare. In practice, you'll have a lot of domain objects 'designed' after database rules, i.e. bad modelling. The repository's purpose is to decouple the rest of the app (mainly the business layer) from its persistence form.
How do you decouple it when your repo deals with EF entities (a persistence detail) or its methods return IQueryable, a leaky abstraction with the wrong semantics for this purpose? (IQueryable lets the caller build a query, implying the caller needs to know persistence details, which negates the repository's purpose and functionality.)
A domain object should never know about persistence, EF, joins etc. It shouldn't know what db engine you're using, or whether you're using one at all. Same with the rest of the app, if you want it to be decoupled from the persistence details.
The repository interface knows only about what the higher layer knows. This means that a generic domain repository interface looks like this:
public interface IStore<TDomainObject> //where TDomainObject != EF (ORM) entity
{
    void Save(TDomainObject entity);
    TDomainObject Get(Guid id);
    void Delete(Guid id);
}
The implementation will reside in the DAL and will use EF to work with the db. The implementation looks like this:
public class UsersRepository : IStore<User>
{
    public UsersRepository(DbContext db) { }

    public void Save(User entity)
    {
        //map entity to one or more ORM entities
        //use EF to save it
    }

    //.. other methods implementation ...
}
You don't really have a concrete generic repository. The only use for a concrete generic repository is when ANY domain object is stored in serialized form in a key-value-like table. That isn't the case with an ORM.
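To illustrate that one exception, such a store might look roughly like the sketch below. The row type, the IHasId marker and the Newtonsoft.Json serializer are assumptions made for the example, not anything EF provides:

// Hypothetical row type: every domain object is serialized into the same table.
public class KeyValueRow
{
    public Guid Id { get; set; }
    public string Type { get; set; }
    public string Json { get; set; }
}

// Hypothetical marker so the store can read the domain object's identity.
public interface IHasId { Guid Id { get; } }

public class KeyValueStore<TDomainObject> : IStore<TDomainObject>
    where TDomainObject : class, IHasId
{
    private readonly DbContext _db;

    public KeyValueStore(DbContext db) { _db = db; }

    public void Save(TDomainObject entity)
    {
        // naive: always inserts; a real version would insert or update
        _db.Set<KeyValueRow>().Add(new KeyValueRow
        {
            Id = entity.Id,
            Type = typeof(TDomainObject).FullName,
            Json = JsonConvert.SerializeObject(entity)
        });
        _db.SaveChanges();
    }

    public TDomainObject Get(Guid id)
    {
        var row = _db.Set<KeyValueRow>().Find(id);
        return row == null ? null : JsonConvert.DeserializeObject<TDomainObject>(row.Json);
    }

    public void Delete(Guid id)
    {
        var row = _db.Set<KeyValueRow>().Find(id);
        if (row == null) return;
        _db.Set<KeyValueRow>().Remove(row);
        _db.SaveChanges();
    }
}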
What about querying?
public interface IQueryUsers
{
    PagedResult<UserData> GetAll(int skip, int take);
    //or
    PagedResult<UserData> Get(CriteriaObject criteria, int skip, int take);
}
UserData is the read/view model suited to the query context.
You can use EF directly for querying in a query handler if you don't mind that your DAL knows about view models; in that case you won't need any query repo.
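For instance, a query handler that uses EF directly and projects straight into the view model might be sketched like this (the context, its Users set, the user properties and the PagedResult constructor are all assumptions for illustration):

using System;
using System.Linq;

public class UserQueries : IQueryUsers
{
    private readonly MyDbContext _db; // assumed EF context exposing a Users set

    public UserQueries(MyDbContext db) { _db = db; }

    public PagedResult<UserData> GetAll(int skip, int take)
    {
        var items = _db.Users
            .OrderBy(u => u.Name)
            .Skip(skip)
            .Take(take)
            .Select(u => new UserData { Id = u.Id, Name = u.Name })
            .ToList();

        return new PagedResult<UserData>(items, _db.Users.Count());
    }

    public PagedResult<UserData> Get(CriteriaObject criteria, int skip, int take)
    {
        // apply the criteria to the same kind of projection as above
        throw new NotImplementedException();
    }
}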
Conclusion
Your business object shouldn't know about EF entities.
The repository will use an ORM, but it never exposes the ORM to the rest of the app, so the repo interface will use only domain objects or view models (or any other app context object that isn't a persistence detail)
You do not tell the repo how to do its work, i.e. NEVER use IQueryable with a repo interface.
If you just want to use the db in an easier/cooler way and you're dealing with a simple CRUD app where you don't need (be sure about it) to maintain separation of concerns, then skip the repository altogether and use EF directly for all data access. The app will be tightly coupled to EF, but at least you'll cut out the middle man and it will be on purpose, not by mistake.
Note that using the repository in the wrong way will invalidate its use, and your app will still be tightly coupled to the persistence (ORM).
In case you believe the ORM is there to magically store your domain objects, it's not. The ORM's purpose is to simulate OOP storage on top of relational tables. It has everything to do with persistence and nothing to do with the domain, so don't use the ORM outside persistence.
DbContext is indeed built with the Unit of Work pattern. It allows all of its entities to share the same context as we work with them. This implementation is internal to the DbContext.
However, it should be noted that if you instantiate two DbContext objects, neither of them will see the other's entities that they are each tracking. They are insulated from one another, which can be problematic.
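A short illustration of that insulation (the context and Recipes set names are just for the example):

// Each context tracks its own entities; changes made through one context
// are invisible to the other until they are saved and re-queried.
using (var context1 = new BarbecurianContext())
using (var context2 = new BarbecurianContext())
{
    var recipe = context1.Recipes.First();
    recipe.Name = "Smoked Brisket";   // tracked only by context1
    // context2.Recipes.First() still sees the old name until
    // context1.SaveChanges() runs and the row is re-read.
}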
When I build an MVC application, I want to ensure that during the course of the request, all my data access code works off of a single DbContext. To achieve that, I apply the Unit of Work as a pattern external to DbContext.
Here is my Unit of Work object from a barbecue recipe app I'm building:
public class UnitOfWork : IUnitOfWork
{
    private BarbecurianContext _context = new BarbecurianContext();
    private IRepository<Recipe> _recipeRepository;
    private IRepository<Category> _categoryRepository;
    private IRepository<Tag> _tagRepository;

    public IRepository<Recipe> RecipeRepository
    {
        get
        {
            if (_recipeRepository == null)
            {
                _recipeRepository = new RecipeRepository(_context);
            }
            return _recipeRepository;
        }
    }

    public void Save()
    {
        _context.SaveChanges();
    }

    **SNIP**
I attach all my repositories, which are all injected with the same DbContext, to my Unit of Work object. So long as any repositories are requested from the Unit of Work object, we can be assured that all our data access code will be managed with the same DbContext - awesome sauce!
If I were to use this in an MVC app, I would ensure the Unit of Work is used throughout the request by instantiating it in the controller, and using it throughout its actions:
public class RecipeController : Controller
{
    private IUnitOfWork _unitOfWork;
    private IRepository<Recipe> _recipeRepository;
    private IRepository<Category> _categoryRepository;
    private IRepository<Tag> _tagRepository;

    //Using Dependency Injection
    public RecipeController(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
        _categoryRepository = _unitOfWork.CategoryRepository;
        _recipeRepository = _unitOfWork.RecipeRepository;
        _tagRepository = _unitOfWork.TagRepository;
    }
Now in our action, we can be assured that all our data access code will use the same DbContext:
    [HttpPost]
    public ActionResult Create(CreateEditRecipeViewModel model)
    {
        Mapper.CreateMap<CreateEditRecipeViewModel, Recipe>()
            .ForMember(r => r.IngredientAmounts, opt => opt.Ignore());
        Recipe recipe = Mapper.Map<CreateEditRecipeViewModel, Recipe>(model);
        _recipeRepository.Create(recipe);
        foreach (Tag tag in model.Tags)
        {
            _tagRepository.Create(tag); //I'm using the same DbContext as the recipe repo!
        }
        _unitOfWork.Save();
Searching around the internet I found this: http://www.thereformedprogrammer.net/is-the-repository-pattern-useful-with-entity-framework/. It's a two-part article about the usefulness of the repository pattern by Jon Smith.
The second part focuses on a solution. Hope it helps!
To answer your question: a repository with a unit of work pattern implementation on top of EF is a bad idea.
The DbContext of Entity Framework is implemented by Microsoft according to the unit of work pattern. That means context.SaveChanges saves your changes transactionally, in one go.
The DbSet is also an implementation of the Repository pattern. Do not build repositories just so that you can do:
void Add(Customer c)
{
    _context.Customers.Add(c);
}
Why create a one-liner method for something you can just do inside the service anyway?
There is no benefit, and nobody is swapping EF for another ORM nowadays...
You do not need that freedom...
Chris Hardie argues that multiple context objects could be instantiated, but if you do that you are already doing it wrong...
Just use an IoC tool you like, set up the context per HTTP request, and you are fine.
Take ninject for example:
kernel.Bind<ITeststepService>().To<TeststepService>().InRequestScope().WithConstructorArgument("context", c => new ITMSContext());
The service running the business logic gets the context injected.
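A sketch of such a service (the members on the context are assumptions; the point is simply the constructor injection):

public class TeststepService : ITeststepService
{
    private readonly ITMSContext _context;

    // the request-scoped context is supplied by the IoC container (see the Ninject binding above)
    public TeststepService(ITMSContext context)
    {
        _context = context;
    }

    public void ApproveTeststep(int teststepId)
    {
        var teststep = _context.Teststeps.Find(teststepId); // Teststeps set is an assumption
        teststep.Approved = true;
        _context.SaveChanges(); // the DbContext acts as the unit of work
    }
}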
Just keep it simple stupid :-)
You should consider "command/query objects" as an alternative, you can find a bunch of interesting articles around this area, but here is a good one:
https://rob.conery.io/2014/03/03/repositories-and-unitofwork-are-not-a-good-idea/
When you need a transaction over multiple DB objects, use one command object per command to avoid the complexity of the UOW pattern.
A query object per query is likely unnecessary for most projects. Instead you might choose to start with a 'FooQueries' object; by which I mean you can start with a Repository pattern for reads, but name it "Queries" to be explicit that it does not and should not do any inserts/updates.
Later, you might find it worthwhile to split out individual query objects if you want to add things like authorization and logging; you could then feed a query object into a pipeline. A rough sketch of both ideas follows.
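The sketch below assumes an EF DbContext and the recipe/tag types used earlier in this thread; the context name and its navigation properties are assumptions:

using System.Collections.Generic;
using System.Linq;

// One command object per command: SaveChanges gives you the transaction.
public class CreateRecipeCommand
{
    private readonly MyDbContext _db;

    public CreateRecipeCommand(MyDbContext db) { _db = db; }

    public void Execute(Recipe recipe, IEnumerable<Tag> tags)
    {
        _db.Recipes.Add(recipe);
        foreach (var tag in tags)
            _db.Tags.Add(tag);

        _db.SaveChanges(); // everything above is committed in one go
    }
}

// A starting point for the read side: a "Queries" object that only ever reads.
public class RecipeQueries
{
    private readonly MyDbContext _db;

    public RecipeQueries(MyDbContext db) { _db = db; }

    public List<Recipe> ByTag(string tagName)
    {
        return _db.Recipes
            .Where(r => r.Tags.Any(t => t.Name == tagName))
            .ToList();
    }
}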
I always use UoW with EF code first. I find it more performant and easier to manage your contexts, to prevent memory leaks and such. You can find an example of my approach on my GitHub (http://www.github.com/stefchri), in the RADAR project.
If you have any questions about it feel free to ask them.
I am trying to use Entity Framework data migrations, as described in this post.
However, when I try to execute the Enable-Migrations step, I receive the following error in Package Manager Console:
The target context 'MyDataContext' is not constructible. Add a default constructor or provide an implementation of IDbContextFactory
So, I created a factory class that implements IDbContextFactory in the project that contains my DbContext class, but data migrations doesn't appear to recognize it.
Is there something that I should explicitly do to instruct data migrations to use this factory class?
I also hit this problem, as I wrote my context to take a connection string name (and then used Ninject to provide it).
The process you've gone through seems correct, here is a snippet of my class implementation if it's of any help:
public class MigrationsContextFactory : IDbContextFactory<MyContext>
{
    public MyContext Create()
    {
        return new MyContext("connectionStringName");
    }
}
That should be all you need.
As @Soren pointed out, instead of using IDbContextFactory, which is not supported on some earlier EF Core releases (e.g. EF Core 2.1), we can implement IDesignTimeDbContextFactory<TContext>, which supports the missing connection string parameter.
For a settings.json based approach, which you can use with either of the referred interfaces, check @Arayn's sample, which allows us to define the "ConnectionStrings:DefaultConnection" value path.
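A minimal sketch of the design-time factory, assuming EF Core and a context named MyContext (the provider call and the context constructor are assumptions; adjust them to your own setup):

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;

// Discovered by the EF Core tools at design time and used instead of a default constructor.
public class MyContextDesignTimeFactory : IDesignTimeDbContextFactory<MyContext>
{
    public MyContext CreateDbContext(string[] args)
    {
        var options = new DbContextOptionsBuilder<MyContext>()
            .UseSqlServer("<your connection string>") // provider choice is an assumption
            .Options;

        return new MyContext(options);
    }
}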
Update 1
According to @PaulWaldman's comment, in EF Core 5 support for IDbContextFactory was reintroduced. For further details, check his comment below.
A few days ago I read a tutorial about the GenericRepository and Unit of Work patterns: http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application. I use Web Forms and I have the Entity Framework CTP4 package installed. (I can't use EF 5.)
I want to code a generic repository for my project but I was stuck at this line:
this.dbSet = context.Set<TEntity>();
I know that this line doesn't work because I use ObjectContext in my project with the database-first approach. How can I deal with it? Can I code a generic repository without migrating to code first (which is not an option in my case)?
This is the equivalent for ObjectContext:
this.dbSet = context.CreateObjectSet<TEntity>();
Now this creates an ObjectSet<TEntity> rather than a DbSet<TEntity>, but for your pattern you can use it in the same way.
UPDATE
The ObjectSet class does not have a utility method that matches the Find() method of DbSet. In order to "get by key" you would need to construct an EntityKey and use ObjectContext.GetObjectByKey(); unfortunately that's not a really simple thing to do.
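For reference, the EntityKey route looks roughly like this (the container name, entity set name and key property are assumptions for the example):

// "MyEntities" is the container, "Widgets" the entity set, "Id" the key property.
public Widget GetWidgetByKey(ObjectContext context, object keyValue)
{
    var key = new EntityKey("MyEntities.Widgets", "Id", keyValue);
    return (Widget)context.GetObjectByKey(key); // throws if no matching entity exists
}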
There really isn't a simple way to tackle this, that I've found. In my case what I've done is to base all of my entities from a common class (using custom T4 templates to generate the classes from the model). And then I can add a generic constraint to my repositories, like:
public class MyRepository<TEntity> where TEntity : MyEntityBaseClass
And then my common base class has an Id field which is inherited by all the entities, so I can simply do:
return myObjectSet.SingleOrDefault(x => x.Id == myId);
I'm sure there are other approaches, that might be a good topic for another question.
1. You want to add the DbContextGenerator template to your Visual Studio templates.
2. After this, make sure you clear out the default generation tool on your .edmx file.
3. Now you can implement the GenericRepository pattern as you wish.
I have a Foo entity in Entity Framework. But I'm making it inherit from IFoo so that my business logic only knows IFoo - thus abstracting Entity Framework away.
The problem is that Foo has a collection of Bar entities. And this collection is of type EntityCollection<Bar>.
If I put this collection in IFoo as it is, I make IFoo dependent on Entity Framework. So I thought of putting it as ICollection<IBar>, but this doesn't compile (naturally).
The only solution I can think of is to go to the concrete Foo implementation generated by the Entity Framework designer and change the collection from EntityCollection<Bar> to ICollection<IBar> there. But I dread the thought of the implications this will have on Entity Framework "behind the scenes".
Is there any way for me to define IFoo and IBar independently of Entity Framework while still maintaining Foo and Bar as EF Entities that implement them? Do IFoo and IBar even make sense, if I cannot achieve this independence that I aim for?
The general concept you are referring to is "persistence ignorance" (PI), although that generally applies directly to entities themselves rather than the code that consumes the entities.
In any case, Hibernate and NHibernate natively support PI, but the initial version of Microsoft's Entity Framework does not. MS caught a lot of flak for this and PI is probably the #1 most discussed feature for the next version (whenever that is).
As far as what you are trying to do with interfaces, does the collection of Bars need to be modified after it is retrieved? If the answer is yes, there is no easy answer. Even covariance couldn't help you here because ICollection<T> has an Add method.
If the collection is read-only, then you might consider exposing it as IEnumerable<IBar>. The Enumerable.Cast method makes this fairly convenient.
interface IFoo
{
    IEnumerable<IBar> Bars { get; }
}

partial class Foo : IFoo
{
    IEnumerable<IBar> IFoo.Bars
    {
        get { return Bars.Cast<IBar>(); }
    }
}
Also, I know of at least one effort to make the current version of EF support persistence ignorance.
I'm a Java developer, so I can't comment with any authority on Entity Framework. I can tell you that ORM solutions like Hibernate make it possible to have POJO persistence without having to resort to common abstract classes, interfaces, or modifying byte code. It handles relationships like the 1:m you cite for your Foo and Bar without having to use special collection classes.
The special sauce is externalized into mapping configuration and Hibernate itself.
The little bit that I read about Entity Framework suggests that it's an ORM solution with the same aim: POCO persistence. I didn't see any mention of interfaces. I can't see the need for them from your example, because it's too abstract.
I'm inferring that it's possible to get that independence between business objects and persistence tier without having to resort to those interfaces, because I know Hibernate does it. I'd say that Spring's JDBC solution accomplishes it as well, because there's no need for common interfaces. They use a RowMapper construct to ferry data out of a query and into an object.
I wish I could advise you precisely how to do it with Entity Framework, but maybe you'll take heart knowing that it can be done.
I recently wrote a comprehensive post about this: Persistence Ignorance in ADO.NET Entity Framework. You might want to look at EFPocoAdapter. It does just this, and it will eventually be folded into EF v2.
For what it's worth, I am using EFPocoAdapter and it's been working well for me.
We've been doing the exact same thing for LINQ to SQL. I got around the collection issue by writing a class which wraps an IList and casts to and from the correct type as required. It looks a bit like this:
public class ListWrapper<TSource, TTarget> : IList<TTarget>
    where TTarget : class
    where TSource : class, TTarget
{
    private IList<TSource> internalList;

    public ListWrapper(IList<TSource> internalList)
    {
        this.internalList = internalList;
    }

    public void Add(TTarget item)
    {
        internalList.Add((TSource)item);
    }

    public IEnumerator<TTarget> GetEnumerator()
    {
        return new EnumeratorWrapper<TSource, TTarget>(internalList.GetEnumerator());
    }

    // and all the other IList members
}
EnumeratorWrapper similarly wraps an IEnumerator and performs the casting.
In the LINQ to SQL partial classes we expose the property like this:
public IList<IFoo> Foos
{
    get
    {
        return new ListWrapper<Foo, IFoo>(this.Foos_internal);
    }
}
Any changes to the exposed list will be performed on the internal EntitySet so they stay in sync.
This works well enough but my feeling is that this whole approach is more trouble than it's worth, I'm a huge NHibernate fan and a strong believer in P.I. but we've put in a LOT of extra effort doing this and haven't really seen any advantage. We use the repository pattern to abstract away the actual DataContext access which I would say is the key part of decoupling ourselves from LINQ to SQL.
Use a partial class to separate your logic and rules from the autogenerated EF objects. In the example below the FooEntityObject class is split into two using the partial keyword. I've used this technique before with EF and LINQ to SQL. The partial classes can be stored in separate files, so if you regenerate your EF objects your custom code doesn't get overwritten.
interface IFoo
{
    ICollection<IBar> GetBars();
}

public partial class FooEntityObject : IFoo
{
    public ICollection<IBar> GetBars()
    {
        // convert EntityCollection<Bar> into ICollection<IBar> here
        return Bars.Cast<IBar>().ToList();
    }
}

public partial class FooEntityObject
{
    EntityCollection<Bar> Bars { get; set; }
}