I've been creating a prototype for a modern MUD engine. A MUD is a simple form of simulation and provides a good testbed for a concept I'm working on. This has led me to a couple of places in my code where things are a bit unclear, and the design is coming into question (probably because it's flawed). I'm using model first (I may need to change this) and I've designed a top-down architecture of game objects. I may be doing this completely wrong.
What I've done is create a MUDObject entity. This entity is effectively a base for all of my other logical constructs, such as characters, their items, races, etc. I've also created a set of three meta classes which are used for logical purposes as well: Attributes, Events, and Flags. They are fairly straightforward, and all inherit from MUDObject.
The MUDObject class is designed to provide default data behavior for all of the objects: this includes deletion of dead objects, the automatic clearing of floors, etc. It is also designed so this logic can be overridden virtually if needed, for example, checking a room to see if an effect has ended and deleting the effect (removing the flag).
public partial class MUDObject
{
    public virtual void Update()
    {
        // dead once LifeTime has passed; the original > 0 check was inverted
        if (this.LifeTime.Value.CompareTo(DateTime.Now) < 0)
        {
            using (var context = new ReduxDataContext())
            {
                context.MUDObjects.DeleteObject(this);
            }
        }
    }

    public virtual void Pause()
    {
    }

    public virtual void Resume()
    {
    }

    public virtual void Stop()
    {
    }
}
I've also got a class World; it is derived from MUDObject and contains the areas and rooms (which in turn contain the game's objects), and it handles the timer that runs the updates. (This will probably be moved; it's here because, if it works, it would limit updates to only the objects in-world at the time.)
public partial class World
{
    private Timer ticker;

    public void Start()
    {
        this.ticker = new Timer(3000.0);
        this.ticker.Elapsed += ticker_Elapsed;
        this.ticker.Start();
    }

    private void ticker_Elapsed(object sender, ElapsedEventArgs e)
    {
        this.Update();
    }

    public override void Update()
    {
        this.CurrentTime += 3;
        // update contents
        base.Update();
    }

    public override void Pause()
    {
        this.ticker.Enabled = false;
        // update contents
        base.Pause();
    }

    public override void Resume()
    {
        this.ticker.Enabled = true;
        // update contents
        base.Resume(); // was this.Resume(), which would recurse forever
    }

    public override void Stop()
    {
        this.ticker.Stop();
        // update contents
        base.Stop();
    }
}
I'm curious about a couple of things.
Is there a way to recode the context so that it has separate
ObjectSets for each type derived from MUDObject?
i.e. context.MUDObjects.Flags or context.Flags
If not, how can I query a child type specifically?
Does the Update/Pause/Resume/Stop architecture I'm using work
properly when placed into the EF entities directly, given that
they're for data purposes only?
Will locking be an issue?
Does the partial class automatically commit changes when they are made?
Would I be better off using a flat repository and doing this in the game engine directly?
1) Is there a way to recode the context so that it has separate ObjectSets for each type derived from MUDObject?
Yes, there is. If you decide that you want to define a base class for all your entities, it is common to have an abstract base class that is not part of the Entity Framework model. The model only contains the derived types, and the context contains DbSets of the derived types (if it is a DbContext), like
public DbSet<Flag> Flags { get; set; }
If appropriate you can implement inheritance between classes, but that would be to express polymorphism, not to implement common persistence-related behaviour.
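As a minimal sketch of that layout (type names borrowed from the question; assume EF 5's DbContext, and adjust the mapping to your model):
// Unmapped base class: shared members live here, EF never sees it.
public abstract class MUDObject
{
    public int Id { get; set; }
    public DateTime? LifeTime { get; set; }
}

// Only the derived types are part of the model.
public class Flag : MUDObject
{
    public string Name { get; set; }
}

public class MudContext : DbContext
{
    // One set per derived type: context.Flags rather than context.MUDObjects
    public DbSet<Flag> Flags { get; set; }
}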
2) Does the Update/Pause/Resume/Stop architecture I'm using work properly when placed into the EF entities directly?
No. Entities are not supposed to know anything about persistence. The context is responsible for creating them, tracking their changes and updating/deleting them. I think that also answers your question about automatically committing changes: no.
Elaboration:
I think here it's good to bring up the single responsibility principle. A general pattern would be to
let a context populate objects from a store
let the objects act according to their responsibilities (the simulation)
let a context store their state whenever necessary
I think Pause/Resume/Stop could be responsibilities of MUD objects. Update is an altogether different kind of action and responsibility.
Now I have to speculate, but take your World class. You should be able to express its responsibility in a short phrase, maybe something like "harbour other objects" or "define boundaries". I don't think it should do the timing. I think the timing should be the responsibility of some core utility which signals that a time interval has elapsed. Other objects know how to respond to that (e.g. do some state change, or, the context or repository, save changes).
Well, this is only an example of how to think about it, probably far from correct.
One other thing is that I think saving changes should be done not nearly as often as state changes of the objects that carry out the simulation. It would probably slow down the process dramatically. Maybe it should be done in longer intervals or by a user action.
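As a rough sketch of that division of responsibilities (all names here are invented), the core timing utility can be as small as this; it only signals that an interval has elapsed, and subscribers react according to their own responsibilities:
public class GameClock
{
    private readonly System.Timers.Timer timer;
    public event EventHandler Tick;

    public GameClock(double intervalMs)
    {
        timer = new System.Timers.Timer(intervalMs);
        timer.Elapsed += (s, e) =>
        {
            var handler = Tick;
            if (handler != null) handler(this, EventArgs.Empty);
        };
    }

    public void Start() { timer.Start(); }
    public void Stop() { timer.Stop(); }
}

// Wiring is then up to each responsibility:
//   clock.Tick += (s, e) => world.Update();            // simulation step
//   clock.Tick += (s, e) => persistence.MaybeSave();   // save far less often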
First thing to say: if you are using EF 4.1 (as it is tagged), you should really consider moving to version 5.0 (you will need to target .NET 4.5 for this).
Besides several performance improvements, you can benefit from other features as well. The code I will show you works in 5.0 (I don't know whether it works in 4.1).
Now, let's go through your several questions:
Is there a way to recode the context so that it has separate
ObjectSets for each type derived from MUDObject? If not how can I
query a child type specifically?
i.e. context.MUDObjects.Flags or context.Flags
Yes, you can. But calling it is a little different: you will not have context.Worlds, only the base set. If you want to get the set of Worlds (which inherit from MUDObject), you will call:
var worlds = context.MUDObjects.OfType<World>();
Or you can do it in a direct way by using generics:
var worlds = context.Set<World>();
If you define your inheritance the right way, you should have an abstract class called MUDObject, and all the others should inherit from that class. EF can work perfectly with this; you just need to set it up correctly.
Does the Update/Pause/Resume/Stop architecture I'm using work properly
when placed into the EF entities directly? given than it's for data
purposes only?
In this case I think you should consider using a design pattern called the Strategy pattern; do some research, it will fit your objects.
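For a flavour of what Strategy could mean here, a hedged sketch (every name below is invented, not from the question's model): the update behaviour moves out of the entity into a swappable object.
public interface IUpdateStrategy
{
    void Update(MUDObject target);
}

public class ExpireWhenLifetimePasses : IUpdateStrategy
{
    public void Update(MUDObject target)
    {
        // mark the object dead once its lifetime has passed;
        // actual deletion stays in the data layer, not in the entity
        if (target.LifeTime.HasValue && target.LifeTime.Value < DateTime.Now)
            target.IsDead = true; // IsDead is an invented marker property
    }
}

// The engine then delegates instead of overriding Update():
//   mudObject.UpdateStrategy.Update(mudObject);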
Will locking be an issue?
Depends on how you develop the system....
Does the partial class automatically commit changes when they are
made?
I did not understand that question... Partial classes are just like regular classes; they are just split across different files, but when compiled (or even at design time, because of the vshost.exe) they are in fact just one class.
Would I be better off using a flat repository and doing this in the
game engine directly?
Hard to answer; it all depends on the requirements of the game, the deployment strategy...
Related
There are likely no more than 2-4 widely used approaches to this problem.
I have a situation in which there's a common class I use all over the place, and (on occasion) I'd like to give it special abilities. For argument's sake, let's say that type checking is not a requirement.
What are some means of giving functionality to a class without it being simply inheritance or member functions?
One way I've seen is the "decorator" pattern in which a sort of mutator wraps around the class, modifies it a bit, and spits out a version of it with more functions.
Another one I've read about but never used is for gaming. It has something to do with entities and power-ups/augments. I'm not sure about the specifics, but I think they have a list of them.
???
I don't need specific code of a specific language so much as a general gist and some keywords. I can implement from there.
So as far as I understand, you're looking to extend an interface to allow client-specific implementations that may require additional functionality, and you want to do so in a way that doesn't clutter up the base class.
As you mentioned, for simple systems, the standard way is to use the Adaptor pattern: subclass the "special abilities", then call that particular subclass when you need it. This is definitely the best choice if the extent of the special abilities you'll need to add is known and reasonably small, i.e. you generally only use the base class, but for three-to-five places where additional functionality is needed.
But I can see why you'd want some other possible options, because rarely do we know upfront the full extent of the additional functionality that will be required of the subclasses (i.e. when implementing a Connection API or a Component Class, each of which could be extended almost without bound). Depending on how complex the client-specific implementations are, how much additional functionality is needed and how much it varies between the implementations, this could be solved in a variety of ways:
Decorator Pattern as you mentioned (useful in the case where the special entities are only ever expanding the pre-existing methods of the base class, without adding brand new ones)
class MyClass{};
DecoratedClass = decorate(MyClass);
A combined AbstractFactory/Adaptor builder for the subclasses (useful for cases where there are groupings of functionality in the subclasses that may differ in their implementations)
interface Button {
void paint();
}
interface GUIFactory {
Button createButton();
}
class WinFactory implements GUIFactory {
public Button createButton() {
return new WinButton();
}
}
class OSXFactory implements GUIFactory {
public Button createButton() {
return new OSXButton();
}
}
class WinButton implements Button {
public void paint() {
System.out.println("I'm a WinButton");
}
}
class OSXButton implements Button {
public void paint() {
System.out.println("I'm an OSXButton");
}
}
class Application {
public Application(GUIFactory factory) {
Button button = factory.createButton();
button.paint();
}
}
public class ApplicationRunner {
public static void main(String[] args) {
new Application(createOsSpecificFactory());
}
public static GUIFactory createOsSpecificFactory() {
int sys = readFromConfigFile("OS_TYPE");
if (sys == 0) return new WinFactory();
else return new OSXFactory();
}
}
The Strategy pattern could also work, depending on the use case. But that would be a heavier lift with the preexisting base class that you don't want to change, and it depends on whether it is a strategy that is changing between those subclasses. The Visitor pattern could also fit, but would have the same problem and involve a major change to the architecture around the base class.
class MyClass{
public sort() { Globals.getSortStrategy()() }
};
Finally, if the "special abilities" needed are enough (or could eventually be enough) to justify a whole new interface, this may be a good time for the use of the Extension Objects pattern. Though it does make your clients or subclasses far more complex, as they have to manage a lot more: checking that the specific extension object and its required methods exist, etc.
class MyClass{
public addExtension(addMe) {
addMe.initialize(this);
}
public getExtension(getMe);
};
(new MyClass()).getExtension("wooper").doWoop();
With all that being said, keep it as simple as possible; sometimes you just have to write the specific subclasses or a few adaptors and you're done, especially with a preexisting class in use in many other places. You also have to ask how much you want to leave the class open for further extension. It might be worthwhile to keep the tech debt low with an abstract factory, so fewer changes need to be made when you add more functionality down the road. Or maybe what you really want is to lock the class down to prevent further extension, for the sake of understandability and simplicity. You have to examine your use case, future plans, and existing architecture to decide on the path forward. More than likely, there are lots of right answers and only a couple of very wrong ones, so weigh the options, pick one that feels right, then implement and push code.
As far as I've gotten, adding functions to a class after the fact is a bit of a non-starter. There are ways, but it always seems to get ugly, because the class is meant to be itself and nothing else ever.
What has been more approachable is to add references to functions to an object or map.
So, I have bound the CombatController to an object called "godObject". In the Start() method, I call init() functions on other classes. I did this so I can control the order in which objects are initialized since, for example, the character controller relies on the grid controller being initialized.
Quick diagram:
--------------------   calls
| CombatController | ----------> CameraController.init();
--------------------      |
                          |----> GridController.init();
                          |
                          |----> CharacterController.init();
So, now I have a slight problem. I have multiple properties that I need in every controller. At the moment, I have bound everything to the combat controller itself. That means that, in every controller, I have to get an instance of the CombatController via GameObject.Find("godObject").GetComponent<CombatController>(). To be honest, I don't think this is good design.
My idea now was to create a BaseCombatController that extends MonoBehavior, and then have all other classes like GridController, CharacterController etc. extend the BaseCombatController. It might look like this:
public class BaseCombatController : MonoBehaviour
{
    public GameObject activePlayer;

    public void setActivePlayer(GameObject player) {
        this.activePlayer = player;
    }

    // ... more stuff to come ...
}
This way, I could access activePlayer everywhere without the need to create a new instance of the CombatController. However, I'm not sure if this doesn't have possible side effects.
So, lots of text for a simple question, is that safe to do?
I use inheritance in Unity all the time. The trick, like you have in the question, is to allow your base class to inherit from MonoBehaviour. For example:
public class BaseItem : MonoBehaviour
{
    public string ItemName;
    public int Price;

    public virtual void PickUp() { /* pickup logic */ }

    // Additional functions, Update etc. Make them virtual.
}
This class sets up what an item should do. Then in a derived class you can change and extend this behavior.
public class Coin : BaseItem
{
    // properties set in the inspector

    public override void PickUp() { /* override pickup logic */ }
}
I have used this design pattern a lot over the past year, and am currently using it in a retail product. I would say go for it! Unity seems to favor components over inheritance, but you could easily use them in conjunction with each other.
Hope this helps!
As far as I can see this should be safe. If you look into Unity's internal scripts, or even Microsoft's, they all extend/inherit (from) each other.
Another thing you could try would be the use of interfaces, here is the Unity Documentation to them: https://unity3d.com/learn/tutorials/topics/scripting/interfaces if you want to check it out.
You are right that GameObject.Find is pure code smell.
You can do it via the inheritance tree (as discussed earlier) or even better via interfaces (as mentioned by Assasin Bot), or (I am surprised no one mentioned it earlier) via static fields (aka the Singleton pattern).
One thing to add from experience - having to have Inits() called in a specific order is a yellow flag for your design - I've been there myself and found myself drowned by init order management.
As general advice: Unity gives you two useful callbacks - Awake() and Start(). If you find yourself needing Init(), you are probably not using those two as they were designed.
All the Awake()s are guaranteed (for active objects) to run before the first Start(), so do all the internal object initialisation in Awake(), and bind to external objects in Start(). If you find yourself needing finer control - you should probably simplify the design a bit.
As a rule of thumb: all objects should have their internal state (GetComponent<> calls, list inits, etc.) in order by the end of Awake(), but they should not make any calls that depend on other objects being ready before Start(). Splitting it this way usually helps a lot.
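A small sketch of that split (the class and field names here are hypothetical):
using UnityEngine;

public class UnitController : MonoBehaviour
{
    private Animator animator;   // internal state
    private GridController grid; // external dependency

    void Awake()
    {
        // internal initialisation only: nothing else needs to be ready
        animator = GetComponent<Animator>();
    }

    void Start()
    {
        // binding to external objects: every Awake() has already run
        grid = FindObjectOfType<GridController>();
    }
}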
I have always used the Repository pattern, but for my latest project I wanted to see if I could perfect the use of it and my implementation of "Unit of Work". The more I dug in, the more I started asking myself: "Do I really need it?"
Now this all starts with a couple of comments on Stack Overflow tracing back to Ayende Rahien's posts on his blog, two in particular:
repository-is-the-new-singleton
ask-ayende-life-without-repositories-are-they-worth-living
This could probably be talked about forever and ever, and it depends on the application. What I'd like to know is:
would this approach be suited for an Entity Framework project?
using this approach, does the business logic still go in a service layer, or in extension methods (as shown below; I know the extension method uses the NHibernate session)?
That's easily done using extension methods. Clean, simple and reusable.
public static IEnumerable<T> GetAll<T>(
    this ISession instance, Expression<Func<T, bool>> where) where T : class
{
    return instance.QueryOver<T>().Where(where).List();
}
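Hypothetical usage of the extension above (the Order entity and its fields are invented):
var overdueOrders = session.GetAll<Order>(o => o.DueDate < DateTime.Now);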
Using this approach and Ninject as DI, do I need to make the context an interface and inject it into my controllers?
I've gone down many paths and created many implementations of repositories on different projects and... I've thrown in the towel and given up on it. Here's why.
Coding for the exception
Do you code for the 1% chance that your database is going to change from one technology to another? If you're thinking about your business's future state and say yes, that's a possibility, then either a) they must have a lot of money to afford a migration to another DB technology, b) you're choosing a DB technology for fun, or c) something has gone horribly wrong with the first technology you decided to use.
Why throw away the rich LINQ syntax?
LINQ and EF were developed so you could do neat stuff with them to read and traverse object graphs. Creating and maintaining a repository that can give you the same flexibility to do that is a monstrous task. In my experience, any time I've created a repository I've ALWAYS had business logic leak into the repository layer, either to make queries more performant and/or to reduce the number of hits to the database.
I don't want to create a method for every single permutation of a query that I have to write. I might as well write stored procedures. I don't want GetOrder, GetOrderWithOrderItem, GetOrderWithOrderItemWithOrderActivity, GetOrderByUserId, and so on... I just want to get the main entity and traverse and include the object graph as I so please.
Most examples of repositories are bullshit
Unless you are developing something REALLY bare-bones like a blog or something your queries are never going to be as simple as 90% of the examples you find on the internet surrounding the repository pattern. I cannot stress this enough! This is something that one has to crawl through the mud to figure out. There will always be that one query that breaks your perfectly thought out repository/solution that you've created, and it's not until that point where you second guess yourself and the technical debt/erosion begins.
Don't unit test me bro
But what about unit testing if I don't have a repository? How will I mock? Simple, you don't. Let's look at it from both angles:
No repository - You can mock the DbContext using an IDbContext or some other tricks but then you're really unit testing LINQ to Objects and not LINQ to Entities because the query is determined at runtime... OK so that's not good! So now it's up to the integration test to cover this.
With repository - You can now mock your repositories and unit test the layer(s) in between. Great, right? Well, not really... In the cases above where you have to leak logic into the repository layer to make queries more performant and/or hit the database less, how can your unit tests cover that? It's now in the repo layer, and you don't want to test IQueryable<T>, right? Also, let's be honest, your unit tests aren't going to cover the queries that have a 20-line .Where() clause, .Include() a bunch of relationships, and hit the database again to do all this other stuff, blah, blah, blah, anyway, because the query is generated at runtime. Also, since you created a repository to keep the upper layers persistence-ignorant, if you now want to change your database technology, sorry, your unit tests are definitely not going to guarantee the same results at runtime, so it's back to integration tests. So the whole point of the repository seems weird..
2 cents
We already lose a lot of functionality and syntax when using EF over plain stored procedures (bulk inserts, bulk deletes, CTEs, etc.) but I also code in C# so I don't have to type binary. We use EF so we can have the possibility of using different providers and to work with object graphs in a nice related way amongst many things. Certain abstractions are useful and some are not.
The repository pattern is an abstraction. Its purpose is to reduce complexity and make the rest of the code persistence-ignorant. As a bonus it allows you to write unit tests instead of integration tests.
The problem is that many developers fail to understand the pattern's purpose and create repositories which leak persistence-specific information up to the caller (typically by exposing IQueryable<T>). By doing so they get no benefit over using the OR/M directly.
Update to address another answer
Coding for the exception
Using repositories is not about being able to switch persistence technology (i.e. changing database or using a webservice etc instead). It's about separating business logic from persistence to reduce complexity and coupling.
Unit tests vs integration tests
You do not write unit tests for repositories. period.
But by introducing repositories (or any other abstraction layer between persistence and business) you are able to write unit tests for the business logic. I.e. you do not have to worry about your tests failing due to an incorrectly configured database.
As for the queries: if you use LINQ you also have to make sure that your queries work, just as you have to do with repositories, and that is done using integration tests.
The difference is that if you have not mixed your business with LINQ statements, you can be 100% sure that it's your persistence code that is failing and not something else.
If you analyze your tests you will also see that they are much cleaner if you have not mixed concerns (i.e. LINQ + Business logic)
Repository examples
Most examples are bullshit. That is very true. However, if you google any design pattern you will find a lot of crappy examples. That is no reason to avoid using a pattern.
Building a correct repository implementation is very easy. In fact, you only have to follow a single rule:
Do not add anything into the repository class until the very moment that you need it
A lot of coders are lazy and try to make a generic repository and use a base class with a lot of methods that they might need. YAGNI. You write the repository class once and keep it as long as the application lives (which can be years). Why fuck it up by being lazy? Keep it clean without any base class inheritance. That will make it much easier to read and maintain.
(The above statement is a guideline and not a law. A base class can very well be motivated. Just think before you add it, so that you add it for the right reasons)
Old stuff
Conclusion:
If you don't mind having LINQ statements in your business code and don't care about unit tests, I see no reason not to use Entity Framework directly.
Update
I've blogged both about the repository pattern and what "abstraction" really means: http://blog.gauffin.org/2013/01/repository-pattern-done-right/
Update 2
For a single entity type with 20+ fields, how will you design a query method to support any permutation? You don't want to limit search only by name; what about searching by navigation properties, listing all orders with an item with a specific price code, three levels of navigation-property search? The whole reason IQueryable was invented was to be able to compose any combination of search against the database. Everything looks great in theory, but users' needs win over theory.
Again: An entity with 20+ fields is incorrectly modeled. It's a GOD entity. Break it down.
I'm not arguing that IQueryable wasn't made for querying. I'm saying that it's not right for an abstraction layer like the Repository pattern, since it's leaky. There is no 100% complete LINQ to SQL provider (like EF).
They all have implementation specific things like how to use eager/lazy loading or how to do SQL "IN" statements. Exposing IQueryable in the repository forces the user to know all those things. Thus the whole attempt to abstract away the data source is a complete failure. You just add complexity without getting any benefit over using the OR/M directly.
Either implement Repository pattern correctly or just don't use it at all.
(If you really want to handle big entities you can combine the Repository pattern with the Specification pattern. That gives you a complete abstraction which also is testable.)
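As a rough sketch of that combination (the interfaces and the Order entity here are invented for illustration; you would need System.Linq.Expressions):
public interface ISpecification<T>
{
    Expression<Func<T, bool>> ToExpression();
}

public class OverdueOrders : ISpecification<Order>
{
    public Expression<Func<Order, bool>> ToExpression()
    {
        return o => o.DueDate < DateTime.Now && o.Status != "Complete";
    }
}

public interface IOrderRepository
{
    // Callers compose queries from specifications; IQueryable never leaks out.
    IList<Order> Find(ISpecification<Order> specification);
}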
IMO both the Repository abstraction and the UnitOfWork abstraction have a very valuable place in any meaningful development. People will argue about implementation details, but just as there are many ways to skin a cat, there are many ways to implement an abstraction.
Your question is specifically to use or not to use and why.
As you have no doubt realised, you already have both these patterns built into Entity Framework: DbContext is the UnitOfWork and DbSet is the Repository. You don't generally need to unit test the UnitOfWork or Repository themselves, as they simply mediate between your classes and the underlying data access implementations. What you will find yourself needing to do, again and again, is mock these two abstractions when unit testing the logic of your services.
You can mock, fake or whatever with external libraries adding layers of code dependencies (that you don’t control) between the logic doing the testing and the logic being tested.
So a minor point is that having your own abstraction for UnitOfWork and Repository gives you maximum control and flexibility when mocking your unit tests.
All very well, but for me, the real power of these abstractions is they provide a simple way to apply Aspect Oriented Programming techniques and adhere to the SOLID principles.
So you have your IRepository:
public interface IRepository<T>
    where T : class
{
    T Add(T entity);
    void Delete(T entity);
    IQueryable<T> AsQueryable();
}
And its implementation:
public class Repository<T> : IRepository<T>
    where T : class
{
    private readonly IDbSet<T> _dbSet;

    public Repository(PPContext context)
    {
        _dbSet = context.Set<T>();
    }

    public T Add(T entity)
    {
        return _dbSet.Add(entity);
    }

    public void Delete(T entity)
    {
        _dbSet.Remove(entity);
    }

    public IQueryable<T> AsQueryable()
    {
        return _dbSet.AsQueryable();
    }
}
Nothing out of the ordinary so far but now we want to add some logging - easy with a logging Decorator.
public class RepositoryLoggerDecorator<T> : IRepository<T>
    where T : class
{
    private readonly Logger logger = LogManager.GetCurrentClassLogger();
    private readonly IRepository<T> _decorated;

    public RepositoryLoggerDecorator(IRepository<T> decorated)
    {
        _decorated = decorated;
    }

    public T Add(T entity)
    {
        logger.Log(LogLevel.Debug, () => DateTime.Now.ToLongTimeString());
        T added = _decorated.Add(entity);
        logger.Log(LogLevel.Debug, () => DateTime.Now.ToLongTimeString());
        return added;
    }

    public void Delete(T entity)
    {
        logger.Log(LogLevel.Debug, () => DateTime.Now.ToLongTimeString());
        _decorated.Delete(entity);
        logger.Log(LogLevel.Debug, () => DateTime.Now.ToLongTimeString());
    }

    public IQueryable<T> AsQueryable()
    {
        return _decorated.AsQueryable();
    }
}
All done, and with no change to our existing code. There are numerous other cross-cutting concerns we can add, such as exception handling, data caching, data validation or whatever, and throughout our design and build process the most valuable thing we have that enables us to add simple features without changing any of our existing code is our IRepository abstraction.
Now, many times I have seen this question on StackOverflow – “how do you make Entity Framework work in a multi tenant environment?”.
https://stackoverflow.com/search?q=%5Bentity-framework%5D+multi+tenant
If you have a Repository abstraction then the answer is "it's easy, add a decorator":
public class RepositoryTennantFilterDecorator<T> : IRepository<T>
    where T : class
{
    // public for the unit test example below
    public readonly IRepository<T> _decorated;

    public RepositoryTennantFilterDecorator(IRepository<T> decorated)
    {
        _decorated = decorated;
    }

    public T Add(T entity)
    {
        return _decorated.Add(entity);
    }

    public void Delete(T entity)
    {
        _decorated.Delete(entity);
    }

    public IQueryable<T> AsQueryable()
    {
        return _decorated.AsQueryable().Where(o => true);
    }
}
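Note that the .Where(o => true) above is only a placeholder; a real implementation would filter on whatever identifies the tenant, e.g. something like .Where(o => o.TenantId == _currentTenantId), assuming your entities expose such a key.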
IMO you should always place a simple abstraction over any 3rd party component that will be referenced in more than a handful of places. From this perspective an ORM is the perfect candidate as it is referenced in so much of our code.
The answer that normally comes to mind when someone says “why should I have an abstraction (e.g. Repository) over this or that 3rd party library” is “why wouldn’t you?”
P.S. Decorators are extremely simple to apply using an IoC Container, such as SimpleInjector.
[TestFixture]
public class IRepositoryTesting
{
    [Test]
    public void IRepository_ContainerRegisteredWithTwoDecorators_ReturnsDecoratedRepository()
    {
        Container container = new Container();
        container.RegisterLifetimeScope<PPContext>();
        container.RegisterOpenGeneric(
            typeof(IRepository<>),
            typeof(Repository<>));
        container.RegisterDecorator(
            typeof(IRepository<>),
            typeof(RepositoryLoggerDecorator<>));
        container.RegisterDecorator(
            typeof(IRepository<>),
            typeof(RepositoryTennantFilterDecorator<>));
        container.Verify();

        using (container.BeginLifetimeScope())
        {
            var result = container.GetInstance<IRepository<Image>>();

            Assert.That(
                result,
                Is.InstanceOf(typeof(RepositoryTennantFilterDecorator<Image>)));
            Assert.That(
                (result as RepositoryTennantFilterDecorator<Image>)._decorated,
                Is.InstanceOf(typeof(RepositoryLoggerDecorator<Image>)));
        }
    }
}
First of all, as suggested in some answers, EF itself implements the repository pattern; there is no need to create a further abstraction just to name it a repository.
Mockable Repository for Unit Tests, do we really need it?
We let EF talk to a test DB in unit tests, testing our business logic straight against a SQL test DB. I don't see any benefit in having a mock of any repository at all. What is really wrong with doing unit tests against a test database? As it is, bulk operations are not possible with EF and we end up writing raw SQL anyway. SQLite in-memory is a perfect candidate for running unit tests against a real database.
Unnecessary Abstraction
Do you want to create a repository just so that in the future you can easily replace EF with NHibernate or anything else? Sounds like a great plan, but is it really cost effective?
Linq kills unit tests?
I would like to see any examples of how it can.
Dependency Injection, IoC
Wow, these are great words; sure, they look great in theory, but sometimes you have to choose a trade-off between a great design and a great solution. We did use all of that, and we ended up throwing it all in the trash and choosing a different approach. Size vs. speed (size of code and speed of development) matters hugely in real life. Users need flexibility; they don't care whether your code is great in design in terms of DI or IoC.
Unless you are building Visual Studio
All these great designs are needed if you are building a complex program like Visual Studio or Eclipse, which will be developed by many people and needs to be highly customizable. All the great development patterns came into the picture after the years of development these IDEs have gone through, and they have evolved to a place where all these design patterns matter so much. But if you are doing a simple web-based payroll or a simple business app, it is better to evolve your development over time, instead of spending time building it for a million users when it will only ever be deployed to hundreds.
Repository as Filtered View - ISecureRepository
On the other side, a repository should be a filtered view of EF which guards access to data by applying the necessary filters based on the current user/role.
But doing so complicates the repository even more, as it ends up as a huge code base to maintain. People end up creating different repositories for different user types or combinations of entity types. Not only this, we also end up with lots of DTOs.
The following is an example implementation of a filtered repository without creating a whole set of classes and methods. It may not answer the question directly, but it can be useful in deriving one.
Disclaimer: I am the author of Entity REST SDK.
http://entityrestsdk.codeplex.com
Keeping the above in mind, we developed an SDK which creates a repository as a filtered view based on a SecurityContext, which holds filters for CRUD operations. Only two kinds of rules are needed to simplify any complex operation: the first is access to the entity, and the other is a read/write rule per property.
The advantage is that you do not rewrite business logic or repositories for different user types; you simply block or grant access.
public class DefaultSecurityContext : BaseSecurityContext
{
    public static DefaultSecurityContext Instance = new DefaultSecurityContext();

    // UserID for currently logged in User
    public static long UserID
    {
        get
        {
            return long.Parse(HttpContext.Current.User.Identity.Name);
        }
    }

    public DefaultSecurityContext()
    {
    }

    protected override void OnCreate()
    {
        // User can access his own Account only
        var acc = CreateRules<Account>();
        acc.SetRead(y => x => x.AccountID == UserID);
        acc.SetWrite(y => x => x.AccountID == UserID);

        // User can only modify AccountName and EmailAddress fields
        acc.SetProperties(SecurityRules.ReadWrite,
            x => x.AccountName,
            x => x.EmailAddress);

        // User can read AccountType field
        acc.SetProperties<Account>(SecurityRules.Read,
            x => x.AccountType);

        // User can access his own Orders only
        var order = CreateRules<Order>();
        order.SetRead(y => x => x.CustomerID == UserID);

        // User can modify Order only if OrderStatus is not complete
        order.SetWrite(y => x => x.CustomerID == UserID
            && x.OrderStatus != "Complete");

        // User can only modify OrderNotes and OrderStatus
        order.SetProperties(SecurityRules.ReadWrite,
            x => x.OrderNotes,
            x => x.OrderStatus);

        // User can not delete orders
        order.SetDelete(order.NotSupportedRule);
    }
}
These LINQ rules are evaluated against the database in the SaveChanges method for every operation, and they act as a firewall in front of the database.
There is a lot of debate over which method is correct, so I look at it as both being acceptable, and I use whichever one I like the most (which is no repository or UoW of my own).
In EF UoW is implemented via DbContext and the DbSets are repositories.
As for how to work with the data layer, I just work directly on the DbContext object; for complex queries I make extension methods for the query that can be reused (see the sketch after this answer).
I believe Ayende also has some posts about how abstracting out CUD operations is bad.
I always make an interface and have my context inherit from it so I can use an IoC container for DI.
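Roughly what such a reusable query extension might look like (a sketch only; the context, entity, and field names are invented):
public static class OrderQueries
{
    // Encapsulates a complex query once so every caller can reuse it.
    public static IQueryable<Order> ActiveForCustomer(
        this MyDbContext context, int customerId)
    {
        return context.Orders
                      .Where(o => o.CustomerID == customerId
                               && o.OrderStatus != "Complete");
    }
}

// Usage: var orders = db.ActiveForCustomer(42).ToList();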
What most people apply over EF is not the Repository pattern. It is a Facade pattern (abstracting the calls to EF methods into simpler, easier-to-use versions).
EF is the one applying the Repository Pattern (and the Unit of Work pattern as well). That is, EF is the one abstracting the data access layer so that the user has no idea they are dealing with SQLServer.
And at that, most "repositories" over EF are not even good Facades as they merely map, quite straightforwardly, to single methods in EF, even to the point of having the same signatures.
The two reasons, then, for applying this so-called "Repository" pattern over EF are to allow easier testing and to establish a subset of "canned" calls to it. Neither is bad in itself, but they're clearly not a Repository.
LINQ is today's 'Repository'.
ISession + LINQ already is the repository, and you need neither GetXByY methods nor a QueryData(Query q) generalization. Being a little paranoid about DAL usage, I still prefer a repository interface. (From a maintainability point of view, we also still have to have some facade over the specific data access interfaces.)
Here is the repository we use; it decouples us from direct usage of NHibernate but provides a LINQ interface (as well as ISession access in exceptional cases, which are subject to eventual refactoring).
class Repo
{
    ISession _session; // via IoC

    public IQueryable<T> Query<T>()
    {
        return _session.Query<T>();
    }
}
The Repository (or whatever one chooses to call it) is for me, at this time, mostly about abstracting away the persistence layer.
I use it coupled with query objects, so I do not have a coupling to any particular technology in my applications, and it also eases testing a lot.
So, I tend to have
public interface IRepository : IDisposable
{
    void Save<TEntity>(TEntity entity);
    void SaveList<TEntity>(IEnumerable<TEntity> entities);
    void Delete<TEntity>(TEntity entity);
    void DeleteList<TEntity>(IEnumerable<TEntity> entities);
    IList<TEntity> GetAll<TEntity>() where TEntity : class;
    int GetCount<TEntity>() where TEntity : class;
    void StartConversation();
    void EndConversation();

    // If query objects can be self-sustaining (i.e. not need additional
    // configuration - think session), there is no need to include this
    // method in the repository.
    TResult ExecuteQuery<TResult>(IQueryObject<TResult> query);
}
Possibly add async methods with callbacks as delegates.
The repo is easy to implement generically, so I don't have to touch a line of the implementation from app to app. Well, this is true at least when using NH; I did it also with EF, but it made me hate EF 4. The conversation is the start of a transaction, which is very cool if a few classes share the repository instance. Also, for NH, one repo in my implementation equals one session, which is opened at the first request.
Then the Query Objects
public interface IQueryObject<TResult>
{
    /// <summary>Provides configuration options.</summary>
    /// <remarks>
    /// If the query object is used through a repository this method might or
    /// might not be called depending on the particular implementation of a
    /// repository. If not used through a repository, it can be useful as a
    /// configuration option.
    /// </remarks>
    void Configure(object parameter);

    /// <summary>Implementation of the query.</summary>
    TResult GetResult();
}
Configure I use only in NH, to pass in the ISession; in EF it makes little sense.
An example query would be... (NH)
public class GetAll<TEntity> : AbstractQueryObject<IList<TEntity>>
    where TEntity : class
{
    public override IList<TEntity> GetResult()
    {
        return this.Session.CreateCriteria<TEntity>().List<TEntity>();
    }
}
To do an EF query you would have to have the context in the abstract base, not the session. But of course the interface would be the same.
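For the EF flavour, the same query object might look roughly like this (assuming the abstract base exposes a Context property instead of Session):
public class GetAll<TEntity> : AbstractQueryObject<IList<TEntity>>
    where TEntity : class
{
    public override IList<TEntity> GetResult()
    {
        // DbContext.Set<TEntity>() takes the place of the NH criteria API
        return this.Context.Set<TEntity>().ToList();
    }
}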
In this way the queries are themselves encapsulated and easily testable. Best of all, my code relies only on interfaces. Everything is very clean. Domain (business) objects are just that; there is no mixing of responsibilities as with the Active Record pattern, which is hardly testable and mixes data access (query) code into the domain object, thereby mixing concerns (an object which fetches itself??). Everybody is still free to create POCOs for data transfer.
All in all, this approach provides much code reuse and simplicity, at the loss of nothing I can imagine. Any ideas?
And thanks a lot to Ayende for his great posts and continued dedication. It's his ideas here (query objects), not mine.
For me, it's a simple decision, with relatively few factors. The factors are:
1. Repositories are for domain classes.
2. In some of my apps, domain classes are the same as my persistence (DAL) classes; in others they are not.
3. When they are the same, EF is providing me with repositories already.
4. EF provides lazy loading and IQueryable. I like these.
5. Abstracting/'facading'/re-implementing a repository over EF usually means losing lazy loading and IQueryable.
So, if my app can't justify #2 (separate domain and data models), then I usually won't bother with #5.
I'm looking for suggestions on how to approach using an ORM (in this case, EF5) in the design of modular Non-Monolithic applications, with a Core part and 3rd party Modules, where the Core has no direct Reference to the 3rd party Modules, and Modules only have a reference to Core/Common tables and classes.
For argument's sake, a close enough analogy would be DNN.
CodeFirst:
With CodeFirst, the approach I used was to build up the model of the Db via reflection: in the Core DbContext's DbInitialization phase, I used reflection to find any class in any dll (e.g. Core or the various Modules) decorated with IDbInitializer (a custom contract containing an Execute() method) to define just that dll's structure. Each dll added to the DbModel what it knew about itself.
Any subsequent seeding was also handled in the same way (searching for a specific IDbSeeder contract, and executing it).
Pro:
* The approach works for now.
* The same core DbContext can be used across all repositories, as long as each repo uses dbContext.Set<T>() rather than expecting each set to be a property of the dbContext. No biggie.
Cons:
* It only works at startup (i.e., adding new modules requires an AppPool recycle).
* CodeFirst is great for a POC, but in EF5 it's not mature enough for Enterprise work yet (and I can't wait for EF6 for StoredProcs and other features to be added).
* My DBA hates CodeFirst, at least for the Core, wanting to optimize that part with Stored Procs as much as possible... We're a team, so I have to try to find a way to please him, if I can...
Database-first:
The DbModel phase appears to happen prior to the DbContext's constructor (reading from the embedded *.edmx resource file). DbInitialization is never invoked (as the model is deemed complete), so I can't add more tables than the Core knows about.
If I can't add elements to the model dynamically, as one can with CodeFirst, it means that:
* either the Core DbContext's model has to have knowledge of every table in the Db -- Core AND every 3rd party module -- making the application monolithic and highly coupled, defeating the very thing I am trying to achieve;
* or each 3rd party module has to create its own DbContext, importing Core tables, leading to:
* versioning issues (modules not updating their *.edmx when the Core's *.edmx is updated, etc.)
* duplication everywhere, in different memory contexts = hard-to-track-down concurrency issues.
At this point, it seems to me that the CodeFirst approach is the only way that modular software can be achieved with EF. But hopefully someone else knows how to make DatabaseFirst shine -- is there any way of 'appending' DbSets to the model created from the embedded *.edmx file?
Or any other ideas?
I would consider using a sort of plugin architecture, since that's your overall design for the application itself.
You can accomplish the basics of this by doing something like the following (note that this example uses StructureMap - here is a link to the StructureMap Documentation):
Create an interface from which your DbContext objects can derive.
public interface IPluginContext {
    IDictionary<String, DbSet> DataSets { get; }
}
In your Dependency Injection set-up (using StructureMap) - do something like the following:
Scan(s => {
    s.AssembliesFromApplicationBaseDirectory();
    s.AddAllTypesOf<IPluginContext>();
    s.WithDefaultConventions();
});

For<IEnumerable<IPluginContext>>().Use(x =>
    x.GetAllInstances<IPluginContext>()
);
For each of your plugins, either alter the {plugin}.Context.tt file - or add a partial class file which causes the DbContext being generated to derive from IPluginContext.
public partial class FooContext : IPluginContext { }
Alter the {plugin}.Context.tt file for each plugin to expose something like:
public IDictionary<String, DbSet> DataSets {
    get {
        // Here is where you would have the .tt file output a reference
        // to each property, keyed on its property name as the Key -
        // in the form of an IDictionary.
    }
}
You can now do something like the following:
// This could be inside a service class, your main Data Context, or wherever
// else it becomes convenient to call.
public DbSet DataSet(String name) {
    var plugins = ObjectFactory.GetInstance<IEnumerable<IPluginContext>>();
    var plugin = plugins.FirstOrDefault(p =>
        p.DataSets.Any(ds => ds.Key.Equals(name))
    );
    // return the matching set itself, not the plugin that owns it
    return plugin != null ? plugin.DataSets[name] : null;
}
Forgive me if the syntax isn't perfect - I'm doing this within the post, not within the compiler.
The end result gives you the flexibility to do something like:
// Inside an MVC controller...
public JsonResult GetPluginByTypeName(String typeName) {
var dataSet = container.DataSet(typeName);
if (dataSet != null) {
return Json(dataSet.Select());
} else {
return Json("Unable to locate that object type.");
}
}
Clearly, in the long-run - you would want the control to be inverted, where the plugin is the one actually tying into the architecture, rather than the server expecting a type. You can accomplish the same kind of thing using this sort of lazy-loading, however - where the main application exposes an endpoint that all of the Plugins tie to.
That would be something like:
public interface IPlugin : IDisposable {
    void EnsureDatabase();
    void Initialize();
}
You now can expose this interface to any application developers who are going to create plugins for your architecture (DNN style) - and your StructureMap configuration works something like:
Scan(s => {
    s.AssembliesFromApplicationBaseDirectory(); // Upload all plugin DLLs here
    // NOTE: Remember that this gives people access to your system!!!
    // Given what you are developing, though, I am assuming you
    // already get that.
    s.AddAllTypesOf<IPlugin>();
    s.WithDefaultConventions();
});

For<IEnumerable<IPlugin>>().Use(x => x.GetAllInstances<IPlugin>());
Now, when you initialize your application, you can do something like:
// Global.asax
public static IEnumerable<IPlugin> plugins =
    ObjectFactory.GetInstance<IEnumerable<IPlugin>>();

public void Application_Start() {
    foreach (IPlugin plugin in plugins) {
        plugin.EnsureDatabase();
        plugin.Initialize();
    }
}
Each of your IPlugin objects can now contain its own database context, manage the process of installing (if necessary) its own database instance / tables, and dispose of itself gracefully.
Clearly this isn't a complete solution - but I hope it starts you off in a useful direction. :) If I can help clarify anything herein, please let me know.
While it's a CodeFirst approach, and not the cleanest solution, what about forcing initialization to run even after start up, as in the answer posted here: Forcing code-first to always initialize a non-existent database? (I know precisely nothing about CodeFirst, but seeing as this is a month old with no posts it's worth a shot)
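If that route pans out, the relevant EF 5 call is roughly this (a sketch; CoreContext stands in for your actual context type):
using (var context = new CoreContext())
{
    // Force the initializer to run again, e.g. after a new module is loaded.
    context.Database.Initialize(force: true);
}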
I'm working with a database where the designers decided to mark every table with an IsHistorical bit column. There is no consideration for proper modeling and there is no way I can change the schema.
This is causing some friction when developing CRUD screens that interact with navigation properties. I cannot simply take a Product and then edit its EntityCollection; I have to manually write IsHistorical checks all over the place and it's driving me mad.
Additions are also horrible because so far I've written manual checks to see whether an addition is just a soft-deleted row, so instead of adding a duplicate entity I can just toggle IsHistorical.
The three options I've considered are:
Modifying the t4 templates to include IsHistorical checks and synchronization.
Intercepting deletions and additions in the ObjectContext, toggling the IsHistorical column, and then syncing the object state.
Subscribing to the AssociationChanged event and toggling the IsHistorical column there.
Does anybody have any experience with this or could recommend the most painless approach?
Note: Yes, I know, this is bad modeling. I've read the same articles about soft deletes that you have. It stinks I have to deal with this requirement but I do. I just want the most painless method of dealing with soft deletes without writing the same code for every navigation property in my database.
Note #2: LukLed's answer is technically correct, although it forces you into a really bad, poor man's, graph-less ORM pattern. The problem lies in the fact that now I'm required to rip all the "deleted" objects out of the graph and then call the Delete method on each one. That's not really going to save me much manual ceremonial coding. Instead of writing manual IsHistorical checks, now I'm gathering deleted objects and looping through them.
I am using a generic repository in my code. You could do it like this:
public class Repository<T> : IRepository<T> where T : EntityObject
{
    public void Delete(T obj)
    {
        if (obj is ISoftDelete)
            ((ISoftDelete)obj).IsHistorical = true;
        else
            _ctx.DeleteObject(obj);
    }
}
Your List() method would filter by IsHistorical too.
EDIT:
ISoftDelete interface:
public interface ISoftDelete
{
    bool IsHistorical { get; set; }
}
Entity classes can easily be marked as ISoftDelete, because they are partial. The partial class definition needs to be added in a separate file:
public partial class MyClass : EntityObject, ISoftDelete
{
}
As I'm sure you're aware, there is not going to be a great solution to this problem when you cannot modify the schema. Given that you don't like the Repository option (though, I wonder if you're not being just a bit hasty to dismiss it), here's the best I can come up with:
Handle ObjectContext.SavingChanges
When that event fires, trawl through the ObjectStateManager looking for objects in the deleted state. If they have an IsHistorical property, set it, and change the state of the object to modified.
This could get tricky when it comes to associations/relationships, but I think it more or less does what you want.
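A minimal sketch of that handler (assuming the ISoftDelete marker interface from LukLed's answer and usings for System.Linq and System.Data.Objects; a reflection check would work the same way):
public partial class MyEntities // your generated ObjectContext
{
    partial void OnContextCreated()
    {
        this.SavingChanges += (sender, e) =>
        {
            var ctx = (ObjectContext)sender;
            var deleted = ctx.ObjectStateManager
                             .GetObjectStateEntries(EntityState.Deleted)
                             .Where(entry => !entry.IsRelationship)
                             .ToList();

            foreach (var entry in deleted)
            {
                var soft = entry.Entity as ISoftDelete;
                if (soft != null)
                {
                    soft.IsHistorical = true;                // soft delete instead
                    entry.ChangeState(EntityState.Modified); // back to an update
                }
            }
        };
    }
}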
I use the repository pattern also with similar code to LukLed's, but I use reflection to see if the IsHistorical property is there (since it's an agreed upon naming convention):
public class Repository<TEntityModel> where TEntityModel : EntityObject, new()
{
    public void Delete(TEntityModel entity)
    {
        // see if the object has an "IsHistorical" flag
        var historicalProperty = typeof(TEntityModel).GetProperty("IsHistorical");
        if (historicalProperty != null) // note: no stray semicolon here
        {
            // perform soft delete
            historicalProperty.SetValue(entity, true, null);
        }
        else
        {
            // perform real delete
            EntityContext.DeleteObject(entity);
        }
        EntityContext.SaveChanges();
    }
}
Usage is then simply:
using (var fubarRepository = new Repository<Fubar>())
{
    fubarRepository.Delete(someFubar);
}
Of course, in practice, you extend this to allow deletes by passing PK instead of an instantiated entity, etc.