Integration Testing Entity Framework CRUD operations - entity-framework

I'm struggling with integration tests when using Entity Framework.
I seed my database with test data in my test project, but I am wondering how you manage to test the Create, Update and Delete operations.
Basically I have my test data which e.g. contains 5 customer entries. I can now write some tests to get the data based on these 5 entries (e.g. GetAll will return a collection containing 5 items).
But what if I have a test which deletes 1 customer? The GetAll test still expects 5 customers but only gets 4 (if it runs after the delete test) and fails.
How do you work around this issue? Do you give a certain order to your tests, or reseed the database before every test (but that sounds so bad)?
Thanks!

An effective way to do this is to use TransactionScope. It basically wraps all SQL calls in a transaction and rolls the changes back when the scope is disposed without Complete having been called. The basic test base class looks like this:
using System.Linq;
using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TransactionalTestsBase
{
    private TransactionScope _scope;

    [TestInitialize]
    public void Initialize()
    {
        // Open an ambient transaction before every test.
        _scope = new TransactionScope();
    }

    [TestCleanup]
    public void TestCleanup()
    {
        // Complete() is never called, so disposing rolls everything back.
        _scope.Dispose();
    }

    [TestMethod]
    public void CrudAction()
    {
        var repo = new YourRepo();
        var client = CreateTestClient(); // hypothetical helper that builds the client to delete
        repo.DeleteClient(client);
        Assert.AreEqual(4, repo.GetClients().Count());
    }
}
Ideally you would inherit from this base test class rather than write your tests in it.
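For instance, a minimal sketch of a derived test class; CustomerRepository, GetAll and Delete are hypothetical names standing in for the repository in the question:

[TestClass]
public class CustomerTests : TransactionalTestsBase
{
    [TestMethod]
    public void DeleteCustomer_ReducesCount()
    {
        var repo = new CustomerRepository(); // hypothetical repository over the seeded data
        var customer = repo.GetAll().First();

        repo.Delete(customer);

        Assert.AreEqual(4, repo.GetAll().Count());
        // Disposing the base class's TransactionScope rolls this delete back,
        // so a later GetAll test still sees all 5 seeded customers.
    }
}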
There is also a promising tool, still in beta, that I think will help greatly with this in the future: have a look at Effort, an in-memory Entity Framework test provider.

Related

Define order of "Parallelizable" in NUnit

I have 4 test classes inside one project (let's call them Class A, Class B, Class C and Class D).
Each of these 4 classes has two [TestFixture("string")] attributes, which makes 8 tests in total.
All classes have the [Parallelizable] attribute.
When I start all the tests at once by clicking the project name in the Test Explorer and choosing "Run", all 8 tests start at the same time.
The problem is that this consumes a lot of RAM, and the tests fail because loading takes too long and I get a timeout error (I'm doing automation tests with Selenium in Chrome).
Now I want to define an order.
For example:
Class A and Class B should start in parallel.
Class C and Class D should start in parallel once Class A and Class B are done.
Is that possible?
I tried the attribute [Order(1)] for Class A and Class B and [Order(2)] for Class C and Class D, but when I run the tests, all 8 still start at once.
Example from my code:
[TestFixture("normalUser")]
[TestFixture("adminUser")]
[Parallelizable]
public class ImportTest
{
private IWebDriver webDriver;
private const int waitTimer = 60;
public WebDriverWait w;
public string userRole;
// Constructor
public ImportTest(string userRole)
{
this.userRole = userRole;
Console.WriteLine(userRole);
}
////-----------------------------
[SetUp]
{
}
//-------------------------------
[Test]
public void Test1()
{
Do Test
}
[Test]
public void Test2()
{
Do Test
}
//--------------------------
[TearDown]
public void CloseBrowser()
{
webDriver.Quit();
}
}
First, I'll describe what's happening...
The OrderAttribute was created in NUnit V2, before parallel tests existed. It defines the order in which tests are started. Since there was no parallelism at the time, one test had to finish before the next one started.
When parallel execution was introduced in NUnit 3, Order was not exactly broken, because it continued to start tests in the specified order. But many users' perception was that it was "broken", because they expected one test not to start until the prior one had finished.
Order could, of course, be changed to work like that. However, at this point, that would be a breaking change for some people, so you most likely won't see it happen until there's an NUnit 4.
So... what can you do as a workaround? I can see three options...
The simplest approach would be to make each fixture [NonParallelizable]. Then they would all run separately. You should try that first and see whether the performance is acceptable to you. If you want the tests within each fixture to run in parallel, you could use [Parallelizable(ParallelScope.Children)] instead, but that might break things if the tests change the state of the fixture or of any common references held by the fixture.
Alternatively, you could mark only some fixtures as [NonParallelizable]. In that case, I'd do it for the ones that consume a lot of memory.
For the most effort, you could implement the ordering yourself for these classes. I'd do that by creating some sort of shared token, e.g. a lock, which each class has to acquire on startup: grab it in the fixture's OneTimeSetUp and release it in the OneTimeTearDown. The locking code should come before any setup code that acquires resources, and the release should come after your teardown releases those resources (see the sketch after the next paragraph).
I made option 3 rather sketchy because (a) I don't know precisely how your application works, and (b) I presume you won't do it unless it's absolutely necessary.
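For illustration only, here is a minimal sketch of option 3 using a CountdownEvent as the shared token; this is a variant of the lock idea, shaped to the A/B-then-C/D requirement, and all the names are hypothetical:

using System.Threading;
using NUnit.Framework;

public static class FixturePhases
{
    // Class A and Class B each signal once when they finish;
    // Class C and Class D wait until both signals have arrived.
    public static readonly CountdownEvent PhaseOneDone = new CountdownEvent(2);
}

[TestFixture, Parallelizable]
public class ClassA // same pattern in Class B
{
    // ... [Test] methods here ...

    [OneTimeTearDown]
    public void SignalPhaseOneDone() => FixturePhases.PhaseOneDone.Signal(); // after resources are released
}

[TestFixture, Parallelizable]
public class ClassC // same pattern in Class D
{
    [OneTimeSetUp]
    public void WaitForPhaseOne() => FixturePhases.PhaseOneDone.Wait(); // before any setup acquires resources

    // ... [Test] methods here ...
}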
Final advice: don't make any assumptions about the performance impact of any of these options, even the first. Measure first!

Can a TestActionAttribute in NUnit run BeforeTest before the fixture's own SetUp method?

I have some old MbUnit code which looks like this:
public class MyFixture
{
    [SetUp]
    public void SetUp()
    {
        // Add data to database
    }

    [Test, Rollback]
    public void DoTest()
    {
        // Tests with the data
    }
}
My new NUnit Rollback attribute looks a bit like this:
public class RollbackAttribute : TestActionAttribute
{
    public override void BeforeTest(TestDetails testDetails)
    {
        // Begin transaction
    }

    public override void AfterTest(TestDetails testDetails)
    {
        // Abort transaction
    }
}
The Rollback attribute should roll back the data added in the SetUp method as well as any modifications made during the test itself. Unfortunately, it seems that NUnit's BeforeTest runs after the fixture's SetUp method, so the data added during SetUp is not rolled back.
Is there a way to run BeforeTest before SetUp?
One option would be a base class that replaces the existing Rollback attributes with additional code in SetUp and TearDown. However, some of my tests need to run outside a transaction (they create multiple transactions themselves during the test run), so wrapping all test cases in transactions would require some care. I'd rather find a solution that can reuse the existing Rollback attributes.
Is there a way to run BeforeTest before SetUp?
I don't think so; see e.g. this related discussion on Google Groups. The issue discussed there is very similar: as you can see, code in a SetUp method runs even before a BeforeTest method applied at the test fixture level (you have it at the test level).
A workaround, from my point of view, would be to remove the SetUpAttribute from the SetUp method and call SetUp explicitly at the beginning of each test, i.e.:
public class MyFixture
{
    public void SetUp()
    {
        // Add data to database
    }

    [Test, Rollback]
    public void DoTest()
    {
        SetUp();
        // Tests with the data
    }
}
Your question also reminded me of a question that marc_s raised in this SO thread. It is unrelated to your problem, but he used the same construct I am proposing above, so it is perhaps not such a bad idea.
EDIT:
Here is an open issue on NUnit's GitHub. But still, the order requested there is:
BeforeTest (BaseFixture)
BaseSetUp
BeforeTest (Fixture)
SetUp
BeforeTest (Test)
Test
AfterTest (Test)
TearDown
AfterTest (Fixture)
BaseTearDown
AfterTest (BaseFixture)
So it's not exactly what you desire: "BeforeTest (Test)" would still be executed after SetUp.

How do you seed data with Entity Framework Database First approach?

I see many examples of seeding with Code First, but I'm not sure I understand the idiomatic way of seeding the database when using EF Database First.
Best practice is very situation-dependent. Then there is the DEV versus PROD distinction.
Auto-seeding makes the most sense during DEV, when you drop and recreate the database on model change and still want test data available. That is when it is used most.
Of course you can have a seed method that you trigger manually. I personally find the idea of an automatically triggered seed method not that exciting; it is more for DEV prototyping while the DB structure is volatile. When using migrations, you tend to keep your hard-earned test data. Some use seeding during the initial installation in PROD; others have specific load routines triggered during the installation/commissioning process. I like to use custom load routines instead.
EDIT: a Code First sample. With DB First you just write to the DB normally (see the seeding sketch after the sample).
// Select the appropriate initializer for your situation, e.g.:
Database.SetInitializer(
    new MigrateDatabaseToLatestVersion<MyDbContext, MyMigrationConfiguration<MyDbContext>>());
Context.Database.Initialize(true); // yes, now please
//...
public class MyMigrationConfiguration<TContext> : DbMigrationsConfiguration<TContext>
    where TContext : DbContext
{
    public MyMigrationConfiguration()
    {
        AutomaticMigrationsEnabled = true;        // FYI, options
        AutomaticMigrationDataLossAllowed = true; // FYI, options
    }

    public override void Seed(TContext context)
    {
        base.Seed(context);
        // SEED AWAY..... you have the context
    }
}
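Since Database First has no built-in Seed hook, here is a minimal sketch of the "just write to the DB normally" approach; MyDbEntities and the Customers set are assumptions standing in for your generated context:

using System.Linq;

public static class TestDataSeeder
{
    public static void Seed()
    {
        using (var context = new MyDbEntities()) // hypothetical DB-first context
        {
            // Idempotent: only insert when no data is present yet.
            if (!context.Customers.Any())
            {
                context.Customers.Add(new Customer { Name = "Test customer 1" });
                context.Customers.Add(new Customer { Name = "Test customer 2" });
                context.SaveChanges();
            }
        }
    }
}

Call it once from a test assembly setup or from your installation/commissioning routine.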

How should I separate integration tests for an EF code first DAL and pure unit tests of the DAL clients?

Having seen some strong advice against testing EF against mocks, especially Code First, I have decided to go with integration testing against a SqlCe database dedicated to testing, and then use pure unit tests further downstream from the unit of work and repositories provided by DbContext and DbSet.
I am just unclear on where to draw the line and what to test where. I know I can mock the DAL in my service layer once I am confident the DAL-specific integration tests cover its insides, but what do I test in the DAL? There doesn't seem to be much point testing whether I can save and read an object, because EF is external and already tested.
You test your mapping and queries in the DAL by using integration tests. Example:
public class Service
{
    private readonly IDAL _dal;

    public Service(IDAL dal)
    {
        // Not-null validation here
        _dal = dal;
    }

    public void DoSomething()
    {
        SomeData data = FindSomeData();
        // Do some logic
        _dal.Commit();
    }

    protected virtual SomeData FindSomeData()
    {
        return _dal.SomeData.Where(...).FirstOrDefault();
    }
}
This is a very simplified example showing:
The Service depends on the DAL; the DAL interface is passed in through constructor injection.
The Service contains a public DoSomething method you want to test, to know whether its logic executes correctly. But this method also depends on a DB query and on DB persistence (Commit).
The query is part of your logic, but executing the query is a separate concern, so it is handled by its own method. In more complex situations this method can live in another class injected into the Service (a repository). The key criteria for these query methods are:
They don't return IQueryable
They don't accept Expression<> as parameters
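To illustrate the two criteria, a hypothetical contrast written as members of the Service class above (the IsActive property is made up):

// Meets the criteria: materializes the result and hides the query shape,
// so a test subclass can replace it wholesale.
protected virtual SomeData FindActiveData()
{
    return _dal.SomeData.Where(d => d.IsActive).FirstOrDefault();
}

// Violates them: the IQueryable leaks query composition to callers,
// and the real query can then no longer be faked out in a unit test.
protected virtual IQueryable<SomeData> FindActiveDataQuery()
{
    return _dal.SomeData.Where(d => d.IsActive);
}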
How to unit test the DoSomething method:
In this simple example your test class derives from the Service class and overrides FindSomeData to return test data. In the case of injection you would instead define a fake for the injected class.
You will also mock IDAL, and you can verify that Commit was called.
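A minimal sketch of that unit test, assuming IDAL exposes just the SomeData set and Commit (a hand-rolled fake is used here instead of a mocking library):

public class FakeDal : IDAL
{
    public bool CommitCalled { get; private set; }
    public IQueryable<SomeData> SomeData => Enumerable.Empty<SomeData>().AsQueryable();
    public void Commit() { CommitCalled = true; }
}

public class TestableService : Service
{
    public TestableService(IDAL dal) : base(dal) { }

    // Replace the real query with canned test data.
    protected override SomeData FindSomeData() => new SomeData();
}

[Test]
public void DoSomething_CommitsChanges()
{
    var dal = new FakeDal();
    var service = new TestableService(dal);

    service.DoSomething();

    Assert.IsTrue(dal.CommitCalled); // persistence was triggered
}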
Which integration tests you should use:
You should create a test for FindSomeData that queries the real database.
In general you should also have an integration test for Commit, but that is harder to achieve, because the example calls Commit directly from DoSomething. You don't want to test that method again, and at the same time the Commit method covers too many generic cases, because it simply flushes all changes from the current context to the database. I usually have separate tests for inserting, updating and deleting every entity type. When the DoSomething method performs some complex modification, you can split it into two methods: one covered by unit tests for the real logic, and a second covered by integration tests for the different persistence scenarios your logic can produce.
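For example, a sketch of one such persistence test; TestDbContext and Customer are assumptions, and each entity type would get its own insert/update/delete tests:

[Test]
public void Customer_CanBeInsertedAndReadBack()
{
    using (var context = new TestDbContext()) // hypothetical context pointed at the test database
    {
        context.Customers.Add(new Customer { Name = "Integration test customer" });
        context.SaveChanges();
    }

    // A fresh context proves the row really round-tripped through the database.
    using (var context = new TestDbContext())
    {
        var loaded = context.Customers.SingleOrDefault(c => c.Name == "Integration test customer");
        Assert.IsNotNull(loaded);
    }
}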
Entity Framework is tested, but your DAL, especially mapping, is not. I prefer having integration tests to show me that at the very least my mapping is right, and better yet, that I can successfully perform all CRUD operations against my database.
What we usually test, DB-wise, is whether complex object graphs can be properly inserted, updated and deleted; basically testing the more complex mappings.
In my opinion there's not much point in testing whether an object with 3 primitive value properties can be inserted and whatnot, because then you'd never see the end of it.
We tend to favor being optimistic (the simple stuff will just work): we test the more complex associations, and if we encounter an error in our mapping (e.g. an object that should be deleted but isn't), we write an extra test for that.
It's usually not wise to test absolutely everything from a business perspective; you should focus on the high-risk/high-damage stuff first and then work your way down the severity ladder until you think it's not worth it anymore.
1) Have a set of integration tests which test your mappings
2) Make your DAL very lightweight but with sufficient power to construct queries, something like:
public interface IDb
{
    IQueryable<T> Query<T>();
    // ... (save, delete, get-by-id methods) ...
}
3) Write objects which encapsulate the logic behind constructing the query against the DAL:
public class MuppetSearch
{
    public MuppetColor? Color { get; set; }
    public string Name { get; set; }

    public IQueryable<Muppet> ConstructQuery(IDb db)
    {
        var query = db.Query<Muppet>();
        if (Color.HasValue)
        {
            query = query.Where(m => m.Color == Color.Value);
        }
        if (!String.IsNullOrEmpty(Name))
        {
            query = query.Where(m => m.Name.Contains(Name));
        }
        return query;
    }
}
4) Test those; mocking the data you need should be pretty trivial (see the sketch below).
5) In your service, use the search classes to do your query construction.
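A sketch of point 4, running MuppetSearch against an in-memory IDb fake so no database is needed (names follow the example above; any remaining IDb members would need trivial stubs):

public class InMemoryDb : IDb
{
    private readonly List<Muppet> _muppets;

    public InMemoryDb(IEnumerable<Muppet> muppets)
    {
        _muppets = muppets.ToList();
    }

    // Serves LINQ-to-Objects data instead of hitting a database.
    public IQueryable<T> Query<T>() => _muppets.AsQueryable().Cast<T>();
}

[Test]
public void ConstructQuery_FiltersByName()
{
    var db = new InMemoryDb(new[]
    {
        new Muppet { Name = "Kermit" },
        new Muppet { Name = "Gonzo" },
    });
    var search = new MuppetSearch { Name = "Ker" };

    var results = search.ConstructQuery(db).ToList();

    Assert.AreEqual(1, results.Count);
    Assert.AreEqual("Kermit", results[0].Name);
}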

Unit Testing with Nunit, Ninject, MVC2 and the ADO.Net Entity Data Model

I'm trying to get my head around using NUnit, Ninject, MVC2 and the ADO.NET Entity Data Model.
Let's say I have a ProductsController instantiating a ProductsRepository class.
public class ProductsRepository : IProductsRepository
{
    public MyDbEntities _context;

    public ProductsRepository()
    {
        _context = new MyDbEntities();
    }

    public IList<Product> GetAllProducts()
    {
        return (from p in _context.Products select p).ToList();
    }
}

public class ProductsController : Controller
{
    public ActionResult ProductsList()
    {
        ProductsRepository r = new ProductsRepository();
        var products = r.GetAllProducts();
        return View(products);
    }
}
I'd like to be able to perform unit testing on ProductsRepository to ensure it returns the correct data, but I'm not sure how to write the test class.
Every tutorial/document I've read so far points me to creating a mock object using IProductsRepository and then injecting it and testing the Controller.
This seems, to me, to bypass the concrete implementation.
MyDbEntities comes from an ADO.Net Entity Data Model .edmx
You're exactly right: mocking the repository does bypass the concrete implementation. That's the point.
Unit testing is not the same thing as functional testing. Your mock object can be set up to return whatever you explicitly define, and then you test to ensure that constant inputs from your mock lead to the expected results.
It sounds like you want to create an integration test for ProductsRepository rather than a unit test, since you'd be testing against the database to check that it gives you the right data.
It's when unit testing the Controller that you'd want to mock the ProductsRepository.
In my integration tests for ProductsRepository, I'd be doing the obvious things, like:
public void TestProductsRepository()
{
    var repository = new ProductsRepository();
    // Add a new product
    var products = repository.GetAllProducts();
    // Check that products contains the new product
}
With your two classes (ProductsRepository, ProductsController), you should have two sets of tests. One set of tests for each class.
When (unit) testing the ProductsController, you should mock its dependencies (in this case, the IProductsRepository). Bypassing the concrete implementations of the dependencies is the point.
There will be a completely different set of (integration) tests to validate that the ProductsRepository can hit the database and return correct data. In these integration tests you won't mock anything, since what you're testing is the interaction of a real repository with the actual database.
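For completeness, a minimal sketch of that controller unit test, assuming ProductsController is refactored to take IProductsRepository through its constructor (which is also where Ninject would plug in); the fake below is hypothetical:

public class ProductsController : Controller
{
    private readonly IProductsRepository _repository;

    public ProductsController(IProductsRepository repository)
    {
        _repository = repository;
    }

    public ActionResult ProductsList()
    {
        return View(_repository.GetAllProducts());
    }
}

public class FakeProductsRepository : IProductsRepository
{
    public IList<Product> GetAllProducts()
    {
        // Constant, explicitly defined data for the test.
        return new List<Product> { new Product(), new Product() };
    }
}

[Test]
public void ProductsList_PassesAllProductsToTheView()
{
    var controller = new ProductsController(new FakeProductsRepository());

    var result = (ViewResult)controller.ProductsList();

    var model = (IList<Product>)result.ViewData.Model;
    Assert.AreEqual(2, model.Count); // the controller forwarded the repository data
}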