What is a good way to get test data to a mock repo? - entity-framework

I have a database (SQL Server, Entity Framework) with 10-12 tables, around 5000 records. I want to code a unit test for a complex report that uses the data from those tables. What would be a good way to do that? I really do not want unit tests hitting an actual database...

Late to answer, but it's worth pointing out that the NuGet package Effort allows you to fake an entire DbContext in memory from either a code-first or a database-first model.
I use it frequently and like its ease of setup. You can use it to catch issues that are easy to miss when unit testing against concrete fakes such as List<Entity> (for example, issues caused by trying to load navigation properties after the context is disposed).
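For instance, a sketch of the kind of access pattern I mean (it uses the "MyEntities" context from the walkthrough below; the entity names are assumptions): a List-backed fake happily returns the data, while a real or Effort-backed context surfaces the bug.
Employee employee;
using (var context = new MyEntities(Effort.EntityConnectionFactory.CreatePersistent("name=MyEntities")))
{
    employee = context.Employees.First();
}
// With a List<Employee> fake this line just works; with a proxied EF entity the
// lazy load of the navigation property after disposal throws an ObjectDisposedException.
var skills = employee.Skills;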
A downside is that it does slow down the unit tests a little, [1] which may be a concern for TDD or with a huge number of tests, but I have always found it perfectly acceptable.
A simple use-case outline for a database-first setup would be as follows:
Add the NuGet package for your EF version (package id: Effort.EF6 for EF 6).
Add a constructor overload to your "MyEntities" partial class so Effort can pass in its own connection:
public partial class MyEntities
{
    public MyEntities(DbConnection connection) : base(connection, true)
    {
    }
}
You can now create a 'persistent' in-memory store (persistent meaning it stays available after the context is disposed; it's not saved anywhere after the test run finishes!) with the following code:
var context = new MyEntities(Effort.EntityConnectionFactory.CreatePersistent("name=MyEntities"));
Just make sure the connection string for the database is available in the app.config of your test project (it is used only to build the model).
Set up whatever mock data you need to test the context (I often have some simple data loaded using [ClassInitialize] or [TestInitialize] methods, but it's easy to do whatever makes sense for your own setup):
[ClassInitialize]
public static void Initialise(TestContext testContext)
{
    using (var context = new MyEntities(Effort.EntityConnectionFactory.CreatePersistent("name=MyEntities")))
    {
        context.Employees.Add(new Employee { Name = "Joe Bloggs", Skills = null });
        context.Skills.Add(new Skill { Type = "C# Coding" });
        context.SaveChanges();
    }
}
Run your unit test using the in-memory context:
[TestMethod]
public async Task CsSkillCanBeAssociatedWithEmployee()
{
    // Arrange
    using (var context = new MyEntities(Effort.EntityConnectionFactory.CreatePersistent("name=MyEntities")))
    using (var sut = new SkillsMatrixService(context))
    {
        // Act
        await sut.AddSkillToEmployeeAsync("Joe Bloggs", "C# Coding");
    }

    // Assert
    using (var context = new MyEntities(Effort.EntityConnectionFactory.CreatePersistent("name=MyEntities")))
    {
        var joesCSharpSkills = context
            .Employees
            .First(e => e.Name == "Joe Bloggs")
            .Skills
            .Where(s => s.Type == "C# Coding");

        Assert.IsTrue(joesCSharpSkills.Any());
    }
}
I'm given to understand you can load the initial data for your database from CSV files kept in your project, but I've always just set up dummy data in code as above.
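If you do want CSV-based seed data, Effort ships data loaders for it; a rough sketch (the folder path and loader class here are assumptions to check against the Effort documentation) would look something like this:
// Point Effort at a folder of CSV files (typically one file per table) instead of seeding in code.
var dataLoader = new Effort.DataLoaders.CsvDataLoader(@".\TestData");
var connection = Effort.EntityConnectionFactory.CreatePersistent("name=MyEntities", dataLoader);

using (var context = new MyEntities(connection))
{
    // The context starts out populated from the CSV files.
}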
[1] Anecdotally, in my own (fairly limited) experience, a second or so is typical to set it up and load some dummy data for a smallish model with around 20 entities and lots of navigation properties (my machine isn't ultra fast). Naturally, your mileage may vary with the complexity of your model and your hardware.

Related

Debugging Code Called by EF Core Add-Migrations

I have an Entity Framework Core database defined in a separate assembly, using the IDesignTimeDbContextFactory<> pattern (i.e., I define a class that implements IDesignTimeDbContextFactory<> and has a CreateDbContext method returning an instance of the database context).
Because the application of which the EF Core database is a part uses Autofac dependency injection, the IDesignTimeDbContextFactory<> factory class creates an Autofac container in its constructor and then resolves the DbContextOptionsBuilder<>-derived class that is fed into the constructor of the database context (I do this so I can control whether a local or an Azure-based SQL Server database is targeted, based on a config file setting, with passwords stored in an Azure KeyVault):
public class TemporaryDbContextFactory : IDesignTimeDbContextFactory<FitchTrustContext>
{
    private readonly FitchTrustDBOptions _dbOptions;

    public TemporaryDbContextFactory()
    {
        // OMG, I would >>never<< have thought to do this to eliminate the default logging by this
        // deeply-buried package. Thanx to Bruce Chen via
        // https://stackoverflow.com/questions/47982194/suppressing-console-logging-by-azure-keyvault/48016958#48016958
        LoggerCallbackHandler.UseDefaultLogging = false;

        var builder = new ContainerBuilder();

        builder.RegisterModule<SerilogModule>();
        builder.RegisterModule<KeyVaultModule>();
        builder.RegisterModule<ConfigurationModule>();
        builder.RegisterModule<FitchTrustDbModule>();

        var container = builder.Build();

        _dbOptions = container.Resolve<FitchTrustDBOptions>() ??
                     throw new NullReferenceException(
                         $"Could not resolve {typeof(FitchTrustDBOptions).Name}");
    }

    public FitchTrustContext CreateDbContext(string[] args)
    {
        return new FitchTrustContext(_dbOptions);
    }
}
public class FitchTrustDBOptions : DbContextOptionsBuilder<FitchTrustContext>
{
    public FitchTrustDBOptions(IFitchTrustNGConfigurationFactory configFactory, IKeyVaultManager kvMgr)
    {
        if (configFactory == null)
            throw new NullReferenceException(nameof(configFactory));

        if (kvMgr == null)
            throw new NullReferenceException(nameof(kvMgr));

        var scannerConfig = configFactory.GetFromDisk()
                            ?? throw new NullReferenceException(
                                "Could not retrieve ScannerConfiguration from disk");

        var dbConnection = scannerConfig.Database.Connections
                               .SingleOrDefault(c =>
                                   c.Location.Equals(scannerConfig.Database.Location,
                                       StringComparison.OrdinalIgnoreCase))
                           ?? throw new ArgumentOutOfRangeException(
                               $"Cannot find database connection information for location '{scannerConfig.Database.Location}'");

        var temp = kvMgr.GetSecret($"DatabaseCredentials--{dbConnection.Location}--Password");

        var connString = String.IsNullOrEmpty(dbConnection.UserID) || String.IsNullOrEmpty(temp)
            ? dbConnection.ConnectionString
            : $"{dbConnection.ConnectionString}; User ID={dbConnection.UserID}; Password={temp}";

        this.UseSqlServer(connString,
            optionsBuilder =>
                optionsBuilder.MigrationsAssembly(typeof(FitchTrustContext).GetTypeInfo().Assembly.GetName()
                    .Name));
    }
}
Needless to say, while this provides me with a lot of flexibility (I can switch from a local to a cloud database just by changing a single config parameter, and any required passwords are reasonably securely stored in the cloud), it can trip up the add-migration cmdlet if there's a bug in the code (e.g., a wrong configuration file name).
To debug those kinds of problems, I've often had to resort to outputting messages to the Visual Studio output window via diagnostic WriteLine calls. That strikes me as pretty primitive (not to mention time-consuming).
Is there a way to attach a debugger to my code that's called by add-migration so I can step through it, check values, etc.? I tried inserting a Debugger.Launch() line in my code, but it doesn't work. It seems to throw me into the add-migration codebase, for which I have no symbols loaded, and breakpoints that I try to set in my code show up as empty red circles: they'll never be hit.
Thoughts and suggestions would be most welcome!
Add Debugger.Launch() to the beginning of the constructor to launch the just-in-time debugger. This lets you attach VS to the process and debug it like normal.
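For example, in the factory shown above (the rest of the constructor stays exactly as it is):
public TemporaryDbContextFactory()
{
    // Prompts for a just-in-time debugger when Add-Migration constructs the factory,
    // so Visual Studio can attach before the container is built.
    System.Diagnostics.Debugger.Launch();

    LoggerCallbackHandler.UseDefaultLogging = false;
    // ... remaining constructor code unchanged
}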

Unit testing With Entity Framework 7, Test fails sometimes?

I have a bunch of tests where I use the new UseInMemory function in EF7. When I run them all, some of them fail. When I run them individually, they all pass.
My best guess is that it is a conflict in EF7 caused by every test running in its own thread while they all effectively use the same DbContext class.
Here is one of my tests:
[Fact]
public void Index()
{
    DbContextOptionsBuilder<DatabaseContext> optionsBuilder = new DbContextOptionsBuilder<DatabaseContext>();
    optionsBuilder.UseInMemoryDatabase();

    db = new DatabaseContext(optionsBuilder.Options);

    AdminController controller = new AdminController(db);

    var result = controller.Index() as ViewResult;

    Assert.Equal("Index", result.ViewName);
}
I recreate the DbContext object in every test, but it seems not to make any difference.
Would be grateful for any input. Thanks :)
The problem is that the in-memory storage behind UseInMemoryDatabase is registered as a singleton, so you actually share the data between DbContexts even though you think you don't.
You have to create your DbContexts like this:
public abstract class UnitTestsBase
{
    protected static T GetNewDbContext<T>() where T : DbContext
    {
        var services = new ServiceCollection();

        services
            .AddEntityFramework()
            .AddInMemoryDatabase()
            .AddDbContext<T>(options => options.UseInMemoryDatabase());

        var serviceProvider = services.BuildServiceProvider();

        var dbContext = serviceProvider.GetRequiredService<T>();
        dbContext.Database.EnsureDeleted();

        return dbContext;
    }
}
var newTestDbContext = GetNewDbContext<TestDbContext>();
I was also led to believe that .UseInMemoryDatabase() has no persistence, but that does not seem to be the case (at least with the latest versions)!
As noted in "How can I reset an EF7 InMemory provider between unit tests?", you want to call db.Database.EnsureDeleted(), BUT I also noticed that this does NOT reset auto-increment IDs.
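Another option worth noting (a sketch that assumes a newer EF Core version, where UseInMemoryDatabase takes a store name) is to give every test its own uniquely named in-memory store, which sidesteps the shared singleton storage entirely:
// Each test gets an isolated store, so tests running in parallel cannot see each other's data.
var options = new DbContextOptionsBuilder<DatabaseContext>()
    .UseInMemoryDatabase(databaseName: Guid.NewGuid().ToString())
    .Options;

using (var db = new DatabaseContext(options))
{
    // Seed and exercise the context here.
}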

How can I create a generic update method for One to Many structures in Entity Framework 5?

I am writing a web application in which I get different objects back from the web that need to be either updated or added to the database. On top of this, I need to check that the owner is not modified, since a hacker could potentially get an account and send an update that modifies the foreign key to the user model. I don't want to manually code all of these methods; instead I want to make a simple generic call.
Maybe something as simple as this:
ctx.OrderLines.AddOrUpdateSet(order.OrderLines, a => a.Order)
Based on the old persisted records that have a foreign key to Order, and on the new incoming records, it should:
Delete old records that are not in the new records list.
Add new records that are not in the old records list.
Update new records that exist in both lists.
ctx.Entry(orderLine).State=EntityState.Deleted;
...
ctx.Entry(orderLine).State=EntityState.Added;
...
ctx.Entry(orderLine).State=EntityState.Modified;
This gets a bit complicated when the old record is loaded to verify that ownership did not change. I get an error if I don't do:
oldorder.OrderLines.Remove(oldOrderLine); // for deletes
oldorder.OrderLines.Add(oldOrderLine); // for adds
ctx.Entry(header).CurrentValues.SetValues(header); // for modifications
With Entity Framework 5 there is a new extension method called AddOrUpdate, and there was a very interesting (please read) blog entry on how to create this method before it was added.
I'm not sure if this is too much to ask as a question on Stack Overflow; any clues on how to approach the problem may be sufficient. Here are my thoughts so far:
a) Leverage AddOrUpdate for some of the functionality.
b) Create a secondary context, hoping to avoid loading the order into the context and avoid extra calls.
c) Set the state of all the saved objects to initially deleted.
Since you have linked to this question from my own question, I thought I'd throw in some of my newly acquired experience with Entity Framework.
To achieve a common save method in my generic repository with Entity Framework, I do this (please note that the Context is a member of my repository, as I am implementing the Unit of Work pattern as well):
public class EFRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
    internal readonly AwesomeContext Context;
    internal readonly DbSet<TEntity> DbSet;

    public EFRepository(AwesomeContext context)
    {
        if (context == null) throw new ArgumentNullException("context");
        Context = context;
        DbSet = context.Set<TEntity>();
    }

    // Rest of implementation removed for brevity

    public void Save(TEntity entity)
    {
        var entry = Context.Entry(entity);
        if (entry.State == EntityState.Detached)
            DbSet.Add(entity);
        else entry.State = EntityState.Modified;
    }
}
Honestly, I can't tell you why this works, because I just kept changing the state conditions - however I do have unit (integration) tests to prove that it works. Hopefully someone more into EF than myself can shed some light on this.
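For illustration, a usage sketch (Customer is a hypothetical entity; the repository and context types are the ones above):
using (var context = new AwesomeContext())
{
    var repository = new EFRepository<Customer>(context);

    // A brand-new entity is not tracked (Detached), so Save() adds it.
    repository.Save(new Customer { Name = "New customer" });

    // An entity loaded through the same context is tracked, so Save() marks it Modified.
    var existing = context.Set<Customer>().First();
    existing.Name = "Renamed customer";
    repository.Save(existing);

    context.SaveChanges();
}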
Regarding the "cascading updates", I was curious myself whether it would work using the Unit of Work pattern (the question I linked to was from when I did not know it existed, and my repositories would basically create a unit of work whenever I wanted to save/get/delete, which is bad), so I threw in a test case against a simple relational DB.
IMPORTANT: For test case number 2 to work, you need to make your POCO reference properties virtual so that EF can provide lazy loading.
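As a concrete illustration (the property names are just an assumed shape of the POCOs used in these tests):
public class Resource
{
    public int Id { get; set; }
    public string Name { get; set; }

    // Must be virtual so EF can generate a lazy-loading proxy for the navigation property.
    public virtual ResourceGroup ResourceGroup { get; set; }
}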
The repository implementation is just derived from the generic EFRepository<TEntity> as shown above, so I'll leave out that implementation.
These are my test cases; both pass.
public class EFResourceGroupFacts
{
    [Fact]
    public void Saving_new_resource_will_cascade_properly()
    {
        // Recreate a fresh database and add some dummy data.
        SetupTestCase();

        using (var ctx = new LocalizationContext("Localization.CascadeTest"))
        {
            var cultureRepo = new EFCultureRepository(ctx);
            var resourceRepo = new EFResourceRepository(cultureRepo, ctx);

            var existingCulture = cultureRepo.Get(1); // First and only culture.
            var groupToAdd = new ResourceGroup("Added Group");
            var resourceToAdd = new Resource(existingCulture, "New Resource", "Resource to add to existing group.", groupToAdd);

            // Verify we got a single resource group.
            Assert.Equal(1, ctx.ResourceGroups.Count());

            // Saving the resource should also add the group.
            resourceRepo.Save(resourceToAdd);
            ctx.SaveChanges();

            // Verify the group was added without explicitly saving it.
            Assert.Equal(2, ctx.ResourceGroups.Count());
        }

        // Try creating a new Unit of Work to really verify it has been persisted.
        using (var ctx = new LocalizationContext("Localization.CascadeTest"))
        {
            Assert.DoesNotThrow(() => ctx.ResourceGroups.First(rg => rg.Name == "Added Group"));
        }
    }

    [Fact]
    public void Changing_existing_resources_group_saves_properly()
    {
        SetupTestCase();

        using (var ctx = new LocalizationContext("Localization.CascadeTest"))
        {
            ctx.Configuration.LazyLoadingEnabled = true;

            var cultureRepo = new EFCultureRepository(ctx);
            var resourceRepo = new EFResourceRepository(cultureRepo, ctx);

            // This resource already has a group.
            var existingResource = resourceRepo.Get(2);
            Assert.NotNull(existingResource.ResourceGroup); // IMPORTANT: Property must be virtual!

            // Verify there is only one resource group in the datastore.
            Assert.Equal(1, ctx.ResourceGroups.Count());

            existingResource.ResourceGroup = new ResourceGroup("I am implicitly added to the database. How cool is that?");

            // Make sure there are 2 resources in the datastore before saving.
            Assert.Equal(2, ctx.Resources.Count());

            resourceRepo.Save(existingResource);
            ctx.SaveChanges();

            // Make sure there are STILL only 2 resources in the datastore AFTER saving.
            Assert.Equal(2, ctx.Resources.Count());

            // Make sure the new group was added.
            Assert.Equal(2, ctx.ResourceGroups.Count());

            // Refetch from store, verify relationship.
            existingResource = resourceRepo.Get(2);
            Assert.Equal(2, existingResource.ResourceGroup.Id);

            // Let's change the group to an existing group.
            existingResource.ResourceGroup = ctx.ResourceGroups.First();
            resourceRepo.Save(existingResource);
            ctx.SaveChanges();

            // Assert no change in groups.
            Assert.Equal(2, ctx.ResourceGroups.Count());

            // Refetch from store, verify relationship.
            existingResource = resourceRepo.Get(2);
            Assert.Equal(1, existingResource.ResourceGroup.Id);
        }
    }

    private void SetupTestCase()
    {
        // Delete everything first. Database.SetInitializer does not work very well for me.
        using (var ctx = new LocalizationContext("Localization.CascadeTest"))
        {
            ctx.Database.Delete();
            ctx.Database.Create();

            var culture = new Culture("en-US", "English");
            var resourceGroup = new ResourceGroup("Existing Group");
            var resource = new Resource(culture, "Existing Resource 1",
                "This resource will already exist when starting the test. Initially it has no group.");
            var resourceWithGroup = new Resource(culture, "Existing Resource 2",
                "Same for this resource, except it has a group.", resourceGroup);

            ctx.Cultures.Add(culture);
            ctx.ResourceGroups.Add(resourceGroup);
            ctx.Resources.Add(resource);
            ctx.Resources.Add(resourceWithGroup);

            ctx.SaveChanges();
        }
    }
}
It was interesting to learn this, as I was not sure if it would work.
After working on this for a while I found an open-source project called GraphDiff; here is its blog entry, 'introducing graphdiff for entity framework code first – allowing automated updates of a graph of detached entities'. I have only begun using it, but it looks impressive, and it does solve the problem of issuing update/delete/insert for many-to-one relationships. It actually generalizes the problem to graphs and allows arbitrary nesting.
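Usage appears to be roughly along these lines (a sketch only, not verified against the current GraphDiff API; MyContext stands in for your own DbContext):
using (var ctx = new MyContext())
{
    // UpdateGraph diffs the detached order (and its owned OrderLines collection) against
    // the stored graph and issues the corresponding inserts, updates and deletes.
    ctx.UpdateGraph(order, map => map.OwnedCollection(o => o.OrderLines));
    ctx.SaveChanges();
}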
Here is the generic method I concocted. It uses AddOrUpdate from the System.Data.Entity.Migrations namespace, which may be reloading records from the db; I'll be checking on that later. The usage is:
ctx.OrderLines.AddOrUpdateSet(l => l.orderId == neworder.Id,
    l => l.Id, order.orderLines);
Here is the code:
public static class UpdateExtensions
{
    public static void AddOrUpdateSet<TEntity>(this IDbSet<TEntity> set, Expression<Func<TEntity, bool>> predicate,
        Func<TEntity, int> selector, IEnumerable<TEntity> newRecords) where TEntity : class
    {
        List<TEntity> oldRecords = set.Where(predicate).ToList();
        IEnumerable<int> keys = newRecords.Select(selector);

        foreach (TEntity newRec in newRecords)
            set.AddOrUpdate(newRec);

        oldRecords.FindAll(old => !keys.Contains(selector(old))).ForEach(detail => set.Remove(detail));
    }
}

Unit testing with EF Code First DataContext

This is more a solution/workaround than an actual question. I'm posting it here since I couldn't find this solution on Stack Overflow, or indeed after a lot of Googling.
The Problem:
I have an MVC 3 webapp using EF 4 code first that I want to write unit tests for. I'm also using NCrunch to run the unit tests on the fly as I code, so I'd like to avoid backing onto an actual database here.
Other Solutions:
IDataContext
I've found this the most accepted way to create an in-memory data context. It effectively involves writing an interface IMyDataContext for your MyDataContext and then using the interface in all your controllers. An example of doing this is here.
This is the route I went with initially and I even went as far as writing a T4 template to extract IMyDataContext from MyDataContext since I don't like having to maintain duplicate dependent code.
However, I quickly discovered that some LINQ statements fail in production when using IMyDataContext instead of MyDataContext. Specifically, queries like this throw a NotSupportedException:
var siteList = from iSite in MyDataContext.Sites
               let iMaxPageImpression = (from iPage in MyDataContext.Pages where iSite.SiteId == iPage.SiteId select iPage.AvgMonthlyImpressions).Max()
               select new { Site = iSite, MaxImpressions = iMaxPageImpression };
My Solution
This was actually quite simple. I simply created an InMemoryDataContext subclass of MyDataContext and replaced all the IDbSet<..> properties as below:
public class InMemoryDataContext : MyDataContext, IObjectContextAdapter
{
    /// <summary>Whether SaveChanges() was called on the DataContext</summary>
    public bool SaveChangesWasCalled { get; private set; }

    public InMemoryDataContext()
    {
        InitializeDataContextProperties();
        SaveChangesWasCalled = false;
    }

    /// <summary>
    /// Initialize all MyDataContext properties with appropriate container types
    /// </summary>
    private void InitializeDataContextProperties()
    {
        Type myType = GetType().BaseType; // We have to do this since private Property.Set methods are not accessible through GetType()

        // ** Initialize all IDbSet<T> properties with CollectionDbSet<T> instances
        var DbSets = myType.GetProperties().Where(x => x.PropertyType.IsGenericType && x.PropertyType.GetGenericTypeDefinition() == typeof(IDbSet<>)).ToList();

        foreach (var iDbSetProperty in DbSets)
        {
            var concreteCollectionType = typeof(CollectionDbSet<>).MakeGenericType(iDbSetProperty.PropertyType.GetGenericArguments());
            var collectionInstance = Activator.CreateInstance(concreteCollectionType);

            iDbSetProperty.SetValue(this, collectionInstance, null);
        }
    }

    ObjectContext IObjectContextAdapter.ObjectContext
    {
        get { return null; }
    }

    public override int SaveChanges()
    {
        SaveChangesWasCalled = true;
        return -1;
    }
}
In this case my CollectionDbSet<> is a slightly modified version of FakeDbSet<> here (which simply implements IDbSet with an underlying ObservableCollection and ObservableCollection.AsQueryable()).
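For reference, the core of such a fake set is small; a minimal sketch of the idea (not the exact CollectionDbSet<> I use, and Find is left unimplemented) looks like this:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

public class CollectionDbSet<T> : IDbSet<T> where T : class
{
    private readonly ObservableCollection<T> _data = new ObservableCollection<T>();
    private readonly IQueryable<T> _query;

    public CollectionDbSet()
    {
        _query = _data.AsQueryable();
    }

    public T Add(T entity) { _data.Add(entity); return entity; }
    public T Remove(T entity) { _data.Remove(entity); return entity; }
    public T Attach(T entity) { _data.Add(entity); return entity; }
    public T Create() { return Activator.CreateInstance<T>(); }

    public TDerived Create<TDerived>() where TDerived : class, T
    {
        return Activator.CreateInstance<TDerived>();
    }

    // Key lookup depends on the entity type, so override this per entity if a test needs it.
    public virtual T Find(params object[] keyValues)
    {
        throw new NotImplementedException();
    }

    public ObservableCollection<T> Local { get { return _data; } }

    // IQueryable plumbing: queries run through LINQ to Objects over the collection.
    public Type ElementType { get { return _query.ElementType; } }
    public Expression Expression { get { return _query.Expression; } }
    public IQueryProvider Provider { get { return _query.Provider; } }

    public IEnumerator<T> GetEnumerator() { return _data.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return _data.GetEnumerator(); }
}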
This solution works nicely with all my unit tests and specifically with NCrunch running these tests on the fly.
Full Integration Tests
These unit tests cover all the business logic, but one major downside is that none of your LINQ statements are guaranteed to work with your actual MyDataContext. This is because testing against an in-memory data context means you're replacing the LINQ to Entities provider with a LINQ to Objects provider (as pointed out very well in the answer to this SO question).
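A classic example of that divergence (the Name property here is assumed for illustration):
// Against LINQ to Objects this Contains check is case-sensitive; against SQL Server, a
// case-insensitive collation usually makes it match regardless of casing, so a unit test
// on the in-memory context can pass while production behaves differently (or vice versa).
var matches = MyDataContext.Sites.Where(s => s.Name.Contains("acme")).ToList();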
To fix this, I use Ninject within my unit tests and set up InMemoryDataContext to be bound instead of MyDataContext. You can then use Ninject to bind to an actual MyDataContext when running the integration tests (via a setting in the app.config).
if (Global.RunIntegrationTest)
    DependencyInjector.Bind<MyDataContext>().To<MyDataContext>().InSingletonScope();
else
    DependencyInjector.Bind<MyDataContext>().To<InMemoryDataContext>().InSingletonScope();
Let me know if you have any feedback on this; there are always improvements to be made.
As per my comment in the question, this was more to help others searching for this problem on SO. But as pointed out in the comments underneath the question there are quite a few other design approaches that would fix this problem.

MSTest with Moq - DAL setup

I'm new to Moq, and just started on a project that's already in development. I'm responsible for setting up unit testing. There's a custom class for the DatabaseFactory that uses EnterpriseLibrary and looks like this:
public Database CreateCommonDatabase()
{
    return CreateDatabaseInstance(string.Empty);
}

private static Database CreateDatabaseInstance(string foo)
{
    var database = foo == string.Empty
        ? DatabaseFactory.CreateDatabase("COMMON")
        : new OracleDatabase(new ClientConnections().GetConnectionString(foo));

    return database;
}
Now, here's where that gets used (ResultData is another custom class that wraps a DataSet):
public ResultData GetNotifications(string foo, string foo2, Database database)
{
    var errMsg = string.Empty;
    var retval = 0;
    var ds = new DataSet();
    var sqlClause =
        @"[Some SELECT statement here that uses foo]";

    DbCommand cm = database.GetSqlStringCommand(sqlClause);
    cm.CommandType = CommandType.Text;

    // Add parameters
    if (foo2 != string.Empty)
    {
        database.AddInParameter(cm, ":foo2", DbType.String, foo2);
    }

    try
    {
        ds = database.ExecuteDataSet(cm);
    }
    catch (Exception ex)
    {
        retval = -99;
        errMsg = ex.Message;
    }

    return new ResultData(ds, retval, errMsg);
}
Now, originally, the Database wasn't passed in as a parameter, but the method was creating a new instance of the DatabaseFactory using the CreateCommonDatabase method, and using it from there. However, that leaves the class untestable because I can't keep it from actually hitting the database. So, I went with Dependency Injection, and pass the Database in.
Now, I'm stuck, because there's no way to mock Database in order to test GetNotifications. I'm wondering if I'm overly complicating things, or if I'm missing something. Am I doing this the right way, or should I be rethinking how I've got this set up?
Edit to add more info:
I really don't want to test the database. I want the Data.Notifications class (above) to return an instance of ResultData, but that's all I really want to test. If I go a level up, to the Business layer, I have this:
public DataSet GetNotifications(string foo, string foo1, out int returnValue, out string errorMessage, Database database)
{
    ResultData rd = new data.Notifications().GetNotifications(foo, foo1, database);

    returnValue = rd.ResultValue;
    errorMessage = rd.ErrorMessage;

    return rd.DataReturned;
}
So, originally, the database wasn't passed in; it was the Data.Notifications class that created it. But then again, if I left it that way, I couldn't help but hit the database to test this business-layer object. I modified all of the code to pass the Database in (it gets created at the web's base page), but now I'm just not certain what to do next. I thought I was one unit test away from having this resolved, but apparently either I'm wrong or I've got a mental roadblock to the right path.
You should be able to create a mock Database object if the methods in it are virtual. If they are not, then you have a little bit of a problem.
I don't know what type "Database" is, but you have a few options.
If you own the source code to Database, I would recommend extracting an interface IDatabase, rather than dealing with a Database class type. This will eliminate some complexity and give you something extremely testable.
If you don't have access to the Database class, you can always solve this with another layer of abstraction. Many people in this case use a Repository pattern that wraps the data access layer. Generally speaking, most people leave testing Repository classes to integration tests (tests without any isolation) rather than unit tests.
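For option #1, a minimal shape for that interface, mirroring only the members your GetNotifications code actually touches, might look like this (a sketch; the real Enterprise Library Database class has many more members, and the wrapper class name is just an example):
// Uses System.Data (DataSet, DbType) and System.Data.Common (DbCommand).
public interface IDatabase
{
    DbCommand GetSqlStringCommand(string sqlClause);
    void AddInParameter(DbCommand command, string name, DbType dbType, object value);
    DataSet ExecuteDataSet(DbCommand command);
}

// The production implementation simply delegates to the real Enterprise Library Database.
public class DatabaseWrapper : IDatabase
{
    private readonly Database _database;

    public DatabaseWrapper(Database database) { _database = database; }

    public DbCommand GetSqlStringCommand(string sqlClause) { return _database.GetSqlStringCommand(sqlClause); }
    public void AddInParameter(DbCommand command, string name, DbType dbType, object value)
        { _database.AddInParameter(command, name, dbType, value); }
    public DataSet ExecuteDataSet(DbCommand command) { return _database.ExecuteDataSet(command); }
}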
Here's how you'd set up your test using option #1:
[TestMethod]
public void GetNotifications_PassedNullFoo_ReturnsData()
{
    // Arrange
    Mock<IDatabase> mockDB = new Mock<IDatabase>();
    mockDB.Setup(db => db.ExecuteDataSet(It.IsAny<DbCommand>())).Returns(new DataSet() ... );

    // Act
    FooClass target = new FooClass();
    var result = target.GetNotifications(null, "Foo2", mockDB.Object);

    // Assert
    Assert.IsTrue(result.DataReturned.Tables[0].Rows.Count > 0);
}
My dataset code is a little rusty, but hopefully this gives you the general idea.
Based on the code you've given, I would think you would want to talk to the database, and not a mocked version.
The reason is that your GetNotifications code contains DB-specific instructions, and you'll want those to pass validation at the DB Engine level. So just pass in a Database that is connected to your test DB instance.
If you took the testing abstraction to a higher level, where you built unit tests for the database call and a version of this test that used a mocked database, you'd still have to run integration tests, which ends up being triple the work for the same amount of code coverage. In my opinion, it's far more efficient to do an integration test at tier borders you control than to write unit tests for both sides of the contract plus integration tests.