EF Code First update-database -script not running Seed() method - entity-framework

In my Entity Framework Code First project, if I run the
update-database
command through the Package Manager Console, my Seed() method runs successfully. If, however, I run the command with the -script parameter:
update-database -script
...the Seed() method is not called and the resulting SQL does not contain the SQL commands for seeding the database. This behaviour is repeatable. I'm attempting to create a full DB Script for deployment.
Why is there a discrepancy between the SQL Commands run with -script and without it, and how can I get update-database to run the Seed() method when using the -script switch?

I would argue that this is not a discrepancy, but a by design feature.
For one, when you use -script, the database update is not actually performed, so the Seed() method should not be called either. There is also the fact that Migrations is a design-time tool, while Seed() is a method that has to be compiled and called at runtime.
As for why the contents of the Seed() method aren't included in the generated script, my answer again comes down to the different notions of runtime and design time. Imagine that you have a Seed() method like this:
protected override void Seed(PersonContext context)
{
    Random r = new Random();
    context.People.Add(new Person("PersonA", r.Next(90)));
    context.SaveChanges();
}
Now the Random instance is created at runtime, and Next() is called at runtime. How would you compile that to SQL? What if you read data from a file and put it into the database? These are all things that can only be evaluated at runtime, not at design time (i.e. if the file is not in the right place at design time, that's not a problem; at runtime it is). And that's why a design-time tool like the migration scaffolder cannot evaluate it.
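If you do need seed data inside a deployment script, one workaround is to generate deterministic INSERT statements yourself and append them to the output of update-database -script. A minimal sketch (the Person/People table and column names are illustrative, not from the question):

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Hypothetical sketch: emit repeatable INSERT statements that can be
// appended to the script produced by "update-database -script".
// Unlike Seed(), the data here is fixed at generation time.
static class SeedScriptGenerator
{
    public static string BuildInsertScript(IEnumerable<(string Name, int Age)> people)
    {
        var sql = new StringBuilder();
        foreach (var p in people)
        {
            // Double up single quotes so the generated SQL stays valid.
            var safeName = p.Name.Replace("'", "''");
            sql.AppendLine(
                $"INSERT INTO dbo.People (Name, Age) VALUES ('{safeName}', {p.Age});");
        }
        return sql.ToString();
    }
}
```

Because the input data is fixed when the script is generated, the output is the same on every run, which is exactly what a call like Random.Next() inside Seed() cannot give you.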

Related

ASP Boilerplate problems using Effort in unit testing with EFProf (Entity Framework Profiler)

Having issues using EFProf (http://www.hibernatingrhinos.com/products/EFProf) with ASP Boilerplate (http://www.aspnetboilerplate.com/).
For unit testing, ASP Boilerplate uses Effort (https://github.com/tamasflamich/effort) for mocking the database in-memory.
If I run the unit tests without adding the reference to EFProf, the tests run correctly (green).
If I add the initialization line:
HibernatingRhinos.Profiler.Appender.EntityFramework.EntityFrameworkProfiler.Initialize();
in either my test base ctor or my application project's Initialize(), I get the following error:
Castle.MicroKernel.ComponentActivator.ComponentActivatorException
ComponentActivator: could not instantiate MyApp.EntityFramework.MyAppDataContext
The inner exception has the relevant information:
Error: Unable to cast object of type 'Effort.Provider.EffortConnection' to type 'HibernatingRhinos.Profiler.Appender.ProfiledDataAccess.ProfiledConnection'.
Is Effort just not compatible with EFProf? Or am I doing something blindingly obvious wrong in my initialization?
Answering my own question: Effort fakes the DbContext object but does not actually generate SQL for the in-memory store, so there is nothing for profilers to intercept. This is also the reason why CommandText is always null when using EF6's Database.Log with Effort.
I'm going to try using Moq with EF6 for an in-memory database implementation for testing, as an alternative to ASP Boilerplate's testing project that utilizes Effort, per this article: https://msdn.microsoft.com/en-us/library/dn314429(v=vs.113).aspx
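The idea behind that article, sketched here without any EF or Moq dependency (all names are hypothetical): production code talks to an abstraction, and tests substitute an in-memory implementation, the same way the article substitutes a mocked DbSet&lt;T&gt;:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative abstraction the production code would depend on.
public interface IPersonRepository
{
    void Add(string name);
    IReadOnlyList<string> All();
}

// In-memory stand-in used only by tests. No SQL is ever generated,
// which is also why a SQL profiler like EFProf has nothing to intercept.
public class InMemoryPersonRepository : IPersonRepository
{
    private readonly List<string> _people = new List<string>();

    public void Add(string name) => _people.Add(name);

    // Return a copy so callers cannot mutate the backing list.
    public IReadOnlyList<string> All() => _people.ToList();
}
```

Tests construct InMemoryPersonRepository directly; production wiring registers a real, database-backed implementation instead.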

DropCreateDatabaseIfModelChanges

I am trying to set up an initializer for a test environment. From everything I've read, DropCreateDatabaseIfModelChanges is exactly what I need. Often the database gets slightly out of sync with the model, and I simply need to start over fresh.
So here is how I went about setting up my context constructor:
public ApplicationContext(int dbID, string username) : base(dbID, username)
{
    Database.SetInitializer<ApplicationContext>(new DropCreateDatabaseIfModelChanges<ApplicationContext>());
    Database.Initialize(true);
}
However, even when I have this initializer set up, I still get the error:
"The model backing the 'ApplicationContext' context has changed since the database was created. Consider using Code First Migrations to update the database."
Some other things to note:
I have tried setting AutomaticMigrationsEnabled = false as well as true in my Migrations config file.
I have tried with and without forcing initialization.
Anybody run into the same issue or have any ideas?
UPDATE
I was able to look through the source code for System.Data.entity here:
https://github.com/hallco978/entityframework/tree/master/src/EntityFramework
It turns out I needed to outright delete my Migrations/Configuration.cs file to prevent the error. It doesn't matter what the settings are within that configuration file.
I now get a problem because I can't drop the database while I'm using it. Does anyone know if DeleteDatabase actually drops the entire database, or just the tables the model created?

Reset a codefirst generated database on app_start

I have a Code First project using EF6 in which I would like to reset the database each time the application starts (only in dev).
Is there a way to apply the initial migration followed by the update-database command without needing to do it from the PM window in VS?
I want to remove all data I generate while playing with the application
You could put the following in your Global.asax:
// Database.Delete is static and takes a connection-string name
// (the name below is illustrative); alternatively, call
// new ApplicationDbContext().Database.Delete() on an instance.
Database.Delete("name=DefaultConnection");
Database.SetInitializer(new MigrateDatabaseToLatestVersion<ApplicationDbContext, Configuration>());
ApplicationDbContext would refer to the context you are trying to delete/migrate.
You might want to put a #if DEBUG or some other conditional so it doesn't make it to prod, though.

Fixtures in Play! 2 for Scala

I am trying to do some integration testing in a Play! 2 for Scala application. For this, I need to load some fixtures to have the DB in a known state before each test.
At the moment, I am just invoking a method that executes a bunch of Squeryl statements to load data. But declaring the fixtures declaratively, either with a Scala DSL or in a language like JSON or YAML, is more readable and easier to maintain.
In this example of a Java application I see that fixtures are loaded from a YAML file, but the equivalent Scala app resorts to manual loading, as I am doing right now.
I have also found this project which is not very well documented, and it seems a bit more complex than I'd like - it is not even clear to me where the fixture data is actually declared.
Are there any other options to load fixtures in a Play! application?
Use Evolutions. Write a setup and teardown script for the fixtures in SQL, or use mysqldump (or equivalent for your DB) to export an existing test DB as sql.
http://www.playframework.org/documentation/1.2/evolutions
I find the most stress-free way to do testing is to set everything up in an in-memory database which means tests run fast and drive the tests from Java using JUnit. I use H2DB, but there are a few gotchas you need to watch out for. I learned these the hard way, so this should save you some time.
Play has a nice system for setting up and tearing down your application for integration testing, using running( FakeApplication() ) { .. }, and you can configure it to use an in-memory database with FakeApplication(additionalConfiguration = inMemoryDatabase()); see:
http://www.playframework.org/documentation/2.0/ScalaTest
OutOfMemory errors: However, running a sizeable test fixture a few times on my machine caused OutOfMemory errors. This seems to be because the default implementation of the inMemoryDatabase() function creates a new randomly named database and doesn't clean up the old ones between test runs. This isn't necessary if you've written your evolution teardown scripts correctly, because the database will be emptied out and refilled between each test. So we overrode this behaviour to use the same database and the memory issues disappeared.
DB Dialect: Another issue is that our production database is MySQL which has a number of incompatibilities with H2DB. H2DB has compatibility modes for a number of dbs, which should reduce the number of problems you have:
http://www.h2database.com/html/features.html#compatibility
Putting this all together makes it a little unwieldy to add before each test, so I extracted it into a function:
def memDB[T](code: => T) =
  running(FakeApplication(additionalConfiguration = Map(
    "db.default.driver" -> "org.h2.Driver",
    "db.default.url"    -> "jdbc:h2:mem:test;MODE=MySQL"
  )))(code)
You can then use it like so (specs example):
"My app" should {
  "integrate nicely" in memDB {
    .....
  }
}
Every test will start a fake application, run your fixture setup evolutions script, run the test, then tear it all down again. Good luck!
Why not use the java example in Scala? That exact code should also work without modifications in Scala...

Delay-loading TestCaseSource in NUnit

I have some NUnit tests that use a TestCaseSource function. Unfortunately, the TestCaseSource function I need takes a long time to initialize, because it scans a folder tree recursively to find all of the test images to pass into the test function. (Alternatively, it could load from an XML file list each run, but automatic discovery of new image files is still a requirement.)
Is it possible to specify an NUnit attribute together with TestCaseSource such that NUnit does not enumerate the test cases (does not call the TestCaseSource function) until either the user clicks on the node, or until the test suite is being run?
The need to get all test images stored in a folder is a project requirement because other people who do not have access to the test project will need to add new test images to the folder, without having to modify the test project's source code. They would then be able to view the test result.
Some dogmatic unit-testers may counter that I am using NUnit to do something it's not supposed to do. I have to admit that I have to meet a requirement, and NUnit is such a great tool with a great GUI that satisfies most of my requirements, such that I do not care about whether it is proper unit testing or not.
Additional info (from NUnit documentation)
Note on Object Construction
NUnit locates the test cases at the time the tests are loaded, creates instances of each class with non-static sources and builds a list of tests to be executed. Each source object is only created once at this time and is destroyed after all tests are loaded.
If the data source is in the test fixture itself, the object is created using the appropriate constructor for the fixture parameters provided on the TestFixtureAttribute or the default constructor if no parameters were specified. Since this object is destroyed before the tests are run, no communication is possible between these two phases - or between different runs - except through the parameters themselves.
It seems the purpose of loading the test cases up front is to avoid having communications (or side-effects) between TestCaseSource and the execution of the tests. Is this true? Is this the only reason to require test cases to be loaded up front?
Note:
A modification of NUnit was needed, as documented in http://blog.sponholtz.com/2012/02/late-binded-parameterized-tests-in.html
There are plans to introduce this option in later versions of NUnit.
I don't know of a way to delay-load test names in the GUI. My recommendation would be to move those tests to a separate assembly. That way, you can quickly run all of your other tests, and load the slower exhaustive tests only when needed.
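The timing distinction the NUnit documentation describes can be seen with an ordinary C# iterator: a yield-based source defers its work until it is enumerated, but since NUnit enumerates every TestCaseSource while loading the tests, laziness alone does not postpone the folder scan. A small sketch (the "scan" is simulated by a counter):

```csharp
using System;
using System.Collections.Generic;

// Sketch of a lazily evaluated test-case source. The expensive work only
// runs when the sequence is enumerated -- but NUnit enumerates all sources
// at load time, so this by itself does not delay the recursive folder scan.
public static class LazyCases
{
    public static int ScanCount; // counts how often the simulated scan ran

    public static IEnumerable<string> TestImages()
    {
        ScanCount++; // stands in for the recursive image-folder scan
        yield return "image1.png";
        yield return "image2.png";
    }
}
```

Calling TestImages() does no work at all; only iterating the result triggers the scan, which is exactly the step a test runner performs during test discovery.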