I have a project created from the ASP.NET Core Web Application template in VS. When run, the project creates a database to support the Identity package.
The Identity package is a Razor Class Library. I have scaffolded it and the models can be seen. The models are sub-classed from Microsoft.AspNetCore.Mvc.RazorPages.PageModel.
I am tracing the code to try and get a better understanding of how it all works. I am trying to find the path from the models to the physical database.
In the file appsettings.json, I see the connection string DefaultConnection pointing to the physical database.
In Startup.cs, I see a reference to the connection string DefaultConnection:
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(
        Configuration.GetConnectionString("DefaultConnection")));
After this, I lost the trail. I can't find the link from a model in code to a table in the database. What is the code needed to perform a query like select * from AspNetUsers?
As @Daniel Schmid suggested, you should first learn about Dependency Injection in ASP.NET Core.
ASP.NET Core has an excellent built-in Dependency Injection feature, through which the framework provides you with an instance of any registered class you need, so you don't have to create the object manually in your code.
EF Core supports using DbContext with a dependency injection container. Your DbContext type can be added to the service container by using the AddDbContext<TContext> method.
Then you can use the instance like this:
public class MyController
{
    private readonly ApplicationDbContext _context;

    // the registered ApplicationDbContext is injected by the container
    public MyController(ApplicationDbContext context)
    {
        _context = context;
    }
    ...
}
or, less commonly, by resolving it from the ServiceProvider directly:
using (var context = serviceProvider.GetService<ApplicationDbContext>())
{
    // do stuff
}
// the DbContextOptions can also be resolved from the container if needed:
var options = serviceProvider.GetService<DbContextOptions<ApplicationDbContext>>();
Then you can get the users by querying the database directly:
var users = _context.Users.ToList();
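Putting it together, here is a minimal sketch of a Razor Page that lists the contents of the AspNetUsers table. It assumes the default template, where ApplicationDbContext derives from IdentityDbContext and its Users set maps to AspNetUsers; the UsersModel page name is purely illustrative:
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class UsersModel : PageModel
{
    private readonly ApplicationDbContext _context;

    // the DI container supplies the ApplicationDbContext registered in Startup
    public UsersModel(ApplicationDbContext context)
    {
        _context = context;
    }

    public List<IdentityUser> Users { get; private set; }

    public void OnGet()
    {
        // EF Core translates this into "SELECT ... FROM AspNetUsers"
        Users = _context.Users.ToList();
    }
}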
Please also read this article.
In normal Entity Framework I am able to check database connectivity using dbContext.Database.Exists(), but it is not there in Entity Framework Core. What is the alternative to dbContext.Database.Exists() in Entity Framework Core?
You can determine whether or not the database is available and can be connected to with the CanConnect() method:
if (dbContext.Database.CanConnect())
{
// all good
}
You can use CanConnectAsync() for asynchronous operation.
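For example, a small helper along these lines (the class and method names are just for illustration) could back an asynchronous availability check:
using System.Threading.Tasks;

public static class DbHealthCheck
{
    // Returns true when a connection to the configured database can be opened.
    public static Task<bool> IsDatabaseAvailableAsync(ApplicationDbContext dbContext)
    {
        return dbContext.Database.CanConnectAsync();
    }
}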
Currently (as of EF Core 2.0, the latest at the time of writing), the DatabaseFacade class (the type of the DbContext.Database property) does not publicly expose an Exists method.
However, the equivalent of the corresponding EF6 method is provided by the EF Core IRelationalDatabaseCreator service. You can expose it with a custom extension method like this:
using Microsoft.EntityFrameworkCore.Infrastructure;
using Microsoft.EntityFrameworkCore.Storage;
public static class DatabaseFacadeExtensions
{
public static bool Exists(this DatabaseFacade source)
{
return source.GetService<IRelationalDatabaseCreator>().Exists();
}
}
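Once that extension method is in scope, usage mirrors the EF6 call:
if (dbContext.Database.Exists())
{
    // the database has already been created
}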
But please note that the Exists method was never intended to check database connectivity, but rather to check whether the database needs to be created (it is used internally when you call methods like EnsureCreated, Migrate, etc.).
The "Exists()" method only check whether the database exists or not, It doesn't really check whether your application can connect to the database or not. For example: if the password is wrong in connection string even then Exists() method will return true.
So according to me, a better solution would be open the connection and check if any exception occurs in that.
public static bool CanConnect(DbContext dbContext)
{
    try
    {
        // opening the connection fails if the server is unreachable or the
        // credentials are wrong - cases that Exists() does not catch
        dbContext.Database.OpenConnection();
        dbContext.Database.CloseConnection();
        return true;
    }
    catch (Exception)
    {
        return false;
    }
}
But if you still want to use Exists(), you can call it this way (with a using directive for Microsoft.EntityFrameworkCore.Infrastructure in scope for GetService):
dbContext.Database.GetService<IRelationalDatabaseCreator>().Exists();
I see many examples of seeding with Code First, but I'm not sure I understand what the idiomatic way of seeding the database is when using EF Database First.
Best practice is very situation dependent, and then there is the DEV versus PROD environment split.
Auto-seeding makes the most sense during DEV, when you drop and recreate the database on model change and want test data available; that is when it is used most.
Of course you can have a test method that you trigger manually. Personally, I find the idea of an automatically triggered seed method not that exciting, and more suited to DEV prototyping while the DB structure is volatile. When using migrations, you tend to keep your hard-earned test data. Some use seeding during the initial installation in PROD. Others have specific load routines triggered during the installation/commissioning process. I like to use custom load routines instead.
EDIT: A CODE FIRST SAMPLE. With DB First you just write to the Db normally.
// select the appropriate initializer for your situation, eg
Database.SetInitializer(new MigrateDatabaseToLatestVersion<MyDbContext, MyMigrationConfiguration<MyDbContext>>());
Context.Database.Initialize(true); // yes, now please
//...
public class MyMigrationConfiguration<TContext> : DbMigrationsConfiguration<TContext>
    where TContext : DbContext
{
    public MyMigrationConfiguration()
    {
        AutomaticMigrationsEnabled = true;        // fyi: options
        AutomaticMigrationDataLossAllowed = true; // fyi: options
    }

    public override void Seed(TContext context)
    {
        base.Seed(context);
        // SEED AWAY..... you have the context
    }
}
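For Database First there is no Seed override to hook into, so a custom load routine just writes through the generated context. A minimal sketch, assuming a generated context named MyDbFirstContext with a Statuses set (all names here are illustrative):
using System.Linq;

public static class DbFirstSeeder
{
    public static void SeedIfEmpty(MyDbFirstContext context)
    {
        // only load reference data once
        if (!context.Statuses.Any())
        {
            context.Statuses.Add(new Status { Name = "New" });
            context.Statuses.Add(new Status { Name = "Closed" });
            context.SaveChanges();
        }
    }
}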
I have an application that uses two separate models stored in a single database. The first model is set up with migrations and is the one that has created the migrations data in the database. The second is a very simple model that needs no model validation at all - the tables it uses exist and are of the proper structure. The second Context works fine in a separate database with the same table structure.
The problem is that it fails when running in the same database as the first model, since EF still performs some sort of model validation. It complains that the model backing the context has changed since the last update, but of course the migrations data does not contain anything about the second context's tables.
Is it possible to turn off the meta data validation for the context, and just let the second context work against the tables as is, since I know that works?
I have tried doing this in the context constructor, but that has no effect.
The solution is to implement a "do nothing" database initializer.
using System.Data.Entity;

public class QueueMessageManagerContextInitializer : IDatabaseInitializer<QueueMessageManagerContext>
{
protected void Seed(QueueMessageManagerContext context)
{
}
public void InitializeDatabase(QueueMessageManagerContext context)
{
// do nothing
Seed(context);
}
}
To use it in one-time startup code:
[ClassInitialize()]
public static void MyClassInitialize(TestContext testContext)
{
//Database.SetInitializer<QueueMessageManagerContext>(new DropCreateDatabaseIfModelChanges<QueueMessageManagerContext>());
Database.SetInitializer<QueueMessageManagerContext>(new QueueMessageManagerContextInitializer());
}
Simple but non-obvious solution.
Edit:
Even simpler solution: just pass null to the SetInitializer() method:
Database.SetInitializer<QueueMessageManagerContext>(null);
I'm looking for suggestions on how to approach using an ORM (in this case, EF5) in the design of modular, non-monolithic applications, with a Core part and 3rd-party Modules, where the Core has no direct reference to the 3rd-party Modules, and Modules only have a reference to Core/Common tables and classes.
For arguments sake, a close enough analogy would be DNN.
CodeFirst:
With CodeFirst, the approach I used was to build up the model of the Db via reflection: in the Core DbContext's DbInitialization phase, I used reflection to find any class in any dll (eg Core or various Modules) decorated with IDbInitializer (a custom contract containing an Execute() method) to define just that dll's structure. Each dll added to the DbModel what it knew about itself.
Any subsequent seeding was also handled in the same way (searching for a specific IDbSeeder contract and executing it).
Pro:
* the approach works for now.
* The same core DbContext can be used across all repositories, as long as each repo uses dbContext.GetSet(), rather than expecting it to be a property of the dbContext. No biggie.
Cons:
* it only works at startup (ie, adding new modules would require an AppPool refresh).
* CodeFirst is great for a POC. But in EF5, it's not mature enough for Enterprise work yet (and I can't wait for EF6 for StoredProcs and other features to be added).
* My DBA hates CodeFirst, at least for the Core, wanting to optimize that part with Stored Procs as much as possible... We're a team, so I have to try to find a way to please him, if I can find a way...
Database-first:
The DbModel phase appears to be happening prior to the DbContext's constructor (reading from embedded *.edmx resource file). DbInitialization is never invoked (as model is deemed complete), so I can't add more tables than what the Core knows about.
If I can't add elements to the Model, dynamically, as one can with CodeFirst, it means that
* either the Core DbContext's Model has to have knowledge of every table in the Db -- Core AND every 3rd party module. Making the application Monolithic and highly coupled, defeating the very thing I am trying to achieve.
* Or each 3rd party has to create their own DbContext, importing Core tables, leading to
* versioning issues (module not updating their *.edmx's when Core's *.edmx is updated, etc.)
* duplication everywhere, in different memory contexts = hard to track down concurrency issues.
At this point, it seems to me that the CodeFirst approach is the only way that modular software can be achieved with EF. But hopefully someone else knows how to make DatabaseFirst shine -- is there any way of 'appending' DbSets to the model created from the embedded *.edmx file?
Or any other ideas?
I would consider using a sort of plugin architecture, since that's your overall design for the application itself.
You can accomplish the basics of this by doing something like the following (note that this example uses StructureMap - here is a link to the StructureMap Documentation):
Create an interface that your DbContext objects can implement.
public interface IPluginContext {
IDictionary<String, DbSet> DataSets { get; }
}
In your Dependency Injection set-up (using StructureMap) - do something like the following:
Scan(s => {
s.AssembliesFromApplicationBaseDirectory();
s.AddAllTypesOf<IPluginContext>();
s.WithDefaultConventions();
});
For<IEnumerable<IPluginContext>>().Use(x =>
x.GetAllInstances<IPluginContext>()
);
For each of your plugins, either alter the {plugin}.Context.tt file - or add a partial class file which causes the DbContext being generated to derive from IPluginContext.
public partial class FooContext : IPluginContext { }
Alter the {plugin}.Context.tt file for each plugin to expose something like:
public IDictionary<String, DbSet> DataSets {
get {
// Here is where you would have the .tt file output a reference
// to each property, keyed on its property name as the Key -
// in the form of an IDictionary.
}
}
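Purely for illustration, the emitted property body could look roughly like this for a hypothetical FooContext exposing Customer and Order entities, using the non-generic Set(Type) accessor so everything fits into an IDictionary<String, DbSet>:
public IDictionary<String, DbSet> DataSets {
    get {
        // generated: one entry per entity set, keyed on the entity type name
        return new Dictionary<String, DbSet> {
            { "Customer", Set(typeof(Customer)) },
            { "Order",    Set(typeof(Order)) }
        };
    }
}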
You can now do something like the following:
// This could be inside a service class, your main Data Context, or wherever
// else it becomes convenient to call.
public DbSet DataSet(String name) {
    var plugins = ObjectFactory.GetInstance<IEnumerable<IPluginContext>>();
    // find the plugin context that exposes a set under this name...
    var plugin = plugins.FirstOrDefault(p =>
        p.DataSets.Any(ds => ds.Key.Equals(name))
    );
    // ...and return that set (or null when no plugin knows about it)
    return plugin != null ? plugin.DataSets[name] : null;
}
Forgive me if the syntax isn't perfect - I'm doing this within the post, not within the compiler.
The end result gives you the flexibility to do something like:
// Inside an MVC controller...
public JsonResult GetPluginByTypeName(String typeName) {
var dataSet = container.DataSet(typeName);
if (dataSet != null) {
// materialize the set before serializing (the non-generic DbSet has no Select())
return Json(dataSet.Cast<object>().ToList());
} else {
return Json("Unable to locate that object type.");
}
}
Clearly, in the long-run - you would want the control to be inverted, where the plugin is the one actually tying into the architecture, rather than the server expecting a type. You can accomplish the same kind of thing using this sort of lazy-loading, however - where the main application exposes an endpoint that all of the Plugins tie to.
That would be something like:
public interface IPlugin : IDisposable {
void EnsureDatabase();
void Initialize();
}
You can now expose this interface to any application developers who are going to create plugins for your architecture (DNN style) - and your StructureMap configuration works something like:
Scan(s => {
s.AssembliesFromApplicationBaseDirectory(); // Upload all plugin DLLs here
// NOTE: Remember that this gives people access to your system!!!
// Given what you are developing, though, I am assuming you
// already get that.
s.AddAllTypesOf<IPlugin>();
s.WithDefaultConventions();
});
For<IEnumerable<IPlugin>>().Use(x => x.GetAllInstances<IPlugin>());
Now, when you initialize your application, you can do something like:
// Global.asax
public static IEnumerable<IPlugin> plugins =
ObjectFactory.GetInstance<IEnumerable<IPlugin>>();
public void Application_Start() {
foreach(IPlugin plugin in plugins) {
plugin.EnsureDatabase();
plugin.Initialize();
}
}
Each of your IPlugin objects can now contain its own database context, manage the process of installing (if necessary) its own database instance / tables, and dispose of itself gracefully.
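A hedged sketch of what one such plugin could look like (BarPlugin, BarPluginContext, and the EF5 Code First calls are purely illustrative; a Database First plugin would run its own scripts instead):
public class BarPlugin : IPlugin
{
    private readonly BarPluginContext _context = new BarPluginContext();

    public void EnsureDatabase()
    {
        // create this plugin's own database/tables if they do not exist yet
        _context.Database.CreateIfNotExists();
    }

    public void Initialize()
    {
        // seed reference data, register routes, warm caches, etc.
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}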
Clearly this isn't a complete solution - but I hope it starts you off in a useful direction. :) If I can help clarify anything herein, please let me know.
While it's a CodeFirst approach, and not the cleanest solution, what about forcing initialization to run even after startup, as in the answer posted here: Forcing code-first to always initialize a non-existent database? (I know precisely nothing about CodeFirst, but seeing as this is a month old with no posts, it's worth a shot.)
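For what it's worth, a minimal sketch of what "forcing initialization after startup" looks like in EF5 Code First (ModuleDbContext is an illustrative name):
using (var context = new ModuleDbContext())
{
    // runs the registered initializer even if it has already run
    // for this context type in the current AppDomain
    context.Database.Initialize(force: true);
}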