How can I migrate the database without NuGet? It is not possible to use Visual Studio with NuGet in a production environment, yet most examples only show how to run migrations from Visual Studio with NuGet.
How do I use the generated DbMigration classes?
The easiest way is:
Database.SetInitializer(
new MigrateDatabaseToLatestVersion<MyDbContext,
MyDbMigrationsConfiguration>());
This will run the migrations when initializing the DbContext.
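Note that the initializer only runs the first time the context touches the database, so nothing happens until then. A hedged sketch of wiring it up at application startup (the class names are placeholders):
// Register the initializer once at startup (e.g. in Application_Start or Main).
Database.SetInitializer(
    new MigrateDatabaseToLatestVersion<MyDbContext, MyDbMigrationsConfiguration>());

// Pending migrations are applied the first time MyDbContext is used,
// or immediately if you trigger initialization yourself:
using (var context = new MyDbContext())
{
    context.Database.Initialize(force: false);
}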
You can also force the execution manually:
var migrator = new DbMigrator(new MyMigrationsConfiguration());
migrator.Update();
(I believe you also have to set TargetDatabase on the configuration, but you can try)
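If the connection is not discovered automatically, a minimal sketch of pointing the configuration at a specific database looks like this (the class names and connection string are placeholders):
var configuration = new MyMigrationsConfiguration
{
    // Hypothetical connection string and provider; replace with your production values.
    TargetDatabase = new System.Data.Entity.Infrastructure.DbConnectionInfo(
        "Server=myServer;Database=MyDb;Integrated Security=True",
        "System.Data.SqlClient")
};
var migrator = new DbMigrator(configuration);
migrator.Update();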
Here are the options:
Use the migrate.exe command line tool that ships in our NuGet
package (see the example command after this list).
Use the MigrateDatabaseToLatestVersion initializer as
others have described.
Use the runtime API available from the
DbMigrator class.
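For example, migrate.exe (found under tools in the EntityFramework NuGet package) can be copied to the server and run without Visual Studio. A hedged sketch, with the assembly and config file names as placeholders:
Migrate.exe MyApp.Migrations.dll /startUpConfigurationFile="Web.config" /verbose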
You can migrate to the latest version using a Web.config setting - see this blog post by Rowan Miller:
If you are using Code First Migrations, you can configure the database to be migrated automatically using the MigrateDatabaseToLatestVersion initializer.
<contexts>
<context type="Blogging.BlogContext, MyAssembly">
<databaseInitializer type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[Blogging.BlogContext,
MyAssembly], [Blogging.Migrations.Configuration, MyAssembly]], EntityFramework" />
</context>
</contexts>
Just swap your own context class in here; System.Data.Entity.MigrateDatabaseToLatestVersion is built into EF. This setting supersedes the old AppSettings version of the same idea.
To my mind this is the best approach, because which initializer to use is really a configuration concern: you want to be able to set it in Web.config and, ideally, apply config transforms for your different environments.
You can do it using the EF Power Tools; there's also the migrate.exe program that you can use to run migrations from the command prompt (post-build, for example). If you want to run migrations against a production database, you can also use the Update-Database command to generate SQL scripts from the migration classes, which is very useful if you need to hand them to a DBA.
The EF Power Tools are available on the Visual Studio Gallery (and optionally here); check out this very useful video that, among other things, covers the Update-Database command.
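If you need a script to hand to a DBA, a sketch of the Package Manager Console command (the target migration name is a placeholder):
Update-Database -Script -SourceMigration: $InitialDatabase -TargetMigration: "MyMigration"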
There is another solution:
Using DB = New SHAContext()
    If DB.Database.Exists() Then
        Dim migrator As New DbMigrator(New SHAClassLibrary.Migrations.Configuration())
        For Each m In migrator.GetDatabaseMigrations()
            Try
                migrator.Update(m)
            Catch ex As Exception
                ' Ignore migrations that fail to apply and continue with the rest.
            End Try
        Next
    End If
    'DB.test()
End Using
I was looking for a way to control which migrations run explicitly in code, without needing a DbConfiguration class or automatic migrations enabled.
So I created the following extension:
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.Migrations;
using System.Data.Entity.Migrations.Model;
using System.Data.Entity.Migrations.Sql;
using System.Data.Entity.SqlServer;
using System.Reflection;

public static void RunMigration(this DbContext context, DbMigration migration)
{
    // The Operations collection built by Up()/Down() is not public, so read it via reflection.
    var prop = migration.GetType().GetProperty("Operations", BindingFlags.NonPublic | BindingFlags.Instance);
    if (prop != null)
    {
        var operations = prop.GetValue(migration) as IEnumerable<MigrationOperation>;
        var generator = new SqlServerMigrationSqlGenerator();
        // "2008" is the SQL Server provider manifest token.
        var statements = generator.Generate(operations, "2008");
        foreach (MigrationStatement item in statements)
            context.Database.ExecuteSqlCommand(item.Sql);
    }
}
As an example, given a migration like the following:
public class CreateIndexOnContactCodeMigration : DbMigration
{
public override void Up()
{
this.CreateIndex("Contacts", "Code");
}
public override void Down()
{
base.Down();
this.DropIndex("Contacts", "Code");
}
}
You could run it using your DbContext:
using (var dbCrm = new CrmDbContext(connectionString))
{
var migration = new CreateIndexOnContactCodeMigration();
migration.Up();
dbCrm.RunMigration(migration);
}
I upgraded EF Core 6 to 7 and get this error at Database.EnsureCreated():
public AppDbContext(DbContextOptions<AppDbContext> options) : base(options)
{
Database.EnsureCreated();
if (Database.GetPendingMigrations().Any())
{
Database.Migrate();
}
}
The database is created, but I get this error. I worry that some migrations ended with an error and are incomplete. I also want this code to always stay here and not be commented out after the first run. What is this error and how can I correct it?
Edits:
I deleted the entire contents of the Migrations folder and the SQLite DB and created an Initial migration, but when I try to apply this migration I get the same error.
I get the error after updating the database with a migration, or at the line if (Database.GetPendingMigrations().Any()). I am using the SQLite provider:
fail: Microsoft.EntityFrameworkCore.Database.Connection[20004]
An error occurred using the connection to database 'main' on server 'C:\Users\...\Documents\Developer\WinForms Blazor2_2022_Book_Secure_Active\WinFormsBlazor\bin\Debug\net7.0-windows\AppData\AppDB.db'.
Microsoft.Data.Sqlite.SqliteException (0x80004005): SQLite Error 1: ''.
I created an AppData\AppDB.db in the root project folder and set this DB to Copy always; then, in OnConfiguring in the DbContext:
if (!optionsBuilder.IsConfigured)
{
optionsBuilder.UseSqlite($"Data Source={Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location))}\\AppData\\AppDB.db"
, options =>
{
options.UseNetTopologySuite();
});
}
I have no database called 'main', and I don't understand why this setup worked in EF Core 6 for a year but no longer works after updating to 7. Any ideas how to solve this problem?
Looks like you're hitting issue #29584. It will hopefully be fixed in version 7.0.3.
For now, you can work around it by manually updating the SQLitePCLRaw dependency:
<ItemGroup>
<PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="7.0.2" />
<PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite.NetTopologySuite" Version="7.0.2" />
<!-- Manually updated to fix NetTopologySuite issues -->
<PackageReference Include="SQLitePCLRaw.bundle_e_sqlite3" Version="2.1.4" />
</ItemGroup>
I found that the problem was using x => { x.UseNetTopologySuite(); } in the UseSqlite call when adding the EF Core context. When UseNetTopologySuite() is there, any access to the database gives the error SQLite Error 1: ''. If I remove UseNetTopologySuite(), I get a different error: SQLite Error 1: 'no such function: initspatialmetadata'.
In EF Core 6.0.12 there is no problem and I can use the spatial Point type without any issues; even after a migration that includes a Point type, several tables and views related to geometry are created automatically with EF Core 6.
I had to give up on spatial types in EF Core 7 so that I could upgrade my EF Core version.
I am using this setup to generate a ddl file:
spring.jpa.properties.javax.persistence.schema-generation.create-source=metadata
spring.jpa.properties.javax.persistence.schema-generation.scripts.action=create
spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=./ddl/schema.sql
The generation is executed via a specific test in Maven validation phase:
@RunWith(SpringRunner.class)
@DataJpaTest
@AutoConfigureTestDatabase(replace = Replace.NONE)
@TestPropertySource(locations = "/ddl.properties")
public class GenerateDDL {

    @Autowired
    private EntityManager em;

    @Test
    public void generateDDL() throws IOException {
        em.close();
        em.getEntityManagerFactory().close();
    }
}
This works fine, with one problem: the generator does not create a new file but always appends its output.
Is there a way or setting to make the generator always create a new file, or to clean up the old one?
Deleting it within the test would delete it after generation. We also need the file to be published to git, which is why it is not generated inside target.
UPDATE
There seems to be no solution within Hibernate itself (until Hibernate 6):
https://hibernate.atlassian.net/browse/HHH-11817
Is there a way to hook into Spring context creation, before the persistence context is created? There I could delete the file.
I was faced with the same problem and found the following way to clean up before schema generation: use a BeanFactoryPostProcessor. A BeanFactoryPostProcessor is called when all bean definitions are loaded but no beans have been instantiated yet.
/**
* Remove auto-generated schema files before JPA generates them.
* Otherwise new content will be appended to the files.
*/
@Configuration
public class SchemaSupport {

    @Bean
    public static BeanFactoryPostProcessor schemaFilesCleanupPostProcessor() {
        return bf -> {
            try {
                Files.deleteIfExists(Path.of("schema-drop-auto.sql"));
                Files.deleteIfExists(Path.of("schema-create-auto.sql"));
            } catch (IOException e) {
                throw new IllegalStateException(e);
            }
        };
    }
}
TL;DR: Assuming your Spring Boot project includes Hibernate >= 5.5.3, to configure the schema file(s) to be overwritten (i.e. to disable appending) add this entry to your application.properties file:
spring.jpa.properties.hibernate.hbm2ddl.schema-generation.script.append=false
Since Hibernate version 5.5.3 there is a new property available that can be set to disable the appending behaviour. From the Hibernate User Guide:
26.15. Automatic schema generation
hibernate.hbm2ddl.schema-generation.script.append (e.g. true (default value) or false)
For cases where the jakarta.persistence.schema-generation.scripts.action value indicates that schema commands should be written to a DDL script file, hibernate.hbm2ddl.schema-generation.script.append specifies if schema commands should be appended to the end of the file rather than written at the beginning of the file. Values are true for appending schema commands to the end of the file, false for writing schema commands at the beginning of the file.
Short example using Gradle and Spring Boot.
Assuming you have a project property defining your environment, and "dev" is the one that creates DDL files for Postgres.
Excerpt from application.yml:
spring:
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    database: POSTGRESQL
    properties:
      javax.persistence.schema-generation.database.action: drop-and-create
      javax.persistence.schema-generation.scripts.action: drop-and-create
      javax.persistence.schema-generation.scripts.create-target: ./sql/create-postgres.sql
      javax.persistence.schema-generation.scripts.create-source: metadata
      javax.persistence.schema-generation.scripts.drop-target: ./sql/drop-postgres.sql
      javax.persistence.schema-generation.scripts.drop-source: metadata
Add some code in bootRun task to delete the files:
bootRun {
def defaultProfile = 'devtest'
def profile = project.hasProperty("env") ? project.getProperty("env") : defaultProfile
if (profile == 'dev') {delete 'sql/create-postgres.sql'; delete 'sql/drop-postgres.sql';}
...}
I only ever use the schema file for testing and solved the problem by specifying the location of the output file as follows:
spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=target/test-classes/schema.sql
This works well for Maven but the target directory could be modified for a Gradle build.
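For a Gradle build, the same idea might point at Gradle's output directory instead; the exact path below is only an illustration:
spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=build/resources/test/schema.sql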
When I run locally I run the commands below manually and then package and publish the app onto my IIS server.
Add-Migration Initial
Update-Database
When I publish to an Azure App Service, will these commands run automatically? If so, how does it know to use a different connection string when I publish to Azure?
I added the connection string for Azure in appsettings.json, but I don't understand how I can tell my controllers etc. to use it when I publish to Azure App Services:
"ConnectionStrings": {
"AzureTestConnection": "Data Source=tcp:xxxxxx-test.database.windows.net,1433;Initial Catalog=xxxxx;User Id=xxx#yyyy.database.windows.net;Password=xxxxxx",
"NWMposBackendContext": "Server=(localdb)\\mssqllocaldb;Database=NWMposBackendContext-573f6261-6657-4916-b5dc-1ebd06f7401b;Trusted_Connection=True;MultipleActiveResultSets=true"
}
I am trying to have three profiles with different connection strings
Local
Published to AzureApp-Test
Published to AzureApp-Prod
When I want to publish to an azure appservice will these commands run automatically?
EF Core does not support automatic migrations, so you need to manually execute Add-Migration or dotnet ef migrations add to create the migration files. You can then explicitly run a command to apply the migrations, or apply them from your code.
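For example, from the command line (the migration name here is a placeholder):
dotnet ef migrations add InitialCreate
dotnet ef database update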
Alternatively, you can add the following code to the Configure method of the Startup.cs file so the migrations are applied at startup:
using (var scope = app.ApplicationServices.GetService<IServiceScopeFactory>().CreateScope())
{
scope.ServiceProvider.GetRequiredService<ApplicationDbContext>().Database.Migrate();
}
I am trying to have three profiles with different connection strings
You can dynamically choose a connection string based on the environment; here are the main steps:
1. Set the ASPNETCORE_ENVIRONMENT value to azure in the web app's Properties > Debug.
2. Follow ASP.NET Core MVC with Entity Framework Core to get started.
3. Set up appsettings.json with your two connection strings:
{
"ConnectionStrings": {
"DefaultConnection": "connectiondefault",
"azure": "connectionazure"
},
"Logging": {
"IncludeScopes": false,
"LogLevel": {
"Default": "Warning"
}
}
}
Note: you can also set the database connection string in the Azure portal instead of here; then you can test it locally and use debugging for troubleshooting.
Also, try testing with a single connection string first to make sure you have no problem connecting to the database.
4. Enable the developer exception page by using app.UseDeveloperExceptionPage() and the app.UseExceptionHandler method in your Startup class so that errors are displayed.
public Startup(IHostingEnvironment env)
{
Configuration = new ConfigurationBuilder()
.SetBasePath(env.ContentRootPath)
.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
.Build();
HostingEnvironment = env;
}
public IConfigurationRoot Configuration { get; }
public IHostingEnvironment HostingEnvironment { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
if (HostingEnvironment.IsDevelopment())
{
services.AddDbContext<SchoolContext>(options =>
options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
}
else
{
services.AddDbContext<SchoolContext>(options =>
options.UseSqlServer(Configuration.GetConnectionString("azure")));
}
services.AddMvc();
}
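A minimal sketch of a matching Configure method, assuming the HostingEnvironment property from the snippet above (the middleware order and the error route are illustrative, not part of the original answer):
// Configure the HTTP request pipeline depending on the environment.
public void Configure(IApplicationBuilder app)
{
    if (HostingEnvironment.IsDevelopment())
    {
        // Show detailed error pages while developing locally.
        app.UseDeveloperExceptionPage();
    }
    else
    {
        // Hypothetical error route; shows a friendly error page on Azure.
        app.UseExceptionHandler("/Home/Error");
    }
    app.UseStaticFiles();
    app.UseMvcWithDefaultRoute();
}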
For more details, you could refer to this thread.
In an existing Grails 3.1.15 application, recently upgraded to Hibernate 5.1, an odd issue with afterInsert (or other) hooks started to appear. After some hours of testing, I could track this down to Hibernate 5.1 + PostgreSQL combination - issue is not reproducible with H2. To reproduce, create a simple application consisting of 2 domain objects - an Audit trail and a User, as shown here:
class Audit {
Date dateCreated
String auditData
}
class User {
String name
String email
def afterInsert() {
new Audit(auditData: "User created: $this").save()
}
}
The code above works OK with Hibernate 4. However, if the application is upgraded to the hibernate5 plugin plus Hibernate 5.1.x (tested with 5.1.0.Final and 5.1.5.Final), the above scenario always leads to a ConcurrentModificationException when you attempt to save a new User instance. You can just use a scaffolded controller to reproduce it. Note this only happens with PostgreSQL as the data source; with H2 it still works OK.
Now, according to GORM Docs (see chapter 8.1.3) one should use a new session when attempting to save other objects in beforeUpdate or afterInsert hooks anyway:
def afterInsert() {
Audit.withNewSession() {
new Audit(auditData: "User created: $this").save()
/* no exception logged, but Audit instance not preserved */
}
}
But this wouldn't really resolve the issue with PSQL. The exception is gone, the User instance is persisted, but the Audit instance is not saved. Again, this code would work OK with H2. To really avoid the issue with PSQL, you would have to manually flush the session in afterInsert:
def afterInsert() {
Audit.withNewSession() { session ->
new Audit(auditData: "User created: $this").save()
session.flush()
/* finally no exceptions, both User and Audit saved */
}
}
Question: is this a bug, or is this expected? I find it a bit suspicious that the manual flush is required in order for the Audit instance to be persisted - and even more so when I see it works without a flush with H2 and only seems to affect PostgreSQL. But I couldn't really find any reports - any pointers are appreciated!
For the sake of completeness, I tested with the following JDBC driver versions for PostgreSQL:
runtime 'org.postgresql:postgresql:9.3-1101-jdbc41'
runtime 'org.postgresql:postgresql:9.4.1208.jre7'
runtime 'org.postgresql:postgresql:42.0.0'
And for the upgrade to Hibernate 5.1, the following dependencies were used:
classpath "org.grails.plugins:hibernate5:5.0.13"
...
compile "org.grails.plugins:hibernate5:5.0.13"
compile "org.hibernate:hibernate-core:5.1.5.Final"
compile "org.hibernate:hibernate-ehcache:5.1.5.Final"
I'm working on a set of constants for my project, and I'd like to use Roslyn to verify some of them at the source code level. To achieve this, I'm loading the entire solution using the following snippet into an AppDomain with IsFullyTrusted == true and IsHomogenous == true, i.e. remoting is enabled, with the x86 platform target:
// load workspace, i.e. solution from Visual Studio
var workspace = Roslyn.Services.Workspace.LoadSolution(solutionFile);
Test runners for NCrunch and NUnit with x86 platform with Roslyn
But while using either NCrunch 1.45 or NUnit 2.6.2's nunit-console-x86.exe with the x86 platform configuration as the test runner, I constantly get the following System.Security.SecurityException:
System.Security.SecurityException : Type System.Runtime.Remoting.ObjRef and the types derived from it (such as System.Runtime.Remoting.ObjRef) are not permitted to be deserialized at this security level.
Server stack trace:
at System.Runtime.Serialization.FormatterServices.CheckTypeSecurity(Type t, TypeFilterLevel securityLevel)
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.CheckSecurity(ParseRecord pr)
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ParseObject(ParseRecord pr)
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Parse(ParseRecord pr)
at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryObjectWithMapTyped record)
at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryHeaderEnum binaryHeaderEnum)
at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run()
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Remoting.Channels.CoreChannel.DeserializeBinaryRequestMessage(String objectUri, Stream inputStream, Boolean bStrictBinding, TypeFilterLevel securityLevel)
at System.Runtime.Remoting.Channels.BinaryServerFormatterSink.ProcessMessage(IServerChannelSinkStack sinkStack, IMessage requestMsg, ITransportHeaders requestHeaders, Stream requestStream, IMessage& responseMsg, ITransportHeaders& responseHeaders, Stream& responseStream)
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at Roslyn.Utilities.RemoteServices.Initialize(Int32 clientProcessId)
at Roslyn.Utilities.RemoteServices.StartRemoteServicesProcess()
at Roslyn.Utilities.RemoteServices.get_Instance()
at Roslyn.Utilities.RemoteServices.CreateInstance[T]()
at Roslyn.Services.Host.TemporaryStorageServiceFactory.CreateService(IWorkspaceServiceProvider workspaceServices)
at Roslyn.Services.Host.WorkspaceServiceProviderFactory.<>c__DisplayClass6.<OnImportsSatisfied>b__1(IWorkspaceServiceProvider wsp)
at Roslyn.Services.Host.WorkspaceServiceProvider.ConstructService(Type type)
at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
at Roslyn.Services.Host.WorkspaceServiceProvider.GetService[TWorkspaceService]()
at Roslyn.Services.SolutionServices..ctor(IWorkspaceServiceProvider workspaceServices)
at Roslyn.Services.Solution..ctor(SolutionId id, String filePath, VersionStamp version, VersionStamp latestProjectVersion, IWorkspaceServiceProvider workspaceServices)
at Roslyn.Services.Host.SolutionFactoryServiceFactory.SolutionFactoryService.CreateSolution(SolutionId id)
at Roslyn.Services.Host.TrackingWorkspace.CreateNewSolution(ISolutionFactoryService solutionFactory, SolutionId id)
at Roslyn.Services.Host.TrackingWorkspace..ctor(IWorkspaceServiceProvider workspaceServiceProvider, Boolean enableBackgroundCompilation, Boolean enableInProgressSolutions)
at Roslyn.Services.Host.HostWorkspace..ctor(IWorkspaceServiceProvider workspaceServiceProvider, Boolean enableBackgroundCompilation, Boolean enableInProgressSolutions, Boolean enableFileTracking)
at Roslyn.Services.Host.LoadedWorkspace..ctor(IWorkspaceServiceProvider workspaceServiceProvider, IDictionary`2 globalProperties, Boolean enableBackgroundCompilation, Boolean enableFileTracking)
at Roslyn.Services.Host.LoadedWorkspace.LoadSolution(String solutionFileName, String configuration, String platform, Boolean enableFileTracking)
at Roslyn.Services.Workspace.LoadSolution(String solutionFileName, String configuration, String platform, Boolean enableFileTracking)
There is a discussion on the NCrunch forum, but I have tried all of the following options without success:
Add [assembly: AllowPartiallyTrustedCallers] to AssemblyInfo.cs
Add [assembly: SecurityRules(SecurityRuleSet.Level1)] to AssemblyInfo.cs
Add <NetFx40_LegacySecurityPolicy enabled="true" /> to app.config
Run VS2012 as Administrator
Decorate both unittests and implementation with [SecuritySafeCritical]
Update: create new AppDomain and provide
PermissionState.Unrestricted, SecurityPermissionFlag.AllFlags and DataProtectionPermissionFlags.AllFlags
Add Host Evidence: SecurityZone.MyComputer, System.Security.Policy.Hash and System.Security.Policy.StrongName
Add all assemblies (both mine and Roslyn CTP) to fullTrustAssemblies while creating of AppDomain
Update #2
This exception happens only while I'm running tests under the x86 configuration; after I switched to the x64 platform configuration, everything seems to work fine.
Question
Are there any other attributes or configuration changes to app.config or AppDomain that could help to enable deserialization in .NET Framework remoting for System.Runtime.Remoting.ObjRef while running under x86 configuration?
Temporary solution
Switch to the x64 build configuration for the unit test project(s) only.
Source code
The whole source code is available on GitHub; to reproduce the error, run the following unit test using NCrunch: IntrospectionTests.Introspection_SearchForComplexityGt10_ApprovedList
Discussion at NCrunch forum
Additional information
Also, I noticed:
A lot of instances of Roslyn.Services.dll keep hanging around in the background after all tests have completed.
NCrunch lacks the host evidences System.Security.Policy.Hash and System.Security.Policy.StrongName for the test runner assembly.
The ReSharper test runner (MSIL, which should run as x64 internally) and the NUnit 2.6.2 nunit-console.exe test runner work fine, so it looks like a Roslyn configuration/remoting/security issue.
It looks like NCrunch is executing the tests in partial trust, whereas ReSharper is running them in full trust.
Roslyn has not been tested in partial trust scenarios. There are likely to be accesses to APIs that require full trust.
I haven't used NCrunch, but maybe there is a way to configure it to run the tests in full trust?
I want to add something. After upgrading an NUnit instance that we manage to both 2.6.2 and 2.6.3, my team ran into similar issues with the exact System.Security exception that Akim was seeing.
We were creating an IpcChannel in some custom NUnit logic of ours that wasn't created with the right trust settings, so we had to change something that looked like this:
IpcChannel channel = new IpcChannel(string.Format("localhost:{0}", portNum));
to this:
BinaryServerFormatterSinkProvider serverProvider = new BinaryServerFormatterSinkProvider();
serverProvider.TypeFilterLevel = System.Runtime.Serialization.Formatters.TypeFilterLevel.Full;
BinaryClientFormatterSinkProvider clientProvider = new BinaryClientFormatterSinkProvider();
var properties = new System.Collections.Hashtable();
properties["name"] = "ipc";
properties["priority"] = "20";
properties["portName"] = string.Format("localhost:{0}", portNum);
IpcChannel channel = new IpcChannel(properties, clientProvider, serverProvider);
This is just a quick fix I noticed and figured I would pass along to anybody seeing something similar who can't simply switch their platform settings. To be fair, it took me about four hours to figure out, so I didn't want the knowledge to go to waste.