play framework 2.0 evolutions, how to mark an inconsistent state as resolved in PROD - scala

I have an application developed in Scala with Play 2.0.
It worked successfully locally, but it failed when deployed to Heroku.
The deployment failed because locally I was using an H2 database,
while Heroku uses PostgreSQL, so I had to change one of the data types from "clob" to "text".
The problem now is that the database on Heroku is in an "inconsistent state", according to the Play 2.0 documentation.
In DEV mode (locally), you can just click "Mark it resolved" when the HTML error page appears.
How do you "mark it as resolved" in the Heroku PROD environment?
http://www.playframework.com/documentation/2.1.1/Evolutions
PS: Because it was a new application, I just deleted the database and restarted.
However, here I am asking for the proper way to handle evolutions in the PROD environment;
that is, the "Mark it as resolved" step for PROD is not explained here: http://www.playframework.com/documentation/2.1.1/Evolutions

Although I couldn't find a way to do it via the play command, you can do it by editing the database directly.
Imagine you're trying to go from 5.sql to 6.sql. Here's what you do:
Figure out and fix the problem(s) that caused the database to enter an inconsistent state (i.e. manually apply your !Ups and fix all the problems with them).
Manually apply your !Downs so that the database is in the state it was after 5.sql was applied.
Go into your database, find the table called play_evolutions, and look at the row with id 6. It should say something like applying_ups in the state column and have the error message in the last_problem column.
Delete the row with id 6. This will make Play think you are in the state you were in after 5.sql was applied (see the SQL sketch below).
Now you should be able to run play -DapplyEvolutions.default=true start to evolve to 6.sql.
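To make the play_evolutions surgery concrete, here is a minimal SQL sketch of the inspect-and-delete part, assuming (as above) that evolution 6 is the one that failed; on Heroku you could run it via heroku pg:psql:
-- Inspect the failed evolution; state will be something like 'applying_ups'
SELECT id, state, last_problem FROM play_evolutions WHERE id = 6;
-- After manually applying the fixed !Ups (or the !Downs), delete the row
-- so Play believes the schema is still at revision 5
DELETE FROM play_evolutions WHERE id = 6;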

Inconsistent state just means that the evolutions could not be applied, and thus the application is blocked. Update your evolution scripts and re-deploy.
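For the asker's clob-to-text change, the edited evolution might look something like this (the table and column names here are invented for illustration):
# --- !Ups
create table user_data (
  id bigint not null,
  payload text -- was "clob", which H2 accepts but PostgreSQL rejects
);
# --- !Downs
drop table user_data;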

Related

PostgreSQL "forgets" default schema when closing data source connection

I am running into a very strange issue with Spring Boot and Spring Data: after I manually close a connection, the formerly working application seems to "forget" which schema it's using and complains about missing relations.
Here's the code snippet in question:
try (Connection connection = this.dataSource.getConnection()) {
    ScriptUtils.executeSqlScript(connection, new ClassPathResource("/script.sql"));
}
This code works fine, but after it executes, the application immediately starts throwing errors like the following:
org.postgresql.util.PSQLException: ERROR: relation "some_table" does not exist
Prior to executing the code above, the application works fine (including referencing the table it later complains about). If I remove the try-with-resources block and do not close the Connection, everything also works fine, except that I've now created a resource leak. I have also tried explicitly setting the default schema (public) in the following ways:
In the JDBC URL with the currentSchema parameter
With the spring.datasource.hikari.schema parameter
With the spring.datasource.jpa.properties.hibernate.default_schema property
The last does alleviate the issue with respect to Hibernate managed classes, but the issue persists with native queries. I could, of course, make the schema explicit in those queries, but that doesn't seem to address the root issue. Why would closing a connection trigger this behavior?
My environment:
Spring Boot 2.5.1
PostgreSQL 12.7
Thanks to several users above who immediately saw what I did not. The script, adapted from an older pg_dump run, was indeed mucking with the search_path:
SELECT pg_catalog.set_config('search_path', '', false);
Removing that line, and some other unnecessary ones, resolved the problem. Big duh on my part. (This also explains why closing the connection mattered: the DataSource is a pool, so close() merely returned the connection, with its emptied search_path, to the pool for later reuse.)
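If editing the generated script is not an option, one alternative (a sketch, mirroring the set_config line above) is to restore the default path as the script's final statement:
-- Put the search_path back so pooled connections are not left poisoned
SELECT pg_catalog.set_config('search_path', 'public', false);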

Getting EF Context has Changed Error After Updating Site

I am having an issue where I am getting the error "context has changed since the database was created", even though the migrations history table contains the latest migration. This is a test site, so I updated the database using a SQL script I generated by running "Update-Database -Script -SourceMigration: $InitialDatabase". I can't just delete the whole database and recreate it. Has anyone run into a similar issue? I am currently using EF 6.1.3.
Note: I used code-first for this, so it was not an existing database I am adding to.
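For reference, the EF6 history table can be inspected with a plain query; something like this shows which migrations the database believes it has (assuming the default dbo schema):
SELECT MigrationId, ContextKey, ProductVersion
FROM dbo.__MigrationHistory
ORDER BY MigrationId;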
The issue was that a new directory had been created for the site due to the domain name being changed. I knew the domain name had changed, but didn't realize the physical directory had changed too. It was such a slight change that I missed it when checking the application paths (an "s" was removed). So the problem was caused by PEBKAC (problem exists between keyboard and chair).
Try to set your Initializer to null in the constructor of your DbContext class.
Like this:
Database.SetInitializer<YourDbContext>(null);
ScottGu's blog explains why this happens:
http://weblogs.asp.net/scottgu/using-ef-code-first-with-an-existing-database
Have you tried using this command instead of executing a script?
Update-Database -Force
If you have enabled automatic migrations, this should pick up your code-first changes and deploy them. The -Force flag allows destructive changes, such as dropping columns that contain data.

EF Code first migrations not running after deploy to Azure

I have two folders for my migrations (AuthContext and UserProfileContext), each has their own migration and some custom sql to run afterwards for data migrations and whatnot.
This works fine when using the Package Manager Console. I:
Restore from production
Run Update-Database -ConfigurationTypeName Migrations.Auth.Configuration
Run Update-Database -ConfigurationTypeName Migrations.UserProfile.Configuration
Then everything is very happy in the new database: migrations executed, data shuffled where it needs to be.
I tried to test out the migrations on publish piece by:
Restore production on dev database
Single connection string (all contexts use the same) pointed to dev database
Publish to azure web site
Checked the box for Apply Code First Migrations
Selected that single connection string
Okay it published fine; however, when I went to look at the database, nothing happened! It did not create the necessary tables, columns, or data moves.
TLDR; Code first migrations are not running after publish to Azure
Update 1
I've tried every combination of the settings below. There is only one connection string, so I'm guessing that's not the issue, and Execute Code First Migrations is checked.
On publish the api runs but no database changes are made. I thought perhaps I needed to hit it first but I just get random errors when I try to use the api (which now of course relies on the new database setup), and the database is still not changed.
I've seen a couple references out there about needing to add something to my Startup class but I'm not sure how to proceed.
Update 2
I solved one issue by adding "Persist Security Info=True" to my connection string. Now it actually connects to the database and calls my API; however, no migrations are running.
I attached a debugger to the Azure dev environment and stepped through... on my first database call it steps into the Configuration class for the migration in question, then barfs, and I can't track down the error.
public Configuration()
{
    AutomaticMigrationsEnabled = false;
    MigrationsDirectory = @"Migrations\Auth";
    ContextKey = "AuthContext";
}
Update 3
Okay, I dug down, and the first time it hits the database we're erroring. Yes, this makes sense since the model has changed, but I have migrations in place, enabled, and checked! Again, it works fine when running "Update-Database" from the Package Manager Console, but not when using Execute Code First Migrations during publish to Azure:
The model backing the 'AuthContext' context has changed since the
database was created. Consider using Code First Migrations to update
the database (http://go.microsoft.com/fwlink/?LinkId=238269).
Update 4
Okay, I found the root issue here. VS sets up the additional web.config attribute for the databaseInitializer on only one of my database contexts, and the context not mentioned is in fact hit first by my app.
So now I have to figure out how to get it to include multiple contexts, or, combine all of my stuff into a single context.
The answer to this post is not very detailed.
This article explains what I had to do to fix a similar problem to this:
https://blogs.msdn.microsoft.com/webdev/2014/04/08/ef-code-first-migrations-deployment-to-an-azure-cloud-service/
I'll roughly describe the steps I had to take below:
Step 1
Add your connection strings to your DbContexts; in my situation, they were both the same.
Step 2
Add this to your web.config
<appSettings>
  <add key="MigrateDatabaseToLatestVersion" value="true"/>
</appSettings>
Step 3
And add this to the bottom of your global.asax.cs / Startup.cs (OWIN startup):
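// DbMigrator comes from the System.Data.Entity.Migrations namespace;
// Migrations.Configuration is the project's own migration configuration class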
var configuration = new Migrations.Configuration();
var migrator = new DbMigrator(configuration);
migrator.Update();
Solved! To summarize the solution for posterity:
Enable Code First Migrations only enables them for one base connection string per checkbox checked, regardless of how many contexts have migrations against that base connection string. So in my case I broke the two contexts in question out into two different connection strings.
Then I was hitting other errors, and identified that if you're changing the base connection string for the model backing ASP.NET Identity, you need to include (for a one-time publish) the additional flag: base("AuthContext", throwIfV1Schema: false)
For anyone who has this issue and may have overlooked the following: be sure to check that you have correctly set the connection string in your Web.config file and/or Application settings on Azure. This includes DefaultConnection and DefaultConnection_DatabasePublish.
In our case the former was correct but the latter contained the wrong database instance because it had been carried over from an App Service clone operation. Therefore the wrong database was being migrated.

Can I access a migrated EF database with "old" code?

If I have an EF6 Code First environment in need of a schema change, is it possible to configure it in such a way that I can apply the migration (via Update-Database -Script) before the code gets deployed?
I just ran a simple test with a console app building a DB with migration "Initial", taking a copy of the application at this point. I then modified the schema by adding a new property to my entity and added a "V2" migration and ran Update-Database. When trying to run the "old" code against this migrated DB, I get an InvalidOperationException "The model backing the context has changed since the database was created."
Is the continuous-delivery style of operation, where one server runs new application code while others run old versions, possible with EF Code First?
Can you modify the old code?
If yes, disabling schema checking in the old code is an option (e.g. Database.SetInitializer<YourDbContext>(null); as shown in an answer above).
By the way: are you sure the added column is nullable or has a default value?
To avoid surprises, you can also use a connection string that has read-only rights on the schema, to avoid data corruption.

Configuring spring-xd to use oracle as job repository

I want to run Spring XD with the Oracle (11g) database I already have in my environment. Currently my first concern is the jobs UI (my database has existing data from job executions that were performed by Spring Batch, and I simply want to display the details of those executions).
I'm using spring-xd-1.0.0.M5. I followed the instructions in the reference guide and changed application.yml to contain the following:
spring:
  datasource:
    url: jdbc:oracle:oci:MY_USERNAME/MYPWD@//orarmydomain.com:1521/myservice
    username: MY_USERNAME
    password: MYPWD
    driverClassName: oracle.jdbc.OracleDriver
  profiles:
    active: default,oracle
I also modified batch-jdbc.properties to have a database configuration similar to the above.
Yet, when I start xd-singlenode.bat (or xd-admin.bat), it seems to ignore my Oracle configuration and still uses the default HSQLDB.
What am I doing wrong?
Thanks
The likely reason is that we did not upgrade the Windows .bat scripts to take advantage of the property overriding via xd-config.yml. If you go into the Unix script for xd-singlenode, you will see that when java is invoked there is the option
-Dspring.config.location=$XD_CONFIG
For now, you can hardcode the location of that file; use file: as the prefix.
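For example, the java invocation in xd-singlenode.bat could be given the equivalent option along these lines (the path is purely illustrative; point it at your own install):
-Dspring.config.location=file:C:\spring-xd\xd\config\xd-config.yml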
Also, the UI right now is very primitive; you will not be able to see many details about the job execution. There are, however, many job-related commands you can execute in the shell, and only one gap regarding step execution information compared to what is available via spring-batch-admin.
The issue to watch for this is https://jira.springsource.org/browse/XD-1209, and it is scheduled for the next milestone release.
Let me know how it goes, thanks!
Cheers,
Mark