I have run the complete source for Getting Started - Creating a Batch Service.
Knowing that the sample uses the in-memory database provided by @EnableBatchProcessing, is the DB query result expected, or will it only be available if the data is persisted permanently?
After adding some debug lines, it seems that the DB query is executed before the job runs. Is this the expected behavior?
Is there anything I'm missing here?
Thanks
Alex
You aren't missing anything. This is related to issue number 8 for that guide (https://github.com/spring-guides/gs-batch-processing/issues/8). I just created a pull request to address this issue. You can view the PR here (https://github.com/spring-guides/gs-batch-processing/pull/9) until it's merged.
UPDATE
The PR has been merged and the guide has been updated. The new version should no longer have this issue.
All, when running a build pipeline using Azure DevOps with an ARM template, the process consistently fails when trying to deploy a dataset, or a reference to a dataset, with this error:
ARM Template deployment: Resource Group scope (AzureResourceManagerTemplateDeployment)
BadRequest: The document creation or update failed because of invalid reference 'dataset_1'.
I've tried renaming the dataset and also recreating it to see if that would help.
I then deleted the dataset_1.json file from the repo and still get the same message, so I think it's some reference to this dataset rather than the dataset itself. I've looked through all the other files for references to it, but they all look fine.
Any ideas on how to troubleshoot this?
thanks
Try this:
It looks like you have created the 'myTestLinkedService' linked service and tested the connection, but haven't published it yet, and you are now trying to reference that linked service in the new dataset you are creating via PowerShell.
In order to reference any Data Factory entity from PowerShell, please make sure those entities are published first. Please try publishing the linked service from the portal first, and then run your PowerShell script to create the new dataset/activity.
I think I found the issue. When I went into the detailed logs, I found that in addition to this error there was an error message about an invalid SQL connection string, and I thought it might be related, since the dataset in question uses an Azure SQL Database linked service.
I adjusted the connection string and this seems to have solved the issue.
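For anyone searching later: the connection string sits inside the linked service JSON in the repo, not in the dataset itself, which would explain why the deployment surfaces the much less helpful "invalid reference" error on the dataset. A rough sketch of where to look (names and values here are illustrative):

{
  "name": "AzureSqlLinkedService",
  "properties": {
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": "Server=tcp:myserver.database.windows.net,1433;Database=mydb;User ID=myuser;Password=***"
    }
  }
}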
I have a problem: somehow my Code First Migrations are no longer executed.
Everything worked perfectly before, but now it no longer works. I deleted the whole database and tried to redeploy it, but the database is simply not updated anymore.
Any help?
(I have the checkbox in the publishing wizard checked to deploy Code First Migrations.)
OK, I found the solution, so I will post it here in case anyone else gets stuck.
Code First Migrations only get executed after a request is made. However, my requests were failing because the database structure was not correct, so I could not make the request that would execute the migrations, and therefore the database was never updated. I created a simple dummy request that just returns an OK status and called it. This triggered the migration, and now everything works.
Weird.
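A minimal sketch of such a dummy endpoint, assuming ASP.NET Web API 2 (the controller name is made up; it is touching the context that actually triggers the migration):

// Requires System.Web.Http (Web API 2) and Entity Framework.
public class PingController : ApiController
{
    public IHttpActionResult Get()
    {
        // Any query against the context forces EF to apply pending migrations.
        using (var db = new ApplicationDbContext())
        {
            db.Database.ExecuteSqlCommand("select 1");
        }
        return Ok();
    }
}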
It is not working because you might have created/selected a different connection in the deploy wizard. This is confirmed by the deployed connection strings, where you can see two connection strings.
The second connection string is also referenced in the EF section.
In the context, however, you have used the first connection string: public ApplicationDbContext() : base("DefaultConnection", throwIfV1Schema: false) {}
Changing the name here will solve your issue.
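For example, if the deployed (second) connection string were named "DeployConnection" (a made-up name; use whatever appears in your deployed web.config), the constructor would become:

public ApplicationDbContext() : base("DeployConnection", throwIfV1Schema: false) {}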
I use this to force the migration to run.
// Any query against the context forces EF to apply pending migrations.
using (var db = new MyDbContext())
{
    db.Database.ExecuteSqlCommand("select 1");
}
I'm trying to submit a Data Factory pipeline to Azure Batch compute, with a linked service that I have used before and that works fine.
However, the pipeline is failing with the following message:
Azure Batch entity not found. Code: 'JobNotFound' Message: 'The specified job does not exist. RequestId:d8f2b8d6-b34b-4823-9a06-9037ff549185 Time:2016-05-26T10:21:43.1480686Z'
The two sentences seem inconsistent: one states that the Batch entity wasn't found, though the code says 'JobNotFound', which refers to an Azure Batch job.
Would appreciate help.
I fixed the problem by deleting the batch account, and creating a new one with a different name.
Could not figure out what was causing the issue.
I have two folders for my migrations (AuthContext and UserProfileContext); each has its own migrations and some custom SQL to run afterwards for data migrations and whatnot.
This works fine when using the Package Manager Console. I:
Restore from production
Run Update-Database -ConfigurationTypeName Migrations.Auth.Configuration
Run Update-Database -ConfigurationTypeName Migrations.UserProfile.Configuration
Then everything is very happy in the new database: migrations executed, data shuffled where it needs to be.
I tried to test out the migrations on publish piece by:
Restore production on dev database
Single connection string (all contexts use the same) pointed to dev database
Publish to Azure web site
Checked the box for Apply Code First Migrations
Selected that single connection string
Okay it published fine; however, when I went to look at the database, nothing happened! It did not create the necessary tables, columns, or data moves.
TLDR; Code first migrations are not running after publish to Azure
Update 1
I've tried every combination of the above: there is only one connection string, so I'm guessing that's not the issue, and Execute Code First Migrations is checked.
On publish the API runs, but no database changes are made. I thought perhaps I needed to hit it first, but I just get random errors when I try to use the API (which now of course relies on the new database setup), and the database is still not changed.
I've seen a couple references out there about needing to add something to my Startup class but I'm not sure how to proceed.
Update 2
I solved one issue by adding "Persist Security Info=True" to my connection string. Now it actually connects to the database and calls my API; however, no migrations are running.
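For reference, that flag just gets appended to the connection string; roughly (server, database, and credentials here are placeholders):

Server=tcp:myserver.database.windows.net,1433;Initial Catalog=mydb;User ID=myuser;Password=***;Persist Security Info=True;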
I attached a debugger to the Azure dev environment and stepped through... on my first database call it steps into the Configuration class for the migration in question, then barfs, and I can't track down the error.
public Configuration()
{
    AutomaticMigrationsEnabled = false;
    // Each context keeps its migrations in its own subfolder.
    MigrationsDirectory = @"Migrations\Auth";
    ContextKey = "AuthContext";
}
Update 3
OK, I dug down, and the first time it hits the database we're erroring. Yes, this makes sense, since the model has changed, but I have migrations in place, enabled, and checked! Again, it works fine when running Update-Database from the Package Manager Console, but not when using Execute Code First Migrations during publish to Azure:
The model backing the 'AuthContext' context has changed since the
database was created. Consider using Code First Migrations to update
the database (http://go.microsoft.com/fwlink/?LinkId=238269).
Update 4
OK, I found the root issue here. VS sets up the additional web.config databaseInitializer attribute for only one of my database contexts, and the one not mentioned is in fact hit first by my app.
So now I have to figure out how to get it to include multiple contexts, or combine all of my stuff into a single context.
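For context, the publish step injects an initializer section into web.config that looks roughly like this (type names are illustrative and the exact shape may vary by VS version); in my case only one <context> element was generated:

<entityFramework>
  <contexts>
    <context type="MyApp.Models.AuthContext, MyApp">
      <databaseInitializer type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[MyApp.Models.AuthContext, MyApp], [MyApp.Migrations.Auth.Configuration, MyApp]], EntityFramework">
        <parameters>
          <parameter value="DefaultConnection_DatabasePublish" />
        </parameters>
      </databaseInitializer>
    </context>
  </contexts>
</entityFramework>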
The answer to this post is not very detailed.
This article explains what I had to do to fix a similar problem to this:
https://blogs.msdn.microsoft.com/webdev/2014/04/08/ef-code-first-migrations-deployment-to-an-azure-cloud-service/
I'll roughly describe the steps I had to take below:
Step 1
Add your connection strings to your DbContexts; in my situation, they were both the same.
Step 2
Add this to your web.config
<appSettings>
  <add key="MigrateDatabaseToLatestVersion" value="true"/>
</appSettings>
Step 3
And add this to the bottom of your Global.asax.cs / Startup.cs (OWIN startup):
// Requires System.Data.Entity.Migrations; gate this on the
// "MigrateDatabaseToLatestVersion" appSetting from Step 2 if desired.
var configuration = new Migrations.Configuration();
var migrator = new DbMigrator(configuration);
migrator.Update();
Solved! To summarize the solution for posterity:
Enabling Code First Migrations only enables them for one base connection string per checkbox checked, regardless of how many contexts have migrations against that base connection string. So in my case I broke the two contexts in question out onto two different connection strings.
Then I was hitting other errors, and identified that if you're changing the base connection string of the model backing ASP.NET Identity, you need to include (for a one-time publish) the additional flag: base("AuthContext", throwIfV1Schema: false)
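A rough sketch of the end result, with made-up connection string names (AuthConnection / ProfileConnection) standing in for the two web.config entries:

// Identity-backed context; throwIfV1Schema: false is needed when repointing its connection string.
public class AuthContext : IdentityDbContext<ApplicationUser>
{
    public AuthContext() : base("AuthConnection", throwIfV1Schema: false) { }
}

// The second context gets its own connection string, so publish wires up its migrations separately.
public class UserProfileContext : DbContext
{
    public UserProfileContext() : base("ProfileConnection") { }
}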
For anyone who has this issue and may have overlooked the following: be sure to check that you have correctly set the connection string in your Web.config file and/or Application settings on Azure. This includes DefaultConnection and DefaultConnection_DatabasePublish.
In our case the former was correct but the latter contained the wrong database instance because it had been carried over from an App Service clone operation. Therefore the wrong database was being migrated.
I have an application developed in Scala Play 2.0.
It worked successfully locally, but it failed when deployed to Heroku.
The reason for the failure is that locally I was using an H2 database and PostgreSQL on Heroku, so I had to change one of the data types from "clob" to "text".
The problem now is that the database on Heroku is in an "inconsistent state", according to the Play 2.0 documentation.
In DEV mode (locally), you can just click on "Mark it as resolved" when the HTML error page appears.
How do I "mark it as resolved" in the Heroku PROD environment?
http://www.playframework.com/documentation/2.1.1/Evolutions
PS: because it was a new application, I just deleted the database and restarted.
However, here I am asking for the proper way to handle evolutions in the PROD environment;
that is, the "Mark it as resolved" step for PROD is not explained here: http://www.playframework.com/documentation/2.1.1/Evolutions
Although I couldn't find a way to do it via the play command, you can do it by editing the database directly.
Imagine you're trying to go from 5.sql to 6.sql. Here's what you do:
Figure out and fix the problem(s) that caused the database to enter an inconsistent state (i.e. manually apply your !Ups and fix all the problems with them).
Manually apply your !Downs so that the database is in the state it was after 5.sql was applied.
Go into your database, find the table called play_evolutions, and look at the row with id 6. It should say something like applying ups in the state column and have the error message in the last_problem column.
Delete the row with id 6 (a one-line SQL example follows these steps). This will make Play think you are in the state you were in with 5.sql.
Now you should be able to run play -DapplyEvolutions.default=true start to evolve to 6.sql.
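The delete in the fourth step is a single statement against the production database (assuming evolution 6 is the failing one):

DELETE FROM play_evolutions WHERE id = 6;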
"Inconsistent state" just means that the evolutions could not be applied and thus the application is blocked. Update your evolution scripts and re-deploy.