EF db migrations with rolling/canary deployments

I have a .NET application that uses EF Core as its ORM, and all database modifications are done through EF Core migrations.
The application is hosted in the cloud on multiple VMs in production. After all testing is done, a rolling deployment is initiated: one VM at a time is taken out, the new application version is deployed to it, and so on.
The database itself is hosted on a managed DB service (such as AWS RDS or Azure SQL) with a multi-AZ/replication setup.
The main goal is to ensure zero downtime, and to be able to roll back if any issue occurs (or to manually distribute canary-weighted requests accordingly).
The main issue is that once the new application is deployed to one instance and that instance receives a connection, the database gets migrated to the new version, causing requests on all the other instances to fail (the EF model on the old instances no longer matches the migrated database).
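For context, a minimal sketch of the wiring that typically causes this, assuming migrations are applied at startup with Database.Migrate() (AppDbContext and the "ApplyMigrations" configuration key are hypothetical). Gating the call behind a flag lets a dedicated release step apply the migration instead of whichever canary instance happens to start first:

    // Program.cs sketch (ASP.NET Core minimal hosting), not the poster's actual code.
    // AppDbContext and the "ApplyMigrations" configuration key are hypothetical.
    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddDbContext<AppDbContext>(options =>
        options.UseSqlServer(builder.Configuration.GetConnectionString("Default")));

    var app = builder.Build();

    // Calling Migrate() here is what couples "first instance to start" with
    // "shared schema gets upgraded". Guarding it with a flag means rolling/canary
    // instances skip it, and a dedicated release step applies the migration instead.
    if (app.Configuration.GetValue<bool>("ApplyMigrations"))
    {
        using var scope = app.Services.CreateScope();
        var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
        db.Database.Migrate();
    }

    app.Run();

Another common variant is to skip startup migration entirely and generate an idempotent SQL script with dotnet ef migrations script --idempotent, applying it as its own deployment stage before the rolling update begins.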

Related

Postgresql server not shown on azure application map

I'm trying to use Application Insights to monitor an application composed of different microservices in an AKS (Azure Kubernetes Service) cluster.
As AKS does not support the auto-instrumentation scenario, I had to instrument my js/.net services myself with the dedicated libraries.
And this works fine: I can see my different microservices on the application map.
But I can't see my database server in the dependencies like in the documentation's example, even though those dependencies should be automatically collected, as stated in the dependencies documentation.
I'm using Azure Database for PostgreSQL - Flexible Server. Is this normal? Is it due to the fact that I am using PostgreSQL instead of SQL Server? Is it related to the fact that I'm using Npgsql instead of SqlClient?
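One workaround, if automatic collection really doesn't cover Npgsql in this setup, is to record the database call as a dependency yourself. A sketch, assuming the Application Insights SDK is already registered and a TelemetryClient is injected (target name, connection string, and query are placeholders):

    // Sketch: manually recording an Npgsql call as a dependency, assuming the
    // Application Insights SDK is registered and a TelemetryClient is injected.
    // The target name, connection string, and query are placeholders.
    public async Task<long> CountUsersAsync(TelemetryClient telemetry, string connectionString)
    {
        var start = DateTimeOffset.UtcNow;
        var timer = System.Diagnostics.Stopwatch.StartNew();
        var success = true;
        try
        {
            await using var conn = new NpgsqlConnection(connectionString);
            await conn.OpenAsync();
            await using var cmd = new NpgsqlCommand("SELECT COUNT(*) FROM users", conn);
            return (long)(await cmd.ExecuteScalarAsync() ?? 0L);
        }
        catch
        {
            success = false;
            throw;
        }
        finally
        {
            timer.Stop();
            // This shows up as a dependency node on the application map.
            telemetry.TrackDependency("postgresql", "my-postgres-server",
                "SELECT COUNT(*) FROM users", start, timer.Elapsed, success);
        }
    }

Newer setups usually go through OpenTelemetry instead, where Npgsql provides its own tracing instrumentation (the Npgsql.OpenTelemetry package).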

Using Flyway with Redshift Serverless

I'm currently using a provisioned Redshift cluster and managing database migrations with Flyway. I'm thinking of migrating to Redshift Serverless, but I'm not sure if I can still use Flyway to manage the migrations.
I've already added a rule to my security group to allow my IP (I'm trying to run the Flyway migrations locally), and I also have the Publicly accessible parameter set to On, following the steps in this document and using the endpoint given by the workspace.
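For illustration, a minimal flyway.conf sketch of what pointing Flyway at a Redshift Serverless workgroup endpoint would look like (host, database name, and credentials are placeholders):

    # flyway.conf sketch; host, database, and credentials are placeholders.
    # The endpoint comes from the Redshift Serverless workgroup.
    flyway.url=jdbc:redshift://my-workgroup.123456789012.us-east-1.redshift-serverless.amazonaws.com:5439/dev
    flyway.user=admin
    flyway.password=********
    flyway.locations=filesystem:./sql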

Best practice for running database schema migrations

Build servers are generally detached from the VPC running the instance, whether that's Cloud Build on GCP or one of the many CI tools out there (CircleCI, Codeship, etc.), which makes running DB schema updates particularly challenging.
So, it makes me wonder.... When's the best place to run database schema migrations?
From my perspective, there are four opportunities to automatically run schema migrations or seeds within a CD pipeline:
1. Within the build phase
2. On instance startup
3. Via a warm-up script (synchronously or asynchronously)
4. Via an endpoint, either automatically or manually called post deployment
The primary issue with option 1 is security. With Google Cloud SQL/Google Cloud Build, it's been possible for me to run (with much struggle) schema migrations/seeds via a build step and a SQL proxy. To be honest, it was a total ball-ache to set up... but it works.
My latest project is using MongoDB, for which I've wired in migrate-mongo in case I ever need to move some data around/seed some data. Unfortunately there is no equivalent of the SQL proxy to securely connect MongoDB (Atlas) to Cloud Build (or any other CI tool), as it doesn't run in the instance's VPC. Thus, it's a dead end in my eyes.
I'm therefore warming (no pun intended) to the warm-up script concept.
With App Engine, the warm-up script is called before traffic is served, on a host that already has access via the VPC. The warm-up script is meant to be used for opening up database connections to speed up connectivity, but assuming there are no outstanding migrations, it'd be doing exactly that - a very lightweight select statement.
Can anyone think of any issues with this approach?
Option 4 is also suitable (it's essentially the same thing). There may be a bit more protection required on these endpoints though - especially if a "down" migration script exists(!)
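Translating option 4 into the EF Core/ASP.NET Core stack from the main question, a minimal sketch of a protected, manually called migration endpoint (the route, the header name, and AppDbContext are all hypothetical):

    // Sketch of option 4 in ASP.NET Core: an explicit, protected endpoint that
    // applies pending EF Core migrations on demand. The route, the header name,
    // and AppDbContext are hypothetical.
    app.MapPost("/ops/migrate", async (HttpRequest request, AppDbContext db, IConfiguration config) =>
    {
        if (request.Headers["X-Migration-Key"] != config["MigrationKey"])
            return Results.Unauthorized();

        var pending = await db.Database.GetPendingMigrationsAsync();
        await db.Database.MigrateAsync();
        return Results.Ok(new { applied = pending });
    });

As noted above, anything like this needs real protection, ideally being callable only from the deployment pipeline.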
It's hard to answer because it's an opinion-based question!
Here are my thoughts on your propositions.
1. It's the best solution for me. Of course, you have to take care to only add fields and not to delete or remove existing schema fields. That way you can update your schema during the build phase and then deploy; the new deployment uses the new schema and the obsolete fields are simply no longer used. On the next schema update you can delete those obsolete fields and clean up your schema (there's a small additive-migration sketch after these notes).
2. This solution will hurt your cold-start performance. It's not a suitable solution.
3. Same remark as before, with the added drawback of being tied to App Engine's infrastructure and way of working.
4. No real advantage compared to solution 1.
About security, Cloud Build will soon be able to work with worker pools. It's still in development, but I expect an alpha release of it in the coming months.
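To make the add-only idea from point 1 concrete in EF Core terms (table and column names are hypothetical), a migration that only adds a nullable column, so instances still running the old code keep working against the same database:

    using Microsoft.EntityFrameworkCore.Migrations;

    // Sketch of an additive-only EF Core migration (table and column names are
    // hypothetical): the new column is nullable and ignored by the old code, so
    // both application versions can share the database during the rollout.
    public partial class AddCustomerEmail : Migration
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.AddColumn<string>(
                name: "Email",
                table: "Customers",
                nullable: true);
        }

        protected override void Down(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.DropColumn(name: "Email", table: "Customers");
        }
    }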

Can an Entity Framework migration be run without performing the Seed

I am using Entity Framework (version 6.1.3) - Code First - for my application.
The application is hosted on the Azure platform, and uses Azure SQL Databases.
I have a database instance in two different regions, and I am using the Sync Preview to keep the data in sync.
Since the sync takes care of ensuring the data is kept synchronised, when I run a migration, I'd like the schema changes and seed to happen in only one database, and the schema changes only (with no seed) in the other.
Is this possible with the EF tooling, or do I need to move the seeding out to a manual script?
This is possible by spreading out your deployment:
If worker role 1 updates your database and runs the seed, then after the sync, when worker role 2 connects to your other database, it will see that the migration has already taken place.
One way to achieve this is to disable automatic migrations on all but one worker role. The problem is that you potentially have to deal with downtime/issues while part of your application landscape is updated/migrated but your database is still syncing.
(A worker role can also be replaced by a WebJob, a website, etc.)
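A sketch of what the "seed only in one place" part could look like with EF 6.x, assuming the usual DbMigrationsConfiguration class and a hypothetical "RunSeed" app setting; the schema migration still applies wherever it runs, but only the designated deployment executes the seed:

    using System.Configuration;
    using System.Data.Entity.Migrations;

    // Sketch only: MyDbContext, Role, and the "RunSeed" app setting are hypothetical.
    internal sealed class Configuration : DbMigrationsConfiguration<MyDbContext>
    {
        public Configuration()
        {
            // Apply migrations explicitly rather than automatically.
            AutomaticMigrationsEnabled = false;
        }

        protected override void Seed(MyDbContext context)
        {
            // Only the deployment designated as the "seeder" runs the seed;
            // the schema migration itself still applies wherever it is executed.
            if (ConfigurationManager.AppSettings["RunSeed"] != "true")
                return;

            context.Roles.AddOrUpdate(r => r.Name, new Role { Name = "Admin" });
        }
    }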

In a containerized cluster, should mongodb servers be running on a worker or a core service?

I'm trying to implement an architecture that's similar to CoreOS's production architecture (shown below).
Should I run the database as a central service or on one or more of the workers?
I figured the database needs some kind of replication, which makes me think that putting it in the worker cluster makes more sense, but I'm just not sure.
This should be run as a worker. The central services are the basic things that come with CoreOS (mainly etcd). The workers host your applications, the database being one of them.

You do have a persistence issue, because your database will have state to remember between restarts, so there is the bigger question of how you provide that persistence. One way to do it is to use a host file, give the database an affinity to that host, and mount the host file. Another thing you might consider is running more than one database (if your db technology supports that) and replicating it, so you have two (or more) copies in different workers (non-affinity). If your database creates transaction logs that can be applied to a backup, you can manage those transaction logs in a worker.
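To illustrate the host-mount idea with plain Docker (path and image tag are placeholders); an affinity rule in the scheduler would then pin this unit back to the same host so the data directory is found again after a restart:

    # Sketch: MongoDB with its data directory bind-mounted from the host, so the
    # state survives container restarts (path and image tag are placeholders).
    docker run -d --name mongo \
      -v /var/lib/mongo-data:/data/db \
      mongo:4.4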
Another thing to consider is not using a container for your database at all. The database is a weird animal; its care and feeding is not like that of the rest of your applications. So it is reasonable (in my opinion) to have your database managed and maintained outside the scope of your cluster (but still reachable by the cluster).