I am looking for help creating a release definition in VSTS to deploy database changes for a solution that targets 7 databases in a Data Warehouse/BI environment.
My solution is under TFVC in VSTS and contains 17 database projects targeting 7 databases; some databases have multiple projects due to cross-database joins and reuse of database objects across projects via database references. If the cross-database joins weren't there this problem would be a lot easier to solve, but it will take time to eradicate them.
When any change is committed the solution is built and the dacpacs generated are kept as artifacts for release.
My release definition is not particularly smart: it consists of a PowerShell script that iterates over each dacpac, generating a deployment report followed by a deployment (a rough sketch follows the list of problems below). Whilst this works, it does have its problems:
The deployments are done in alphabetical order, so if a changeset involves numerous databases it's possible the deployment will fail as something referenced in one database may not yet exist in another.
The deployments are made against every database regardless of what has changed, so creating a view in one database means a compare/deployment is run against all of them, which isn't necessary. A change like that would take seconds to apply outside of the CI/CD process, which currently takes 30 minutes for the build and the release to test and then live.
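For reference, the loop is essentially equivalent to this sketch; the SqlPackage path, the server name, and the assumption that each dacpac is named after its target database are placeholders:

```powershell
# Rough sketch of the current loop: every dacpac in the artifact folder gets a
# deployment report and then a publish, in alphabetical order (problem 1) and
# regardless of whether that database actually changed (problem 2).
$sqlPackage   = "C:\Program Files\Microsoft SQL Server\150\DAC\bin\SqlPackage.exe"  # placeholder path
$targetServer = "MyDwServer"                                                        # placeholder server

$dacpacs = Get-ChildItem $env:SYSTEM_ARTIFACTSDIRECTORY -Recurse -Filter *.dacpac | Sort-Object Name

foreach ($dacpac in $dacpacs) {
    $dbName = $dacpac.BaseName   # assumes the dacpac name matches the target database name

    # What-if report first...
    & $sqlPackage /Action:DeployReport `
        "/SourceFile:$($dacpac.FullName)" `
        "/TargetServerName:$targetServer" `
        "/TargetDatabaseName:$dbName" `
        "/OutputPath:$dbName.report.xml"

    # ...then the actual deployment.
    & $sqlPackage /Action:Publish `
        "/SourceFile:$($dacpac.FullName)" `
        "/TargetServerName:$targetServer" `
        "/TargetDatabaseName:$dbName"
}
```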
How can I make the release definition smarter?
I am trying to find the simplest and most correct deployment process for each application release we have. The difficulties are as follows:
It is Azure DevOps Releases.
The application has more than one artifact that should be delivered as a whole: the application distro, its modules, some configuration for servers that are not directly linked to the application, and of course the automation scripts (which are also versioned). Each of them has its own build process and can be versioned separately.
All those components (App, separate server config, automation) are delivered as one product version. However, they could have different versions, for example: 12.2.1 (app), 12.2.3 (server), 12.2.1 (automation).
The question is how to build the process so that, after an official release, we can pin all of the final versions together (i.e. without specifying them manually for each artifact during pipeline release creation), taking into account that a single component's version can be increased and we should be able to bump the release version, for hotfixes for example.
Release pipeline with 3 artifacts: there are 3 artifacts and a user has to specify all 3 versions manually during creation - quite a high risk of a misclick. Unfortunately, there are 10 of them... 10 multiplied by 3 = 30 chances to make a mistake.
Release pipeline with 1 artifact (the app): consider only the application version and automatically obtain the automation scripts and configuration from the feed using that version. This could work, but there is no visibility into which artifacts are going to be used, no way to downgrade, and only the latest version of the artifacts (12.2.3.*) can be picked up.
Specify the versions in a variable group linked to the stage (environment). It is easy to make a mistake because a release uses the version of the variable group baked in at creation time. If you update the variable group but don't create a new release, it is an epic fail. Moreover, there is no visibility into what is going to be installed/updated, etc.
Please share your ideas on how to manage multiple artifact versions within one product release, to make the process more robust and clear, with a little flexibility.
This is how it works for me, via this simple approach:
A single build pipeline with many Agent Jobs, where each one represents an artifact.
Each job publishes its artifact under a separate folder and automatically increases its version "if needed", using a script per job (a rough sketch of such a script follows this list); at the end everything sits under the same BuildId.
A variation, if separation isn't needed, is to keep everything under the same Agent Job.
A single release pipeline linked to this one big nested artifact performs the deploy.
Here too, if separation is needed, you could have multiple stages or multiple Agent Jobs, as before.
Version increases either happen manually at commit time or a script increases them automatically on each build.
I would prefer the first, as the second doesn't indicate that anything actually changed.
The chance of a developer forgetting to update the version is about the same as a developer introducing a bug, so it can be fixed later.
Then at the end you have a single release => a single build => many versions => a commit.
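As an illustration of the per-job version/publish script, something along these lines could work; the version.txt convention, the folder layout, and the artifact names are assumptions, not an existing setup:

```powershell
# Hypothetical per-component script, run once in each Agent Job of the single build pipeline.
$component = "app"   # placeholder: app / server-config / automation, one value per job

# Read the component's version; a version.txt per component is an assumed convention.
$versionFile = Join-Path $env:BUILD_SOURCESDIRECTORY "$component\version.txt"
$version = (Get-Content $versionFile).Trim()

# Expose the version so later steps (and the release) can see which versions went into this BuildId.
Write-Host "##vso[task.setvariable variable=$($component)Version]$version"

# Stage the component's output under its own folder...
$staging = Join-Path $env:BUILD_ARTIFACTSTAGINGDIRECTORY $component
New-Item -ItemType Directory -Force -Path $staging | Out-Null
Copy-Item (Join-Path $env:BUILD_SOURCESDIRECTORY "$component\output\*") $staging -Recurse

# ...and publish it as a separately named artifact, all under the same build.
Write-Host "##vso[artifact.upload containerfolder=$component;artifactname=$component-$version]$staging"
```

The release pipeline then consumes that single build, so the user picks one version (the BuildId) and the component versions come along with it.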
I'm a developer who's transitioning into DevOps. By observation, I've noticed that a lot of dev shops have started using Octopus Deploy and Azure DevOps Services (AzDo, formerly VSTS), or they are starting new projects to set up DevOps CI/CD pipelines AND they spec to use both tools.
I've been through some quick training for both tools and though they aren't perfectly the same, AzDo seems to offer all of the same features as Octopus Deploy.
So, my question is: if a company is already using AzDo for much of their version control, or anything CI/CD pipeline-related, why would you use Octopus? What benefit does it offer to use Octopus for your builds and deploys alongside AzDo?
Note, I am very, very new to DevOps. I'm just asking because at the "10,000 feet view" there doesn't seem to be any reason for Octopus if you're already using AzDo. I mention Octopus Deploy by name because I see it come up frequently. However, I assume there could be other tools that serve the same purpose of automated building and deploying that might also integrate with AzDo. However, AzDo offers build and deploy built into one. Why split out the work?
Let me preface why I like to both build and deploy with VSTS:
Same permissioning end to end
Line of sight from build to deployment, end to end
Reasons I favor Octopus Deploy over VSTS Release:
Ability to upload packages/artifacts
External ones that are maybe one-off packages to be deployed for a specific release
Target Definition
When you create targets, or the servers you are deploying to, you are able to add a target to one or multiple environments and assign tags/roles to it. What does this mean? A more flexible server definition: rather than strictly assigning Agents to a pool or servers to a Deployment Group, you can allow a target to span multiple environments (i.e. a testing server that spans your Dev and Test environments and only gets triggered on steps that are defined for that role). I realize you can accomplish similar things in VSTS, but in my opinion it's far more cumbersome.
Variable Definition
Variables can be grouped at a global level and grouped by a specific pipeline/process (that part is similar to VSTS). Variables can also be grouped or scoped by environments or roles (above), so you are able to have different variable values per role per environment; both super granular and flexible. One place this comes in handy is if you have a backend server with a connection string and maybe 2 content delivery nodes (role: content delivery) that get slightly different values than the backend server. At the moment, I do not know (other than creating new environments) how one would accomplish the same in VSTS.
Process Definition
All of the above comes together in the process definition of Octopus Deploy. The super flexible and granular variable and target definitions allow you to focus on the actual deployment process rather than getting hung up on the nuances of the UI and its limitations. One example would be defining a process where step one takes a node out of a load balancer from a central server, step two deploys code to delivery node one, step three puts it back in the load balancer, step four takes node two out of the load balancer from the central server, step five deploys code to node two, and the last step puts it back into the load balancer. I realize it's a very simple hypothetical, but within Octopus Deploy it's one steady process filtered to execute on specific roles, whereas within VSTS you would have to break that down into different agent phases and probably pipelines.
The above are really the biggest points I see for using Octopus Deploy over VSTS Release. Now why would someone use VSTS to build and OD to release/deploy? There are a lot of different factors that go into it; some are corporate drivers, like having an enterprise Git client with permissions handled through MSDN. Sometimes it's a project management driver of having work items tied tightly to commits and builds, but with the added flexibility that OD brings to the table for free/minimal cost.
Hoping this helps shine a little light on why some people are crossing streams and using both VSTS and OD.
A lot of good points have been made already, but it really comes down to what you need. I would venture a lot of us started using Octopus before Release Management was really a thing.
We use VSTS for all our source control and builds and then all our deployments are handled through Octopus.
When we started evaluating tools, VSTS had nothing for deployments. Even now, they are still playing catch up to Octopus in feature set.
If you are doing true multi-tenanted and multi-environment deployments, I don't think VSTS really compares. We are using Octopus with around 30 tenants, some on Azure, some on premise. We deploy a mix of web and desktop apps. We are even using Octopus to deploy some legacy VB6 and winforms applications.
Multi-Tenancy (critical for us)
VSTS added Deployment Groups a while ago which sound pretty similar to Octopus Environments before multi-tenancy was implemented. Before Octopus had true multi-tenancy (it's been around a while now), people would work around it by creating different environments per tenant, like "CustomerA - Dev", "CustomerA - Prod", etc. Now you just have your Dev/Test/Prod environments and each tenant can have variables scoped to those individual environments.
Support
Documentation is excellent and it's really easy to get up and running.
The few times I've needed to contact someone at Octopus, they've answered very quickly and knowledgeably.
Usability
Having the Octopus dashboard give us an overview of all our projects is amazing. I don't know of any way to do this in VSTS without going into each individual project.
Octopus works great on a mobile device for checking deployment status and even starting new deployments.
Community
Octopus works with their customers to understand what they want and they often release draft RFCs and have several times completely changed course based on customer feedback.
If we know what sort of applications you are deploying, and to what kinds of environments, we would be able to better tailor our responses.
The features you see today in VSTS weren't there a few years ago, so there might be an historical reason.
But I want to state here some non-opinionated reasons that may lead an organization to opt for separate tools instead of one.
Separate responsibility and access levels
Multiple CI tools across dev teams (orgs that are also using Jenkins, TeamCity, or something else) and a need to standardize and control deployments
An org needs a feature available only in Octopus (maybe Multi-tenancy)
Octopus does a great job of focusing on deployments. Features reach Octopus before VSTS, and support is local and responsive. That, and you never run out of build/release minutes!
Seriously though, I just like to support smaller companies where possible and if all features were equal, I'd still pick them.
The big reason in the past was that TFS on-prem and early VSTS did NOT support non-Microsoft (non-.NET) code very well, if at all. You could utilize the source control and work item features of TFS and then use Octopus/Jenkins etc. as the build/release parts to cover code that TFS didn't really know what to do with.
Also, the release pipelines used to be very simplistic and not that useful, whereas the other products were all plugin-based and could do (almost) anything you needed them to. Most of that has changed, so VSTS is much better at working with non-Microsoft code bases than it used to be. Over time, integrations get created inside a company's walls, and undoing those decisions can be more painful than just having "too many" tools. I also feel there are just more people out there familiar with those tools, since they have been mature longer and cover a larger part of the development world than VSTS has in the past.
To fully implement CD you need both. VSTS runs tests and is a build server. OD isn’t. VSTS is light on sophisticated application installations. And if you are provisioning environments, IaC style, you need Terraform in addition. Don’t try to shoehorn everything into a single tool. DevOps requires a whole ecosystem. The reasons are not historical.
My company is moving to microservices, and as part of this shift DevOps is standing up CI/CD build and release pipelines in Azure with VSTS and Git.
The current model for managing migrations is that a developer is given 2 projects in 2 separate Git repositories.
Project 1 - API Services project - .NET Framework / .Net Core
Project 2 - Database project based on EF6 using the migration API
These projects have completely independent release pipelines based on the repositories. So when you create a pull request into master the pipeline builds and releases the project.
This new architecture also supports blue green deployments and our app services run on multiple nodes.
The issue we have is that with this set up we have to basically hand code our migrations and can't use any of the tooling provided in EF Core.
Most of the articles and documentation I have read show running the migrations at app startup, but if you have multiple app service nodes, how do you prevent 2 nodes from running the migrations?
Other articles I have looked at show moving migrations into a separate project, but that project needs a reference to the project containing the DbContext. In my company's setup this is not possible. Neither can we do the reverse, since moving the DbContext into the database project prevents us from referencing it in the API services project.
Is there any way to support this model with EF Core?
What is the preferred way to implement blue green deployments with EF Core Migrations on a multi node app service?
I will try to claim that there isn't, and not because EF Core doesn't support it in some way, but because this sounds impossible based on what I understood from your question.
Your company wants the ability to do blue/green deployments, but that is only really possible at the service layer, not for the database. The idea sounds really cool: fast rollback, almost no downtime. But in reality databases complicate things a lot.
So imagine your Project 1 is running on machines A and B (representing the blue and green deployments). A is currently the production environment and B is identical but not serving any requests at the moment. Both of them point to the exact same database (if not, it's not blue/green, it's just a separate environment). When you want to deploy your DB changes, you migrate your database, but now both machines A and B will be pointing to the updated database. You can keep switching from A to B, but both of them might have stopped working if your database migration broke anything.
Therefore I don't really understand what you achieve by having DB migrations in a separate repository with a separate pipeline. It just complicates coordination of the releases, as they are clearly dependent, and I don't see how it helps solve anything. As you noted, you can't delegate creation of the migration scripts to EF Core, at least not without some manual work.
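For example, that "manual work" could be a pipeline step roughly like this sketch: generate an idempotent script at build time and apply it once from the release rather than from application startup (project paths, server, and database names are placeholders, and the build still has to be able to see the DbContext project):

```powershell
# Generate an idempotent migration script with the EF Core CLI (safe to run more than once).
# --project points at the migrations project, --startup-project at the API project that
# configures the DbContext; both paths are placeholders.
dotnet ef migrations script --idempotent `
    --project .\src\MyCompany.Database `
    --startup-project .\src\MyCompany.Api `
    --output "$env:BUILD_ARTIFACTSTAGINGDIRECTORY\migrate.sql"

# Then apply the script exactly once as a release step, so none of the app nodes
# touch the schema at startup (requires the SqlServer PowerShell module).
Invoke-Sqlcmd -ServerInstance "my-sql-server.database.windows.net" `
              -Database "MyDb" `
              -InputFile ".\migrate.sql"
```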
I would be happy to hear any advantages of such a design.
We are using RedGate combined with SQL Test (tSQLt). In order to unit test, we install the framework on each database.
Is there a way to use the tSQLt framework in such a way where your unit tests and framework objects can reside in one central location which can then be used by multiple databases?
We are also using RedGate's SQL Source Control with TFS as our repository to track schema changes. These changes get promoted in the following environment order: Development --> Test --> Production.
Needless to say, the addition of the framework combined with the tests themselves represents a large number of new SQL objects (tables, stored procedures, etc.) now in our databases. Ideally we would like these objects to reside only in Development and Test and avoid cluttering our production database. We could skip merging the tSQLt changes to Production, but then we would have unmerged changes sitting around in the Test environment's source control until the end of time.
Any thoughts on getting around this problem?
As you're using SQL Source Control to manage your database changes, checking in your tSQLt tests is the right thing to do. If you want to ensure that these don't get pushed to staging or production, you need to ensure that the tools you use to push the changes exclude the tSQLt tests. If you are using Redgate SQL Compare for this, use the option "Ignore tSQLt framework and tests". See the product documentation for a detailed explanation. If you are using a different tool or process, post a comment and I'll amend this answer.
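If the push happens through the SQL Compare command line rather than the UI, the same exclusion would be passed as an option. This is only a sketch; the exact option name used below (IgnoretSQLt), the install path, and the server/database names are assumptions to check against the SQL Compare command-line documentation:

```powershell
# Hypothetical SQL Compare command-line call that generates a deployment script
# while excluding the tSQLt framework and tests; verify the /Options value in the docs.
& "C:\Program Files (x86)\Red Gate\SQL Compare 13\SQLCompare.exe" `
    /Server1:"DevServer"  /Database1:"MyDb_Dev" `
    /Server2:"ProdServer" /Database2:"MyDb_Prod" `
    /Options:Default,IgnoretSQLt `
    /ScriptFile:"deploy.sql"
```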
There is currently no way to install tSQLt in a separate database. I have started the process of making tSQLt database agnostic, but that is basically a complete rewrite, so it will take a while.
In the meantime, you can exclude tSQLt from SQL Source Control: https://redgate.uservoice.com/forums/39019-sql-source-control/suggestions/4901910-faster-way-to-exclude-all-tsqlt-content
If you still want your tests in source control but don't want to promote them to the higher environments, that is the default behaviour in Redgate's DLM Automation Suite. You can either use one of the build server plugins (like TeamCity or TFS for build/test then Octopus Deploy for release) or do it all in PowerShell using SQL Release. https://documentation.red-gate.com/display/SR1/SQL+Release+documentation
If you have a license for Redgate's SQL Toolbelt, you might already be licensed for the Automation tools (this is a change to previous licensing); http://www.red-gate.com/products/sql-development/sql-toolbelt/#automation
I've spent the past few months developing a webApi solution that I'm ready to push up to Azure and hook into an Azure SQL Database. It was built with EF Code First.
I'm wondering what standard approaches there are to making changes to the database while in production. I've been using database initializers up to this point but they all blow away data and re-seed.
I have a feeling this question is too broad for a concise answer, so I'd like to ask: what terminology / processes / resources should a developer look into when designing a continuous integration workflow for a solution built with EF Code First and ASP.NET WebAPI, hosted as an Azure Service and hooked up to Azure SQL?
On the subject of database migration, there was an interesting article on ASP.NET about this subject: Strategies for Database Development and Deployment.
Also since you are using EF Code First you will be able to use Code First Migrations here for database changes. This will allow you to better manage the changes you make to the database.
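As a rough sketch, the usual Code First Migrations workflow in the Package Manager Console looks like this (the migration name is a placeholder):

```powershell
# One-time setup: turn on migrations for the project containing the DbContext.
Enable-Migrations

# Each time the model changes, scaffold a migration instead of re-creating the database.
Add-Migration AddCustomerTable

# Apply pending migrations to the database in the connection string...
Update-Database

# ...or, for production/Azure SQL, generate a SQL script to review and run separately.
Update-Database -Script -SourceMigration $InitialDatabase
```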
I'm not sure how far you want to go with continuous integration, but since you are using Azure it might be worth having a look at Continuous delivery to Windows Azure by using Team Foundation Service. Although it relies on TFS in the cloud, it's of course also possible to configure this with, for example, Jenkins. However, that does require a bit more work.
I use this technique:
1- Create a clone database for your development environment if it doesn't exist.
2- Make the necessary changes in your dev environment and dev database.
3- Deploy to your staging environment.
4- If you added some static data that should also exist in your prod database, use a tool like SQLDataExaminer to find the data differences and execute the inserts, updates, and deletes for the corresponding rows. Use Schema Compare in VS2012 to find the differences between your dev and prod environments, selecting dev as the source and prod as the target, and execute the generated script against prod (a command-line alternative is sketched after this list).
5- Swap the environments
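If you would rather script the schema-compare part of step 4 instead of using the Schema Compare UI, a SqlPackage-based alternative could look like this sketch (server and database names and the SqlPackage path are placeholders):

```powershell
# Alternative to the VS2012 Schema Compare step: snapshot the dev schema as a dacpac,
# then script the differences against prod for review before executing.
$sqlPackage = "C:\Program Files\Microsoft SQL Server\150\DAC\bin\SqlPackage.exe"  # placeholder path

# 1. Extract the dev database schema into a dacpac.
& $sqlPackage /Action:Extract `
    "/SourceServerName:DevServer" "/SourceDatabaseName:MyDb" `
    "/TargetFile:MyDb.dacpac"

# 2. Generate the upgrade script against prod (source = dev dacpac, target = prod).
& $sqlPackage /Action:Script `
    "/SourceFile:MyDb.dacpac" `
    "/TargetServerName:ProdServer" "/TargetDatabaseName:MyDb" `
    "/OutputPath:upgrade-prod.sql"
```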