tSQLt Object Organization - redgate

We are using RedGate combined with SQL Test (tSQLt). In order to unit test, we install the framework on each database.
Is there a way to use the tSQLt framework in such a way where your unit tests and framework objects can reside in one central location which can then be used by multiple databases?
We are also using RedGate's SQL Source Control with TFS as our repository to track schema changes. These changes get promoted in the following environment order: Development --> Test --> Production.
Needless to say, the framework combined with the tests themselves represents a large number of new SQL objects (tables, stored procedures, etc.) now in our databases. Ideally we would like these objects to reside only in Development and Test and avoid cluttering our production database. We could skip merging the tSQLt changes to Production, but then we would have unmerged changes sitting around in the Test environment's source control until the end of time.
Any thoughts on getting around this problem?

As you're using SQL Source Control to manage your database changes, checking in your tSQLt tests is the right thing to do. To make sure these don't get pushed to staging or production, the tools you use to push the changes must exclude the tSQLt tests. If you are using Redgate SQL Compare for this, use the option "Ignore tSQLt framework and tests". See the product documentation for a detailed explanation. If you are using a different tool or process, post a comment and I'll amend this answer.
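For reference, the same exclusion can be applied when SQL Compare is driven from its command line as part of a deployment. A minimal sketch in PowerShell, assuming the command-line alias for "Ignore tSQLt framework and tests" is IgnoretSQLtFramework (check your version's documentation for the exact name); the install path, server and database names are placeholders:

```powershell
# Generate (but don't run) a deployment script from Test to Production,
# excluding the tSQLt framework and any tSQLt tests.
# Install path, server and database names are placeholders.
& "C:\Program Files (x86)\Red Gate\SQL Compare 12\sqlcompare.exe" `
    /server1:TestServer /database1:MyAppDb `
    /server2:ProdServer /database2:MyAppDb `
    "/options:Default,IgnoretSQLtFramework" `
    /scriptfile:deploy-to-prod.sql
```

Review the generated script before running it against production.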

There is currently no way to install tSQLt in a separate database. I have started the process of making tSQLt database agnostic, but that is basically a complete rewrite, so it will take a while.
In the meantime, you can exclude tSQLt from SQL Source Control: https://redgate.uservoice.com/forums/39019-sql-source-control/suggestions/4901910-faster-way-to-exclude-all-tsqlt-content

If you still want your tests in source control but don't want to promote them to the higher environments, that is the default behaviour in Redgate's DLM Automation Suite. You can either use one of the build server plugins (like TeamCity or TFS for build/test, then Octopus Deploy for release) or do it all in PowerShell using SQL Release. https://documentation.red-gate.com/display/SR1/SQL+Release+documentation
If you have a license for Redgate's SQL Toolbelt, you might already be licensed for the Automation tools (this is a change from previous licensing): http://www.red-gate.com/products/sql-development/sql-toolbelt/#automation

Related

Sending a file to multiple servers

I'm working on a web project (built with the .NET framework) on a remote Windows server, and this project is connected to a database that I manage through SQL Server Management Studio. The same web project, linked to the same database, also exists on multiple other remote Windows servers. When I change a page's code in my project or add/remove a table or stored procedure in my database, is there a way (or existing software) that will allow me to deploy the changes I made to all the others (or to choose multiple servers if I don't want to deploy the changes to all of them)?
If it were me, I would stand up a git server somewhere (cloud or local VM), make a branch called something like Prod or Stable, and create a script (PowerShell if the servers are Windows, Bash on anything else) on a nightly or hourly job to pull from that branch. Only push to that branch after testing thoroughly. If your code requires compilation, you have the choice to compile once before committing (in which case you're probably going to commit to releases), or on each endpoint after the pull. I would have the script that does the pull also compile and restart the service (only if there was something new in the pull).
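As a rough illustration of that pull-from-a-stable-branch idea (a sketch, not a drop-in script): a scheduled PowerShell job on each Windows server could look something like this, where the repository path, branch name and service name are all placeholders you would replace with your own.

```powershell
# Runs from Task Scheduler on each web server: pulls the Prod branch
# and restarts the site's service only when something new arrived.
$repoPath    = "C:\sites\mywebapp"    # placeholder path
$branch      = "Prod"                 # placeholder branch name
$serviceName = "MyWebAppService"      # placeholder service name

Set-Location $repoPath
git fetch origin

$local  = git rev-parse HEAD
$remote = git rev-parse "origin/$branch"

if ($local -ne $remote) {
    git pull origin $branch
    # Recompile here if the project needs it, then bounce the service.
    Restart-Service -Name $serviceName
}
```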
You can probably achieve this by doing two things:
Create a separate publishing profile for each server.
Use git/vsts branches to keep the code separate. (as suggested by #memtha).
Let's say you have a total of 6 servers and two branches, A and B. You'll have to create 6 publishing profiles. Then you can choose which branch to deploy where; e.g. you can deploy branch B on servers 1, 3 and 4.
For the codebase you could use Git Hooks.
https://gist.github.com/noelboss/3fe13927025b89757f8fb12e9066f2fa
And for the database, maybe you could use migrations or something similar. You will need to provide more info about your database: do you store it across multiple servers, etc.?
If the same web project is connecting to the same database and the database changes, I suspect you would need to update all the web apps to ensure the database changes don't break any of the apps and to keep all the apps updated to prevent any being left behind.
You should look at using Azure DevOps to build and deploy your apps and update the database.
If you use Entity Framework, you can run the migrations on startup and have the application update the database when deployed manually or automatically using Azure DevOps.
To keep the software updated on multiple servers you could use Git with hooks; the post-receive hook is what you need.
The idea is to use one server as your remote repository and configure the post-receive hook there to update the codebase on that server and on the others.
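To make that concrete, here is a minimal sketch of the pattern (an illustration, not the gist linked above), assuming a bare repository on the central server with the other servers configured as Git remotes; all paths and remote names are placeholders. The post-receive hook itself can be a one-liner that calls this script.

```powershell
# deploy.ps1 - invoked by the post-receive hook on the central server.
# Updates the local working copy, then pushes the same branch out to the
# other servers, where an identical hook/script does the same.
$workTree = "C:\inetpub\wwwroot\mysite"   # placeholder working copy
$gitDir   = "C:\repos\mysite.git"         # placeholder bare repository

# Check out the latest commit of master into the working copy.
git --git-dir=$gitDir --work-tree=$workTree checkout -f master

# Fan the change out to the other servers (remotes configured beforehand).
foreach ($remote in @("server2", "server3")) {   # placeholder remote names
    git --git-dir=$gitDir push $remote master
}
```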

Multiple Database Solution VSTS Release Definition

I am looking for help creating a release definition in VSTS to deploy database changes for a solution that targets 7 databases in a Data Warehouse/BI environment.
My solution, under TFVC in VSTS, contains 17 database projects targeting 7 databases; some databases have multiple projects due to cross-database joins and reuse of database objects via database references. If the cross-database joins weren't there this problem would be a lot easier to solve, but it will take time to eradicate them.
When any change is committed the solution is built and the dacpacs generated are kept as artifacts for release.
My release definition is not particularly smart: it consists of a PowerShell script that iterates over each dacpac, generates a deployment report, and then deploys. Whilst this works, it has its problems:
The deployments are done in alphabetical order, so if a changeset involves numerous databases it's possible the deployment will fail because something referenced in one database may not yet exist in another.
The deployments are made against each database regardless of what has changed, so creating a view in one database means that a compare/deployment is done against all of them, which isn't necessary. Something like this would take seconds to create outside of the CI/CD process, which currently takes 30 minutes for the build and release to test and then live.
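For context, the kind of loop described above looks roughly like this, assuming SqlPackage.exe is doing the report/deploy work and that each dacpac is named after its target database; the tool path and server name are placeholders:

```powershell
# Naive release step: for every dacpac in the artifact folder,
# produce a deploy report and then publish it, in alphabetical order.
$sqlPackage = "C:\Program Files\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe"  # placeholder path
$server     = "DWServer"                                                          # placeholder server

Get-ChildItem ".\drop" -Filter *.dacpac | Sort-Object Name | ForEach-Object {
    $dacpac = $_.FullName
    $dbName = $_.BaseName   # assumes the dacpac is named after its database

    # Report what would change.
    & $sqlPackage /Action:DeployReport "/SourceFile:$dacpac" `
        "/TargetServerName:$server" "/TargetDatabaseName:$dbName" `
        "/OutputPath:$dbName-report.xml"

    # Deploy it, whether or not anything actually changed.
    & $sqlPackage /Action:Publish "/SourceFile:$dacpac" `
        "/TargetServerName:$server" "/TargetDatabaseName:$dbName"
}
```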
How can I make the release definition smarter?

Octopus Deploy: I need to deploy all packages up to the latest on promotion to QA

Here is the story: I am using Redgate SQL Compare to generate update scripts for my Dev environment; each package contains only the changes from the current Dev version to the latest in source control.
Here is an example:
I create a table (package-0.1) -> Deploy to DevDB
I add Columns (package-0.2) -> Deploy to DevDB
I renamed some Column (package-0.3) -> Deploy to DevDB
But once I want to promote it to QA, this causes a problem, because only the latest package-0.3 is promoted, and it contains only part of the changes (the renaming of the column).
So I am looking for a way to deploy all the packages prior to the current one on promotion, if that is possible.
For now I have solved this by creating a custom package that contains all the change scripts, but is it possible to solve it with Octopus?
Thanks
Ihor
each package contains only changes from current Dev version to Latest
The way you do it is going to be painful for you, as SQL Compare takes a state-based approach. What you want is the migrations-based approach. You can see Alex's post on the difference between the two approaches.
SQL Source Control 5 will come with a better migrations approach, which will work with the SQL Compare command-line tool and the DLM Automation tools. However, the beta is closed right now, unfortunately, but I suggest you contact the team through the e-mail address provided there.
The other option you have is ReadyRoll, which takes a pure migrations-based approach. You can see this post on its Octopus Deploy integration.
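To illustrate what a migrations-based approach means in practice, here is a hand-rolled sketch (not ReadyRoll's or SQL Source Control 5's actual mechanism): each change is kept as an ordered, immutable script, and promotion to any environment replays every script that has not yet run there. The SchemaVersions table, folder layout and connection details are all assumptions.

```powershell
# Replays every migration script that has not yet been applied to the target.
# Assumes a dbo.SchemaVersions table tracks applied scripts and that
# Invoke-Sqlcmd (SQL Server PowerShell module) is available.
$server   = "QA-SQL01"   # placeholder
$database = "MyAppDb"    # placeholder

$applied = Invoke-Sqlcmd -ServerInstance $server -Database $database `
               -Query "SELECT ScriptName FROM dbo.SchemaVersions" |
           ForEach-Object { $_.ScriptName }

Get-ChildItem ".\migrations" -Filter *.sql | Sort-Object Name | ForEach-Object {
    if ($applied -notcontains $_.Name) {
        Invoke-Sqlcmd -ServerInstance $server -Database $database -InputFile $_.FullName
        Invoke-Sqlcmd -ServerInstance $server -Database $database `
            -Query "INSERT INTO dbo.SchemaVersions (ScriptName) VALUES ('$($_.Name)')"
    }
}
```

Because the same scripts run in the same order in every environment, promoting to QA applies 0.1, 0.2 and 0.3 rather than only the latest package.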

How to manage database context changes in production / CI

I've spent the past few months developing a webApi solution that I'm ready to push up to Azure and hook into an Azure SQL Database. It was built with EF Code First.
I'm wondering what standard approaches there are to making changes to the database while in production. I've been using database initializers up to this point but they all blow away data and re-seed.
I have a feeling this question is too broad for a concise answer, so I'd like to ask: what terminology / processes / resources should a developer look into when designing a continuous integration workflow for a solution built with EF Code First and ASP.NET WebAPI, hosted as an Azure Service and hooked up to Azure SQL?
On the subject of database migration, there was an interesting article on ASP.NET about this subject: Strategies for Database Development and Deployment.
Also since you are using EF Code First you will be able to use Code First Migrations here for database changes. This will allow you to better manage the changes you make to the database.
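For reference, Code First Migrations is driven from the Package Manager Console in Visual Studio (which is PowerShell); the migration name below is only an example:

```powershell
# Run once to add a Migrations folder and Configuration class to the project.
Enable-Migrations

# Scaffold a migration for the pending model changes (name is an example).
Add-Migration AddCustomerTable

# Apply pending migrations to the database from the connection string,
# or generate a SQL script instead of running the changes directly.
Update-Database
Update-Database -Script
```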
I'm not sure how far you want to go with continuous integration, but since you are using Azure it might be worth having a look at Continuous delivery to Windows Azure by using Team Foundation Service. Although it relies on TFS in the cloud, it's of course also possible to configure it with, for example, Jenkins. However, this does require a bit more work.
I use this technique:
1- Create a clone database for your development environment if it doesn't exist.
2- Make the necessary changes in your dev environment and dev database.
3- Deploy to your staging environment.
4- If you added some static data that should also exist in your prod database, use a tool like SQLDataExaminer to find the data differences and execute the inserts, updates and deletes for the corresponding rows. Use Schema Compare in VS2012 to find differences between your dev and prod environments by selecting dev as source and prod as target, and execute the script in your prod (a command-line sketch of this schema-compare step follows this list).
5- Swap the environments.
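If you would rather script the schema-compare part of step 4 than run it interactively, SqlPackage.exe can generate the incremental script from the command line. A rough sketch (not VS2012 Schema Compare itself); the tool path, server and database names are placeholders, and the generated script should be reviewed before it is executed in prod.

```powershell
# Generate (but don't run) an incremental script that would bring prod's
# schema in line with dev. Review before executing against prod.
$sqlPackage = "C:\Program Files\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe"  # placeholder path

& $sqlPackage /Action:Script `
    /SourceServerName:DevServer  /SourceDatabaseName:MyAppDb `
    /TargetServerName:ProdServer /TargetDatabaseName:MyAppDb `
    /OutputPath:dev-to-prod.sql
```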

How do you keep track of what you have released in production?

Typically a deploy to production does not involve just a mere source code update (build) but requires a lot of other important tasks, for example:
Db scripts
Configuration files (different between test/production)
Batch jobs to schedule
Executables to move to the correct path
Etc. etc.
In our company we just send an email to a "Release email address" describing the tasks in order: which changeset needs to be published (TFS), which SPs need to be updated, db scripts, and so on.
I believe there's no magic tool that does these tasks automagically in order, rollback included; but probably there's something better than email to help keep track of releases in production.
Do you have any tools to suggest or practices to share?
When multiple tasks are required to support a full project deployment (and that's frequently the case, in my experience), I'd suggest using a build/deployment tool. I've used Ant in the past with great success, but know others who swear by Capistrano, Maven and others.
Using Ant, I wrote a script that would:
Pull the specific revision I wanted from my VCS
Create a tarball of the target directory on the remote machine (in case a rollback was required)
Create a MySQL dump file of the database (also for rollback purposes)
Delete the remote directory and copy the new content just pulled from the VCS over SSH
Perform various other logistical operations (setting file perms, ownership, etc.)
Create a release branch on the VCS itself
Create a tag with the appropriate version information so I always had a snapshot of the code base at that moment of deployment.
Hope that helps some. I've written a few blog posts about this that may (or may not) be useful. They're dated now, but the general information should still be solid enough.
Introductory thoughts
Details of how I use Ant for deploying--including scripts
You might be interested in the Team Foundation Build Recipes website, which showcases some build scripts developed using the SDC Tasks Library and the MSBuildTasks library.
How about something like SVN? You can put all of your code in a repository, then when you are ready to release to production, bring your stuff over from test. Then you'll have very specific revisions with information on what happened. SVN keeps track of all of it.