Recreate Azure SQL Database before E2E test - PowerShell

I have an application with an Azure SQL Server. I have a test environment where I deploy the app for end-to-end testing. I want to reset the database to a certain state before the tests, and I have a bacpac file with that state. My goal is to do this with a CI tool (AppVeyor). I am aware that I can drop the entire database with the Azure PowerShell API. Should I drop the database or should I empty it? Or am I thinking about this totally wrong?

You could use PowerShell to restore the BACPAC before the tests and drop the database after the tests. An example is available in the Azure docs: https://learn.microsoft.com/en-us/azure/sql-database/scripts/sql-database-import-from-bacpac-powershell
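For illustration, a minimal sketch of that flow with the Az.Sql cmdlets (resource group, server, database, storage, and credential values are placeholders; $sqlPassword is assumed to be a SecureString):
# Import the BACPAC into a fresh database before the tests.
$import = New-AzSqlDatabaseImport -ResourceGroupName "rg-e2e" -ServerName "sql-e2e" `
    -DatabaseName "AppDb" -Edition "Standard" -ServiceObjectiveName "S0" `
    -DatabaseMaxSizeBytes 5GB -StorageKeyType "StorageAccessKey" -StorageKey $storageKey `
    -StorageUri "https://mystorage.blob.core.windows.net/bacpacs/state.bacpac" `
    -AdministratorLogin $sqlUser -AdministratorLoginPassword $sqlPassword
# Poll until the import finishes.
do {
    Start-Sleep -Seconds 10
    $status = Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $import.OperationStatusLink
} while ($status.Status -eq "InProgress")
# ... run the E2E tests ...
# Drop the database after the tests.
Remove-AzSqlDatabase -ResourceGroupName "rg-e2e" -ServerName "sql-e2e" -DatabaseName "AppDb" -Force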
An option that will probably execute faster is to directly run T-SQL against the database to set up your initial state before the tests and empty the database after the tests.
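A hedged sketch of that approach, assuming the SqlServer module and a hypothetical reset.sql script you maintain (server, credentials, and table names are placeholders):
# Seed the known state before the tests.
Invoke-Sqlcmd -ServerInstance "sql-e2e.database.windows.net" -Database "AppDb" `
    -Username $sqlUser -Password $sqlPassword -InputFile ".\reset.sql"
# Empty the relevant tables after the tests.
Invoke-Sqlcmd -ServerInstance "sql-e2e.database.windows.net" -Database "AppDb" `
    -Username $sqlUser -Password $sqlPassword -Query "DELETE FROM dbo.Orders; DELETE FROM dbo.Customers;"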
However, a full BACPAC restore and drop is much easier to set up in a way that lets you run multiple test runs in parallel, because it does not rely on a pre-existing database being claimed for the duration of the run.
Edit: Changed link to en-us instead of nl-nl

Another option is to run the Azure database as a Docker service; this may simplify your initial database setup.

Related

Sqlproj deployment to AzureSql (dacpac vs bacpac)

The Situation
I have an Azure DevOps build pipeline that builds and deploys to an existing Azure SQL Database instance via the output .dacpac.
I would like to be able to run a script or execute API calls to create new Azure SQL database instances based on that project. I have found the New-AzSqlDatabaseImport PowerShell cmdlet that ALMOST lets me do that, but it requires a .bacpac rather than a .dacpac. I attempted to use the .dacpac and, naturally, the process failed.
The Question
Can I output a .bacpac from my SqlProj build process?
Alternatively, is there a way to create a new database and have that database schema imported from the dacpac in a relatively smooth, elegant fashion?
What we have gone with is the following:
Host a "template" database alongside the other databases.
Update the "template" database during each update cycle with the dacpac changes.
On new user/organization creation, execute a single-call PowerShell script that performs a quick copy of the "template" database using New-AzSqlDatabaseCopy.
This appears to go faster than a separate provision and dacpac deploy, and it is a single call to execute. In the future the PowerShell execution is likely to be changed to an Azure API call.
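A minimal sketch of that copy step (resource group, server, and database names are placeholders):
# Copy the up-to-date "template" database to a new per-organization database.
New-AzSqlDatabaseCopy -ResourceGroupName "rg-prod" -ServerName "sql-prod" `
    -DatabaseName "TemplateDb" -CopyServerName "sql-prod" -CopyDatabaseName "CustomerDb-Contoso"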

Azure Devops - Manage, Run and Track one-time Sql Scripts

We have a database project that uses a dacpac to deploy schema changes and also allows a pre-deployment and post-deployment script.
However, we frequently have to run one-off scripts, and security would prefer that developers not have write access in prod (we do not have a DBA role at this time). I'm trying to find a solution that works with Azure DevOps to store one-time-run scripts in git, run a script if it has not been run before, and not run it the next time the pipeline runs. We'd like this done through DevOps so the SP has access to run the queries and not the dev; anything flowing through the pipe has been through our peer review process, and we have a record of what was executed.
I'm looking for suggestions from anyone who has done this or is aware of any product which can do this.
Use Liquibase. Though I would have it as part of my code base, you can also use it from the CLI and run your scripts using that tool.
Liquibase keeps track of which SQL files you have published across deployments, so you can have multiple stages (say DIT, UAT, STAGING, PROD) and it will apply the remaining one-off SQL changes over time.
Generally, unless you really need support, I doubt you'd need the commercial version. The open-source version is more than sufficient for my needs, and I have a relatively complex system already.
The main reason I like Liquibase over other technologies is that it allows for SQL-based changesets, so the learning curve is a lot lower.
Two tips:
- Don't rely on the automatic computation of the logicalFilePath; set it explicitly even if it means repeating yourself. This allows you to refactor your scripts later, so instead of lumping everything into a single folder you can group them.
- Name your scripts with the date first. That way you can leverage the natural sorting order.
I've faced a similar problem in the past:
Option 1
If you can afford to have an additional table in your database to keep track of what was executed or not, your problem can be solved easily; there is a tool that helps with this: https://github.com/DbUp/DbUp
Then you would have a new repository, let's call it OneOffSqlScriptsRepository, and your pipeline would consume this repository:
resources:
  repositories:
    - repository: OneOffSqlScriptsRepository
      endpoint: OneOffSqlScriptsEndpoint
      type: git
Thus you'd create a pipeline to run this DbUp application, consuming the scripts from the OneOffSqlScripts repository; DbUp would take care of executing each script only once (it's configurable).
The username/password for the database can be stored safely in the Library combined with Azure Key Vault, so only people with the right access rights can access them (apart from the pipeline).
Option 2
This option assumes that you want to do everything using only the native resources that Azure Pipelines provides.
Create a OneOffSqlScripts repository as in Option 1.
Create a ScriptsRunner repository.
In the ScriptsRunner repository, you'd create a folder containing a .json file with the names of the scripts and the number of times (or a boolean) they have been run.
e.g. (a hasRun boolean could be used instead of the runs counter):
[{
  "id": 1,
  "scriptName": "myscript1.sql",
  "runs": 0
}]
Then write a Python script that reads the JSON file and updates the number of runs; you'd need to update your repository after each pipeline run. This means your pipeline will perform a git commit/push operation after each run in which there were new scripts to run.
The algorithm is like this; the implementation can be tuned (see the sketch below).
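A rough sketch of that bookkeeping, shown here in PowerShell rather than Python (file, server, and database names are placeholders):
# Load the tracking file and run every script that has not been run yet.
$scripts = Get-Content ".\scripts.json" -Raw | ConvertFrom-Json
foreach ($script in ($scripts | Where-Object { $_.runs -eq 0 })) {
    Invoke-Sqlcmd -ServerInstance $server -Database $database -InputFile ".\sql\$($script.scriptName)"
    $script.runs++
}
# Persist the updated counters; the pipeline then commits and pushes this file back to the repository.
$scripts | ConvertTo-Json -Depth 5 | Set-Content ".\scripts.json"
git add .\scripts.json
git commit -m "Mark one-off scripts as executed"
git push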

Using SQL developer to automate several sql scripts

I have several scripts that I run in SQL Developer every day. I have it set up as @script_1.sql; @script_2.sql; @script_n.sql; and I hit F5 and let it run through all my scripts, then look at my results when it has finished parsing through all of them. I want to have this automated so it is finished running those scripts by the time I get into work. My system admins have disabled Task Scheduler. I have a few jobs set up that work fine, but when I tried turning this into a job it wouldn't work. How would I automate this using only SQL Developer?
You could create a package and convert those SQL files into its procedures, then schedule their execution with the DBMS_JOB or DBMS_SCHEDULER package.

Run Powershell script every hour on Azure

I have found this great script which backs up a SQL Azure database to blob storage.
I want to run many different variations of this script - e.g. DB1 goes to Customer1Blob, DB2 goes to Customer2Blob.
I have looked at Scheduler Job Collections. However, I can only see options (action settings) for HTTP(S), Storage Queue, or Service Bus.
Is it possible to run a specific .ps1 script (with commands) scheduled?
You can definitely run a PowerShell script as a WebJob. If you want to run a script on a schedule, you can add a settings.job file containing a cron expression to your WebJob. The docs for doing so are here.
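For example, a minimal settings.job placed next to the WebJob script, using a six-field cron expression that fires at the top of every hour:
{
  "schedule": "0 0 * * * *"
}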
For this type of automation task, I prefer to use the Azure Automation service. You can create runbooks using PowerShell and then schedule them with the Azure scheduler. You can have them run "on Azure" so you do not need to use compute power that you pay for (rather, you pay per minute the job runs), or you can configure them to run on a hybrid worker.
For more information, please see the documentation.
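A minimal runbook sketch (assuming a managed identity on the Automation account; resource, storage, and credential names are placeholders, and the export cmdlet stands in for whatever backup script you use):
# Runbook: export an Azure SQL database to blob storage on a schedule.
Connect-AzAccount -Identity | Out-Null
New-AzSqlDatabaseExport -ResourceGroupName "rg-prod" -ServerName "sql-prod" `
    -DatabaseName "CustomerDb" `
    -StorageKeyType "StorageAccessKey" -StorageKey $storageKey `
    -StorageUri "https://mystorage.blob.core.windows.net/backups/CustomerDb-$(Get-Date -Format yyyyMMddHHmm).bacpac" `
    -AdministratorLogin $sqlUser -AdministratorLoginPassword $sqlPassword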
When exporting from SQL DB or from SQL Server, make sure you are exporting from a quiescent database. Exporting from a database with active transactions can result in data integrity issues - data being added to various tables while they are also being exported.
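Building on that, one way to avoid exporting a live database is to copy it first and export the copy; a hedged sketch with placeholder names:
# Create a transactionally consistent copy, export the copy, then drop it.
New-AzSqlDatabaseCopy -ResourceGroupName "rg-prod" -ServerName "sql-prod" `
    -DatabaseName "CustomerDb" -CopyDatabaseName "CustomerDb-export"
# ...run New-AzSqlDatabaseExport against "CustomerDb-export" (as in the runbook sketch above),
# poll Get-AzSqlDatabaseImportExportStatus until it reports Succeeded, then:
Remove-AzSqlDatabase -ResourceGroupName "rg-prod" -ServerName "sql-prod" `
    -DatabaseName "CustomerDb-export" -Force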

Run SSIS Package from T-SQL

I noticed you can use the following stored procedures (in order) to schedule an SSIS package:
msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
msdb.dbo.sp_add_job ...
msdb.dbo.sp_add_jobstep ...
msdb.dbo.sp_update_job ...
msdb.dbo.sp_add_jobschedule ...
msdb.dbo.sp_add_jobserver ...
(You can see an example by right-clicking a scheduled job and selecting "Script Job as -> CREATE To".)
AND you can use sp_start_job to execute the job immediately, effectively running SSIS packages on demand.
Question: does anyone know of any msdb.dbo.[...] stored procedures that simply allow you to run SSIS packages on the fly without using xp_cmdshell directly, or some easier approach?
Well, you don't strictly need the sp_add_category, sp_update_job or sp_add_jobschedule calls. We do on-demand package execution in our app using SQL Agent with the following call sequence (a sketch is shown below):
- sp_add_job
- sp_add_jobstep
- sp_add_jobserver
- sp_start_job
Getting the job status is a little tricky if you can't access the msdb..sysjobXXX tables, but our jobs start & run just fine.
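A hedged sketch of that call sequence, wrapped in PowerShell via Invoke-Sqlcmd (server, job, step, and package path are placeholders; the SSIS step uses dtexec-style /FILE arguments and runs under the SQL Agent service account unless a proxy is configured):
$sql = @'
DECLARE @jobId UNIQUEIDENTIFIER;
EXEC msdb.dbo.sp_add_job @job_name = N'RunMyPackageOnce', @job_id = @jobId OUTPUT;
EXEC msdb.dbo.sp_add_jobstep @job_id = @jobId, @step_name = N'Run package',
    @subsystem = N'SSIS', @command = N'/FILE "C:\Packages\MyPackage.dtsx"';
EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(LOCAL)';
EXEC msdb.dbo.sp_start_job @job_name = N'RunMyPackageOnce';
'@
Invoke-Sqlcmd -ServerInstance 'myserver' -Query $sql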
EDIT: Other than xp_cmdshell, I'm not aware of another way to launch the SSIS handlers from within SQL Server. Anyone with permissions on the server can start the dtexec or dtutil executables; then you can use batch files, a job scheduler, etc.
Not really... you could try sp_OACreate but it's more complicated and may not do it.
Do you need to run them from SQL? They can be run from the command line, a .NET app, etc.
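For example, a package can be launched from the command line or a PowerShell step like this (the package path is a placeholder; packages deployed to the SSIS catalog use /ISSERVER instead of /FILE):
# Run a package stored on the file system with dtexec.
dtexec /FILE "C:\Packages\MyPackage.dtsx"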
In SQL Server 2012+ it is possible to use the following stored procedures (found in the SSISDB database, not the msdb database) to create SSIS executions, set their parameters, and start them:
[catalog].[create_execution]
[catalog].[set_execution_parameter_value]
[catalog].[start_execution]
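A hedged example of that sequence (server, folder, project, and package names are placeholders), wrapped here in PowerShell via Invoke-Sqlcmd:
$sql = @'
DECLARE @execution_id BIGINT;
EXEC SSISDB.catalog.create_execution
    @folder_name = N'MyFolder',
    @project_name = N'MyProject',
    @package_name = N'MyPackage.dtsx',
    @use32bitruntime = 0,
    @execution_id = @execution_id OUTPUT;
-- Object type 50 = system parameter; SYNCHRONIZED = 1 makes start_execution wait for the package to finish.
EXEC SSISDB.catalog.set_execution_parameter_value @execution_id,
    @object_type = 50, @parameter_name = N'SYNCHRONIZED', @parameter_value = 1;
EXEC SSISDB.catalog.start_execution @execution_id;
'@
Invoke-Sqlcmd -ServerInstance 'myserver' -Database 'SSISDB' -Query $sql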