Azure DevOps - Manage, Run and Track One-Time SQL Scripts

We have a database project that uses a dacpac to deploy schema changes and also allows a pre-deployment and post-deployment script.
However, we frequently have to run one-off scripts, and security would prefer that developers not have write access in prod (we do not have a DBA role at this time). I'm trying to find a solution that works with Azure DevOps to store one-time-run scripts in Git, run each script if it has not been run before, and skip it the next time the pipeline runs. We'd like this done through DevOps so the SP has access to run the queries and not the dev; anything flowing through the pipe has been through our peer review process, and we have a record of what was executed.
I'm looking for suggestions from anyone who has done this or is aware of any product which can do this.

Use Liquibase. Though I would make it part of my code base, you can also use it from the CLI and run your scripts with that tool.
Liquibase keeps track of which SQL files you have published across deployments, so you can have multiple stages (say DIT, UAT, STAGING, PROD) and it will apply the remaining one-off SQL changes over time.
Generally, unless you really need support, I doubt you'd need the commercial version. The open-source version is more than sufficient for my system's needs, and I have a relatively complex system already.
The main reason I like Liquibase over other technologies is that it allows for SQL-based changesets, so the learning curve is a lot lower.
Two tips:
Don't rely on the automatic computation of the logicalFilePath; set it explicitly, even if it means repeating yourself. This allows you to refactor your scripts later, so instead of lumping everything into a single folder you can group them.
Name your scripts with the date first. That way you can leverage the natural sort order.
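To illustrate both tips, a minimal SQL-formatted changelog might look like the sketch below; the file name, author, and index are illustrative, not from the original answer:

-- liquibase formatted sql

-- changeset jdoe:2023-04-17-add-customer-email-index logicalFilePath:changelogs/2023-04-17-add-customer-email-index.sql
CREATE NONCLUSTERED INDEX IX_Customer_Email ON dbo.Customer (Email);
-- rollback DROP INDEX IX_Customer_Email ON dbo.Customer;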

I've faced a similar problem in the past:
Option 1
If you can afford an additional table in your database to keep track of what was executed or not, your problem can be solved easily; there is a tool which helps you: https://github.com/DbUp/DbUp
Then you would have a new repository, let's call it OneOffSqlScriptsRepository, and your pipeline would consume this repository:
resources:
  repositories:
    - repository: OneOffSqlScriptsRepository
      endpoint: OneOffSqlScriptsEndpoint
      type: git
Thus you'd create a pipeline to run this DbUp application, consuming the scripts from the OneOffSqlScripts repository; DbUp takes care of executing each script only once, by recording executed scripts in a journal table (this is configurable).
The username/password for the database can be stored safely in the Library, combined with Azure Key Vault, so only people with the right access rights (apart from the pipeline) can access them.
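For reference, the consuming pipeline might look roughly like this. DbUp is a .NET library, so the sketch assumes a small console app wrapping it; the checkout step is the standard way to fetch a repository resource, while the DbUpRunner project name and connection-string variable are assumptions:

steps:
- checkout: OneOffSqlScriptsRepository
- task: DotNetCoreCLI@2
  displayName: Run pending one-off scripts via DbUp
  inputs:
    command: run
    projects: DbUpRunner/DbUpRunner.csproj
    arguments: '-- "$(SqlConnectionString)"'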
Option 2
This option assumes that you want to do everything using only the native resources that Azure Pipelines provides.
Create a OneOffSqlScripts repository as in option 1
Create a ScriptsRunner repository
In the ScriptsRunner repository, you'd create a folder containing a .json file with the name of each script and the number of times it has run (or a boolean).
e.g.:
[{
  "id": 1,
  "scriptName": "myscript1.sql",
  "runs": 0
}]
Then write a Python script that reads the JSON file, runs any pending scripts, and writes the file back with the updated run counts; you'd need to update your repository after each pipeline run. That means your pipeline performs a git commit / push operation after each run in which there were new scripts to execute.
That is the general algorithm; the implementation can be tuned.
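As a rough sketch of that Python script (the file layout matches the JSON above; the sqlcmd invocation and the environment variable names are assumptions):

import json
import os
import subprocess
from pathlib import Path

TRACKING_FILE = Path("scripts/tracking.json")
SCRIPTS_DIR = Path("scripts")

def run_sql(script: Path) -> None:
    # Assumes sqlcmd is installed on the agent; server and database
    # names are passed in as environment variables by the pipeline.
    subprocess.run(
        ["sqlcmd",
         "-S", os.environ["SQL_SERVER"],
         "-d", os.environ["SQL_DATABASE"],
         "-i", str(script)],
        check=True,
    )

def main() -> None:
    entries = json.loads(TRACKING_FILE.read_text())
    for entry in entries:
        if entry["runs"] == 0:  # or: if not entry["hasRun"]
            run_sql(SCRIPTS_DIR / entry["scriptName"])
            entry["runs"] += 1
    # Write the updated counts back so the follow-up git commit / push
    # marks these scripts as executed.
    TRACKING_FILE.write_text(json.dumps(entries, indent=2))

if __name__ == "__main__":
    main()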


Sqlproj deployment to AzureSql (dacpac vs bacpac)

The Situation
I have an Azure DevOps build pipeline that builds and deploys to an existing Azure SQL Database instance via the outputted .dacpac.
I would like the ability to run a script or execute API calls to create new Azure SQL database instances based on that project. I have found the New-AzSqlDatabaseImport PowerShell cmdlet, which ALMOST lets me do that but requires a .bacpac rather than a .dacpac. I attempted to use the .dacpac and, naturally, the process failed.
The Question
Can I output a .bacpac from my SqlProj build process?
Alternatively, is there a way to create a new database and have its schema imported from the dacpac in a relatively smooth, elegant fashion?
What we have gone with is the following:
Host a "template" database alongside the other databases.
Update the "template" database during each update cycle with the dacpac changes.
On new user/organization creation, execute a single-call PowerShell script that performs a quick copy of the "template" database using New-AzSqlDatabaseCopy.
This goes faster than a separate provision plus dacpac deploy, and it is a single call to execute. In the future the PowerShell execution is likely to be changed to an Azure API call.
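A sketch of that copy call, with placeholder resource names:

# Copy the up-to-date "template" database into a new per-organization
# database. All names below are placeholders.
New-AzSqlDatabaseCopy `
    -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -DatabaseName "TemplateDb" `
    -CopyResourceGroupName "my-rg" `
    -CopyServerName "my-sqlserver" `
    -CopyDatabaseName "NewOrgDb"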

How to resolve "No hosted parallelism has been purchased or granted" in free tier?

I've just started with Azure DevOps pipelines and have created a very simple pipeline with a Maven task. For now I don't care about parallelism, and I'm not sure how I even added it to my pipeline. Is there any way to use the Maven task on the free tier without parallelism?
This is my pipeline:
trigger:
- master

pool:
  vmImage: ubuntu-latest

steps:
- task: Maven@3
My thought was that tasks always run in parallel? Other than that, I cannot see where the parallel step is.
First: tasks are always executed sequentially, and one sequential pipeline is documented as "1 parallel job" (yes, the naming could be better). Due to the changes laid out below, new accounts now get zero parallel jobs, and a manual request must be made to get the previous default of one parallel job and the free build minutes.
See this:
We have temporarily disabled the free grant of parallel jobs for public projects and for certain private projects in new organizations. However, you can request this grant by submitting a request. Existing organizations and projects are not affected. Please note that it takes us 2-3 business days to respond to your free tier requests.
More background information on why these limitations are in play:
Change in Azure Pipelines Grant for Private Projects
Change in Azure Pipelines Grant for Public Projects
Changes to Azure Pipelines free grants
TL;DR: people were using automation to spin up thousands of Azure DevOps organizations, add a pipeline, and use the service to send spam, mine bitcoin, or for other nefarious purposes. The fact that they could do so free, quickly, and without any human intervention was a burden on the team. Automatic detection of nefarious behavior proved hard and turned into an endless cat-and-mouse game. The manual step is a necessary evil that has put a stop to this abuse and is in no way meant as a step towards further monetization of the service. It's actually there to ensure a free tier remains something that can be offered to real people like you and me.
This is absurd: the 'free tier' is not entirely free unless you request it again!
Best option: use a self-hosted pool. It can be your laptop, wherever you would like to run tests.
See the Microsoft docs on self-hosted agents.
Then use that pool in your YAML file:
pool: MyPool
Alternatively
Request access from Microsoft: you can request it via the form linked in the error message below. Typically it gets approved in a day or two.
##[error]No hosted parallelism has been purchased or granted. To request a free parallelism grant, please fill out the following form https://aka.ms/azpipelines-parallelism-request
The simplest solution is to change the project from public to private so that you can use the free pool. Private projects have a free pool by default.
Otherwise, consider using a self-hosted pool on your machine, as suggested above.
Here's the billing page.
If you're using a recent version of macOS with Gatekeeper, this "security enhancement" is a serious PITA for the unaware, as you get hundreds of errors where each denied assembly has to be manually allowed in the Security settings.
Don't do that.
After downloading the agent file from DevOps and BEFORE you unzip the file, run the command below on it. This removes the attribute that triggers the errors and allows you to continue uninterrupted.
xattr -c vsts-agent-osx-x64-V.v.v.tar.gz ## replace V.v.v with the version in the filename downloaded.
# then unpack the gzip tar file normally:
tar xvfz vsts-agent-osx-x64-V.v.v.tar.gz
Here are all the steps you need to run, including the one above, to get past the "hosted parallelism" issue and continue testing immediately, either while you wait for authorization or to skip it entirely.
Go to Project settings -> Agent pools
Create a new agent pool and call it "local" (call it whatever you want, or use the Default agent pool)
Add a new agent and follow the instructions, which include downloading the agent for your OS (macOS here)
Run xattr -c vsts-agent-osx-x64-V.v.v.tar.gz on the downloaded file to remove the Gatekeeper security issues
Unpack the archive with tar xvfz vsts-agent-osx-x64-V.v.v.tar.gz
cd into the archive directory and run ./config.sh. The most important configuration option is the Server URL, which will be https://dev.azure.com/{organization name}; the defaults are fine for the rest. Continue until you are back at the command prompt. At this point, if you look inside DevOps, in either your new agent pool or Default (depending on where you put it), you'll see your new agent as "offline", so run:
./run.sh, which will bring your agent online. Your agent is now running and listening for you to start your job. Note that this will tie up your terminal window.
Finally, in your pipeline YAML file, configure your job to use your local agent by specifying the name of the agent pool where the self-hosted agent resides, like so:
trigger:
- main

pool:
  name: local

# pool:
#   vmImage: ubuntu-latest
I faced the same issue. I changed the project visibility from Public to Private and then it worked; no need to fill out a form or purchase anything.

Cannot remove file from data lake store using runbook

I am trying to run a runbook on Azure that contains the following command:
Remove-AzureRmDataLakeStoreItem
When the Runbook is run, the following error comes out:
"Remove-AzureRmDataLakeStoreItem : The term 'Remove-AzureRmDataLakeStoreItem' is not recognized as the name of a cmdlet,..."
What should I do?
This issue typically happens when there is a version mismatch between the PS modules used in your runbook and those installed in the Azure Automation account. To resolve it, update the Azure PS modules within the Automation account; the "update" steps are published in the Azure Automation documentation.
Important note:
"Because modules are updated regularly by the product group, changes can occur with the included cmdlets, which may negatively impact your runbooks depending on the type of change, such as renaming a parameter or deprecating a cmdlet entirely. To avoid impacting your runbooks and the processes they automate, it is recommended that you test and validate before proceeding. If you do not have a dedicated Automation account intended for this purpose, consider creating one so that you can test many different scenarios and permutations during the development of your runbooks, in addition to iterative changes such as updating the PowerShell modules. After the results are validated and you have applied any changes required, proceed with coordinating the migration of any runbooks that required modification and perform the following update as described in production."

How to make a self-updating pipeline in Concourse

I would like to make a pipeline whose first step checks its own configuration and updates itself if needed.
What tool / API should I use for this? Is there a Docker image that has this installed for the correct Concourse version? What is the advised way to authenticate with Concourse from such a task?
Regarding the other answer suggesting the fly binary, see the Fly resource.
However, having a similar requirement, I am going to try the Pipeline resource. It seems more specific and has var injection solved directly through parameters.
I still have to try it out, but it seems to me that it would be more efficient to have a single pipeline that updates all pipelines, rather than inserting this job into every one of your pipelines.
Also, a specific pipeline should not be concerned with itself, just the source code it builds (or whatever it does). If you want to start a pipeline when its config file changes, this can be done by modifying a triggering resource, e.g. by pushing an empty "pipeline changed" commit.
Naively, it'd be a task that gets the repo the pipeline is committed to and runs fly set-pipeline to update the configuration (see the sketch after this list). However, there are a few gotchas here:
fly binary. You'll want the fly executable available in the container that runs this task, and it should be the same version of fly as the Concourse being targeted. That probably means you should download it directly from the host via curl.
Authenticating with the Concourse server. You'll need to provide credentials for fly to use, probably via parameters.
Parameter updates. If new parameters become needed, you'll need some kind of single source for all the parameters that need to be set, and use --load-vars-from rather than just --var. My group uses LastPass notes with a bunch of variables saved in them, downloaded via the lpass tool, but that gets hard if you use 2FA or similar.
Moving the server. You will need the external address of the Concourse injected as a parameter as well if you want to be resilient to it changing.
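A rough sketch of such a task script, addressing the first two gotchas; the URL, target name, credential variables, and repo layout are all illustrative:

#!/bin/sh
set -e

# Download fly from the Concourse host itself so the versions always match.
curl -fsSL "$CONCOURSE_URL/api/v1/cli?arch=amd64&platform=linux" -o /usr/local/bin/fly
chmod +x /usr/local/bin/fly

# Authenticate, then push the pipeline config from the checked-out repo.
fly -t self login -c "$CONCOURSE_URL" -u "$CONCOURSE_USER" -p "$CONCOURSE_PASSWORD"
fly -t self set-pipeline -n \
    -p my-pipeline \
    -c repo/ci/pipeline.yml \
    -l repo/ci/vars.yml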

How to trigger a build within a build chain after x days?

I am currently using TeamCity to deploy a web application to Azure Cloud Services. We typically deploy using PowerShell scripts to the Staging slot and thereafter do a manual swap (Staging to Production) on the Azure Portal.
After the swap, we typically leave the Staging slot active with the old production deployment for a few days (in the event we need to revert/backout of the deployment) and thereafter delete it - this is a manual process.
I am looking to automate this process using TeamCity. My intended solution is to have a TeamCity build kick off x days after the deployment build has succeeded (the details of the build steps are irrelevant, since I'd probably use PowerShell again to delete the staging slot).
This plan has pointed me to look into TeamCity build chains, snapshot dependencies, etc.
What I have done so far is
correctly created the build chain by creating a snapshot dependency on the deployment build configuration and
created a Finish Build Trigger
At the moment, the current approach kicks off the dependent build 'Delete Azure Staging Web' (B) immediately after the deployment build has succeeded. However, I would like this to be a delayed build, x days later.
Looking at the above build chain, I would like build B to run on 13-Aug-2016 at 7:31am (if x=3).
I have looked into the Schedule Trigger option as well, but am slightly lost as to how to use it to achieve this. As far as I understand, using a cron expression will result in the build running repeatedly, which is not what I want: I would like build B to execute only once.
Yes, this can be done by making use of the REST API.
I've made a small sample which should convey the fundamental steps. It is a PowerShell script that clears the triggers on another build configuration (determined by a parameter value in the script) and adds a scheduled trigger with a start time X days on from the current time (also determined by a parameter value in the script).
1) Add a PowerShell step at the end of the main build and run add-scheduled-trigger as source code.
2) Update the parameter values in the script:
$BuildTypeId - the id of the configuration you want to add the trigger to
$NumberOfDays - the number of days ahead that you want to schedule the trigger for
There is admin / admin embedded in the script as username / password authentication for the REST API.
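A sketch along those lines is below; it is not the original sample. The TeamCity URL, build configuration id, and especially the scheduled-trigger property names are assumptions that vary between TeamCity versions, so GET an existing scheduled trigger from .../triggers first and mirror what it returns.

# Replace all triggers on $BuildTypeId with a cron schedule that fires
# $NumberOfDays from now. All values below are placeholders.
$TeamCityUrl  = "http://teamcity.example.com/httpAuth"
$BuildTypeId  = "MyProject_DeleteAzureStagingWeb"
$NumberOfDays = 3
$StartTime    = (Get-Date).AddDays($NumberOfDays)

$password = ConvertTo-SecureString "admin" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential("admin", $password)
$triggersUrl = "$TeamCityUrl/app/rest/buildTypes/id:$BuildTypeId/triggers"

# Remove any existing triggers on the target configuration.
$existing = Invoke-RestMethod -Uri $triggersUrl -Credential $cred
foreach ($t in $existing.triggers.trigger) {
    Invoke-RestMethod -Uri "$triggersUrl/$($t.id)" -Method Delete -Credential $cred
}

# Add a cron-based scheduled trigger for the computed date and time.
$body = @"
<trigger type='schedulingTrigger'>
  <properties>
    <property name='schedulingPolicy' value='cron'/>
    <property name='cronExpression_sec' value='0'/>
    <property name='cronExpression_min' value='$($StartTime.Minute)'/>
    <property name='cronExpression_hour' value='$($StartTime.Hour)'/>
    <property name='cronExpression_dm' value='$($StartTime.Day)'/>
    <property name='cronExpression_month' value='$($StartTime.Month)'/>
    <property name='cronExpression_dw' value='?'/>
  </properties>
</trigger>
"@
Invoke-RestMethod -Uri $triggersUrl -Method Post -Credential $cred `
    -Body $body -ContentType "application/xml"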
Once this is done you should see a scheduled trigger created / updated each time you build the first configuration.
Hope this helps