Executing one-time scripts from Azure DevOps Pipelines

I'm looking for some advice on how others might have managed the handling and execution of one-time scripts which need to be executed either pre-deployment or post-deployment.
We are looking at building a solution but I was wondering if there are any tools out there already?
I want the pipeline to find the scripts relevant to the sprint release being deployed and run each script only if it has not been run before. These scripts are usually data changes that follow schema updates. We use Cosmos DB, and the scripts are written in .NET.
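For context, here is a minimal sketch of the kind of post-deployment job being described, assuming a hypothetical .NET console runner (tools/ScriptRunner) that records completed script names in a Cosmos DB tracking container and skips anything already recorded; the project path, variable names, and arguments are all illustrative, not an existing tool:

```yaml
# Hypothetical post-deployment job fragment; not an existing tool.
steps:
  - task: UseDotNet@2
    inputs:
      packageType: 'sdk'
      version: '8.x'
  # The runner checks each script against a tracking container and only executes
  # scripts with no "already run" marker, writing a marker on success.
  - script: >
      dotnet run --project tools/ScriptRunner --
      --scripts-path "scripts/$(ReleaseSprint)"
      --cosmos-connection "$(CosmosConnectionString)"
    displayName: Run pending one-time scripts
```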

Related

How to deploy a SQL Database to Azure in a Bitbucket pipeline

Asking for an opinion or direction on the current problem.
We are using Bitbucket Pipelines to do CI/CD for web applications deployed to Azure. What remains is the database, which is also hosted on Azure.
From my research, everything on SQL Database Project deployments usually uses Azure DevOps pipelines (connects to a GitHub repo, supports multiple environments, and has a built-in SQL agent that deploys the SQL database to the target server via a dacpac file). It allows CI with every check-in, every time you push changes. Nice!
But what if I cannot (for some reason) use Azure DevOps and have to use Bitbucket Pipelines instead? Is that possible? How? Via scripting? A tool to call on the command line? Any help is highly appreciated.
It's true that in Azure DevOps it is easier to deploy an (Azure) SQL Database, as Azure DevOps offers many tasks (including 3rd-party custom tasks you can find in the Microsoft Marketplace).
However, no matter what tool you use, you should be able to do the same once you know the concept behind deploying this specific service.
I don't know Bitbucket very well, but I bet the product can execute commands, including PowerShell commands. If so, you need two steps in your pipeline to publish an Azure SQL database (a sketch of both steps in a Bitbucket pipeline follows below):
1) Create the server and an (empty) database. Perhaps Bitbucket offers a task for creating services in Azure (from an ARM template or some other way); if not, you can always use the CLI or PowerShell to do so. More info: az cli server.
2) Deploy the database, or the changes to it. This step always compares a DACPAC file (the compiled version of a SQL Server database project) to the target database on the server. The result is a differential T-SQL script that must be executed against the target database. There is only one way to do so: sqlpackage.exe, a tool provided by Microsoft. You can find the whole documentation here and plenty of examples of how to use it on the Internet.
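To make the two steps concrete, here is a rough sketch of a bitbucket-pipelines.yml step. It assumes the dacpac was produced by an earlier build step, that sqlpackage is available on the build image, and that the AZ_*/SQL_* values are configured as repository variables; the image, names, and flags are illustrative and should be adapted to your setup.

```yaml
pipelines:
  branches:
    main:
      - step:
          name: Deploy Azure SQL database
          image: mcr.microsoft.com/azure-cli   # assumed; sqlpackage must be installed separately
          script:
            - az login --service-principal -u "$AZ_CLIENT_ID" -p "$AZ_CLIENT_SECRET" --tenant "$AZ_TENANT_ID"
            # 1) Create the logical server and an empty database
            - az sql server create --name "$SQL_SERVER" --resource-group "$RESOURCE_GROUP" --location westeurope --admin-user "$SQL_ADMIN" --admin-password "$SQL_PASSWORD"
            - az sql db create --resource-group "$RESOURCE_GROUP" --server "$SQL_SERVER" --name "$SQL_DATABASE" --service-objective S0
            # 2) Compare the dacpac to the target database and apply the differential script
            - >
              sqlpackage /Action:Publish
              /SourceFile:output/MyDatabase.dacpac
              /TargetServerName:$SQL_SERVER.database.windows.net
              /TargetDatabaseName:$SQL_DATABASE
              /TargetUser:$SQL_ADMIN
              /TargetPassword:$SQL_PASSWORD
```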
Let me know if that helps.

How to diagnose a problem with an Azure DevOps build pipeline without re-running the pipeline every time you make a change?

I have an Azure DevOps pipeline build that has several steps, and the build is long. Every time something is wrong with the build, we review the logs and identify issues or come up with theories. For a theory, we have to insert a diagnostic command line (such as listing a directory or showing the contents of a file) between the steps; for a fix, we apply it and then have to wait for the whole pipeline to rerun to find out whether it worked. This means fixing build issues takes us a lot of time.
If we had access to the state of the agent of an unfinished build and could just log on using RDP or any other terminal to check the contents and state of the files on disk, that would save us a lot of hours.
Is there any way with Azure DevOps to do any diagnostic of this type?
No, not if you are using a hosted agent. If you are using a self-hosted agent, you can obviously log in to that one. You can, however, implement steps that run only if the build failed, and those steps can attempt to capture the information you are interested in (say, publish the state of the build directory).
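A minimal sketch of that failure-only capture in YAML; the build step and artifact name are placeholders:

```yaml
steps:
  - script: ./build.sh                 # placeholder for the real build steps
    displayName: Build

  # Runs only when a previous step failed; publishes the on-disk state so it
  # can be downloaded and inspected instead of logging on to the hosted agent.
  - task: PublishPipelineArtifact@1
    displayName: Capture build directory for diagnostics
    condition: failed()
    inputs:
      targetPath: '$(Build.SourcesDirectory)'
      artifact: 'failed-build-state'
```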
If you are using Azure DevOps Services, there is a new REST API version out that will let you do a "preview" run of changes to the YAML definitions: https://learn.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-165-update#preview-fully-parsed-yaml-document-without-committing-or-running-the-pipeline

How to configure Azure DevOps Pipeline decorators to run pre-tasks in classic pipelines?

We have a custom Azure DevOps extension that injects SonarQube pipeline tasks into every definition using the Pipeline Decorator feature. These tasks are a mixture of both pre and post tasks.
In YAML-defined pipelines the tasks run perfectly; however, in classic pipeline definitions only the post tasks run, although the classic and YAML pipelines are defined identically (steps, agents, demands, variables, etc.).
As this is a relatively new feature of Azure DevOps, there is a lack of documentation, especially regarding classic pipelines.
Is there something that we could possibly be missing for this to happen?
Is there something that we could possibly be missing for this to happen?
This seems to be an issue on our side, and it only affects the SonarCloud/SonarQube prepare task when it is applied through a decorator.
As you know, users use a YAML template for the steps to be inserted at the specified location, and on our backend this template file is processed through the YAML template engine.
By design, after you enable pipeline decorators at the organization level, the Initialize job calls a backend class to get the JobContext, which adds the decorator providers to the JobContext. The JobContext then uses these providers to fetch contributions and add pre/post tasks to the job while preparing it to run.
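For illustration, the steps a decorator injects come from a YAML template that the extension's contribution points at (targeting, for example, ms.azure-pipelines-agent-job.pre-job-tasks). A rough sketch follows, in which the task version, inputs, and service connection name are illustrative rather than taken from the question:

```yaml
# my-decorator.yml -- sketch of a pre-job decorator template, not the asker's
# actual extension.
steps:
  - task: SonarQubePrepare@5
    displayName: Injected SonarQube prepare (pre-job)
    inputs:
      SonarQube: 'SonarQubeConnection'        # hypothetical service connection
      scannerMode: 'MSBuild'
      projectKey: '$(Build.Repository.Name)'
```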
BUT the Sonar prepare task cannot be actively detected by the engine and injected into the JobContext. The reason I point to this specific task is that, so far, this kind of abnormality only exists in the prepare task of SonarCloud and SonarQube.
Our team will investigate and work on a fix together with the Sonar team.
For now, there are two workarounds you could consider.
Workaround 1:
As mentioned previously, this prepare task cannot be actively detected and injected into the JobContext. So, for the first workaround, we actively add this task's info to the JobContext by adding the prepare task to the agent job.
The disadvantage is that this loads two prepare tasks: one executes in the pre-job phase, and the other runs again after it.
Workaround 2:
Use YAML to build your pipeline until we fix this abnormality, so that it does not error because of a missing prepare task.
We will update the status here to let you know once we have any progress.

Using azure devops to deploy to an offline server

I'm using an Azure DevOps pipeline to build my IIS application and deploy it via release management to several different servers, and it works great. My issue, though, is that one of the servers I need to deploy to will always be offline, so I need to set up some sort of offline installer for that deployment. Is there a way to do this using the build and release management I already have that I'm not seeing?
Azure Pipelines assumes that the server is always available. The best I can think of is to generate some kind of drop on a file share and then add a Manual Intervention task to pause the pipeline and allow you to do your thing.
There is no air-gapped agent nor a way to run part of your pipeline on another system and import the results.
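A rough sketch of that drop-plus-pause idea in YAML form, using ManualValidation (the YAML counterpart of the classic Manual Intervention task, which must run in an agentless job); the share path and timeout are illustrative:

```yaml
jobs:
  - job: PublishDrop
    steps:
      - task: CopyFiles@2
        inputs:
          SourceFolder: '$(Build.ArtifactStagingDirectory)'
          Contents: '**'
          TargetFolder: '\\fileserver\drops\$(Build.BuildNumber)'   # hypothetical share

  - job: WaitForOfflineInstall
    dependsOn: PublishDrop
    pool: server                      # ManualValidation only runs in an agentless job
    steps:
      - task: ManualValidation@0
        timeoutInMinutes: 1440
        inputs:
          instructions: 'Copy the drop to the offline server, install it, then resume.'
          onTimeout: 'reject'
```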

How to test Concourse pipelines

My team has multiple Concourse pipelines and as we refactor tasks, we've realized the need to test our actual pipelines.
We already test our tasks by using environment variables so that task scripts can be run locally, but the pipeline YAML is another matter.
What is the best way to accomplish testing of the pipeline itself?
You can use the Concourse Pipeline Resource to monitor the git repository where you keep your pipeline config. Whenever the pipeline resource detects a change, it will automatically run a fly set-pipeline to update the config in your running Concourse installation. From there, it's easy to script tests against the updated pipeline that is now running in your Concourse installation.
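A rough sketch of that self-updating pattern with the concourse-pipeline resource; the repository, target URL, credentials, and file paths are illustrative, and the exact source/params fields should be checked against the resource's README:

```yaml
resource_types:
  - name: concourse-pipeline
    type: docker-image
    source:
      repository: concourse/concourse-pipeline-resource

resources:
  - name: pipeline-repo
    type: git
    source:
      uri: https://github.com/example-org/ci-pipelines.git   # hypothetical repo
      branch: master
  - name: running-pipelines
    type: concourse-pipeline
    source:
      target: https://concourse.example.com
      teams:
        - name: main
          username: ((ci_username))
          password: ((ci_password))

jobs:
  - name: set-pipelines
    plan:
      - get: pipeline-repo
        trigger: true               # fires whenever the pipeline config changes
      - put: running-pipelines      # effectively runs `fly set-pipeline`
        params:
          pipelines:
            - name: my-app
              team: main
              config_file: pipeline-repo/pipelines/my-app.yml
```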
fly validate-pipeline is pretty useful; running it against pipelines before merging has caught a few bugs in "obviously correct" changes for me.
If you want to test the whole pipeline before merging you need to make sure that the data it's using is static and working (no sense in failing the pipeline if it's the repo that's broken), and that there are no side effects (like notifications) shared between the 'real pipeline' and the 'test pipeline'. I suspect that as long as you're careful with the restrictions, you could make it work, but it would have to be designed in the context of your existing pipelines and infrastructure.