Azure DevOps build pipeline: unreliable triggering by schedule

I run build pipelines in Azure DevOps to update a Dockerfile daily and rebuild a container image with updated dependencies. The purpose is to keep an up-to-date version of a dependency available for the project and publish a new artifact to the container registry.
In Azure DevOps I have three chained build pipelines. The first pipeline is triggered every day with a scheduled trigger. The next two pipelines are triggered with CI triggers using file path filters. This all works well, most of the time.
My problem is that sometimes the scheduled build is not triggered at all. This happens after the pipelines have been running normally for days (anywhere from about 1 to 15 days). The checkbox "Only schedule builds if the source or pipeline has changed" is unchecked, so having no commits should not be the problem.
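For reference, a rough YAML sketch of the same setup (the cron time, branch name, and Dockerfile path below are assumptions, not taken from the question); the always: true setting is the YAML counterpart of leaving that checkbox unchecked:

# First pipeline: runs on a daily schedule
schedules:
- cron: '0 3 * * *'              # assumed time: 03:00 UTC every day
  displayName: Daily dependency update
  branches:
    include:
    - master
  always: true                   # run even if nothing changed since the last run

# Second and third pipelines: CI trigger limited to the files the previous pipeline updates
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - docker/Dockerfile          # assumed path filter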
The strange thing is that when I log in to the Azure DevOps portal after this has happened, the scheduled event fires immediately and I can see the latest daily build start running. I don't need to start it manually; it starts automatically as if it were scheduled, but at the time I logged in.
This project is running on the free tier of Azure DevOps. The project and pipelines were created back when Azure DevOps was still VSTS, and the same triggering problem existed in VSTS as well. Sometimes I run out of the free quota and then get an error that the agent cannot be started, but that is not what happens when the scheduled trigger fails to fire.
What could cause the problem in triggering by the schedule? Have any of you encountered this same problem? How could I debug or resolve this and get my builds running reliably? I cannot find any debug information about the trigger events, only logs from the agent after the trigger has already fired. I have not yet recreated the pipelines to find out whether "rebooting" helps in this case; that's my next step if no better answers come up.

Update 07/11/2019:
Nightly builds require someone to sign in daily; otherwise the organization goes dormant. Microsoft has since updated this logic to give one full month of scheduled builds that continue to run without any user activity.
From the docs:
My build didn't run. What happened?
Your Azure DevOps organization goes dormant five minutes after the last user signed out. After that, each of your build pipelines will run one more time. For example, while your organization is dormant:
A nightly build of code in your Azure DevOps organization will run only one night until someone signs in again.
CI builds of an external Git repo will stop running until someone signs in again.
https://learn.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=vsts&tabs=yaml

Related

Why was a build cancelled in Azure DevOps?

A YAML build pipeline execution was cancelled in Azure DevOps. No logs are available for download. How do I find out why it was cancelled?
This execution was started automatically by Bitbucket to verify a pull request. I might figure out the reason for the cancellation by reviewing Bitbucket's webhook logs (if Bitbucket cancelled it), but is there a way to find it out from Azure DevOps?
Knowing which event or user triggered the cancellation would be enough.
Any pipeline run started in Azure DevOps should be visible in https://dev.azure.com/YOURORGANIZATION/YOURTEAMPROJECT/_build?view=runs (replace YOURORGANIZATION and YOURTEAMPROJECT with your own). Another way is to check the Agent Pool (but then you'll need to know which Agent Pool was used):
Navigate to your organization: https://dev.azure.com/YOURORGANIZATION
Click Organization settings in the bottom-left corner
Click Agent pools under Pipelines
Choose the Agent Pool
Then you see the jobs that reached an Agent
From either of the two options, you should be able to check the logs or warnings to find out what is wrong.
If it's not there, it never started, and you should check Bitbucket.

Triggers for build completion not kicking off pipeline

Hoping someone could help me out with an issue I'm running into. I have 4 different pipelines set up, with the first triggering the second upon build completion and so on down the line. The triggers are not kicking off after the previous pipeline's build completes, as they are supposed to. They're also all on the same branch, so I'm at a loss as to what to do. Any ideas? These are classic pipelines, not YAML.
First, you need to make sure that your MPV Automated Testing Step 1 pipeline runs successfully, because a failed run will not fire the build completion trigger.
I tested two pipelines on the same branch. On my side, build completion trigger works well.
In addition, there was a recent availability degradation event in Azure DevOps which could have affected these services; it has since been resolved. You can try again to see if the problem still exists.
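For what it's worth, the YAML counterpart of a classic build-completion trigger is a pipeline resource trigger. A minimal sketch, using the pipeline name from the answer above (the local alias and the branch are assumptions):

resources:
  pipelines:
  - pipeline: upstream                        # assumed local alias for the triggering pipeline
    source: 'MPV Automated Testing Step 1'    # pipeline whose successful completion triggers this one
    trigger:
      branches:
        include:
        - master                              # assumed; the question only says all pipelines use the same branch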

How to diagnose a problem with an Azure DevOps build pipeline without re-running the pipeline every time you make a change?

I have an Azure DevOps pipeline build with several steps, and the build is long. Every time there is something wrong with the build, we review the logs and either identify issues or come up with theories. For a theory, we have to insert a diagnostic command line (such as listing a directory or showing the contents of a file) between the steps; for a fix, we add the fix but then have to wait for the whole pipeline to rerun to find out whether it worked. This makes fixing build issues take a lot of time.
If we had access to the state of the agent for an unfinished build and could just log on using RDP or any other terminal to check the contents and state of the files on disk, that would save us a lot of hours.
Is there any way with Azure DevOps to do any diagnostic of this type?
No, not if you are using a hosted agent. If you are using a self-hosted agent, you can obviously log in to it. You can, however, implement steps that only run if the build failed, and those steps can attempt to capture the information you are interested in (say, publish the state of the build directory).
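A minimal sketch of such failure-only diagnostic steps (the paths and the artifact name are assumptions):

steps:
# ... normal build steps here ...
- script: |
    echo "Workspace listing after a failed step:"
    ls -laR "$(Build.SourcesDirectory)"
  displayName: Dump workspace contents
  condition: failed()                         # runs only if a previous step in the job failed
- task: PublishBuildArtifacts@1
  displayName: Publish build directory for offline inspection
  condition: failed()
  inputs:
    PathtoPublish: '$(Build.SourcesDirectory)'
    ArtifactName: failed-build-state          # assumed artifact name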
If you are using Azure DevOps Services, there is a new REST API version out that will let you do a "preview" run of changes to the YAML definitions: https://learn.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-165-update#preview-fully-parsed-yaml-document-without-committing-or-running-the-pipeline

Execute a stage in a DevOps Release pipeline every night on a schedule

I have an Azure DevOps CI Build and Release pipeline in the following setup:
CI Build runs with each new commit in develop branch and creates a Build Drop (Artifact)
Release pipeline runs with each new Artifact and deploys to INT and eventually to PROD (after manual approval)
I would like to add a 3rd stage (called e.g. MONITOR) which would run after the PROD release every night, using the same drop as the PROD stage used, with the following schema:
[Build Drop] -> [INT] -> manual approval: [PROD] -> nightly scheduler: [MONITOR]
This seems impossible to me; do you know how to achieve this goal?
The following is crucial for me:
MONITOR and PROD always run from exactly the same Artifact
MONITOR is executed only if the PROD was successful
if there is a newer PROD release, the old MONITOR is not executed any more and instead the newest one is executed using the newest Artifact which made it to PROD
So far I have tried the following:
merging develop to master when the commit made it to PROD, and then using a scheduled nightly build from master with a MONITOR stage - it works, but MONITOR uses a different Artifact than PROD, so it's not usable for me
using a scheduled trigger for MONITOR after PROD - does not work; MONITOR is executed only once at the scheduled time and never again
creating an extra release pipeline based on a specific Artifact version with a scheduled trigger - this works, but I have to maintain the specific Artifact version manually with each successful PROD release. Another caveat is that I have to use two separate pipelines, which makes the overview less clean. (But so far, this is the best solution I have achieved.)
Do you have better ideas? Many thanks.
What I would do is have 2 separate Release Pipelines.
This allows you to schedule the release without producing a new artifact (scheduled build).
Then, I would do some of what @Soccerjoshj07 suggested in that I would invoke the REST API in a task on the MONITOR pipeline/stage.
I would make the REST API call to the Releases endpoint to get the top 1 release for releaseDefinitionId=x. Then use the Release Environment endpoint to get the PROD environment for that latest release id. With the environment in hand, check whether its status is succeeded; if not, fail the release.
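A rough sketch of that check as an inline PowerShell step. The organization, project, definition id variable, and the stage name 'PROD' are assumptions, the two calls are collapsed into a single list query with $expand=environments, and $(System.AccessToken) must be made available to the script:

steps:
- powershell: |
    # Fetch the most recent release for the definition, including its stage (environment) statuses.
    $headers = @{ Authorization = "Bearer $(System.AccessToken)" }
    $url = "https://vsrm.dev.azure.com/YOURORGANIZATION/YOURTEAMPROJECT/_apis/release/releases" +
           "?definitionId=$(prodReleaseDefinitionId)&`$top=1&`$expand=environments&api-version=7.0"
    $latest = (Invoke-RestMethod -Uri $url -Headers $headers).value[0]
    $prod = $latest.environments | Where-Object { $_.name -eq 'PROD' }
    if ($prod.status -ne 'succeeded') {
        Write-Host "##vso[task.logissue type=error]Latest PROD stage status is '$($prod.status)'."
        exit 1                                # fail MONITOR when the latest PROD deployment did not succeed
    }
  displayName: Check latest PROD deployment status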
Edit as per new requirement outlined in comment
Given that PROD.1 succeeded and PROD.2 failed when MONITOR is triggered, the artifact from PROD.1 should be used for MONITOR.
With these criteria I would change some things. Rather than have MONITOR go digging for the latest PROD release and fail if the latest one failed, I would make the successful PROD stage tag its build artifact and employ artifact filters on the MONITOR pipeline.
The tagging can occur via the REST API or by using the Tag Build or Release Task from Colin's ALM Corner Build & Release Tools.
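As a rough sketch of the REST route, tagging the build behind the artifact from the successful PROD stage could look like this (the artifact alias 'drop', the tag name, and the use of $(System.AccessToken) are assumptions):

steps:
- powershell: |
    # Tag the build that produced the artifact this PROD stage just deployed.
    $headers = @{ Authorization = "Bearer $(System.AccessToken)" }
    $buildId = "$(Release.Artifacts.drop.BuildId)"   # 'drop' is an assumed artifact alias
    $tag     = "deployed-to-prod"                    # assumed tag; MONITOR's artifact filter would match on it
    $url = "https://dev.azure.com/YOURORGANIZATION/YOURTEAMPROJECT/_apis/build/builds/$buildId/tags/${tag}?api-version=7.0"
    Invoke-RestMethod -Method Put -Uri $url -Headers $headers
  displayName: Tag build artifact as deployed to PROD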
Are you using a YAML template, and if so have you played with the cron schedules? https://learn.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops&tabs=yaml#scheduled-triggers
If using classic Release UI, I think you can have the definition trigger be on a schedule but that would queue the entire definition. You might have to get creative with variables and maybe create 'isScheduled=true' and use that to determine if it should skip tasks.
Other ideas:
Create a logic app or function app that calls the REST API? Sample app and github link here: https://oshamrai.wordpress.com/2019/04/22/azure-devops-rest-api-19-queue-builds-and-download-build-results/
The Azure DevOps AZ CLI extension might be easier, though: https://learn.microsoft.com/en-us/cli/azure/ext/azure-devops/pipelines/build?view=azure-cli-latest#ext-azure-devops-az-pipelines-build-queue
Besides setting up two release pipelines, if you want to use a scheduled trigger for only one stage, I am afraid there is no out-of-the-box way to achieve that; a scheduled trigger applies to the entire pipeline.
As a workaround, you can add a custom condition on the job of the MONITOR stage.
For example, in YAML:
- stage: MONITOR
  jobs:
  - job:
    condition: and(always(), eq(variables['Release.Reason'], 'Schedule'))
    steps:
In the UI, you can set this under Run this job on the agent job.
In this case, the stage is only executed when the release is triggered by the scheduled trigger. If the release is triggered for any other reason, the MONITOR stage will be skipped.
The limitation of this workaround is that when your pipeline is triggered by the scheduled trigger, the other two stages are also executed.
Alternatively, write a script with a PowerShell task (in the INT/PROD stages) to determine whether Release.Reason is Schedule and, if so, skip the rest of the current stage.
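A minimal sketch of that approach (the flag variable name skipStage is an assumption; the later tasks in the stage carry a custom condition that checks it):

steps:
- powershell: |
    # If this release was started by the scheduled trigger, set a flag so that the
    # remaining tasks in this INT/PROD stage can skip their work.
    if ("$(Release.Reason)" -eq "Schedule") {
        Write-Host "##vso[task.setvariable variable=skipStage]true"
    }
  displayName: Detect scheduled release
- powershell: |
    Write-Host "Doing the actual INT/PROD deployment work here..."
  displayName: Deploy
  condition: and(succeeded(), ne(variables['skipStage'], 'true'))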
For how to obtain the latest artifact version of PROD and determine the deployment status of PROD, you can refer to the two answers above.

Azure DevOps Release Pipelines: Letting release flow through multiple environments with manual triggers

I'm trying to configure Azure DevOps Release pipelines for our projects. I have a pretty clear picture of what I want to achieve, but I can only get almost all the way there.
Here's what I'd like:
The build pipeline for each respective project outputs, as artifacts, all the things needed to deploy that version into any environment.
The release pipeline automatically deploys to the first environment ("dev" in our case) on each successful build, including PR builds.
For each successive environment, the release must have been deployed successfully to all previous environments. In other words, in order to deploy to the second environment ("st") it must have been deployed to the first one ("dev"), and in order to deploy to the third ("at") it must have been successfully deployed to all previous (both "dev" and "st"), etc.
All environments can have specific requirements on which branches deployable artifacts must have been built from; e.g. only artifacts built from master can be deployed to "at" and "prod".
Each successive deploy to any environment after the first one is triggered manually, by someone on a list of approvers. The list of approvers differs between environments.
The only way I've found to sort of get all of the above working at the same time is to automatically trigger the next environment after a successful deployment and add a pre-deployment gate with a manual approval step. This works, except that the manual approval doesn't trigger the deployment per se, but rather lets an already triggered deployment start executing. This means that any release that's not approved for lifting into the next environment is left hanging until manually dismissed.
I can avoid that by having a manual trigger instead of automatic, but then I can't enforce the flow from one environment to the next (it's e.g. possible to deploy to "prod" without waiting for successful deployments to the previous stages).
Is there any way to configure Azure DevOps Release Pipelines to do all of the things I've outlined above at once?
I think you are correct; you can only achieve that by setting up automatic releases after a successful release, with approval gates. I don't see any other options with current Azure DevOps capabilities.
A manual trigger with approval gates doesn't check that previous environments were successfully deployed to, unfortunately.
I hope this provides some clarity after the fact. Have you looked at YAML pipelines? With these you can specify conditions on each stage.
The stages can then have approvals on them as well.
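A minimal multi-stage sketch along those lines, using the environment names from the question; the deployment job names, the hosted pool, and the master-branch condition on "at" are assumptions, and the per-environment approvals themselves are configured on each environment in the Environments UI rather than in the YAML:

pool:
  vmImage: ubuntu-latest

stages:
- stage: dev
  jobs:
  - deployment: deploy_dev
    environment: dev                     # approvals/checks are attached to the environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to dev"

- stage: st
  dependsOn: dev                         # runs only after dev succeeded
  jobs:
  - deployment: deploy_st
    environment: st
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to st"

- stage: at
  dependsOn: st
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
  jobs:
  - deployment: deploy_at
    environment: at
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to at"

# "prod" follows the same pattern as "at", gated by its own environment approvals.

With environment approvals on st/at/prod, a release can only flow forward once an approver lets the next stage run, and a stage cannot start unless the stages it depends on have succeeded.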