Today I'm setting up Azure DevOps to check out how it can help us in our build/release process. It is a slow process, I have to say, especially because all my jobs are queued and I don't know why. I have two pipelines which do basically the same thing, but one is made with the classic editor and one with YAML.
# Xamarin.Android
# Build a Xamarin.Android project.
# Add steps that test, sign, and distribute an app, save build artifacts, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/xamarin

trigger:
- master

schedules:
- cron: "0 3 * * Mon-Fri"
  displayName: M-F 3:00 AM (UTC) daily build
  branches:
    include:
    - master

pool:
  vmImage: 'macos-latest'

variables:
  buildConfiguration: 'Release'
  outputDirectory: '$(build.binariesDirectory)/$(buildConfiguration)'

steps:
- task: NuGetToolInstaller@1

- task: NuGetCommand@2
  inputs:
    restoreSolution: '**/*.sln'

- task: XamarinAndroid@1
  inputs:
    projectFile: '**/*droid*.csproj'
    outputDirectory: '$(outputDirectory)'
    configuration: '$(buildConfiguration)'
The log of the job itself doesn't say very much:
Pool: Azure Pipelines
Image: macos-latest
Queued: Today at 15:13
The agent request is not running because all potential agents are running other requests. Current position in queue: 3
Job preparation parameters
2 queue time variables used
system.debug : true
agent.diagnostic : true
I don't know what is causing the jobs to stay queued. The project itself is just the template you get when you create a new Xamarin.Forms project.
Also, as a side note: if the build succeeds, where does Azure put the APK file?
Thanks in advance!
After investigation: there was a recent availability degradation event in Azure DevOps which affected these services, and it has now been resolved. It could affect customers in Europe. For more information, see: Hosted Pools Availability Degradation in Europe
Our engineers are currently investigating an event impacting Azure DevOps hosted pools in Europe. The event is being triaged and we will post an update as soon as we know more.
The issue is now fully mitigated. Our engineers will be investigating this further to learn from and reduce the risk of potential recurrences. We apologize for the impact this had on our customers.
About the second part, I agree with Krzysztof Madej: after the build succeeds, you need to publish the file as an artifact (it then shows up under the run's Artifacts tab) for deployment.
I had the same issue, so I assume that this is a global problem. Maybe related to this:
From March 24th - 26th, 2020 many customers in Europe and the United Kingdom experienced delays in their builds and releases targeting our hosted Windows and Linux agents. This incident was caused by VM capacity constraints arising from the global health pandemic that led to increased machine reimage times and then increased wait times for available agents. Many customers experienced significant delays in their pipelines over multiple days. We sincerely apologize for the impact of this incident.
I know that incident was back in March, but it could appear again. Just a guess.
Part 2
Since you build your app, you must publish your artifact (the APK file). You can use the Publish Build Artifacts task:
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: $(outputDirectory)
    artifactName: MyBuildOutputs
I landed on this page stumped as to why I was getting no error message in the pipeline.
Turns out that when I had changed the pipeline YAML file name, I had accidentally set the pipeline status to 'Paused'.
Related
I'm converting to a full YAML AzDO pipeline and need to wait for manual validation for certain stages of my pipeline. I added the new ManualValidation task into an agentless (server) job; however, it fails immediately with no details about why. I did add a Delay task in there as well (just as a sanity check to make sure my agentless job was actually running successfully), and it runs fine.
- job: waitForValidation
  displayName: Wait for external validation
  pool: Server
  timeoutInMinutes: 4320 # job times out in 3 days
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: '1'
  - task: ManualValidation@0
    timeoutInMinutes: 1440 # task times out in 1 day
    inputs:
      notifyUsers: |
        me@email.com
        you@email.com
      instructions: 'Please validate deployment can continue and resume'
      onTimeout: 'reject'
These are the docs I'm using:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/manual-validation?view=azure-devops&tabs=yaml
I also dropped into the GitHub project just to make sure the task is still version 0 (it is).
Any suggestions on why this might be failing, and/or ways I can get more details in the pipeline about why it failed?
Turns out we are actually using AzDO Server, not AzDO Services (thanks, Microsoft, for naming them so similarly), and this task is not yet available in the Server version :(
For anyone also frustrated by this lack of functionality on-prem, here’s the documentation on using Deployment Jobs and some about Environments
We are able to get most of the functionality we were looking for this way, though it does require setting up environments.
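For reference, a minimal sketch of that deployment-job approach, assuming an environment named 'staging' exists with an Approvals check configured on it in the UI (the names and the echo step are placeholders, not from the answer):

jobs:
- deployment: waitAndDeploy
  displayName: Deploy after environment approval
  environment: 'staging'   # assumed name; the approval check on it gates this job
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo Deploying after the approval was granted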
With the classic Azure DevOps release pipeline our release flow was very easy to set up.
We had a build pipeline running many times during the day. On success it deployed to our development environment. Every night the latest successful deployment to dev was released to our test environment (running automated tests for hours) before it was deployed to UAT. But often we also need to deploy to test during the day, if we have a new change which needs to go directly into test or UAT. The classic pipelines allowed us to skip a stage, or deploy even if the previous stage was only partly successful.
1) Development - automatic
2) Test - nightly or manually
3) UAT - nightly or manually
4) Staging - manual approval
5) Production - manual approval
With the multi-stage pipelines, the same flow seems to be very difficult to achieve, at least when it comes to keeping it a single deployment pipeline. The first part is fine: we can have our build trigger the development deployment. But how can we hold back the release to the test environment until 0:30 AM, while still retaining the ability to release it manually? If I created a separate test-environment pipeline, it could work if it had no triggers, only a schedule.
Same with UAT: since we also need the flexibility to run UAT deployments manually, it would also need to go into its own pipeline. Releases to our staging and production environments we "gate" with manual approvals, which is fine.
While this could technically work, if we split the deployment pipeline into multiple pipelines it gets really difficult to manage "a release". Not to mention that it somewhat defeats the whole multi-stage pipeline principle if we create a separate pipeline per stage.
But with this being so easy to set up in the classic pipelines, I cannot really imagine that other companies have not run into the same limitations. Is it just me who cannot see the light, or can this really not be done with multi-stage pipelines?
manually run UAT deployments
We can add Azure DevOps multi-stage pipeline approval strategies in the YAML build.
Steps:
Open the Environments tab and click the New environment button -> click the Approvals and checks button -> my environment name is TEST.
Then use it in the YAML pipeline (just a sample):
trigger: none

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: A
  jobs:
  - deployment: 'MyDeployment'
    displayName: MyDeployment
    environment: 'TEST'   # the approval check configured on TEST gates this job
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo deploying to TEST
  - job: A1
    steps:
    - script: echo "##vso[task.setvariable variable=skipsubsequent;isOutput=true]false"
      name: printvar

- stage: B
  condition: and(succeeded(), ne(stageDependencies.A.A1.outputs['printvar.skipsubsequent'], 'true'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo hello from Stage B
We can also configure a scheduled trigger and use it in the multi-stage pipelines.
Note: the scheduled trigger and the approval strategies both apply at the stage level.
For scheduled jobs: you can use something like this in your YAML:
(Copied from Microsoft documentation)
schedules:
- cron: string # cron syntax defining a schedule
  displayName: string # friendly name given to a specific schedule
  branches:
    include: [ string ] # which branches the schedule applies to
    exclude: [ string ] # which branches to exclude from the schedule
  always: boolean # whether to always run the pipeline or only if there have been source code changes since the last successful scheduled run. The default is false.
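As a concrete sketch for the asker's nightly 0:30 AM scenario (the branch name is an assumption), the schedule could look like:

schedules:
- cron: "30 0 * * *"              # 0:30 AM UTC every day
  displayName: Nightly test deployment
  branches:
    include:
    - master
  always: true                    # run even if there were no new changes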
For manual jobs, you can use the Create Release button to create and deploy a release manually. Do note that sometimes this can create a conflict with the schedule. Also, to "hold back a release", put an approver on the release; then, when approving, defer the release, noting that the deferral time is in UTC and defaults to tomorrow. You can change it to any time after now.
I've got a single self-hosted agent. It's used as a kind of deployment agent.
All release versions of our software get built by this agent and then copied to a network location.
Question: Is there a way I can utilize both the agent from the 'azure-pipelines' Microsoft-hosted pool and my own self-hosted pool in my pipelines?
EDIT
Unfortunately this is not possible at the moment.
This is why you should upvote the feature request:
https://developercommunity.visualstudio.com/t/allow-agent-pools-to-contain-microsoft-hosted-and/396893
This is not possible. There is a ticket on Developer Community asking for similar functionality, but it is already closed.
There is another ticket, Allow agent pools to contain Microsoft hosted and self-hosted agents, which refers to a similar case; it is open, but MS is silent there.
What benefits do you want to achieve?
Basically, you can use several agent pools in one build/release definition. You just split your definition into several jobs and assign the needed agent pool to the corresponding job.
If you want to dynamically assign different pools from one pipeline to run the same build steps, that cannot be done (as Krzysztof mentioned).
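As an illustrative sketch of the per-job approach described above (the pool and job names here are assumptions, not from the answer):

jobs:
- job: Build
  pool:
    vmImage: 'windows-latest'   # Microsoft-hosted agent
  steps:
  - script: echo Build on the hosted agent

- job: Deploy
  dependsOn: Build
  pool: 'MySelfHostedPool'      # assumed name of the self-hosted pool
  steps:
  - script: echo Deploy from the self-hosted agent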
You can do a hacky thing and use multiple jobs/stages. The jobs/stages will use different pools; you just need to skip them depending on whether it is a release build. Note that this pipeline skeleton is not tested.
variables:
  # Note: eq() does not support wildcards, so test the full ref with startsWith.
  ${{ if startsWith(variables['Build.SourceBranch'], 'refs/heads/release/') }}:
    release_build: True

stages:
- stage: normal
  condition: ne(variables['release_build'], 'True')
  pool:
    vmImage: 'windows-latest'
  jobs:
  - job: Builds
    steps:
    - template: build.yaml

- stage: release
  condition: eq(variables['release_build'], 'True')
  pool: My-agent
  jobs:
  - job: Builds
    steps:
    - template: build.yaml
Dear assorted developers,
In Azure Pipelines container jobs, the container image is pulled from the registry for every job, even if the same container is used for multiple jobs.
Of course, if the images are really small this is no problem, but if anyone intends to build with the same image that also covers their VS Code local development, the pull can use up more time than the actual build.
So, has anyone solved caching the container?
Here is an example:
# In this example, all jobs use the same container.
# In stage 1, the jobs run serially, so job 2 only starts once
# job 1 is done -> and the image is downloaded for both jobs independently.
# In stage 2, the jobs run in parallel,
# and the image is downloaded for both jobs in the stage independently.
trigger:
  batch: true
  branches:
    include:
    - "*"

resources:
  containers:
  - container: ubuntu
    image: ubuntu:18.04

stages:
- stage: STAGE1
  jobs:
  - job: PrintInfoStage1Job1
    container: ubuntu
    steps:
    - script: |
        echo "THIS IS STAGE 1, JOB 1"
      displayName: "JOB 1"
  - job: PrintInfoStage1Job2
    dependsOn: PrintInfoStage1Job1
    container: ubuntu
    steps:
    - script: |
        echo "THIS IS STAGE 1, JOB 2"
      displayName: "JOB 2"

- stage: STAGE2
  dependsOn: STAGE1
  jobs:
  - job: PrintInfoStage2Job1
    dependsOn: []
    container: ubuntu
    steps:
    - script: |
        echo "THIS IS THE STAGE 2, JOB 1"
      displayName: "JOB 1"
  - job: PrintInfoStage2Job2
    container: ubuntu
    dependsOn: []
    steps:
    - script: |
        echo "THIS IS THE STAGE 2, JOB 2"
      displayName: "JOB 2"
Azure DevOps Container Jobs: Cache Container for multiple Jobs?
Initially, our design and development idea mostly considered security and consistency: it should be a fresh image each time. We have since received many feature requests from lots of developers hoping, like you, that we will support caching the image. Considering the disadvantage of this design, it makes developers waste too much time waiting for the image to be pulled down; if the image could be cached, it would greatly improve the efficiency of the build.
The bulk of the actual caching work for this feature has now been done by our Azure Artifacts team. The latest status I got from that team is that before we can release this feature in Azure DevOps, there is some work we need to do around security, to make sure that the cache can't be used as an attack vector. Once this is done we will launch a customer preview. It should be deployed soon.
Please see our roadmap item Speed up pipeline with caching to track its development and release process. You can also follow this blog post published by the Azure Artifacts PM, and monitor this PR.
Until now, there's no much better workaround to improve this. Even using the Cache task in combination with the respective docker save/load operations, the time pretty much matches that of downloading the base image/layers from a public registry.
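For reference, a sketch of that Cache task plus docker save/load combination (the cache key, paths, and image are assumptions; note it can only help images used via docker commands in script steps, since the agent pulls a container job's image before any step, including Cache, runs):

steps:
- task: Cache@2
  inputs:
    key: 'docker | "$(Agent.OS)" | ubuntu-18.04'
    path: $(Pipeline.Workspace)/docker-cache
  displayName: Cache docker image tarball

- script: |
    # Load the image from the cache if present; otherwise pull and save it.
    if [ -f "$(Pipeline.Workspace)/docker-cache/ubuntu.tar" ]; then
      docker load -i "$(Pipeline.Workspace)/docker-cache/ubuntu.tar"
    else
      docker pull ubuntu:18.04
      mkdir -p "$(Pipeline.Workspace)/docker-cache"
      docker save ubuntu:18.04 -o "$(Pipeline.Workspace)/docker-cache/ubuntu.tar"
    fi
  displayName: Restore or populate the image cache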
I will keep monitoring this feature's development. Once the PR is finished and the feature code is deployed to all regions, even if it is released as a preview feature, I will update this answer to let you and other SO users know.
I have the following YML file for my pipeline:
trigger: none

stages:
# Other stages here...
- stage: Release
  jobs:
  - deployment: Staging
    environment: staging
    strategy:
      runOnce:
        deploy:
          steps:
          - download: none
          - task: DownloadBuildArtifacts@0
            # ...
          - task: AzureRmWebAppDeployment@4
            displayName: Deploy in staging
            # ...
  - deployment: Production
    environment: prod
    dependsOn: Staging
    strategy:
      runOnce:
        deploy:
          steps:
          - download: none
          - task: AzureAppServiceManage@0
            displayName: Swap stg-prod slots
            # ...
Based on this, to give more context, my thinking is to have two stages: the first one builds my application; the second one releases to staging (QA) and then to production.
The environment "prod", though, has a check (or approval, whatever you want to call it).
I'm not sure if I'm encountering a bug or not, but what is happening is that when stage 1 (the build phase) completes, the release phase of stage 2 is blocked, waiting for approval, even though "staging" does not have any check enabled (only prod does).
The easiest workaround is to create separate stages, one for staging and one for production, but that doesn't match my expected behaviour. I expect the deployment for the staging job to complete successfully, and then the "production" job to wait for approval.
Do you have any suggestion regarding this? Is this a bug?
Checks (approvals) for a deployment job are blocking the entire stage
Sorry for any inconvenience.
This behavior is by design at the moment.
As the document states:
Approvals in multi-stage YAML pipelines
We continue to improve multi-stage YAML pipelines; we now let you add manual approvals to these pipelines. Infrastructure owners can protect their environments and seek manual approvals before a stage in any pipeline deploys to them.
This feature is designed around stages, not environments, so it blocks the whole stage.
In my test, I could reproduce this issue as you did. But your request is reasonable (personally, I agree): this feature should be designed around environments.
You could add your request for this feature on our UserVoice site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21), which is our main forum for product suggestions. Thank you for helping us build a better Azure DevOps.
Hope this helps.
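For completeness, a sketch of the stage-split workaround the asker mentioned, so that the prod approval only blocks its own stage (environment names as in the question; the steps are placeholders):

stages:
- stage: Staging
  jobs:
  - deployment: Staging
    environment: staging        # no check configured: deploys immediately
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Deploy to staging

- stage: Production
  dependsOn: Staging
  jobs:
  - deployment: Production
    environment: prod           # the approval check now blocks only this stage
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Swap staging and production slots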