Is it possible to take values from Downstream job to actual pipeline in Jenkins - azure-devops

Is it possible to take values from a downstream job back into the calling pipeline in Jenkins?
If yes, what is the process?
This is how I trigger the downstream job:
build job: 'andorid/automation', parameters: [string(name:'URL', value:'val')], wait: true
and in the downstream job I read the values like this:
URL = "${params.URL}"
echo "upload appcenter"
echo "$URL"
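Going the other way (reading values back from the downstream run in the upstream pipeline) is also possible via the object the build step returns when wait is true. A minimal sketch, assuming the downstream pipeline publishes a value by assigning a global environment variable (UPLOAD_URL here is purely illustrative):

// Upstream pipeline: with wait: true, `build` returns a RunWrapper.
def downstream = build job: 'andorid/automation',
        parameters: [string(name: 'URL', value: 'val')],
        wait: true
echo "Downstream result: ${downstream.result}"
// buildVariables exposes variables the downstream build assigned globally,
// e.g. env.UPLOAD_URL = '...' somewhere in the downstream pipeline.
echo "Value from downstream: ${downstream.buildVariables['UPLOAD_URL']}"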

Create Stage that can be Called from Another Stage or Manually?

Hi, I have a pipeline dependency challenge and have thought up a number of possible solutions. I could try them all out in a lab, but I am wondering whether they work well "in the field", so I would like to know if anyone has tried them.
I have 3 stages, each in their own YML file. Each one is called from a main YML, which is called from the main pipeline.
- template: 'build.yml'
- template: 'deploy.yml'
- template: 'test.yml'
The 'deploy.yml' generates a large number of output environment variables and 4 of these are consumed by the 'test.yml' using the "stageDependencies" syntax:
stages:
- stage: 'Test_Stage'
  dependsOn: Deploy_Stage
  jobs:
  - job: 'Test_Job'
    variables:
      MyWebSite: $[ stageDependencies.Deploy_Stage.Variables_Job.outputs['Variables_Job.Variables.MyWebSite'] ]
This works nicely.
But I'd like to be able to create a pipeline that just runs the test stage (to test a pre-existing web site). That doesn't work, of course, because of the dependsOn: Deploy_Stage dependency.
I can think of a few possible solutions:
Instead of having a dependency and using the $[ stageDependencies... ] syntax, send MyWebSite as a pipeline parameter between stages. (Note that there are actually 4 parameters, not 1; I simplified to demonstrate the challenge.) If I do that, the tester gets prompted to fill out (or choose from a list) the various parameters. But it does create linkage between Deploy_Stage and Test_Stage - I don't know if that's a bad thing?
Pass a Boolean parameter from Deploy_Stage to Test_Stage such as "CalledFromDeployStage" and then in Test_Stage, do this:
stages:
- stage: 'Test_Stage'
  ${{ if eq(parameters.CalledFromDeployStage, true) }}:
    dependsOn: Deploy_Stage
  jobs:
  - job: 'Test_Job'
    variables:
      MyWebSite: $[ stageDependencies.Deploy_Stage.Variables_Job.outputs['Variables_Job.Variables.MyWebSite'] ]
This feels a bit clunky.
Create a new YML called "Test_Stage_Manual" and get it to prompt for the various parameters and leave the rest as-is. (If I do this, I would probably put the jobs into their own YML file and call that YML from both Test stages.)
Something else?
You can try as below:
Create an individual YAML pipeline to only run the test.
In the "Deploy_Stage" of the main pipeline, add a step or job at the end of the stage that calls the "Runs - Run Pipeline" REST API to trigger the test pipeline after all the previous steps and jobs in the stage have completed successfully.
When calling the "Runs - Run Pipeline" API, you can pass the variables and parameters generated in the "Deploy_Stage" to the test pipeline via the request body (JSON) of the API.
Because the test is in an individual pipeline, you can also trigger it manually whenever you like; when triggering manually, you can set the values of the required variables and parameters by hand.
This way, you can trigger the test pipeline both from the "Deploy_Stage" and manually.
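A minimal sketch of that trigger job, assuming Deploy_Stage's Variables_Job exposes MyWebSite from a step named Variables, the test pipeline's definition ID is 123, and the test pipeline declares a runtime parameter webSiteUrl (all of these names are placeholders):

# Appended to the jobs of Deploy_Stage in the main pipeline.
- job: TriggerTestPipeline
  dependsOn: Variables_Job
  variables:
    myWebSite: $[ dependencies.Variables_Job.outputs['Variables.MyWebSite'] ]
  steps:
  - powershell: |
      # Runs - Run Pipeline REST API, passing the deploy-time value along.
      $body = @{
        templateParameters = @{ webSiteUrl = "$(myWebSite)" }
      } | ConvertTo-Json -Depth 5
      $url = "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_apis/pipelines/123/runs?api-version=6.1-preview.1"
      Invoke-RestMethod -Uri $url -Method Post -Body $body -ContentType "application/json" -Headers @{ Authorization = "Bearer $(System.AccessToken)" }
    displayName: Trigger test pipeline

Because webSiteUrl is an ordinary runtime parameter, a manual run of the test pipeline simply prompts for it instead.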

How to set an azure devops pipeline result as success even if one of the job fails

I am developing an Azure CD YAML pipeline to deploy the result of a CI pipeline onto a Virtual Machine.
Right now, and simplifying things a little for the purpose of this post, the CD pipeline is quite simple and consists of a single stage with 3 jobs:
The first job runs scripts to stop a somewhat complex application. This can sometimes fail.
The second job will only run if the first job fails. This gives an administrator the opportunity to do a manual intervention (leveraging the built-in Manual Validation task) and fix the issue encountered in the first job. If the administrator is happy to continue running the deployment pipeline, he resumes the run.
The third job is the deployment of the new version of the application.
Here is the overall structure of the YAML pipeline:
jobs:
- deployment: StopApplication
  environment:
    name: 'EnvA' # This environment is a set of virtual machines running self-hosted Azure Agents.
    resourceType: VirtualMachine
  strategy:
    rolling:
      maxParallel: 1
      deploy:
        steps:
        - task: ...
- job: ManualIntervation
  displayName: Manual intervention to fix issue while stopping application
  pool: server
  dependsOn: StopApplication
  condition: failed() # This job will run only if job StopApplication has failed.
  timeoutInMinutes: 60
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: 50
    inputs:
      notifyUsers: |
        someone@somewhere.com
      instructions: 'Do something'
      onTimeout: 'reject'
- deployment: DeployApp
  dependsOn:
  - StopApplication
  - ManualIntervation
  condition: xor(succeeded('StopApplication'), succeeded('ManualIntervation'))
  workspace:
    clean: all
  environment:
    name: 'EnvA' # This environment is a set of virtual machines running self-hosted Azure Agents.
    resourceType: VirtualMachine
  strategy:
    rolling:
      maxParallel: 1
      deploy:
        steps:
        - task: ...
The problem I have is that if the first deployment job fails, but the administrator reviews the problem, fixes it, resumes the run of the pipeline, and the last deployment job succeeds, Azure DevOps shows my pipeline as Failed (red cross in the DevOps portal), which I can understand, as one of the jobs failed.
Nevertheless, functionally the deployment succeeded, so I would like to set/force the result of the pipeline run to success so that Azure DevOps displays the green check.
Does anyone know a way to achieve this?
I would assume that it is possible; otherwise I would not understand why manual interventions are offered in a pipeline.
The build result is read-only and cannot be updated after the build has completed. However, you can check out the workarounds below to get rid of the Failed sign (red cross) in the DevOps portal.
1. Use continueOnError for the task in the StopApplication job. For example:
jobs:
- deployment: StopApplication
  ...
  steps:
  - task: taskName
    ...
    continueOnError: true
When the continueOnError attribute is set to true, the result will be SucceededWithIssues when the task fails, and you will get an exclamation mark instead of a red cross.
You also need to change the condition for job ManualIntervation so that it checks for this result instead of failed(). See below:
- job: ManualIntervation
  dependsOn: StopApplication
  condition: eq(dependencies.StopApplication.result, 'SucceededWithIssues')
2. Another workaround is to separate the StopApplication job from the other jobs into a different pipeline.
You need to create two pipelines: the first pipeline has only the StopApplication job, and the second pipeline contains the rest of the jobs; the second pipeline is triggered from the first using the REST API.
In the first pipeline, add a PowerShell task after the potentially failing task to check the job status and trigger the second pipeline using the REST API. See the example below:
- powershell: |
    $body = @{
      templateParameters = @{
        ManualIntervation = "false"
      }
    }
    if ("$(Agent.JobStatus)" -eq "Failed") {
      $body.templateParameters.ManualIntervation = 'true'
    }
    $url = "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_apis/pipelines/{second-pipelineId}/runs?api-version=6.1-preview.1"
    Invoke-RestMethod -Uri $url -Headers @{ Authorization = "Bearer $(System.AccessToken)" } -Method Post -Body (ConvertTo-Json $body) -ContentType "application/json"
  condition: always() # always run this task
Then, in the second pipeline, define a runtime parameter ManualIntervation and set the condition for job ManualIntervation as below:
parameters:
- name: ManualIntervation
  type: string
  default: false
...
- job: ManualIntervation
  # no dependsOn here: StopApplication lives in the first pipeline
  condition: eq('${{ parameters.ManualIntervation }}', 'true')
When the first pipeline is executed, the PowerShell task will trigger the second pipeline with the templateParameters request body, which overrides the parameter ManualIntervation in the second pipeline. If ManualIntervation is true, the ManualIntervation job will be executed.
This way, the second pipeline can succeed even if the first pipeline failed.

Can you use build tags in conditions statements in multi-stage devops pipelines

Is it possible to use the build tags set on a multi-stage pipeline build, in the condition section of a later stage?
##### task in build stage #####
- task: YodLabs.VariableTasks.AddTag.AddTag@0
  displayName: Adding environment tag to build
  inputs:
    tags: |
      deploy
      $(DEPLOY_ENV)

#### some later stage ####
- stage: deploy
  displayName: deploy
  condition: |
    and(
      succeeded(),
      # Is there something I can put here to condition on tags?
    )
  jobs:
Thanks
From what I know, this is not possible with YAML yet, because there is no easy way to get tags available in YAML. What you can try instead is an output variable:
jobs:
- job: Foo
  steps:
  - script: |
      echo "This is job Foo."
      echo "##vso[task.setvariable variable=doThing;isOutput=true]Yes" #The variable doThing is set to true
    name: DetermineResult
- job: Bar
  dependsOn: Foo
  condition: eq(dependencies.Foo.outputs['DetermineResult.doThing'], 'Yes') #map doThing and check if true
  steps:
  - script: echo "Job Foo ran and doThing is true."
You can also try a workaround, which in this case is:
Fetch the build's tags using the REST API in a PowerShell script: GET https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/tags?api-version=5.1
Then assign the tags to output variables using logging commands.
And finally use the output variables in the condition. A sketch follows below.
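A minimal sketch of that workaround, assuming the decision hinges on a build tag literally named deploy (the job, step, and variable names are all illustrative):

jobs:
- job: ReadTags
  steps:
  - powershell: |
      # Tags - Get Build Tags: list the tags set on the current build.
      $url = "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_apis/build/builds/$(Build.BuildId)/tags?api-version=5.1"
      $tags = (Invoke-RestMethod -Uri $url -Headers @{ Authorization = "Bearer $(System.AccessToken)" }).value
      # Expose the answer as an output variable via a logging command.
      $hasDeployTag = if ($tags -contains 'deploy') { 'Yes' } else { 'No' }
      Write-Host "##vso[task.setvariable variable=hasDeployTag;isOutput=true]$hasDeployTag"
    name: SetTagVars
- job: Deploy
  dependsOn: ReadTags
  condition: eq(dependencies.ReadTags.outputs['SetTagVars.hasDeployTag'], 'Yes')
  steps:
  - script: echo "deploy tag present, deploying"

The same output variable could gate a stage instead of a job once stageDependencies is available (see the edit below).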
EDIT
So it looks like this is not possible at the moment but should be available soon. Please check this GitHub issue.
Output variables may now be used across stages in a YAML-based
pipeline. This helps you pass useful information, such as a go/no-go
decision or the ID of a generated output, from one stage to the next.
The result (status) of a previous stage and its jobs is also
available.
Output variables are still produced by steps inside of jobs. Instead
of referring to dependencies.jobName.outputs['stepName.variableName'],
stages refer to
stageDependencies.stageName.jobName.outputs['stepName.variableName'].
Note: by default, each stage in a pipeline depends on the one just
before it in the YAML file. Therefore, each stage can use output
variables from the prior stage. You can alter the dependency graph,
which will also alter which output variables are available. For
instance, if stage 3 needs a variable from stage 1, it will need to
declare an explicit dependency on stage 1.
I tried it now:
stages:
- stage: A
  jobs:
  - job: JA
    steps:
    - script: |
        echo "This is job Foo."
        echo "##vso[task.setvariable variable=doThing;isOutput=true]Yes" #The variable doThing is set to true
      name: DetermineResult
- stage: B
  condition: eq(stageDependencies.A.JA.outputs['DetermineResult.doThing'], 'Yes') #map doThing and check if true
  jobs:
  - job: JB
    steps:
    - bash: echo "Hello world stage B first job"
but I got this error:
An error occurred while loading the YAML build pipeline. Unrecognized
value: 'stageDependencies'. Located at position 4 within expression:
eq(stageDependencies.A.JA.outputs['DetermineResult.doThing'], 'Yes').
For more help, refer to https://go.microsoft.com/fwlink/?linkid=842996
However, this feature should be with us soon!
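For reference, a sketch of the behavior the release note describes, once available (all stage and job names are illustrative); note how a stage that needs an output from a non-adjacent stage must declare the explicit dependency:

stages:
- stage: One
  jobs:
  - job: JA
    steps:
    - script: echo "##vso[task.setvariable variable=doThing;isOutput=true]Yes"
      name: DetermineResult
- stage: Two
  jobs:
  - job: JB
    steps:
    - script: echo "stage Two"
- stage: Three
  dependsOn:
  - One # explicit, because Three consumes an output from One
  - Two
  jobs:
  - job: JC
    variables:
      fromStageOne: $[ stageDependencies.One.JA.outputs['DetermineResult.doThing'] ]
    steps:
    - script: echo "$(fromStageOne)"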

Azure YAML Get variable from a job run in a previous stage

I am creating a YAML pipeline in Azure DevOps that consists of two stages.
The first stage (Prerequisites) is responsible for reading the git commit and creating a comma-separated variable containing the list of services that have been affected by the commit.
The second stage (Build) is responsible for building and unit testing the project. This stage consists of many templates, one for each service. In the template script, the job will check whether the relevant service is in the variable created in the previous stage. If the job finds the service, it will continue to build and test it; if not, it will skip that job.
Run.yml:
stages:
- stage: Prerequisites
  jobs:
  - job: SetBuildQueue
    steps:
    - task: PowerShell@2
      name: SetBuildQueue
      displayName: 'Set.Build.Queue'
      inputs:
        targetType: inline
        script: |
          ## ... PowerShell script to get changes - working as expected
          Write-Host "Build Queue Auto: $global:buildQueueVariable"
          Write-Host "##vso[task.setvariable variable=buildQueue;isOutput=true]$global:buildQueueVariable"
- stage: Build
  jobs:
  - job: StageInitialization
  - template: Build.yml
    parameters:
      projectName: Service001
      projectLocation: src/Service001
  - template: Build.yml
    parameters:
      projectName: Service002
      projectLocation: src/Service002
Build.yml:
parameters:
  projectName: ''
  projectLocation: ''

jobs:
- job:
  displayName: '${{ parameters.projectName }} - Build'
  dependsOn: SetBuildQueue
  continueOnError: true
  condition: and(succeeded(), contains(dependencies.SetBuildQueue.outputs['SetBuildQueue.buildQueue'], '${{ parameters.projectName }}'))
  steps:
  - task: NuGetToolInstaller@1
    displayName: 'Install Nuget'
Issue:
When the first stage runs, it creates a variable called buildQueue, which is populated as seen in the console output of the PowerShell script task:
Service001 Changed
Build Queue Auto: Service001;
However, when it gets to stage two and tries to run the build template, the condition evaluation returns the following output:
Started: Today at 12:05 PM
Duration: 16m 7s
Evaluating: and(succeeded(), contains(dependencies['SetBuildQueue']['outputs']['SetBuildQueue.buildQueue'], 'STARS.API.Customer.Assessment'))
Expanded: and(True, contains(Null, 'service001'))
Result: False
So my question is how do I set the dependsOn and condition to get the information from the previous stage?
It's because you want to access the variable in a different stage from where you defined it; currently that is not possible directly, as each stage runs as a new instance on a fresh agent.
In this blog you can find a workaround that involves writing the variable to disk and then passing it along as a file, leveraging pipeline artifacts; a sketch follows the steps below.
To pass the variable FOO from a job to another one in a different stage:
Create a folder that will contain all variables you want to pass; any folder could work, but something like mkdir -p $(Pipeline.Workspace)/variables might be a good idea.
Write the contents of the variable to a file, for example echo "$FOO" > $(Pipeline.Workspace)/variables/FOO. Even though the name could be anything you’d like, giving the file the same name as the variable might be a good idea.
Publish the $(Pipeline.Workspace)/variables folder as a pipeline artifact named variables
In the second stage, download the variables pipeline artifact
Read each file into a variable, for example FOO=$(cat $(Pipeline.Workspace)/variables/FOO)
Expose the variable in the current job, just like we did in the first example: echo "##vso[task.setvariable variable=FOO]$FOO"
You can then access the variable by expanding it within Azure Pipelines ($(FOO)) or use it as an environmental variable inside a bash script ($FOO).
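Putting those steps together as a minimal two-stage sketch (the stage, job, and FOO names are illustrative):

stages:
- stage: Deploy_Stage
  jobs:
  - job: Produce
    steps:
    - bash: |
        # Write each variable to pass into its own file.
        mkdir -p $(Pipeline.Workspace)/variables
        echo "hello" > $(Pipeline.Workspace)/variables/FOO
    - publish: $(Pipeline.Workspace)/variables
      artifact: variables
- stage: Test_Stage
  dependsOn: Deploy_Stage
  jobs:
  - job: Consume
    steps:
    - download: current
      artifact: variables
    - bash: |
        # Read the file back and re-expose it as a job-scoped variable.
        FOO=$(cat $(Pipeline.Workspace)/variables/FOO)
        echo "##vso[task.setvariable variable=FOO]$FOO"
    - bash: echo "FOO is $(FOO)"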

Lock resources between multiple builds that run parallel in Azure DevOps

Suppose I have a build:
Job1:
  Task1: Build
Job2:
  dependsOn: Job1
  Task2: Test
And the test task uses some kind of database, or another unique resource.
I would like to know if it is possible, when multiple builds are running in parallel, to lock Job2 to run unique without other builds trying to access the same resource.
I am using cmake and ctest, so I know I can do something similar between separate unit tests with RESOURCE_LOCK, but I am certain that I will not be able to lock that resource between multiple ctest processes.
Agree with @MarTin's workaround: set a variable with a PowerShell task in Job1, then get that variable and use it in the job condition for Job2.
You don't need to use the API to add a global variable. There is another, easier way you can try: output variables. This feature lets you configure an output variable and access it in the next job, which depends on the first job.
Setting an output variable in Job1 looks like this:
##vso[task.setvariable variable=firstVar;isOutput=true]Job2 Need skip
Then map the output variable from Job1 into a variable in Job2:
Job2Var: $[dependencies.Job1.outputs['outputVars.firstVar']]
Then, in Job2, you can use it in the job condition:
condition: eq(dependencies.Job1.outputs['outputVars.firstVar'], 'Job2 Need skip')
The complete minimal sample looks like this:
jobs:
- job: Job1
  steps:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: 'echo "##vso[task.setvariable variable=firstVar;isOutput=true]Need skip Job2"'
    name: outputVars
- job: Job2
  dependsOn: Job1
  variables:
    Job2Var: $[ dependencies.Job1.outputs['outputVars.firstVar'] ]
  steps:
  ...
The logic I want to express is: dynamically assign a value to the output variable based on Job1 and the current pipeline execution state, where one specific value represents that Job2 should be locked, i.e. that its execution should be skipped. In Job2's condition expression, when the value obtained from dependencies.Job1.outputs['outputVars.firstVar'] matches the predefined expected value, Job2 is skipped.