We have a scheduled release in Octopus that re-deploys the last known good Prod release back to Prod.
However, this has started failing because the artifact has fallen outside our retention policy - we can fix that by altering the retention policy.
The real issue is that when it failed no notifications were sent to the team because artifact collection happens before even the first step.
I have tested this with a dummy release that just has a single basic step and then a Slack notification step for when it fails. However, we never get to the first step, let alone the Slack step.
How can I hook into this failure so that we know about these issues in the future?
Follow the steps below to achieve this:
Step 1) Add an Email Template step first, to announce that the deployment has been triggered.
That step has a setting called Start Trigger; set it to "Run in parallel with the previous step" so the email is sent while your artifacts are downloading.
Step 2) Add an Email Template step last, to report that the deployment failed.
Just change its Run Condition setting to "Failure: only run when a previous step failed".
So when your deployment fails, this step will send the notification. You can also include the cause of the failure in the email body using built-in variables.
Related
I want to kill hundreds of pipeline runs of a specific pipeline and a specific branch (without deleting either of them). Any idea how I can do this?
This can be done via a script which first fetches all the runs of the pipeline with:
GET /runs?pipelineIds=&statusCodes=
and then cancels them one by one using:
POST /runs/:runId/cancel
Status codes of incomplete runs are 4000, 4001, 4005, 4015, 4016, 4022
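For illustration, here is a minimal Python sketch of such a script using the requests library. The base URL, auth header, pipeline id, and the shape of the list response are assumptions and will depend on your setup:

```python
# Minimal sketch: list incomplete runs of a pipeline and cancel them one by one.
# BASE_URL, AUTH_HEADERS and PIPELINE_ID are placeholders for your environment;
# filtering by branch would need an extra check against the run objects returned.
import requests

BASE_URL = "https://<your-api-host>"                 # placeholder
AUTH_HEADERS = {"Authorization": "Bearer <token>"}   # placeholder
PIPELINE_ID = "<pipeline-id>"                        # placeholder
INCOMPLETE = "4000,4001,4005,4015,4016,4022"         # status codes of incomplete runs

# 1. List all incomplete runs of the pipeline.
resp = requests.get(
    f"{BASE_URL}/runs",
    params={"pipelineIds": PIPELINE_ID, "statusCodes": INCOMPLETE},
    headers=AUTH_HEADERS,
)
resp.raise_for_status()
runs = resp.json()  # assumes the response body is a list of run objects

# 2. Cancel each run individually.
for run in runs:
    run_id = run["id"]  # assumes each run object exposes an "id" field
    cancel = requests.post(f"{BASE_URL}/runs/{run_id}/cancel", headers=AUTH_HEADERS)
    cancel.raise_for_status()
    print(f"Cancelled run {run_id}")
```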
Refer to the documentation for more details.
Our structure for a release in Azure DevOps is to:
1) Deploy our app to our DEV environment.
2) Kick off my Selenium (Visual Studio) tests against that environment.
3) If the tests pass, move on to our TEST environment.
4) If they fail, hard stop.
We want to add a new piece of functionality that starts the same as above, except instead of a hard stop: 5) if the default step fails, continue to the next step; 6) the new detailed testing starts (turns on a screen recorder).
The new detailed step has 'Agent Job' settings/parameters; I have the "Run this job" section set to "Only when a previous job has failed".
My results have been that if the previous/default/basic testing passes, the detailed step is skipped, as expected.
But if the previous step fails, the following new detailed step does not kick off.
Is it possible that, because the step is set up to hard stop when it fails, it does not even evaluate the next step?
Or is it because the previous step says 'partially succeeded'? Is this basically not seen as a failure?
Yes, that is correct. The 'failed' status is equivalent to eq(variables['Agent.JobStatus'], 'Failed'), but 'partially succeeded' is eq(variables['Agent.JobStatus'], 'SucceededWithIssues').
Please check here.
You may try a custom condition like:
in(variables['Agent.JobStatus'], 'Failed', 'SucceededWithIssues')
As an addition to the solution, a piece I missed: on the 'detailed' job, the 'Trigger even when the selected stages partially succeed' option also needed to be checked, in addition to applying the solution above to the same step.
A certain task generates a ##[warning] and has a warning status.
It causes the final status of the stage to be an orange exclamation mark.
I want to suppress this so that the stage will show as succeeded (green check).
Is there a way to achieve this?
I've looked at the options on the task itself, but it only has ContinueOnError.
Edit:
I'm talking about the Azure App Configuration extension.
I've even delved into the path of updating the build result via the REST API,
but unfortunately the PATCH method doesn't seem to update the build result.
Update from OP
It's a known limitation/bug of the task:
Azure DevOps Extension: Azure AppConfiguration - Partial Complete
Currently, by default the build result is "Failed" if anything failed to compile/build, "Partially Succeeded" if there are any unit test failures or a task with ContinueOnError checked fails, and "Succeeded" otherwise.
It causes the final status of the stage to be an orange exclamation mark.
According to your description, the task showing as "Partially succeeded" may be because you checked the "Continue on error" option.
Continue on error (partially successful)
Select this option if you want subsequent tasks in the same job to possibly run even if this task fails. The build or deployment will be no better than partially successful. Whether subsequent tasks run depends on the Run this task setting.
Please refer to this document for more info: Task control options
I have two Jenkins jobs - JobA (upstream) and JobB (downstream). JobA generates the artifact TestReport.zip. I want to take the TestReport.zip artifact in JobB and send it as an attachment in an email sent from JobB. I tried the Copy Artifact plugin, but no luck. Can somebody please help with the steps to do this?
Are you facing an issue passing TestReport.zip to Job B, or an issue sending the email from Job B? I am assuming you are facing an issue passing the artifact to Job B. Since you tried the Copy Artifact plugin and it didn't work, there is an alternative you can try: if both Job A and Job B run on the same Jenkins slave, Job A can store the artifact in a common location that Job B can also access; Job B can then copy it into its own workspace and send the email with it as an attachment. Please refer to this article on how to pass a parameter from one job to another.
https://itisatechiesworld.wordpress.com/jenkins-related-articles/jenkins-configuration/jenkins-passing-a-parameter-from-one-job-to-another/
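For illustration, here is a minimal Python sketch of the copy step Job B could run before its email step. The shared drop directory /var/lib/jenkins/shared-artifacts is hypothetical; WORKSPACE is the environment variable Jenkins sets for each build. Job A would run the mirror of this, copying TestReport.zip from its workspace into the shared directory.

```python
# Minimal sketch: copy the artifact Job A dropped in a shared location
# into Job B's workspace so the email step can attach it.
# The shared path is hypothetical; WORKSPACE is set by Jenkins for each build.
import os
import shutil

SHARED_DROP = "/var/lib/jenkins/shared-artifacts"  # hypothetical shared location
ARTIFACT = "TestReport.zip"

src = os.path.join(SHARED_DROP, ARTIFACT)
dst = os.path.join(os.environ["WORKSPACE"], ARTIFACT)

shutil.copyfile(src, dst)
print(f"Copied {src} -> {dst}")
```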
Let's say I have these 3 Stages: Dev, QC, Prod.
My requirements are:
Only artifacts from specific branches (release/*) can be deployed to QC/Prod
Artifacts from all branches can be deployed to Dev
I can achieve what I want using artifact filters for "After stage"-triggered releases, but I need this for "Manual only".
Is there a workaround that will let me control/filter which artifacts are available for deployment for specific stages/environments?
Basically, I need the Azure DevOps equivalent of Octopus Channels.
Update
I think I'm close to a solution.
In the "Pre-deployment conditions", I can add a new Deployment Gate which makes a Rest API call.
e.g. URL suffix = /Release/releases/76
Now, I just need to correctly parse the ApiResponse, because the Success criteria below doesn't work:
eq(root['artifacts[0].definitionReference.branch.id'], 'refs/heads/master')
Evaluation of expression 'eq(root['artifacts[0].definitionReference.branch.id'], 'refs/heads/master')' failed.
As you said, you can do this using Deployment gates on your stages.
1. Create a new Generic service connection from Project Settings -> Pipelines -> Service Connections.
For the service URL, use something like https://vsrm.dev.azure.com/{OrgName}/{ProjectName}/_apis
2. On your stage, open the Pre-deployment Conditions.
3. Enable the Gates option.
4. Add a new Invoke REST API gate and set the Delay before evaluation to 0 minutes.
4.1 Set the connection type to Generic.
4.2 Select the service connection you created in step 1.
4.3 Set the method to GET.
4.4 Set the URL suffix to /Release/releases/$(Release.ReleaseId)
4.5 In the Advanced area, set the Completion event to ApiResponse.
4.6 In the Advanced area, set the Success criteria to (or use startsWith):
eq(root['artifacts'][0]['definitionReference']['branch']['id'],'refs/heads/master')
Now, if you try to deploy an artifact that is not from the master branch, the deployment will fail.
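To see what the gate is evaluating, here is a minimal Python sketch of the same check done by hand with the requests library: it fetches the release from the Release API and inspects artifacts[0].definitionReference.branch.id, the field the success criteria reads. The organization, project, and PAT are placeholders, and the api-version value is an assumption.

```python
# Minimal sketch: fetch a release from the Azure DevOps Release API and check
# which branch its first artifact was built from (the same field the gate's
# success criteria evaluates). Org, project and PAT are placeholders; the
# release id 76 is the example from the question.
import requests

ORG = "<OrgName>"                # placeholder
PROJECT = "<ProjectName>"        # placeholder
RELEASE_ID = 76                  # example release id from the question
PAT = "<personal-access-token>"  # placeholder

url = f"https://vsrm.dev.azure.com/{ORG}/{PROJECT}/_apis/Release/releases/{RELEASE_ID}"

resp = requests.get(
    url,
    params={"api-version": "6.0"},  # assumed api-version
    auth=("", PAT),                  # PAT via basic auth with empty username
)
resp.raise_for_status()
release = resp.json()

branch = release["artifacts"][0]["definitionReference"]["branch"]["id"]
print("Artifact branch:", branch)
print("From master:", branch == "refs/heads/master")
```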
There is a workaround:
In the QC/Prod stages, add a custom condition so that the job is executed only when the artifact's source branch is release/*:
startsWith(variables['Release.Artifacts.{Artifacts-Alias}.SourceBranch'], 'refs/heads/release')
Now, when you manually run the QC/Prod stages and the artifact did not come from a release branch, the job will not be executed.
This works
and(contains(variables['build.sourceBranch'], 'refs/heads/release'), succeeded())