I am creating a pipeline for executing Terraform scripts in Azure DevOps. Instead of running the predefined Terraform tasks (which aren't available in our organization yet), I am planning to run the scripts through the Azure CLI. My question is: is there a way to identify from the terraform plan output that "No changes. Your infrastructure matches the configuration.", so that I don't have to run terraform apply?
I know that terraform apply won't do any harm if the configuration matches; I just want to skip that command. Is there a way to check the plan output and make that decision through the Azure CLI?
Yes, you can. You should use -detailed-exitcode. Here is an example in bash:
terraform init
terraform plan -out=plan.out -detailed-exitcode
status=$?
echo "The terraform plan exit status: ${status}"

# run apply only when changes are detected
if [ $status -eq 2 ]; then
  echo 'Applying terraform changes'
  terraform apply -auto-approve plan.out
elif [ $status -eq 1 ]; then
  exit 1
fi
Here is a link to the documentation:
Returns a detailed exit code when the command exits. When provided, this argument changes the exit codes and their meanings to provide more granular information about what the resulting plan contains:
0 = Succeeded with empty diff (no changes)
1 = Error
2 = Succeeded with non-empty diff (changes present)
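If plan and apply run as separate steps in your pipeline, you can also surface the result as a pipeline variable and condition the apply step on it. A minimal sketch, assuming the plan runs in a Bash-style step (the variable name terraformChanges is just an example):
terraform plan -out=plan.out -detailed-exitcode
status=$?
# Exit code 1 means the plan itself failed
if [ $status -eq 1 ]; then
  exit 1
fi
# Expose whether changes were detected; a later apply step can then use a
# condition such as: and(succeeded(), eq(variables['terraformChanges'], 'true'))
if [ $status -eq 2 ]; then
  echo "##vso[task.setvariable variable=terraformChanges]true"
else
  echo "##vso[task.setvariable variable=terraformChanges]false"
fi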
Problem
I am running an inline shell script on a remote machine via SSH in an Azure DevOps pipeline. Depending on some conditions, running the pipeline should throw a custom warning.
The feature works, albeit in a very twisted way: when running the pipeline, the warning always appears. To be more precise:
If the condition is not met, the warning appears once.
If the condition is met, the warning appears twice.
The example below should give a clear illustration of the issue.
Example
Let's say we have the following .yaml pipeline template. Please adapt the pool and sshEndpoint settings for your setup.
pool: 'Default'

steps:
- checkout: none
  displayName: Suppress regular checkout

- task: SSH@0
  displayName: 'Run shell inline on remote machine'
  inputs:
    sshEndpoint: 'user@machine'
    runOptions: inline
    inline: |
      if [[ 3 -ne 0 ]]
      then
        echo "ALL GOOD!"
      else
        echo "##vso[task.logissue type=warning]Something is fishy..."
      fi
    failOnStdErr: false
Expected behavior
So, the above bash script should echo ALL GOOD! as 3 is not 0. Therefore, the warning message should not be triggered.
Current behavior
However, when I run the above pipeline in Azure DevOps, the step log overview says there is a warning:
The log itself looks like this (connection details blacked out):
So even though the code takes the ALL GOOD! code path, the warning message appears. I assume it is because the whole bash script is echoed to the log - but that is just an assumption.
Question
How can I make sure that the warning only appears in the executed pipeline when the conditions for it are satisfied?
Looks like the SSH task logs the script to the console prior to executing it. You can probably trick the log parser:
HASHHASH="##"
echo $HASHHASH"vso[task.logissue type=warning]Something is fishy..."
I'd consider it a bug that the script is appended to the log as-is. I've filed an issue. (Some parsing does seem to take place as well...)
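For completeness, a minimal sketch of the question's inline script with that trick applied, so the literal ##vso prefix never appears in the script text that the task echoes to the log before running it (HASHHASH is just an arbitrary variable name):
HASHHASH="##"
if [[ 3 -ne 0 ]]
then
  echo "ALL GOOD!"
else
  # The logging command is assembled at runtime, so the raw "##vso" string
  # is not present in the script body that gets printed to the log.
  echo $HASHHASH"vso[task.logissue type=warning]Something is fishy..."
fi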
I have a Helm chart deploying to three environments (dev, stage and prod). My pipeline runs this command:
helm upgrade --install --namespace=$DEPLOYMENT_ENV ingress-external-api -f ./ingress-external-api/values-$DEPLOYMENT_ENV.yaml ./ingress-external-api --atomic
Where $DEPLOYMENT_ENV is either dev, stage or prod.
The important fact here is that only values-prod.yaml contains a proper YAML definition; values-dev.yaml and values-stage.yaml are empty and therefore will not deploy any resources.
That results in the following helm error:
+ helm upgrade --install --namespace=$DEPLOYMENT_ENV ingress-external-api -f ./ingress-external-api/values-$DEPLOYMENT_ENV.yaml ./ingress-external-api --atomic
Release "ingress-external-api" does not exist. Installing it now.
INSTALL FAILED
PURGING CHART
Error: release ingress-external-api failed: no objects visited
Successfully purged a chart!
Error: release ingress-external-api failed: no objects visited
This in turn causes my Bitbucket pipeline to stop and fail.
However, as you can also see, that did not help.
So my question is: how can I tell Helm not to throw an error at all if it cannot find anything to substitute its template with?
I am not sure this is supposed to be Helm's responsibility. Why do you want to update dev/stage with missing values? It seems a little bit weird.
If you are not going to update anything there, just run it once in production only.
If you insist on doing it that way, there's also the possibility to 'lie' about your return code in Bash and handle it at the pipeline level.
Lie about exit status
how can I tell helm not to throw an error at all?
Add " || true" to the end of your command, something like this:
helm upgrade --install --namespace=$DEPLOYMENT_ENV ... || true
How it works
Most of the commands in the bitbucket-pipelines.yml file are bash/shell commands running on Unix. The program that runs the yml script looks at the exit code of each command to see whether the command has failed (and stop the script) or succeeded (and move on to the next command).
When you add "|| true" to the end of a shell command, it means "ignore any errors and always return success code 0". Here's a demonstration, which you can run in a terminal window on your local computer:
echo "hello" # Runs successfully
echo $? # Check the status code of the last command. It should be 0 (success)
echo-xx "hello" # Runs with error, becuase there is no command called "echo-xx"
echo $? # Check the status code of the last command. It should be 127 (error)
echo-xx "hello" || true # Runs with success, even though "echo-xx" ran with an error
echo $? # Check the status code of the last command. It should be 0 (success)
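If ignoring every possible failure feels too blunt, a slightly safer variant is to tolerate only the "no objects visited" failure shown above and still fail the pipeline on anything else. This is only a sketch, assuming that error text stays stable:
set +e
output=$(helm upgrade --install --namespace="$DEPLOYMENT_ENV" ingress-external-api \
  -f "./ingress-external-api/values-$DEPLOYMENT_ENV.yaml" ./ingress-external-api --atomic 2>&1)
status=$?
set -e
echo "$output"
# Tolerate only the "empty values file" case; propagate every other error.
if [ $status -ne 0 ] && ! echo "$output" | grep -q "no objects visited"; then
  exit $status
fi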
More Info about shell return codes:
Bash ignoring error for a particular command
https://www.cyberciti.biz/faq/bash-get-exit-code-of-command/
If you really don't want to deploy anything you can use empty values.yaml files and then add ifs and loops to your template files.
Basically, you have to fill the values.yaml with an empty structure, e.g.:
my-release:
  value:
  other-value:
Then you can do something like this (a key containing a hyphen has to be accessed with index instead of dot notation, and the block needs a closing end):
{{ if (index .Values "my-release").value }}
...
{{ end }}
Maybe I'm missing something obvious, but...
Can I have several (GitHub) PR checks from a single Azure Pipelines YAML?
For example, in this screenshot I have CI connected to Azure Pipelines, where building & running tests happen within the same check:
Can I somehow separate them so I have 2 checks, Build and Running tests, and see them pass/fail separately?
if it's possible to have N checks in a single yaml and have their statuses posted separately
For this issue, the answer is yes, you can achieve this with a script-based approach.
Here is an issue about Multiple GitHub checks; in that issue someone has the same problem as you, and a solution with the exact config is given in it.
Since the build environment is a shell, for example, you could wrap your lint commands in a shell script that traps the exit code and sends the status to GitHub:
#!/bin/bash
npm run lint
EXIT_CODE=$?

if [[ $EXIT_CODE == 0 ]]
then
  export STATUS="success"
else
  export STATUS="failure"
fi

# Post the result as a commit status on GitHub.
# (Token authentication via the access_token query parameter has been removed
# by GitHub, so the token is sent in the Authorization header instead.)
GITHUB_TOKEN="<your api token>"
curl "https://api.github.com/repos/$CI_REPO/statuses/$CI_COMMIT" \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Content-Type: application/json" \
  -X POST \
  -d "{\"state\": \"$STATUS\", \"description\": \"eslint\", \"target_url\": \"$CI_BUILD_URL\"}"

exit $EXIT_CODE
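The $CI_REPO, $CI_COMMIT and $CI_BUILD_URL placeholders come from the CI system used in that issue. When the script runs in an Azure Pipelines bash step, a rough equivalent (a sketch only, using the predefined pipeline variables that are exposed as environment variables) would be:
# Hypothetical mapping of the placeholders above to Azure Pipelines predefined variables
CI_REPO="$BUILD_REPOSITORY_NAME"    # e.g. owner/repo for a GitHub repository
CI_COMMIT="$BUILD_SOURCEVERSION"    # SHA of the commit being built
CI_BUILD_URL="${SYSTEM_TEAMFOUNDATIONCOLLECTIONURI}${SYSTEM_TEAMPROJECT}/_build/results?buildId=${BUILD_BUILDID}"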
When making changes to YAML-defined Azure DevOps Pipelines, it can be quite tedious to push changes to a branch just to see the build fail with a parsing error (valid YAML, but invalid pipeline definition) and then try to trial-and-error fix the problem.
It would be nice if the feedback loop could be made shorter by analyzing and validating the pipeline definition locally; basically a linter with knowledge about the various resources etc. that can be defined in an Azure pipeline. However, I haven't been able to find any tool that does this.
Is there such a tool somewhere?
UPDATE: This functionality was removed in Issue #2479 in Oct, 2019
You can run the Azure DevOps agent locally with its YAML testing feature.
See the microsoft/azure-pipelines-agent project for how to install an agent on your local machine.
Then use the docs page on Run local (internal only) to access the feature that is available within the agent.
This should get you very close to the type of feedback you would expect.
FYI - this feature has been removed in Issue #2479 - remove references to "local run" feature
Hopefully they'll bring it back later, considering GitHub Actions has the ability to run actions locally.
Azure DevOps provides a run preview API endpoint that takes a YAML override and returns the expanded YAML. I added this support to the AzurePipelinesPS PowerShell module. The command below will run the pipeline with the id of 01, but with my YAML override, and return the expanded YAML pipeline.
Preview - Preview
Service: Pipelines
API Version: 6.1-preview.1
Queues a dry run of the pipeline and returns an object containing the final yaml.
# AzurePipelinesPS session
$session = 'myAPSessionName'

# Path to my local yaml
$path = ".\extension.yml"

# The id of an existing pipeline in my project
$id = 01

# The master branch of my repository
$resources = @{
    repositories = @{
        self = @{
            refName = 'refs/heads/master'
        }
    }
}

Test-APPipelineYaml -Session $session -FullName $path -PipelineId $id -Resources $resources
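If you prefer not to depend on the PowerShell module, the same preview endpoint can be called directly. A rough curl sketch, assuming the documented request/response shape (previewRun, yamlOverride and finalYaml) and jq for JSON handling; the organization, project, pipeline id and PAT are placeholders:
PAT="<personal access token>"
# JSON-encode the local YAML file so it can be sent as the yamlOverride field
BODY=$(jq -n --rawfile yaml ./extension.yml '{previewRun: true, yamlOverride: $yaml}')
curl -s -u ":$PAT" \
  -H "Content-Type: application/json" \
  -X POST \
  -d "$BODY" \
  "https://dev.azure.com/<organization>/<project>/_apis/pipelines/1/preview?api-version=6.1-preview.1" \
  | jq -r '.finalYaml'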
A pipeline is described with YAML, and YAML can be validated if you have a schema with rules on how that YAML file should be composed. This works as short-loop feedback for the case you described, especially for syntax parsing errors. YAML schema validation is available for almost any IDE. So, we need:
A YAML schema - against which we will validate our pipelines
An IDE (VS Code as a popular example) - which will perform validation on the fly
To configure the two of the above to work together for the greater good
The schema can be found in many places; for this case, I suggest using https://www.schemastore.org/json/
It has an Azure Pipelines schema (this schema contains some issues, like different types of values compared to the Microsoft documentation, but it still covers the case of invalid syntax).
VS Code will require an additional plug-in to perform YAML text validation. There are a bunch of those that can validate against a schema; I suggest trying YAML from Red Hat (I know, the rating of the plugin is not the best, but it works for the validation and is also configurable).
In the settings of that VS Code plugin, you will see a section about validation (like on screenshot)
Now you can add to the settings required schema, even without downloading it to your machine:
"yaml.schemas": {
"https://raw.githubusercontent.com/microsoft/azure-pipelines-vscode/v1.174.2/service-schema.json" : "/*"
}
Simply save settings and restart your VS Code.
You will notice warnings about issues in your Azure DevOps pipeline YAML files (if there are any). The validation in the screenshot below was made to fail on purpose:
See more details with examples here as well
I can tell you how we manage this disconnect.
We use only pipeline-as-code, yaml.
We use ZERO YAML templates and strictly enforce one file per pipeline.
We use the azure yaml extension to vscode, to get linter-like behaviour in the editor.
Most of the actual work we do in the pipelines is done by invoking PowerShell scripts that, via sensible defaulting, can also be invoked from the CLI, meaning we can in essence execute anything relevant locally.
Exceptions are configuration of the agent and actual pipeline-only stuff, such as download-artifact tasks, publish tasks etc.
Let me give some examples:
Here we have the step that builds our FrontEnd components:
Here we have that step running in the CLI:
I won't post a screenshot of the actual pipeline run, because it would take too long to sanitize it, but it is basically the same, plus some more trace information provided by the run.ps1 call wrapper.
Such a tool does not exist at the moment - there are a couple of existing issues in their feedback channels:
Github Issues - How to test YAML locally before commit
Developer Community - How to test YAML locally before commit
As a workaround, you can install the Azure DevOps build agent on your own machine, register it in its own build pool and use it for building and validating the YAML file correctness. See Jamie's answer in this thread.
Of course this would mean that you will need to constantly switch between the official build agents and your own build pool, which is not good. Also, if someone accidentally pushes some change via your own machine, you can suffer from all kinds of problems that can occur on a normal build machine (like UI prompts, or running hostile code on your own machine - hostile code could even be an unintended virus infection caused by executing a 3rd-party executable).
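For reference, registering a local agent into its own pool boils down to something like this (a sketch only: the pool and agent names are made up, the agent archive has to be downloaded from your organization's agent pool page first, and the PAT is a placeholder):
# Unpack the downloaded agent archive, then register it against a dedicated pool
mkdir myagent && cd myagent
tar zxvf ../vsts-agent-linux-x64-*.tar.gz
./config.sh --unattended \
  --url https://dev.azure.com/<organization> \
  --auth pat --token "<personal access token>" \
  --pool LocalValidationPool \
  --agent local-validation-agent
./run.sh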
There are two approaches which you can take:
Use Cake (Frosting) to perform the build locally as well as on Azure DevOps.
Use PowerShell to perform the build locally as well as on Azure DevOps.
Generally, comparing 1 to 2: 1 has more mechanics built in, like publishing to Azure DevOps (it also supports other build system providers, like GitHub Actions, and so on...).
(I myself would propose using the 1st alternative.)
As for 1:
Read, for example, the following links to get a slightly better understanding:
https://blog.infernored.com/cake-frosting-even-better-c-devops/
https://cakebuild.net/
Search for existing projects using "Cake.Frosting" on GitHub to get some understanding of how those projects work.
As for 2: it's possible to use PowerShell syntax to maximize the functionality done on the build script side and minimize the functionality done in Azure DevOps.
parameters:
- name: publish
  type: boolean
  default: true
- name: noincremental
  type: boolean
  default: false

...

- task: PowerShell@2
  displayName: invoke build
  inputs:
    targetType: 'inline'
    script: |
      # Mimic build machine
      #$env:USERNAME = 'builder'

      # Back up this script in case it needs troubleshooting later on
      $scriptDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
      $scriptPath = [System.IO.Path]::Combine($scriptDir, $MyInvocation.MyCommand.Name)
      $tempFile = [System.IO.Path]::Combine([System.Environment]::CurrentDirectory, 'lastRun.ps1')
      if($scriptPath -ne $tempFile)
      {
          Copy-Item $scriptPath -Destination $tempFile
      }

      ./build.ps1 'build;pack' -nuget_servers @{
          'servername' = @{
              'url' = "https://..."
              'pat' = '$(System.AccessToken)'
          }
          'servername2' = @{
              'url' = 'https://...'
              'publish_key' = '$(ServerSecretPublishKey)'
          }
      } `
      -b $(Build.SourceBranchName) `
      -addoperations publish=${{parameters.publish}};noincremental=${{parameters.noincremental}}
And then in build.ps1, handle all the parameters as necessary.
param (
    # Operations can be added from the command line like this:
    #   build a -addoperations c=true,d=true,e=false -v
    #   =>
    #   a c d
    #
    [string] $addoperations = ''
)

...

foreach ($operationToAdd in $addoperations.Split(";,"))
{
    if($operationToAdd.Length -eq 0)
    {
        continue
    }

    $keyValue = $operationToAdd.Split("=")
    if($keyValue.Length -ne 2)
    {
        "Ignoring command line parameter '$operationToAdd'"
        continue
    }

    if([System.Convert]::ToBoolean($keyValue[1]))
    {
        $operationsToPerform = $operationsToPerform + $keyValue[0];
    }
}
This will allow you to run all the same operations on your own machine locally and minimize the amount of YAML file content.
Please notice that I have also added copying of the last executed .ps1 script to a lastRun.ps1 file.
You can use it after a build if you see some non-reproducible problem but want to run the same command on your own machine to test it.
You can use the ` character to continue PowerShell execution on the next line, or, if it is already a complex structure (e.g. @{), it can be continued as it is.
But even though the YAML syntax is minimized, it still needs to be tested if you want different build phases and multiple build machines in use. One approach is to have a special kind of argument, -noop, which does not perform any operation but only prints what was intended to be executed. This way you can run your pipeline in no time and check that everything that was planned to be executed will get executed.
I have an Azure Bot Service solution, which is in my VSTS Git repository.
I am using Task Runner in Visual Studio to compile, run and debug the code on my local machine.
Similarly, I want to build and compile the code in my VSTS build pipeline, like we do for .NET applications using the Visual Studio build template.
I am very new to Bot Service projects, which have C# script (.csx) files.
I have seen MSDN documents, but they all describe continuous integration that links directly to my Git repo branch: whenever I commit code, it is automatically pushed to my Azure Bot Service. Here I want to make sure the code I commit is compiled before it is pushed to the Azure Bot Service. For that I want to set up a build pipeline.
Does anyone know how to set up a build pipeline for this kind of project with C# script files?
UPDATE:
On my local PC I have installed the Azure Functions CLI tools and the Command Task Runner extension for Visual Studio. I followed the link below to enable debugging locally:
Debugging C# bots built using the Azure Bot Service on Windows
Task Runner runs the debughost.cmd file which is in my Bot Service code; it contains the following code:
@echo off
set size=0
call func settings list -data > %temp%\settings-list
call :filesize %temp%\settings-list
if NOT %size% == 0 goto show
@echo ----------------------------------------------------------------------
@echo To fetch your bot service settings run the following command:
@echo func azure functionapp fetch-app-settings [YOUR_BOT_SERVICE_NAME]
@echo func azure functionapp fetch-app-settings AthenaDevbvpn6xsu2tz6i
@echo ----------------------------------------------------------------------
goto start
:show
type %temp%\settings-list
erase %temp%\settings-list
:start
@func host start -p 3978
goto :eof
:filesize
set size=%~z1
exit /b 0
Output in task runner is
There isn't any out-of-the-box task to compile CSX files for now. The following is a workaround I can think of for your scenario, but it is not perfect:
Deploy your own build agent and then configure it by following the steps in the link you provided: Debugging C# bots built using the Azure Bot Service on Windows.
Create a PowerShell script that calls the Azure Functions CLI to compile the csx files, like "debughost.cmd" does, and checks whether any error occurs during the compilation.
Upload the powershell script into the source control.
Add a powershell script task in your build definition to call the powershell script you created.
Save and queue the build definition.
Here is the powershell script I created for your reference:
##Run Azure Func command to compile csx file
$compile = Start-Process 'func' -passthru -WorkingDirectory '.' -ArgumentList 'host start -p 3739' -RedirectStandardOutput 'output.txt'
##You need to set the sleep time based on the build time of your project
Start-Sleep -s 20
Stop-Process $compile -Force
Stop-Process -Name 'Func'
##Get the output from Func and check if there is error in the output
$boutput = Get-Content 'output.txt'
Write-Host 'Azure Function CLI Log:'
Write-Host '*****************************************************************************************************************************'
$boutput
Write-Host '*****************************************************************************************************************************'
$reg = "Function.compilation.error"
foreach($line in $boutput){
    if($line -match $reg)
    {
        ## Fail the task if a function compilation error exists
        Write-Host '##vso[task.logissue type=error]Error occurred during function compilation'
        Exit 1
    }
}
Write-Host 'Function compilation success!'
And you will get this result if the compilation failed:
For Azure Bot Service, set up continuous integration with the master branch of your repository in VSTS. In VSTS you can create a new branch, such as dev, work on the dev branch and merge to master; after that the code will be updated in Azure.
Simple Steps:
Set continuous integration to your repository (master branch) in VSTS
Go to Code page of your repository in VSTS
Select Branches
Click New branch (e.g. dev)
Clone dev branch to your local and work with it (e.g. modify)
Push changes to remote Dev branch
Create a build definition
Enable Allow Scripts to Access OAuth Token option in Options tab.
Add a step to build the app (e.g. gulp) according to how you build locally
Add Command Line step
Add Command Line step
Add Command Line step