Azure DevOps warnings from inline shell script - azure-devops

Problem
I am running an inline shell script on a remote machine via SSH in an Azure DevOps pipeline. Depending on some conditions, running the pipeline should emit a custom warning.
The feature works, albeit in a very twisted way: the warning always appears when the pipeline runs. To be more precise:
If the condition is not met, the warning appears once.
If the condition is met, the warning appears twice.
The example below should give a clear illustration of the issue.
Example
Let's say we have the following .yaml pipeline template. Please adapt the pool and sshEndpoint settings for your setup.
pool: 'Default'
steps:
- checkout: none
  displayName: Suppress regular checkout
- task: SSH@0
  displayName: 'Run shell inline on remote machine'
  inputs:
    sshEndpoint: 'user@machine'
    runOptions: inline
    inline: |
      if [[ 3 -ne 0 ]]
      then
        echo "ALL GOOD!"
      else
        echo "##vso[task.logissue type=warning]Something is fishy..."
      fi
    failOnStdErr: false
Expected behavior
So, the above bash script should echo ALL GOOD! as 3 is not 0. Therefore, the warning message should not be triggered.
Current behavior
However, when I run the above pipeline in Azure DevOps, the step log overview says there is a warning:
The log itself looks like this (connection details blacked out in the screenshot):
So even though the code takes the ALL GOOD! code path, the warning message appears. I assume this is because the whole bash script is echoed to the log, but that is just an assumption.
Question
How can I make sure that the warning only appears in the executed pipeline when the conditions for it are satisfied?

Looks like the SSH task logs the script to the console prior to executing it. You can probably trick the log parser:
HASHHASH="##"
echo $HASHHASH"vso[task.logissue type=warning]Something is fishy..."
I'd consider it a bug that the script is appended to the log as-is. I've filed an issue. (Some parsing does seem to take place as well)...
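Applied to the pipeline from the question, only the inline input needs to change; a sketch (the rest of the SSH task stays as before):
inline: |
  # Split the "##" prefix into a variable so the log parser does not
  # match it when the SSH task dumps the script to the log before running it.
  HASHHASH="##"
  if [[ 3 -ne 0 ]]
  then
    echo "ALL GOOD!"
  else
    # The prefix is reassembled at runtime, so the logging command only
    # reaches the log when this branch actually executes.
    echo $HASHHASH"vso[task.logissue type=warning]Something is fishy..."
  fi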

Related

GitLab CI: allow_failure:exit_codes does not behave as expected with PowerShell

Using a shell executor with PowerShell, allow_failure:exit_codes does not behave as documented:
https://docs.gitlab.com/ee/ci/yaml/#allow_failureexit_codes
With the following very simple job:
build:
  script:
    - exit 4
  allow_failure:
    exit_codes:
      - 4
The job fails instead of passing with a warning sign.
It works perfectly with bash.
It works perfectly with allow_failure: true.
Am I missing something? Is there a workaround?
This is an aspect of how PowerShell works in general. See: Returning an exit code from a PowerShell script
To work around this for a shell executor running PowerShell, use the following:
script:
  - $host.SetShouldExit(4); exit 4
  # ...
Note, however, that this workaround may not work for custom executors, such as the beta shared-windows autoscaling executors on gitlab.com; there is no workaround for that case at the time of writing.
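Putting it together, the job from the question would then look like this (a sketch that simply combines the original job with the workaround):
build:
  script:
    # Tell the PowerShell host which exit code to propagate before exiting,
    # so allow_failure:exit_codes can match it.
    - $host.SetShouldExit(4); exit 4
  allow_failure:
    exit_codes:
      - 4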

How can I command Helm NOT to throw an error if a release has nothing to release?

I have a Helm chart deploying to three environments (dev, stage and prod). My Bitbucket pipeline runs this command:
helm upgrade --install --namespace=$DEPLOYMENT_ENV ingress-external-api -f ./ingress-external-api/values-$DEPLOYMENT_ENV.yaml ./ingress-external-api --atomic
Where $DEPLOYMENT_ENV is either dev, stage or prod.
The important fact here is that only values-prod.yaml contains a proper YAML definition. The others (values-dev.yaml and values-stage.yaml) are empty and will therefore not deploy any releases.
That results in the following helm error:
+ helm upgrade --install --namespace=$DEPLOYMENT_ENV ingress-external-api -f ./ingress-external-api/values-$DEPLOYMENT_ENV.yaml ./ingress-external-api --atomic
Release "ingress-external-api" does not exist. Installing it now.
INSTALL FAILED
PURGING CHART
Error: release ingress-external-api failed: no objects visited
Successfully purged a chart!
Error: release ingress-external-api failed: no objects visited
This in turn causes my Bitbucket pipeline to stop and fail.
As you can also see from the output above, the --atomic flag did not help.
So my question is: how can I tell Helm not to throw an error at all if it cannot find anything to substitute its template with?
I am not sure this is supposed to be Helm's responsibility. Why do you want to run the upgrade for dev/stage with missing values? It seems a little bit weird.
If you are not going to update anything there, just run it for production only.
If you insist on doing it that way, there is also the possibility to 'lie' about the return code in Bash and handle it at the pipeline level.
Lie about exit status
how can I tell helm not to throw an error at all?
Add " || true" to the end of your command, something like this:
helm upgrade --install --namespace=$DEPLOYMENT_ENV ... || true
How it works
Most of the commands in the bitbucket-pipelines.yml file are bash/shell commands running on Unix. The program that runs the yml script will look for error exit codes from each command, to see whether the command has failed (to stop the script), or succeeded (to move onto the next command).
When you add "|| true" to the end of a shell command, it means "ignore any errors, and return success code 0 always". Here's a demonstration, which you can run in a Terminal window on your local computer:
echo "hello" # Runs successfully
echo $? # Check the status code of the last command. It should be 0 (success)
echo-xx "hello" # Runs with error, becuase there is no command called "echo-xx"
echo $? # Check the status code of the last command. It should be 127 (error)
echo-xx "hello" || true # Runs with success, even though "echo-xx" ran with an error
echo $? # Check the status code of the last command. It should be 0 (success)
More Info about shell return codes:
Bash ignoring error for a particular command
https://www.cyberciti.biz/faq/bash-get-exit-code-of-command/
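Within a bitbucket-pipelines.yml, this would look something like the following sketch (the step layout is a placeholder; only the appended || true matters here):
pipelines:
  default:
    - step:
        script:
          # "|| true" swallows the non-zero exit code so the pipeline continues
          - helm upgrade --install --namespace=$DEPLOYMENT_ENV ingress-external-api -f ./ingress-external-api/values-$DEPLOYMENT_ENV.yaml ./ingress-external-api --atomic || true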
If you really don't want to deploy anything you can use empty values.yaml files and then add ifs and loops to your template files.
Basically, you have to fill the values.yaml with an empty structure, e.g.:
my-release:
  value:
  other-value:
Then you can guard the templated objects like this (note that keys containing a hyphen cannot be accessed with the dot syntax, so the index function is needed):
{{ if index .Values "my-release" "value" }}
{{ end }}

How to fail a pipeline gracefully when a variable is not defined

I am using a pipeline variable to define a path for a deploy script. There is the danger that someone forgets to define the variable. What would be a good way to detect this and give the appropriate error message in the yaml script file?
I could create a PowerShell script that would fail if the variable is not defined. But I would prefer to keep it all in the yaml file.
The PowerShell script to examine the variable value can be tiny and can still live in the YAML as the inline script of the PowerShell task:
- powershell: if (!$env:MyVar) { Write-Error "The variable is not set" }
  displayName: Check Prerequisite Variable
  failOnStderr: true
  errorActionPreference: stop
I might be mistaken on syntax, but it describes the idea.
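For a longer check, a multi-line inline script works as well. A sketch, assuming the variable is called MyVar and using the ##vso logging command shown in the first question to surface a proper error:
- powershell: |
    if (-not $env:MyVar) {
      Write-Host "##vso[task.logissue type=error]Required variable 'MyVar' is not set."
      exit 1  # a non-zero exit code fails the step
    }
  displayName: Check Prerequisite Variable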

Configure VSTS to properly abort in case of errors

Given the following .vsts-ci.yml file
queue: Hosted Linux Preview
steps:
- script: |
    false
    true
The expected behavior and the actual behavior differ.
Expected behavior: Build fails at the false command, true will not be executed.
Actual behavior: Build succeeds, true is executed after the false command.
Details:
I would expect the VSTS build to fail on the first command false.
However, VSTS executes the second command true as well and reports success.
This means that the shell is set up incorrectly for build systems. The correct setup would be to have pipefail and errexit set. But it seems that errexit is not set, and probably pipefail is not set either.
Is there a way to get the correct behavior, that is, pipefail and errexit, within the YAML file, without using bash -c in the scripts section? I know I can easily workaround by just moving the command sequence into a shell script or Makefile, I just want to know if there is a configuration possibility to get the YAML file execute shell commands in a shell with errexit and pipefail set, preferably a bash shell.
It seems that the bash shell created by VSTS does not have the pipefail and errexit flags set. See the following issue on GitHub about this: https://github.com/Microsoft/vsts-agent/issues/1803
But they can be set within the YAML file, like this:
queue: Hosted Linux Preview
steps:
- script: |
    set -e ; set -o pipefail
    false
    true
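For completeness, here is what pipefail adds on top of set -e (errexit); you can try this in any local bash shell:
set -o pipefail
false | true   # the last command in the pipe succeeds...
echo $?        # ...but with pipefail the pipe's exit code is 1 (it would be 0 otherwise)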

GitLab CI PowerShell with Write-Output

I have a PowerShell build script that I am executing from GitLab CI pipelines.
When run manually (on the build server) the build script runs fine, but when executed by the GitLab CI runner it:
Times out after an hour (runs for about 20 mins if run manually)
Does not echo Write-Output statements into the build log
So there is something going wrong when executed from GitLab CI. However, as the Write-Output statements aren't displayed in the build log, there is no real way to troubleshoot this.
What do I need to do to get the Write-Output statements to display in the build log? I would have assumed any STDOUT messages would show there, but they're not coming through.
The answer here was to set PowerShell as the shell to use in the GitLab runner.
This is done by adding the following line to the GitLab runner's config.toml file:
shell = "powershell"
Now the file executes correctly and Write-Output statements are echoed in the build log.
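For reference, the relevant section of config.toml would then look something like this (the runner name is a hypothetical placeholder; url, token and other settings are omitted):
[[runners]]
  name = "windows-build-server"  # hypothetical runner name
  executor = "shell"
  shell = "powershell"           # use PowerShell instead of the default shell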