Configure VSTS to properly abort in case of errors - azure-devops

Given the following .vsts-ci.yml file:
queue: Hosted Linux Preview
steps:
- script: |
    false
    true
The expected behavior and the actual behavior differ.
Expected behavior: the build fails at the false command; true is not executed.
Actual behavior: the build succeeds; true is executed after the false command.
Details:
I would expect the VSTS build to fail on the first command false.
However, VSTS executes the second command true as well and reports success.
This means that the shell is set up incorrectly for a build system. The correct setup would be to have pipefail and errexit set. But it seems that errexit is not set, and probably pipefail isn't set either.
Is there a way to get the correct behavior, that is, pipefail and errexit, within the YAML file, without using bash -c in the scripts section? I know I can easily work around this by moving the command sequence into a shell script or Makefile; I just want to know if there is a configuration option to make the YAML file execute shell commands in a shell with errexit and pipefail set, preferably a bash shell.

It seems that the bash shell created by VSTS does not have the pipefail and errexit flags set. See the following issue on GitHub about this: https://github.com/Microsoft/vsts-agent/issues/1803
But they can be set within the YAML file, like this:
queue: Hosted Linux Preview
steps:
- script: |
    set -e ; set -o pipefail
    false
    true
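Equivalently, the two options can be combined on a single line at the top of the script block; this is only a sketch of the same idea, with the inline comments added here for illustration:
queue: Hosted Linux Preview
steps:
- script: |
    set -eo pipefail   # errexit + pipefail in one statement
    false              # the job fails here ...
    true               # ... so this line is never executed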

Related

Difference between '-- /bin/sh -c ls' vs 'ls' when setting a command in kubectl?

I am a bit confused with commands in kubectl. I am not sure when I can use the commands directly like
command: ["command"] or -- some_command
vs
command: [/bin/sh, -c, "command"] or -- /bin/sh -c some_command
Thankfully the distinction is straightforward: everything in command: is fed into the exec system call (or its golang equivalent). So if your container contains a binary that the kernel can successfully execute, you are welcome to use it in command:; if it is a shell built-in, a shell alias, or otherwise requires sh (or python or whatever) to execute, then you must be explicit to the container runtime about that distinction.
If it helps any, the command: syntax of a Kubernetes container: is the equivalent of the exec-form ENTRYPOINT ["",""] line of a Dockerfile, not CMD ["", ""], and certainly not the shell form ENTRYPOINT echo this is fed to /bin/sh for you.
At a low level, every (Unix/Linux) command is invoked as a series of "words". If you type a command into your shell, the shell does some preprocessing and then creates the "words" and runs the command. In Kubernetes command: (and args:) there isn't a shell involved, unless you explicitly supply one.
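To make the Dockerfile correspondence concrete, here is a minimal sketch (the container name and image are made up for illustration):
containers:
- name: example              # hypothetical container name
  image: busybox             # hypothetical image
  command: ["/bin/echo"]     # plays the role of the exec-form ENTRYPOINT
  args: ["hello", "world"]   # plays the role of the exec-form CMD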
I would default to using the list form unless you specifically need shell features.
command: # overrides Docker ENTRYPOINT
- the_command
- --an-argument
- --another
- value
If you use list form, you must explicitly list out each word. You may use either YAML block list syntax as above or flow list syntax [command, arg1, arg2]. If there are embedded spaces in a single item [command, --option value] then those spaces are included in a single command-line option as if you quoted it, which frequently confuses programs.
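For illustration, the same hypothetical command in flow list syntax, followed by the embedded-space pitfall just described:
command: [the_command, --an-argument, --another, value]
# versus: "--another value" below arrives at the program as ONE argument, spaces included
command: [the_command, --an-argument, "--another value"]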
You can explicitly invoke a shell if you need to:
command:
- sh
- -c
- the_command --an-argument --another value
This command has exactly three words: sh, the option -c, and the shell command. The shell will process this command in the usual way and execute it.
You need the shell form only if you're doing something more complicated than running a simple command with fixed arguments. Running multiple sequential commands c1 && c2 or environment variable expansion c1 "$OPTION" are probably the most common ones, but any standard Bourne shell syntax would be acceptable here (redirects, pipelines, ...).
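As a sketch of those two cases (the path, command name and $OPTION variable are hypothetical):
command:
- sh
- -c
# the shell, not Kubernetes, performs the && sequencing and the $OPTION expansion
- mkdir -p /data/out && the_command --flag "$OPTION"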

Azure DevOps warnings in inline shell script

Problem
I am running an inline shell script on a remote machine via SSH in an Azure DevOps pipeline. Depending on some conditions, running the pipeline should throw a custom warning.
The feature works, albeit in a very twisted way: when running the pipeline, the warning always appears. To be more precise:
If the condition is not met, the warning appears once.
If the condition is met, the warning appears twice.
The example below should give a clear illustration of the issue.
Example
Let's say we have the following .yaml pipeline template. Please adapt the pool and sshEndpoint settings for your setup.
pool: 'Default'
steps:
- checkout: none
  displayName: Suppress regular checkout
- task: SSH@0
  displayName: 'Run shell inline on remote machine'
  inputs:
    sshEndpoint: 'user@machine'
    runOptions: inline
    inline: |
      if [[ 3 -ne 0 ]]
      then
        echo "ALL GOOD!"
      else
        echo "##vso[task.logissue type=warning]Something is fishy..."
      fi
    failOnStdErr: false
Expected behavior
So, the above bash script should echo ALL GOOD! as 3 is not 0. Therefore, the warning message should not be triggered.
Current behavior
However, when I run the above pipeline in Azure DevOps, the step log overview says there is a warning:
The log itself looks like this (connection details blacked out):
So even though the code takes the ALL GOOD! code path, the warning message appears. I assume it is because the whole bash script is echoed to the log - but that is just an assumption.
Question
How can I make sure that the warning only appears in the executed pipeline when the conditions for it are satisfied?
Looks like the SSH task logs the script to the console prior to executing it. You can probably trick the log parser:
HASHHASH="##"
echo $HASHHASH"vso[task.logissue type=warning]Something is fishy..."
I'd consider it a bug that the script is appended to the log as-is. I've filed an issue. (Some parsing does seem to take place as well)...
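For completeness, a sketch of the original inline script with that workaround applied (same SSH@0 task inputs as above):
HASHHASH="##"
if [[ 3 -ne 0 ]]
then
  echo "ALL GOOD!"
else
  # splitting the "##vso" prefix keeps the log parser from matching the echoed source
  echo $HASHHASH"vso[task.logissue type=warning]Something is fishy..."
fi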

How can I command Helm NOT to throw an error if a release has nothing to release?

I have a helm chart deploying to three environments (dev, stage and prod). My Bitbucket pipeline is running this command:
helm upgrade --install --namespace=$DEPLOYMENT_ENV ingress-external-api -f ./ingress-external-api/values-$DEPLOYMENT_ENV.yaml ./ingress-external-api --atomic
Where $DEPLOYMENT_ENV is either dev, stage or prod.
The important fact here is that only values-prod.yaml has a proper YAML definition. The others (values-dev.yaml and its stage equivalent) are empty and therefore will not render any objects to deploy.
That results in the following helm error:
+ helm upgrade --install --namespace=$DEPLOYMENT_ENV ingress-external-api -f ./ingress-external-api/values-$DEPLOYMENT_ENV.yaml ./ingress-external-api --atomic
Release "ingress-external-api" does not exist. Installing it now.
INSTALL FAILED
PURGING CHART
Error: release ingress-external-api failed: no objects visited
Successfully purged a chart!
Error: release ingress-external-api failed: no objects visited
Which furthermore results in my bitbucket pipeline to stop and fail.
However, as you can also see, that did not help.
So my question is: how can I tell helm not to throw an error at all if it cannot find anything to substitute its templates with?
I am not sure this is supposed to be helm's responsibility. Why do you want to update dev/stage with missing values? It seems a little bit weird.
If you are not going to update anything there, just run it once in production only.
If you insist on doing it that way, there is also the possibility to 'lie' about the return code in Bash and handle it at the pipeline level.
Lie about exit status
how can I tell helm not to throw an error at all?
Add " || true" to the end of your command, something like this:
helm upgrade --install --namespace=$DEPLOYMENT_ENV ... || true
How it works
Most of the commands in the bitbucket-pipelines.yml file are bash/shell commands running on Unix. The program that runs the yml script checks the exit code of each command to see whether the command has failed (and stop the script) or succeeded (and move on to the next command).
When you add "|| true" to the end of a shell command, it means "ignore any errors, and return success code 0 always". Here's a demonstration, which you can run in a Terminal window on your local computer:
echo "hello" # Runs successfully
echo $? # Check the status code of the last command. It should be 0 (success)
echo-xx "hello" # Runs with error, becuase there is no command called "echo-xx"
echo $? # Check the status code of the last command. It should be 127 (error)
echo-xx "hello" || true # Runs with success, even though "echo-xx" ran with an error
echo $? # Check the status code of the last command. It should be 0 (success)
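Applied to the pipeline, that would look roughly like the following sketch of a bitbucket-pipelines.yml step (the step name and surrounding structure are assumed, not taken from the question):
pipelines:
  default:
    - step:
        name: Deploy ingress   # hypothetical step name
        script:
          - helm upgrade --install --namespace=$DEPLOYMENT_ENV ingress-external-api -f ./ingress-external-api/values-$DEPLOYMENT_ENV.yaml ./ingress-external-api --atomic || true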
More Info about shell return codes:
Bash ignoring error for a particular command
https://www.cyberciti.biz/faq/bash-get-exit-code-of-command/
If you really don't want to deploy anything you can use empty values.yaml files and then add ifs and loops to your template files.
Basically, you have to fill the values.yaml with an empty structure, e.g.:
my-release:
  value:
  other-value:
Then you can do something like:
{{ if .Values.my-release.value }}
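For illustration, a complete guard around a whole resource might look like the sketch below; the top-level key ingress and the host field are hypothetical and deliberately hyphen-free, since a hyphenated key such as my-release would have to be accessed with the index function in Go templates:
{{- if .Values.ingress }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-external-api
spec:
  rules:
    - host: {{ .Values.ingress.host }}
{{- end }}
With an empty values file, .Values.ingress evaluates to nothing, the if branch is skipped, and the template renders no output.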

Supervisor not detecting file

I am trying to run a command via a proxy. When I run this command in shell it works
http_proxy=http://username:password@proxy:29800 /home/www/program -env prod
But when I put this into my supervisor config it tells me it can't find this file
[program:goprogram]
command = http_proxy=http://username:password@proxy:29800 /home/www/program -env prod
directory = /home/www/program
environment=PATH='/home/www/env/bin:/usr/bin'
user = user
autorestart = true
Now, I assume it has to do with the http_proxy or syntax, but not sure how to fix it.
Since you are trying to set up an environment variable in the command itself, you might try a different way to call said command:
command = /bin/sh -c 'http_proxy=http://username:password@proxy:29800 /home/www/program -env prod'
That way:
you don't have to add that environment variable to the environment section (otherwise the credentials would be visible to the environments of all supervisord processes and their child processes)
you set http_proxy only for the command to be executed (see the combined sketch below).
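Putting that together with the original configuration, the program section might look like this sketch (credentials and paths as in the question):
[program:goprogram]
; wrap the command in a shell so the http_proxy assignment is treated as an environment variable
command = /bin/sh -c 'http_proxy=http://username:password@proxy:29800 /home/www/program -env prod'
directory = /home/www/program
environment = PATH='/home/www/env/bin:/usr/bin'
user = user
autorestart = true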
You need to set the http_proxy variable. Either the way @VonC described it or:
[program:goprogram]
command = /home/www/program -env prod
directory = /home/www/program
environment=
    PATH='/home/www/env/bin:/usr/bin',
    http_proxy='http://username:password@proxy:29800'
user = user
autorestart = true
More information can be found in this SO question.

How to tell bash not to issue warnings "cannot set terminal process group" and "no job control in this shell" when it can't assert job control?

To create a new interactive bash shell I call bash -i. Due to issues with my environment, bash cannot assert job control (I'm using cygwin bash in GNU emacs) and issues the warnings "cannot set terminal process group" and "no job control in this shell". I have to live with the disabled job control in my environment, but I would like to get rid of the warnings.
How can I tell bash not to assert job control and not to issue these warnings? I obviously still want the shell as an interactive one.
Note: I have tried set -m in .bashrc, but bash still writes out the warnings on start up - the ~/.bashrc file might be executed after the shell tries to assert job control. Is there a command line option which would work?
man bash says about set options that "The options can also be specified as arguments to an invocation of the shell." Note you will need +m, not -m. Admittedly the manual isn't quite clear on that.
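For reference, the invocation the manual suggests would simply be the following (although, as explained next, bash 4.2 ignores it at startup):
bash -i +m    # interactive shell with the monitor (job control) option switched off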
However looking at bash source code (version 4.2), apparently it ignores the state of this flag. I would say this is a bug.
Applying the following small patch makes bash honor the m flag on startup. Unfortunately this means you will have to recompile bash.
--- jobs.c.orig 2011-01-07 16:59:29.000000000 +0100
+++ jobs.c      2012-11-09 03:34:49.682918771 +0100
@@ -3611,7 +3611,7 @@
     }
 
   /* We can only have job control if we are interactive. */
-  if (interactive == 0)
+  if (interactive == 0 || !job_control)
     {
       job_control = 0;
       original_pgrp = NO_PID;
Tested on my linux machine where job control is available by default, so the error messages you see on mingw are not printed here. You can still see that bash honors the +m now, though.
$ ./bash --noprofile --norc
$ echo $-
himBH
$ fg
bash: fg: current: no such job
$ exit
$ ./bash --noprofile --norc +m
$ echo $-
hiBH
$ fg
bash: fg: no job control