When making changes to YAML-defined Azure DevOps Pipelines, it can be quite tedious to push changes to a branch just to see the build fail with a parsing error (valid YAML, but invalid pipeline definition) and then fix the problem by trial and error.
It would be nice if the feedback loop could be made shorter by analyzing and validating the pipeline definition locally; basically a linter with knowledge about the various resources etc. that can be defined in an Azure pipeline. However, I haven't been able to find any tool that does this.
Is there such a tool somewhere?
UPDATE: This functionality was removed in Issue #2479 in October 2019.
You can run the Azure DevOps agent locally with its YAML testing feature.
Follow the instructions in the microsoft/azure-pipelines-agent project to install an agent on your local machine.
Then use the docs page Run local (internal only) to access the feature available within the agent.
This should get you very close to the type of feedback you would expect.
FYI - this feature has been removed in Issue #2479 ("remove references to 'local run' feature").
Hopefully they'll bring it back later, considering GitHub Actions has the ability to run actions locally.
Azure DevOps provides a run preview API endpoint that takes a YAML override and returns the expanded YAML. I added support for it to the AzurePipelinesPS PowerShell module. The command below queues a dry run of the pipeline with id 01, using my local YAML override, and returns the expanded YAML pipeline.
Preview - Preview
Service: Pipelines
API Version: 6.1-preview.1
Queues a dry run of the pipeline and returns an object containing the final yaml.
# AzurePipelinesPS session
$session = 'myAPSessionName'

# Path to my local yaml
$path = ".\extension.yml"

# The id of an existing pipeline in my project
$id = 01

# The master branch of my repository
$resources = @{
    repositories = @{
        self = @{
            refName = 'refs/heads/master'
        }
    }
}

Test-APPipelineYaml -Session $session -FullName $path -PipelineId $id -Resources $resources
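If you don't want to install the module, roughly the same check can be done against the preview endpoint directly. A minimal sketch, assuming a personal access token that may queue builds; the organization, project, pipeline id and the response property name should be verified against the current REST docs:

# Hypothetical values - replace with your own organization, project, pipeline id and PAT.
$org        = 'myOrg'
$project    = 'myProject'
$pipelineId = 1
$pat        = '<personal access token>'

$headers = @{
    Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}

# previewRun queues a dry run only; yamlOverride is the local file you want validated/expanded.
$body = @{
    previewRun   = $true
    yamlOverride = (Get-Content -Path .\extension.yml -Raw)
} | ConvertTo-Json -Depth 10

$uri = "https://dev.azure.com/$org/$project/_apis/pipelines/$pipelineId/preview?api-version=6.1-preview.1"
$result = Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $body -ContentType 'application/json'

# On success the response contains the fully expanded YAML; on a bad definition the call fails with the parse error.
$result.finalYaml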
A pipeline is described with YAML, and YAML can be validated if you have a schema with rules on how that YAML file should be composed. That gives short feedback for the case you described, especially for syntax/parsing errors. YAML schema validation is available in almost any IDE. So, we need:
A YAML schema - against which we will validate our pipelines
An IDE (VS Code, as a popular example) - which will perform the validation on the fly
To configure the two of the above to work together for the greater good
The schema can be found in many places; for this case, I suggest using https://www.schemastore.org/json/
It has an Azure Pipelines schema (the schema contains some issues, like value types that differ from the Microsoft documentation, but it still covers the case of invalid syntax).
VS Code requires an additional plug-in to perform YAML text validation; there are a bunch of those that can validate against a schema. I suggest trying YAML from Red Hat (I know the rating of the plugin is not the best, but it works for validation and is also configurable).
In the settings of that VS Code plugin, you will see a section about validation (as in the screenshot).
Now you can add the required schema to the settings, even without downloading it to your machine:
"yaml.schemas": {
"https://raw.githubusercontent.com/microsoft/azure-pipelines-vscode/v1.174.2/service-schema.json" : "/*"
}
Simply save settings and restart your VS Code.
You will notice warnings about issues in your Azure DevOps pipeline YAML files (if there are any). Validation was made to fail on purpose for the screenshot below:
See more details with examples here as well
I can tell you how we manage this disconnect.
We use only pipeline-as-code, yaml.
We use ZERO yaml templates and strictly enforce one-file-per-pipeline.
We use the Azure YAML extension for VS Code, to get linter-like behaviour in the editor.
Most of the actual things we do in the pipelines, we do by invoking PowerShell, which via sensible defaulting can also be invoked from the CLI, meaning we can in essence execute anything relevant locally.
Exceptions are configuration of the agent and actual pipeline-only stuff, such as download-artifact tasks and publish tasks etc.
Let me give some examples:
Here we have the step that builds our FrontEnd components (screenshot):
Here we have that step running in the CLI (screenshot):
I won't post a screenshot of the actual pipeline run, because it would take too long to sanitize it, but it is basically the same, plus some more trace information provided by the run.ps1 call-wrapper.
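For illustration only (run.ps1, its parameters and the npm commands below are made up, not the poster's actual scripts), a dual-use script of that kind might look roughly like this:

# run.ps1 - hypothetical call-wrapper: sensible defaults let the same script run
# both from an Azure DevOps PowerShell task and from a local shell.
param(
    # The pipeline passes $(Build.Configuration); locally the default applies.
    [string]$Configuration = 'Debug',

    # The pipeline passes $(Build.SourcesDirectory); locally we use the current checkout.
    [string]$SourcesDirectory = (Get-Location).Path
)

Write-Host "Building front-end ($Configuration) from $SourcesDirectory"
Push-Location (Join-Path $SourcesDirectory 'FrontEnd')
try {
    npm ci
    npm run build
}
finally {
    Pop-Location
}

In the pipeline the step is then just a PowerShell task that calls ./run.ps1 with the pipeline variables; locally you call ./run.ps1 with no arguments.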
Such a tool does not exist at the moment - there are a couple of existing issues in their feedback channels:
Github Issues - How to test YAML locally before commit
Developer Community - How to test YAML locally before commit
As a workaround you can install the Azure DevOps build agent on your own machine, register it in its own build pool, and use it for building and validating the correctness of the YAML file. See Jamie's answer in this thread.
Of course this would mean that you will need to constantly switch between the official build agents and your own build pool, which is not good. Also, if someone accidentally pushes some change via your own machine, you can suffer from all kinds of problems which can occur on a normal build machine (like UI prompts, running hostile code on your own machine, and so on - hostile code could even be an unintended virus infection because of third-party executable execution).
There are two approaches which you can take:
Use Cake (Frosting) to perform the build locally as well as on Azure DevOps.
Use PowerShell to perform the build locally as well as on Azure DevOps.
Generally, 1 versus 2: 1 has more mechanics built in, like publishing on Azure DevOps (it also supports other build system providers, like GitHub Actions, and so on).
(I myself would propose using the 1st alternative.)
As for 1:
Read, for example, the following links to get a slightly better understanding:
https://blog.infernored.com/cake-frosting-even-better-c-devops/
https://cakebuild.net/
Search for existing projects using "Cake.Frosting" on GitHub to get some understanding of how those projects work.
As for 2: it's possible to use PowerShell syntax to maximize the functionality done on the build script side and minimize the functionality done in Azure DevOps.
parameters:
  - name: publish
    type: boolean
    default: true
  - name: noincremental
    type: boolean
    default: false

...

- task: PowerShell@2
  displayName: invoke build
  inputs:
    targetType: 'inline'
    script: |
      # Mimic build machine
      #$env:USERNAME = 'builder'

      # Backup this script if need to troubleshoot it later on
      $scriptDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
      $scriptPath = [System.IO.Path]::Combine($scriptDir, $MyInvocation.MyCommand.Name)
      $tempFile = [System.IO.Path]::Combine([System.Environment]::CurrentDirectory, 'lastRun.ps1')
      if($scriptPath -ne $tempFile)
      {
          Copy-Item $scriptPath -Destination $tempFile
      }

      ./build.ps1 'build;pack' -nuget_servers @{
          'servername' = @{
              'url' = "https://..."
              'pat' = '$(System.AccessToken)'
          }
          'servername2' = @{
              'url' = 'https://...'
              'publish_key' = '$(ServerSecretPublishKey)'
          }
      } `
      -b $(Build.SourceBranchName) `
      -addoperations 'publish=${{parameters.publish}};noincremental=${{parameters.noincremental}}'
And in build.ps1 you then handle all parameters as necessary.
param (
    # Can add operations using a simple command line like this:
    #   build a -addoperations c=true,d=true,e=false -v
    # =>
    #   a c d
    #
    [string] $addoperations = ''
)

...

foreach ($operationToAdd in $addoperations.Split(";,"))
{
    if($operationToAdd.Length -eq 0)
    {
        continue
    }

    $keyValue = $operationToAdd.Split("=")
    if($keyValue.Length -ne 2)
    {
        "Ignoring command line parameter '$operationToAdd'"
        continue
    }

    if([System.Convert]::ToBoolean($keyValue[1]))
    {
        $operationsToPerform = $operationsToPerform + $keyValue[0];
    }
}
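Locally, roughly the same call the pipeline makes can then be reproduced from a shell (server name, token and branch are placeholders):

./build.ps1 'build;pack' -nuget_servers @{
    'servername' = @{
        'url' = 'https://...'
        'pat' = '<personal access token>'
    }
} -b master -addoperations 'publish=false;noincremental=true'

Note that the operations string is quoted so that the ; is not treated as a PowerShell statement separator.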
This allows you to run all the same operations locally on your own machine and minimizes the amount of YAML file content.
Please notice that I have also added copying of the last executed .ps1 script to a lastRun.ps1 file.
You can use it after a build if you see some non-reproducible problem but want to run the same command on your own machine to test it.
You can use the ` character to continue ps1 execution on the next line, or in case it's already a complex structure (e.g. @{) it can be continued as it is.
But even though the YAML syntax is minimized, it still needs to be tested if you want different build phases and multiple build machines in use. One approach is to have a special kind of argument, -noop, which does not perform any operation but only prints what was intended to be executed. This way you can run your pipeline in no time and check that everything that was planned to be executed will get executed, as in the sketch below.
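A minimal sketch of such a -noop switch, assuming a one-script-per-operation layout (all names here are illustrative):

param (
    [string] $addoperations = '',
    # Dry run: only print what would be executed.
    [switch] $noop
)

# Normally this list is built up from $addoperations as shown above.
$operationsToPerform = @('build', 'pack')

foreach ($operation in $operationsToPerform)
{
    if ($noop)
    {
        "NOOP: would execute operation '$operation'"
        continue
    }

    # Assumed layout: one script per operation.
    & ".\operations\$operation.ps1"
}

Running ./build.ps1 -noop then finishes in no time and shows whether the planned operations are wired up correctly.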
I'm trying to write a file in my GitHub repo with GitHub Actions. When reading the docs, I stumbled across this:
Actions can communicate with the runner machine to set environment variables, output values used by other actions, add debug messages to the output logs, and other tasks.
Most workflow commands use the echo command in a specific format, while others are invoked by writing to a file. For more information, see "Environment files".
echo "::workflow-command parameter1={data},parameter2={data}::{command value}"
I don't know Ansible so I don't understand if this is YAML syntax or Ansible syntax.
I've tried searching Google and Stack Overflow but found no results for "double colon" or ::
Can someone give me the link to the appropriate doc for :: or explain what this command does?
In other words, what does the example in my post output in the shell? Where are data, parameter1 and parameter2 defined, if they are (in the yml, in the shell/env)? Is command value a value I can reuse in the yml or in the shell?
These ::command strings can be logged to the console by any script or executable. They are special strings the GitHub runner will detect, interpret and then take the appropriate action on.
They are essentially the communication mechanism between the runner and the thing it's currently running. Anything that can write to the console can issue these strings.
It's totally up to you to build these strings, and to inject any parameters these 'magic strings' require to function.
The docs you've found are the right docs to understand how to log these strings and what commands are available to you.
If you're building a GitHub Action using the JavaScript/TypeScript toolkit, then it provides nice wrapper functions for these commands. The JavaScript SDK also gives you a sneak peek into how to compose these strings.
If you're building a composite action, container task or are directly issuing commands from a script block in the workflow, then it's up to you to build the correct strings and log these to the console.
More details:
https://github.com/actions/toolkit/blob/main/packages/core/README.md
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions (you had found that already)
Communicating through the console is the lowest common denominator between any tools running on just about any platform and requires no interprocess communication of any kind. It's the simplest way to communicate from a child process to its parent.
You'd use the command to set an output variable.
echo "::set-output name=name::value"
You can then reference the value across steps the same way you'd reference any output variable from any action.
Or set an environment variable, which will be available to subsequent steps: echo "action_state=yellow" >> $GITHUB_ENV
See: https://stackoverflow.com/a/57989070/736079
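For example, from a PowerShell step (shell: pwsh) on a GitHub runner the same mechanisms look roughly like this; the names and values are made up, and note that set-output has since been superseded by the $GITHUB_OUTPUT file:

# Anything written to stdout in the "::" format is picked up by the runner.
Write-Host "::warning file=app.ps1,line=1::This shows up as a warning annotation"
Write-Host "::set-output name=build_number::42"    # classic workflow command form

# The file-based alternatives append key=value lines to runner-provided files.
"action_state=yellow" | Out-File -FilePath $env:GITHUB_ENV -Append      # env var for later steps
"build_number=42"     | Out-File -FilePath $env:GITHUB_OUTPUT -Append   # step output (newer mechanism)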
Currently I have a large Bash script in my GitLab CI YAML file. The example below shows how I am grouping my functions so that I can use them during my CI process.
.test-functions: &test-functions |
function write_one() { echo "Function 1" }
function write_two() { echo "Function 2"}
function write_three() { echo "Function 3"}
.plugin-nuget:
  tags:
    - test-es
    - kubernetes
  image: mygitlab-url.com:4567/containers/dotnet-sdk-2.2:latest
  script:
    - *test-functions
    - write_one
    - write_two
    - write_three
The example below shows how we can include a YAML file inside another one:
include:
  - project: MyGroup/GitlabCiPlugins/Dotnet
    file: plugin-test.yml
    ref: JayCiTest
I would like to do the same thing with my script. Instead of having the script in the same file as my YAML, I would like to include the file, so that my YAML has access to my script's functions. I would also like to use PowerShell instead of Bash if possible.
How can I do this?
Split shell scripts and GitLab CI syntax
GitLab CI has no "include file content into a script block" feature.
The GitLab CI include feature doesn't import YAML anchors.
GitLab CI can't concatenate arrays, so you can't write before_script in one .gitlab-ci.yml file and then extend it in before_script in another. You can only overwrite it, not concatenate.
Because of all of these problems you can't easily manage your scripts: split them, organize them, and do all the other nice developer decomposition stuff.
There are possible workarounds. You can store your scripts somewhere else where a GitLab runner can reach them, and then inject them into your current job environment via source /ci-scripts/my-script.sh in the before_script block.
Possible locations for storing ci scripts:
Special docker image with all your build/test/deploy utils and ci scripts
The same, but dedicated build server
You can deploy a simple web page containing your scripts and download and import them in before_script. Just in case, make sure nobody except the GitLab runner can access it.
Using PowerShell
You can use PowerShell only if you installed your GitLab Runner on Windows. You can't use anything else in that case.
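On a Windows runner the "shared functions" idea from the question could then be expressed as a PowerShell file that each job dot-sources; the path and function names below are hypothetical:

# ci-scripts/functions.ps1 - shared helpers kept in the repository (or anywhere the runner can reach).
function Write-One   { Write-Output 'Function 1' }
function Write-Two   { Write-Output 'Function 2' }
function Write-Three { Write-Output 'Function 3' }

A job's script block would then dot-source the file before calling the functions, e.g. . .\ci-scripts\functions.ps1 followed by Write-One.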
How can I run a powershell script after all stages have completed deployment? I have currently selected a deployment group job but am not 100% sure if this is what I need. I have included the script as part of the solution that is being deployed so that it will be available on all machines. Based on what I can find in the UI there seem to be 2 tasks that could work.
The first option would be to execute the task "Powershell Script" but it is asking for a path in the drop directory. The problem with this is that the file that I am interested in is in a zip file and there does not seem to be a way to specify a file in the zip file.
The other task I see is "PowerShell on Target Machines" and then it asks for a list of target machines. I am not sure what needs to be entered here as I want to run the powershell script on the current machine in the deploy group. It seems like this task was intended to run powershell scripts from the deployment machine to another remote machine. As a result this option does not seem like it fits my use case.
The answers that I have come across talk about how to do this as part of an Azure site using something called "Kudu" (not relevant), don't answer my other questions related to these tasks, or seem out of date.
A deployment group job will run on all of the servers specified in that deployment group. Based on what you have indicated, it sounds like that is what you are looking for.
Since you indicated that the file in question is a zip, you are actually going to need to use 2 separate tasks.
Extract Files - use this to extract the zip file so that you can execute the script
PowerShell Script - use this to execute the script. You can set the working directory for the script to execute in if necessary (under Advanced options). Also remember that you don't have to use the file/folder selector 'helper', as it won't work in your case if the file is inside a zip. It is just used to populate the text box, which you can do manually, starting with the $(System.DefaultWorkingDirectory) variable and adding the necessary path of the script, as in the example below.
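For example (the build, drop and script names here are hypothetical), after the Extract Files task the script path box could end up containing something like:

$(System.DefaultWorkingDirectory)/MyRelease/drop/extracted/scripts/Post-Deploy.ps1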
I am working in a quite complex code base of PowerShell build scripts with a lot of dependencies on other PS scripts. Everything is dot-sourced; there are no modules.
As we are refactoring the code into functions, a lot of issues are creeping up, mainly a liberal use of Write-Output for logging.
I try to enforce using Write-Verbose for logging, because the scripts will be deployed in release manager.
For some reason, as a build is executing, I don't see the verbose information. It is only shown afterwards when I inspect a specific step.
Write-Verbose usually outputs "Verbose:...." but in release manager I get "##[debug]Verbose" instead.
Is there a way to hide the [debug]Verbose prefix? Is there a better way to output logging info that would be shown in release manager?
This may be due to having enabled verbose output in the Team Foundation release logs.
Navigate to the Variables tab and check if there is a variable named system.debug with its value set to true. If so, you will get a log with the ##[debug] prefix, as in the screenshot below:
Set the value to false or simply delete the variable.
Regarding the Verbose prefix, it was a bug in our code. I have found that Write-Host output is shown in the web portal release log with TFS 2017. This was not the case in release manager 2015. Now we can use Write-Host to output info to the users.
I have a build configuration in TFS 2013 that produces versioned build artifacts. This build uses the out-of-the-box process template workflow. I want to destroy the build artifacts in the event that unit tests fail, leaving only the log files. I have a post-test PowerShell script. How do I detect the test failure in this script?
Here is the relevant cleanup method in my post-test script:
function Clean-Files($dir){
    if (Test-Path -path $dir) { rmdir $dir\* -recurse -force -exclude logs,"$NewVersion" }
    if(0 -eq 1) { rmdir $dir\* -recurse -force -exclude logs }
}
Clean-Files "$Env:TF_BUILD_BINARIESDIRECTORY\"
How do I test for test success in the function?
(Updated based on more information)
The way to do this is to use environment variables and read them in your PowerShell script. Unfortunately the PowerShell scripts are run in a new process each time, so you can't rely on the environment variables being populated.
That said, there is a workaround so you can still get those values. It involves calling a small utility at the start of your powershell script as described in this blog post: http://blogs.msmvps.com/vstsblog/2014/05/20/getting-the-compile-and-test-status-as-environment-variables-when-extending-tf-build-using-scripts/
This isn't a direct answer, but... We just set the retention policy to only keep x number of builds. If tests fail, the artifacts aren't pushed out to the next step.
With our Jenkins setup, it wipes the artifacts every new build anyway, so that isn't a problem. Only the passing builds fire the step to push the artifacts to the Octopus NuGet server.
The simplest possible way (without customizing the build template, etc.) is do something like this in your post-test script:
$testRunSucceeded = (sqlcmd -S .\sqlexpress -U sqlloginname -P passw0rd -d Tfs_DefaultCollection -Q "select State from tbl_TestRun where BuildNumber='$Env:TF_BUILD_BUILDURI'" -h-1)[0].Trim() -eq "3"
Let's pull this apart:
sqlcmd.exe is required; it's installed with SQL Server and is in the path by default. If you're doing builds on a machine without SQL Server, install the Command Line Utilities for SQL Server.
-S parameter is server + instance name of your TFS server, e.g. "sqlexpress" instance on the local machine
Either use a SQL login name/password combo like my example, or give the TFS build account an account on SQL Server (preferred). Grant the account read-only access to the TFS instance database.
The TFS instance database is named something like "Tfs_DefaultCollection".
The "-h-1" part at the end of the sqlcmd statement tells sqlcmd to output the results of the query without headers; the [0] selects the first result; Trim() is required to remove leading spaces; State of "3" indicates all tests passed.
Maybe someday Microsoft will publish a nice REST API that will offer access to test run/result info. Don't hold your breath though -- I've been waiting six years so far. In the meantime, hitting up the TFS DB directly is a safe and reliable way to do it.
Hope this is of some use.