PowerShell Release Manager Output

I am working in a fairly complex code base of PowerShell build scripts
with many dependencies on other PS scripts. Everything is dot-sourced; there are no modules.
As we refactor the code into functions, a lot of issues are creeping up, mainly the liberal use of Write-Output for logging.
I am trying to enforce the use of Write-Verbose for logging, because the scripts will be deployed in Release Manager.
For some reason, as a build is executing, I don't see the verbose information. It is only shown afterwards when I inspect a specific step.
Write-Verbose usually outputs "Verbose:...." but in Release Manager I get "##[debug]Verbose" instead.
Is there a way to hide the [debug]Verbose prefix? Is there a better way to output logging info so that it is shown in Release Manager?

This may be because you have enabled verbose output in the Team Foundation Release logs.
Navigate to the Variables tab and check whether there is a variable named system.debug with its value set to true. If so, you will get a log with the ##[debug] prefix, as in the screenshot below:
Set the value to false or simply delete the variable.

Regarding the Verbose prefix, that turned out to be a bug in our code. I have also found that Write-Host output is shown in the web portal release log with TFS 2017; this was not the case with Release Manager 2015. We now use Write-Host to output information to the users.
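If you want a single place to decide which stream a message goes to, here is a minimal sketch of a logging helper for scripts that run both locally and in Release Manager. The function name Write-BuildLog and the -AsVerbose switch are illustrative, not part of the original code base:

function Write-BuildLog {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)] [string] $Message,
        [switch] $AsVerbose   # use the verbose stream instead of the host
    )

    if ($AsVerbose) {
        # Only visible when -Verbose / $VerbosePreference is set
        # (or when system.debug=true in the release definition).
        Write-Verbose $Message
    }
    else {
        # Write-Host output shows up in the TFS 2017 web portal release log.
        Write-Host $Message
    }
}

# Usage:
Write-BuildLog "Starting deployment step"                  # always visible in the log
Write-BuildLog "Detailed diagnostics" -AsVerbose -Verbose  # only when verbose is requested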


Is there a tool to validate an Azure DevOps Pipeline locally? [closed]

When making changes to YAML-defined Azure DevOps Pipelines, it can be quite tedious to push changes to a branch just to see the build fail with a parsing error (valid YAML, but invalid pipeline definition) and then fix the problem by trial and error.
It would be nice if the feedback loop could be made shorter by analyzing and validating the pipeline definition locally; basically a linter with knowledge about the various resources etc. that can be defined in an Azure pipeline. However, I haven't been able to find any tool that does this.
Is there such a tool somewhere?
UPDATE: This functionality was removed in Issue #2479 in October 2019.
You can run the Azure DevOps agent locally with its YAML testing feature.
See the microsoft/azure-pipelines-agent project for how to install an agent on your local machine.
Then use the docs page on "Run local (internal only)" to access the feature that is available within the agent.
This should get you very close to the type of feedback you would expect.
FYI - this feature has been removed in Issue #2479 - remove references to "local run" feature.
Hopefully they'll bring it back later, considering GitHub Actions has the ability to run actions locally.
Azure DevOps provides a run preview API endpoint that takes a YAML override and returns the expanded YAML. I added support for it to the AzurePipelinesPS PowerShell module. The command below queues a dry run of the pipeline with the id of 01, using my YAML override, and returns the expanded YAML pipeline.
This is the Preview endpoint of the Pipelines service (API version 6.1-preview.1): it queues a dry run of the pipeline and returns an object containing the final YAML.
# AzurePipelinesPS session
$session = 'myAPSessionName'

# Path to my local yaml
$path = ".\extension.yml"

# The id of an existing pipeline in my project
$id = 01

# The master branch of my repository
$resources = @{
    repositories = @{
        self = @{
            refName = 'refs/heads/master'
        }
    }
}

Test-APPipelineYaml -Session $session -FullName $path -PipelineId $id -Resources $resources
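If you prefer not to take a module dependency, you can call the preview endpoint yourself with Invoke-RestMethod. This is a rough sketch only; the organization/project names and $pat are placeholders, and the URL and body property names are assumed from the 6.1-preview.1 Pipelines documentation, so verify them against the current REST API reference:

# Rough sketch of calling the preview endpoint directly (no module dependency).
$org        = "myorg"                    # placeholder
$project    = "myproject"                # placeholder
$pipelineId = 1
$pat        = "<personal access token>"  # placeholder

$headers = @{
    Authorization = "Basic " + [Convert]::ToBase64String(
        [Text.Encoding]::ASCII.GetBytes(":$pat"))
}

# previewRun/yamlOverride are the documented-looking property names; verify against the docs.
$body = @{
    previewRun   = $true
    yamlOverride = (Get-Content -Raw ".\extension.yml")
} | ConvertTo-Json -Depth 5

$uri = "https://dev.azure.com/$org/$project/_apis/pipelines/$pipelineId/preview?api-version=6.1-preview.1"

$result = Invoke-RestMethod -Uri $uri -Method Post -Headers $headers `
    -ContentType "application/json" -Body $body

# finalYaml should hold the fully expanded pipeline definition
$result.finalYaml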
A pipeline is described with YAML, and YAML can be validated if you have a schema with rules for how that YAML file should be composed. This gives the short feedback loop you described, especially for syntax parsing errors. YAML schema validation is available for almost any IDE. So, we need:
A YAML schema - against which we will validate our pipelines
An IDE (VS Code as a popular example) - which will perform validation on the fly
To configure the two of the above to work together for the greater good
The schema can be found in many places; for this case, I suggest using https://www.schemastore.org/json/
It has an Azure Pipelines schema (this schema contains some issues, like value types that differ from the Microsoft documentation, but it still covers the case of invalid syntax).
VS Code requires an additional plug-in to perform YAML text validation; there are a bunch of those that can validate against a schema. I suggest trying YAML from Red Hat (I know the plugin's rating is not the best, but it works for validation and is also configurable).
In the settings of that VS Code plugin, you will see a section about validation (as in the screenshot).
Now you can add the required schema to the settings, even without downloading it to your machine:
"yaml.schemas": {
"https://raw.githubusercontent.com/microsoft/azure-pipelines-vscode/v1.174.2/service-schema.json" : "/*"
}
Simply save settings and restart your VS Code.
You will notice warnings about issues in your Azure DevOps pipeline YAML files (if there are any). Validation was made to fail on purpose in the screenshot below:
See more details with examples here as well
I can tell you how we manage this disconnect.
We use only pipeline-as-code, yaml.
We use ZERO YAML templates and strictly enforce one file per pipeline.
We use the Azure Pipelines YAML extension for VS Code to get linter-like behaviour in the editor.
Most of the actual work in the pipelines is done by invoking PowerShell that, via sensible defaulting, can also be invoked from the CLI, meaning we can in essence execute anything relevant locally.
Exceptions are configuration of the agent and actual pipeline-only stuff, such as download-artifact and publish tasks.
Let me give some examples:
Here we have the step that builds our FrontEnd components:
Here we have that step running in the CLI:
I won't post a screenshot of the actual pipeline run, because it would take too long to sanitize it, but it is basically the same, plus some more trace information provided by the run.ps1 call-wrapper.
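Since the screenshots are not reproduced here, the following is a minimal sketch of the pattern being described: a build script with sensible defaults that the pipeline invokes with explicit arguments and that a developer can run locally with none. The file name build-frontend.ps1 and its parameters are illustrative, not the poster's actual files:

# build-frontend.ps1 - illustrative sketch of the "sensible defaults" pattern.
param(
    # The pipeline passes an explicit path (e.g. $(Build.ArtifactStagingDirectory));
    # locally, the default next to the script is used.
    [string] $OutputDir     = (Join-Path $PSScriptRoot 'dist'),
    [string] $Configuration = 'Debug'
)

Write-Host "Building front end ($Configuration) into $OutputDir"
New-Item -ItemType Directory -Force -Path $OutputDir | Out-Null

# The real work (npm, webpack, dotnet, ...) goes here; the exact command
# depends on the project and is deliberately omitted in this sketch.
# npm run build

In the pipeline this script is invoked from a PowerShell task with explicit arguments; locally a developer simply runs .\build-frontend.ps1 from the repository root and gets the same behaviour.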
Such a tool does not exist at the moment - there are a couple of existing issues in their feedback channels:
Github Issues - How to test YAML locally before commit
Developer Community - How to test YAML locally before commit
As a workaround, you can install the Azure DevOps build agent on your own machine, register it in its own build pool, and use it for building and validating YAML file correctness. See Jamie's answer in this thread.
Of course this means you will need to constantly switch between the official build agents and your own build pool, which is not ideal. Also, if someone accidentally pushes a change that runs on your own machine, you can suffer from all kinds of problems that can occur on a normal build machine (UI prompts, running hostile code on your own machine, and so on - hostile code could even be an unintended virus infection via third-party executable execution).
There are two approaches which you can take:
Use Cake (Frosting) to perform the build locally as well as on Azure DevOps.
Use PowerShell to perform the build locally as well as on Azure DevOps.
Generally, comparing 1 versus 2: 1 has more mechanics built in, like publishing on Azure DevOps (it also supports other build system providers, like GitHub Actions, and so on).
(I myself would propose the first alternative.)
As for 1:
Read, for example, the following links to get a slightly better understanding:
https://blog.infernored.com/cake-frosting-even-better-c-devops/
https://cakebuild.net/
Search for existing projects using "Cake.Frosting" on GitHub to get some understanding of how those projects work.
As for 2: it's possible to use PowerShell to maximize the functionality done on the build-script side and minimize the functionality done in Azure DevOps.
parameters:
- name: publish
  type: boolean
  default: true
- name: noincremental
  type: boolean
  default: false
...
- task: PowerShell@2
  displayName: invoke build
  inputs:
    targetType: 'inline'
    script: |
      # Mimic build machine
      #$env:USERNAME = 'builder'

      # Backup this script if you need to troubleshoot it later on
      $scriptDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
      $scriptPath = [System.IO.Path]::Combine($scriptDir, $MyInvocation.MyCommand.Name)
      $tempFile = [System.IO.Path]::Combine([System.Environment]::CurrentDirectory, 'lastRun.ps1')
      if($scriptPath -ne $tempFile)
      {
          Copy-Item $scriptPath -Destination $tempFile
      }

      ./build.ps1 'build;pack' -nuget_servers @{
          'servername' = @{
              'url' = "https://..."
              'pat' = '$(System.AccessToken)'
          }
          'servername2' = @{
              'url' = 'https://...'
              'publish_key' = '$(ServerSecretPublishKey)'
          }
      } `
      -b $(Build.SourceBranchName) `
      -addoperations "publish=${{parameters.publish}};noincremental=${{parameters.noincremental}}"
And in build.ps1 you then handle all parameters as necessary.
param (
    # Can add operations using a simple command line like this:
    # build a -addoperations c=true,d=true,e=false -v
    # =>
    # a c d
    #
    [string] $addoperations = ''
)
...
foreach ($operationToAdd in $addoperations.Split(";,"))
{
    if($operationToAdd.Length -eq 0)
    {
        continue
    }

    $keyValue = $operationToAdd.Split("=")
    if($keyValue.Length -ne 2)
    {
        "Ignoring command line parameter '$operationToAdd'"
        continue
    }

    if([System.Convert]::ToBoolean($keyValue[1]))
    {
        $operationsToPerform = $operationsToPerform + $keyValue[0];
    }
}
This allows you to run all the same operations locally on your own machine and minimizes the amount of YAML file content.
Please notice that I have also added copying of the last executed .ps1 script to a lastRun.ps1 file.
You can use it after the build if you see some non-reproducible problem but want to run the same command on your own machine to test it.
You can use the ` character to continue PowerShell execution on the next line, or if the line is already inside a complex structure (e.g. @{), it can simply be continued as it is.
But even though the YAML syntax is minimized, it still needs to be tested if you want different build phases and multiple build machines in use. One approach is to have a special kind of argument, -noop, which does not perform any operation but only prints what was intended to be executed. This way you can run your pipeline in no time and check that everything that was planned to be executed will actually get executed; a sketch of that idea follows below.
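Here is a minimal sketch of the -noop idea for a build.ps1 that dispatches operations. The helper Invoke-Operation and the operation names are illustrative only, not the answer's actual script:

# Minimal sketch of a -noop (dry run) switch for build.ps1.
param(
    [string] $operations = 'build;pack',
    [switch] $noop                       # print the plan instead of executing it
)

function Invoke-Operation([string] $name, [switch] $DryRun) {
    if ($DryRun) {
        Write-Host "[noop] would execute operation: $name"
        return
    }
    switch ($name) {
        'build' { Write-Host 'building...'  <# real build call goes here #> }
        'pack'  { Write-Host 'packing...'   <# real pack call goes here  #> }
        default { Write-Warning "Unknown operation '$name'" }
    }
}

foreach ($op in $operations.Split(";,")) {
    if ($op.Length -eq 0) { continue }
    Invoke-Operation $op -DryRun:$noop
}

# Usage:
#   .\build.ps1 -noop    # show what the pipeline would do
#   .\build.ps1          # actually run build and pack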

Run Publish-AzureRmVMDscConfiguration without logging in

How can I pack a configuration archive without logging in? I want to run this command in an automation context:
Publish-AzureRmVMDscConfiguration -Force -ConfigurationPath .\AppServer.ps1 -ConfigurationArchivePath .\AppServer.ps1.zip
There are no calls out to the network, I'm not performing any subscription operations that require authorization, I just want it to make me a nice zip file with the cmdlet. However, when I try, it fails:
Publish-AzureRmVMDscConfiguration : Run Login-AzureRmAccount to login.
This seems silly to me but I went ahead and checked Get-Help. I can't find anything that looks helpful though. How can I pack up my DSC configuration without having to run Login-AzureRmAccount?
I suspect this is a "design limitation" - you might start by filing an issue here: https://github.com/Azure/azure-powershell
As a workaround, depending on the "sophistication" of your DSC config, you can just create a zip file yourself. If you have additional/non-default modules or resources, you'd need to put those in the archive as well (which is what the cmdlet helps with)...
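A rough sketch of doing it by hand with Compress-Archive, assuming the configuration is a single script plus any required DSC resource modules; the staging folder name and the xWebAdministration module are examples, and the layout (script at the archive root, modules alongside it) should be checked against an archive the cmdlet actually produced:

# Rough sketch of building the DSC configuration archive without the cmdlet.
$staging = Join-Path $env:TEMP 'AppServerDscStaging'
Remove-Item $staging -Recurse -Force -ErrorAction SilentlyContinue
New-Item -ItemType Directory -Path $staging | Out-Null

# The configuration script itself
Copy-Item .\AppServer.ps1 -Destination $staging

# Any additional/non-default DSC resource modules the configuration imports
# (xWebAdministration is only an example; copy whatever your config needs).
Copy-Item "$env:ProgramFiles\WindowsPowerShell\Modules\xWebAdministration" `
    -Destination $staging -Recurse -ErrorAction SilentlyContinue

# Produce the .zip next to the script
Compress-Archive -Path "$staging\*" -DestinationPath .\AppServer.ps1.zip -Force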

TFS2015 Release Management Powershell DSC Variable Use

I am using TFS2015 Release Management and Powershell DSC to manage the deployment of applications - previously I was using RM2013.
One thing I have noticed is that in RM2013, in my Powershell DSC scripts I was able to access variables such as $applicationPath - which was populated with the TFS Build Drop location, for use in the DSC scripts and MOF creation.
In RM2015 this doesn't appear to work. I have tried using the variables listed here: https://msdn.microsoft.com/en-us/Library/vs/alm/Build/scripts/variables
However, none of these ever seems to be populated.
Is there actually a way of using these RM2015 system & build variables from within a PS DSC script now?
Try using the corresponding generated environment variables (for example $env:Build.DefinitionName).
If that does not work, try other forms such as $env:Build_DefinitionName, or $(Build.BuildNumber) and $(Build_BuildNumber).
Just as the relevant documentation mentions:
Any text input can reference a variable by using the $(variable_name) syntax and will be substituted with the actual value at run-time. All variables are also exported to the environment as uppercase and any . are replaced with _. In scripts you can reference variables via the environment, i.e. %VARIABLE_NAME%, $VARIABLE_NAME, $env:VARIABLE_NAME, depending on the operating system.
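In other words, inside a PowerShell or DSC script you read the environment form of the variable. A minimal sketch follows; APPLICATION_PATH and AppServerConfig are placeholders for whatever your release definition and configuration actually use:

# Minimal sketch: build/release variables are exported to the environment
# with dots replaced by underscores, so read them via $env: in scripts.
$definitionName = $env:BUILD_DEFINITIONNAME     # Build.DefinitionName
$buildNumber    = $env:BUILD_BUILDNUMBER        # Build.BuildNumber

Write-Verbose "Deploying $definitionName build $buildNumber" -Verbose

# Pass what you need into the DSC configuration explicitly rather than
# relying on an RM2013-style $applicationPath being injected for you.
$configData = @{
    AllNodes = @(
        @{
            NodeName        = 'localhost'
            # Replace with the variable that actually holds your drop path
            # in your release definition (it varies between RM versions).
            ApplicationPath = $env:APPLICATION_PATH
        }
    )
}

# AppServerConfig is a placeholder for your own DSC configuration:
# AppServerConfig -ConfigurationData $configData -OutputPath .\Mof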

Where the TeamCity service messages should be written?

I'm a beginner with TeamCity, so forgive my dumb question.
For some reason the coverage reporting for my solution is not working. So, to run the tests I run nunit-console in a command line step and then use the XML output file in a build feature of type [XML report processing]. Test results appear in the TeamCity GUI, but no coverage statistics.
It seems there is a way to configure the coverage reporting manually https://confluence.jetbrains.com/display/TCD8/Manually+Configuring+Reporting+Coverage but I don't know where to put these service messages:
teamcity[dotNetCoverage <key>='<value>' ...]
Just write them to standard output. It is captured by TeamCity and service messages from it will be processed.
Pay attention, however, to the syntax: a service message should begin with ##.
As Oleg already stated, you can write them to standard output:
Console.WriteLine(...) from C#
echo from the command prompt or PowerShell,
...
Here is an example http://log.ld.si/2014/10/20/build-log-in-teamcity-using-psake
There is a psake helper module, https://github.com/psake/psake-contrib/wiki/teamcity.psm1, and the source is available at https://github.com/psake/psake-contrib/blob/master/teamcity.psm1 (you can freely use this from PowerShell as well).
It already implements a lot of service messages.
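For example, from a PowerShell build step you can emit the messages directly to standard output. A small sketch; the statistic key CodeCoverageL and the numbers are illustrative, so check the TeamCity service-message documentation for the exact keys your report needs:

# Sketch: emitting TeamCity service messages from a PowerShell build step.
# Anything written to standard output that starts with ##teamcity[...] is
# picked up by the agent. The key/values below are illustrative only.
Write-Output "##teamcity[message text='Coverage post-processing started']"

# Report a coverage statistic manually (key name assumed; verify in the docs).
$coveredLines = 845
$totalLines   = 1000
$percent      = [math]::Round(100 * $coveredLines / $totalLines, 2)
Write-Output "##teamcity[buildStatisticValue key='CodeCoverageL' value='$percent']"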

How do I detect if the test run was successful in a Team Build 2013 Post-Test script?

I have a build configuration in TFS 2013 that produces versioned build artifacts. This build uses the out-of-the-box process template workflow. I want to destroy the build artifacts in the event that unit tests fail, leaving only the log files. I have a Post-Test PowerShell script. How do I detect the test failure in this script?
Here is the relevant cleanup method in my post-test script:
function Clean-Files($dir){
    if (Test-Path -path $dir) { rmdir $dir\* -recurse -force -exclude logs,"$NewVersion" }
    # placeholder condition - this is where the "did the tests fail?" check needs to go
    if(0 -eq 1) { rmdir $dir\* -recurse -force -exclude logs }
}
Clean-Files "$Env:TF_BUILD_BINARIESDIRECTORY\"
How do I test for test success in the function?
(Updated based on more information)
The way to do this is to use environment variables and read them in your PowerShell script. Unfortunately the PowerShell scripts are run in a new process each time, so you can't rely on the environment variables being populated.
That said, there is a workaround so you can still get those values. It involves calling a small utility at the start of your powershell script as described in this blog post: http://blogs.msmvps.com/vstsblog/2014/05/20/getting-the-compile-and-test-status-as-environment-variables-when-extending-tf-build-using-scripts/
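Once the utility from that post has populated the status, the cleanup function can branch on it. A sketch only: TF_BUILD_TESTSTATUS is a hypothetical variable name standing in for whatever the helper utility actually exports, so substitute the real name from the blog post:

function Clean-Files($dir){
    if (Test-Path -path $dir) { rmdir $dir\* -recurse -force -exclude logs,"$NewVersion" }

    # Wipe everything except the logs when the tests did not succeed.
    # TF_BUILD_TESTSTATUS is a hypothetical name - use the variable the utility sets.
    if ($Env:TF_BUILD_TESTSTATUS -ne 'Succeeded') {
        rmdir $dir\* -recurse -force -exclude logs
    }
}

Clean-Files "$Env:TF_BUILD_BINARIESDIRECTORY\"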
This isn't a direct answer, but... We just set the retention policy to only keep x number of builds. If tests fail, the artifacts aren't pushed out to the next step.
With our Jenkins setup, it wipes the artifacts every new build anyway, so that isn't a problem. Only the passing builds fire the step to push the artifacts to the Octopus NuGet server.
The simplest possible way (without customizing the build template, etc.) is do something like this in your post-test script:
$testRunSucceeded = (sqlcmd -S .\sqlexpress -U sqlloginname -P passw0rd -d Tfs_DefaultCollection -Q "select State from tbl_TestRun where BuildNumber='$Env:TF_BUILD_BUILDURI'" -h-1)[0].Trim() -eq "3"
Let's pull this apart:
sqlcmd.exe is required; it's installed with SQL Server and is in the path by default. If you're doing builds on a machine without SQL Server, install the Command Line Utilities for SQL Server.
The -S parameter is the server + instance name of your TFS server, e.g. the "sqlexpress" instance on the local machine.
Either use a SQL login name/password combo like my example, or give the TFS build account an account on SQL Server (preferred). Grant the account read-only access to the TFS instance database.
The TFS instance database is named something like "Tfs_DefaultCollection".
The "-h-1" part at the end of the sqlcmd statement tells sqlcmd to output the results of the query without headers; the [0] selects the first result; Trim() is required to remove leading spaces; State of "3" indicates all tests passed.
Maybe someday Microsoft will publish a nice REST API that will offer access to test run/result info. Don't hold your breath though -- I've been waiting six years so far. In the meantime, hitting up the TFS DB directly is a safe and reliable way to do it.
Hope this is of some use.