Parent PowerShell script doesn't print messages from child script in Azure Pipeline

We have a PowerShell script (A) which executes another PowerShell script (B) as another user (a domain user). The child script (B) runs a range of SQL queries against databases on different servers in the domain. While the child script (B) is executing, it prints several status messages using Write-Host, which are captured by the parent script (A) and printed to the console. This works perfectly when I execute the parent script (A) manually from the PowerShell prompt on my development machine (in the same domain), and I get all the output from the child script (B) as well.
Below is the part of the code in the parent script (A) that executes the child script (B).
try {
    $ProcessInfo = New-Object System.Diagnostics.ProcessStartInfo
    $ProcessInfo.FileName = "powershell.exe"
    $ProcessInfo.Domain = "jmres"
    $ProcessInfo.UserName = $username
    $ProcessInfo.Password = ConvertTo-SecureString $password -AsPlainText -Force
    $ProcessInfo.RedirectStandardError = $true
    $ProcessInfo.RedirectStandardOutput = $true
    $ProcessInfo.UseShellExecute = $false
    $ProcessInfo.Arguments = $file, $fileArguments
    $Process = New-Object System.Diagnostics.Process
    $Process.StartInfo = $ProcessInfo
    $Process.Start() | Out-Null
    $stdOutput = $Process.StandardOutput.ReadToEnd()
    $stdError = $Process.StandardError.ReadToEnd()
    Write-Host $stdOutput
    Write-Host $stdError
    $Process.WaitForExit()
}
catch {
    Write-Host "Could not execute script."
    Write-Error $Error[0]
    Exit
}
Now, in our Azure Release Pipeline, we execute the parent script (A) automatically via a PowerShell task when deploying our application. The child script (B) runs fine as the specified user, and I can verify that the SQL queries are executed correctly against the given SQL server.
The problem is that status messages from the parent script (A) are printed to the PowerShell console in the pipeline, but status messages from the child script (B) are not. I have been struggling for days trying to figure out why.
Things I have tried:
Replacing Write-Host with Echo in the child script (B), to redirect output to the PowerShell pipeline instead of directly to the console, as described here.
Printing only the StandardOutput stream and omitting the StandardError stream, to avoid a potential deadlock, as described on this site.
Turning off redirection of the StandardOutput and StandardError streams (setting them to $false), letting the output go to the child script's (B) own console window and hoping for it to show up in the Azure Pipeline console.
Nothing works. The output in the Azure Pipeline console is two empty lines where I would expect the output from the child script (B) to be shown. My finding thus far is that the problem lies in these two lines:
$stdOutput = $Process.StandardOutput.ReadToEnd()
$stdError = $Process.StandardError.ReadToEnd()
The two variables are empty. Why doesn't ReadToEnd() get anything from the streams? And why does it work when executed manually from the PowerShell prompt but not in the Azure Pipeline? Does anyone have ideas on what to try next?
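For reference, one way to rule out the classic deadlock that can occur when both streams are redirected is to start asynchronous reads on both streams and only block on WaitForExit(). This is just a minimal sketch against the same $Process object as above, not the original code, and it does not address the user-context issue described in the update below:
# Start draining both streams asynchronously so the child process can't block on a full pipe buffer.
$stdOutTask = $Process.StandardOutput.ReadToEndAsync()
$stdErrTask = $Process.StandardError.ReadToEndAsync()
$Process.WaitForExit()
Write-Host $stdOutTask.Result
Write-Host $stdErrTask.Result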
UPDATE 1 (important):
So this is embarrassing. This morning I discovered a huge brain fart of mine. I thought I had verified that SQL queries were fired against the database. I was wrong. The child script (B) is not executed at all, hence no SQL queries are fired against the database when the parent script (A) is executed from the Azure Pipeline. That answers why I get no output. The strange thing is that it works when I execute the parent script (A) manually from the PowerShell prompt.
This is clearly an issue regarding user context while executing scripts and not, as I thought, capturing output from the child script (B). I don't know whether to delete this question. An admin must decide.

Please check Logging commands to print to the console in the pipeline. For example:
Set the variables:
- pwsh: |
    Write-Host "##vso[task.setvariable variable=sauce;]crushed tomatoes"
    Write-Host "##vso[task.setvariable variable=secretSauce;issecret=true]crushed tomatoes with garlic"
    Write-Host "##vso[task.setvariable variable=outputSauce;isoutput=true]canned goods"
  name: SetVars
Read the variables:
- pwsh: |
    Write-Host "Non-secrets automatically mapped in, sauce is $env:SAUCE"
    Write-Host "Secrets are not automatically mapped in, secretSauce is $env:SECRETSAUCE"
    Write-Host "You can use macro replacement to get secrets, and they'll be masked in the log: $(secretSauce)"
    Write-Host "Future jobs can also see $env:SETVARS_OUTPUTSAUCE"
    Write-Host "Future jobs can also see $(SetVars.outputSauce)"
Console output:
Non-secrets automatically mapped in, sauce is crushed tomatoes
Secrets are not automatically mapped in, secretSauce is
You can use macro replacement to get secrets, and they'll be masked in the log: ***
Future jobs can also see canned goods
Future jobs can also see canned goods
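Back in the context of the original question, the same Write-Host based logging commands can be combined with the captured child output in the parent script (A). A hedged sketch, reusing $stdOutput and $stdError from the question's code; ##vso[task.logissue] is the logging command for flagging warnings and errors in the pipeline log:
# Print the captured child output to the pipeline log.
Write-Host $stdOutput
# Surface captured child errors as a pipeline issue so they are highlighted in the log.
if (-not [string]::IsNullOrWhiteSpace($stdError)) {
    Write-Host "##vso[task.logissue type=error]$stdError"
}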

Related

How to catch unexpected error running Liquibase in PowerShell

I have a small CI/CD script written in PowerShell, but I don't know how to stop it if I get an unexpected error running Liquibase. All the scripts are in SQL and they work (with preconditions in place where I need to add them), but I want to have more control over the CI/CD script. Right now, if the script hits an exception, it continues execution. The script updates several schemas, and some of them influence each other, so order is important.
#update first scheme - ETL (tables, global temp tables, packages)
.\liquibase update --defaults-file=import.properties
#how to stop this script, if I get unexpected error running Liquibase?
#update second scheme - data (only tables and roles for data)
.\liquibase update --defaults-file=data.properties
#update third scheme - views, tables and other for export data
.\liquibase update --defaults-file=export.properties
Have you tried this?
$result = Start-Process -FilePath 'path to liquibase' -ArgumentList "your liquibase arguments go here" -Wait -PassThru
if ($result.ExitCode -ne 0) {
    Write-Host 'something went wrong'
}
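Alternatively, since liquibase is invoked as a native executable, you could check $LASTEXITCODE after each update and stop the script on the first failure. A minimal sketch under that assumption (the Invoke-Liquibase helper is a made-up name; the properties files are the ones from the question):
function Invoke-Liquibase {
    param([string]$DefaultsFile)
    # Run the update and abort the whole script if liquibase reports a non-zero exit code.
    .\liquibase update --defaults-file=$DefaultsFile
    if ($LASTEXITCODE -ne 0) {
        throw "Liquibase update failed for $DefaultsFile (exit code $LASTEXITCODE)."
    }
}
# Order matters: ETL schema first, then data, then export.
Invoke-Liquibase 'import.properties'
Invoke-Liquibase 'data.properties'
Invoke-Liquibase 'export.properties'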

Powershell AcceptTcpClient() cannot be interrupted by Ctrl-C

I am writing a simple TCP/IP server using PowerShell. I notice that Ctrl-C cannot interrupt the AcceptTcpClient() call; Ctrl-C works fine after the call, though. I have searched around, and nobody has reported a similar problem so far.
The problem can be reproduced with the following simple code. I am using Windows 10 (latest patch) with the native PowerShell terminal, not the PowerShell ISE.
$listener = New-Object System.Net.Sockets.TcpListener([System.Net.IPAddress]::Any, 4444)
$listener.Start()
Write-Host "listener started at port 4444"
$tcpConnection = $listener.AcceptTcpClient()
Write-Host "accepted a client"
This is what happens when I run it
ps1> .\test_ctrl_c.ps1
listener started at port 4444
(Ctrl-C doesn't work here)
After getting mklement0's answer, I gave up on my original clean code and figured out a workaround. Now Ctrl-C can interrupt my program:
$listener = New-Object System.Net.Sockets.TcpListener([System.Net.IPAddress]::Any, 4444)
$listener.Start()
Write-Host "listener started at port 4444"
while ($true) {
    if ($listener.Pending()) {
        $tcpConnection = $listener.AcceptTcpClient()
        break
    }
    Start-Sleep -Milliseconds 1000
}
Write-Host "accepted a client"
Now Ctrl-C works
ps1> .\test_ctrl_c.ps1
listener started at port 4444
(Ctrl-C works here)
(As of PowerShell 7.0) Ctrl-C only works while PowerShell code is executing, not during execution of a .NET method.
Since most .NET method calls execute quickly, the problem doesn't usually surface.
See this GitHub issue for a discussion and background information.
As for possible workarounds:
The best approach - if possible - is the one shown in your own answer:
Run in a loop that periodically polls for a condition, sleeping between tries, and only invoke the method when the condition being met implies that the method will then execute quickly instead of blocking indefinitely.
If this is not an option (if there is no such condition you can test for), you can run the blocking method in a background job, so that it runs in a child process that can be terminated on demand by the caller; do note the limitations of this approach, however:
Background jobs are slow and resource-intensive, due to needing to run a new PowerShell instance in a hidden child process.
Since cross-process marshaling of inputs to and outputs from the job is necessary:
Inputs and output won't be live objects.
Complex objects (objects other than instances of primitive .NET types and a few well-known types) will be emulations of the original objects; in essence, objects with static copies of the property values, and no methods - see this answer for background information.
Here's a simple demonstration:
# Start the long-running, blocking operation in a background job (child process).
$jb = Start-Job -ErrorAction Stop {
    # Simulate a long-running, blocking .NET method call.
    [Threading.Thread]::Sleep(5000)
    'Done.'
}
$completed = $false
try {
    Write-Host -ForegroundColor Yellow "Waiting for background job to finish. Press Ctrl-C to abort."
    # Note: The output collected won't be *live* objects, and with complex
    #       objects will be *emulations* of the original objects that have
    #       static copies of their property values and no methods.
    $output = Receive-Job -Wait -Job $jb
    $completed = $true
}
finally { # This block is called even when Ctrl-C has been pressed.
    if (-not $completed) { Write-Warning 'Aborting due to Ctrl-C.' }
    # Remove the background job.
    # * If it is still running and we got here due to Ctrl-C, -Force is needed
    #   to forcefully terminate it.
    # * Otherwise, normal job cleanup is performed.
    Remove-Job -Force $jb
    # If we got here due to Ctrl-C, execution stops here.
}
# Getting here means: Ctrl-C was *not* pressed.
# Show the output received from the job.
Write-Host -ForegroundColor Yellow "Job output received:"
$output
If you execute the above script and do not press Ctrl-C, you'll see the yellow waiting message, followed by "Job output received:" and the job's output (Done.).
If you do press Ctrl-C, you'll see the warning "Aborting due to Ctrl-C." and execution stops after the job has been removed.

How to determine if Write-Host will work for the current host

Is there any sane, reliable contract that dictates whether Write-Host is supported in a given PowerShell host implementation, in a script that could be run against any reasonable host implementation?
(Assume that I understand the difference between Write-Host and Write-Output/Write-Verbose and that I definitely do want Write-Host semantics, if supported, for this specific human-readable text.)
I thought about trying to interrogate the $Host variable, or $Host.UI/$Host.UI.RawUI but the only pertinent differences I am spotting are:
in $Host.Name:
The Windows powershell.exe commandline has $Host.Name = 'ConsoleHost'
ISE has $Host.Name = 'Windows PowerShell ISE Host'
SQL Server Agent job steps have $Host.Name = 'Default Host'
I have none of the non-Windows versions installed, but I expect they are different
in $Host.UI.RawUI:
The Windows powershell.exe commandline returns values for all properties of $Host.UI.RawUI
ISE returns no value (or $null) for some properties of $Host.UI.RawUI, e.g. $Host.UI.RawUI.CursorSize
SQL Server Agent job steps return no values for all of $Host.UI.RawUI
Again, I can't check in any of the other platforms
Maintaining a list of $Host.Name values that support Write-Host seems like it would be a bit of a burden, especially with PowerShell being cross-platform now. I would reasonably want the script to be able to be called from any host and just do the right thing.
Background
I have written a script that can be reasonably run from within the PowerShell command prompt, from within the ISE or from within a SQL Server Agent job. The output of this script is entirely textual, for human reading. When run from the command prompt or ISE, the output is colorized using Write-Host.
SQL Server jobs can be set up in two different ways, and both support capturing the output into the SQL Server Agent log viewer:
via a CmdExec step, which is simple command-line execution, where the Job Step command text is an executable and its arguments, so you invoke the powershell.exe executable. Captured output is the stdout/stderr of the process:
powershell.exe -Command x:\pathto\script.ps1 -Arg1 -Arg2 -Etc
via a PowerShell step, where the Job Step command text is raw PS script interpreted by its own embedded PowerShell host implementation. Captured output is whatever is written via Write-Output or Write-Error:
#whatever
Do-WhateverPowershellCommandYouWant
x:\pathto\script.ps1 -Arg1 -Arg2 -Etc
Due to some other foibles of the SQL Server host implementation, I find that you can emit output using either Write-Output or Write-Error, but not both. If the job step fails (i.e. if you throw or Write-Error 'foo' -EA 'Stop'), you only get the error stream in the log and, if it succeeds, you only get the output stream in the log.
Additionally, the embedded PS implementation does not support Write-Host. Up to at least SQL Server 2016, Write-Host throws a System.Management.Automation.Host.HostException with the message A command that prompts the user failed because the host program or the command type does not support user interaction.
To support all of my use cases so far, I took to using a custom function, Write-Message, which is essentially set up like this (simplified):
$script:can_write_host = $true
$script:has_errors = $false
$script:message_stream = New-Object Text.StringBuilder

function Write-Message {
    Param($message, [Switch]$iserror)
    if ($script:can_write_host) {
        $private:color = if ($iserror) { 'Red' } else { 'White' }
        try { Write-Host $message -ForegroundColor $private:color }
        catch [Management.Automation.Host.HostException] { $script:can_write_host = $false }
    }
    if (-not $script:can_write_host) {
        $script:message_stream.AppendLine($message) | Out-Null
    }
    if ($iserror) { $script:has_errors = $true }
}

try {
    <# MAIN SCRIPT BODY RUNS HERE #>
}
catch {
    Write-Message -Message ("Unhandled error: " + ($_ | Format-List | Out-String)) -IsError
}
finally {
    if (-not $script:can_write_host) {
        if ($script:has_errors) { Write-Error ($script:message_stream.ToString()) -EA 'Stop' }
        else { Write-Output ($script:message_stream.ToString()) }
    }
}
As of SQL Server 2019 (perhaps earlier), it appears Write-Host no longer throws an exception in the embedded SQL Server Agent PS host, but is instead a no-op that emits nothing to either output or error streams. Since there is no exception, my script's Write-Message function can no longer reliably detect whether it should use Write-Host or StringBuilder.AppendLine.
The basic workaround for SQL Server Agent jobs is to use the more-mature CmdExec step type (where Write-Output and Write-Host both get captured as stdout), but I do prefer the PowerShell step type for (among other reasons) its ability to split the command reliably across multiple lines, so I am keen to see if there is a more-holistic, PowerShell-based approach to solve the problem of whether Write-Host does anything useful for the host I am in.
Just check whether your host is UserInteractive or a service-type environment.
$script:can_write_host = [Environment]::UserInteractive
Another way to track the output of a script in real time is to push that output to a log file and then monitor the file with trace32. This is just a workaround, but it might work out for you.
Add-Content -Path "C:\Users\username\Documents\PS_log.log" -Value $variablewithvalue
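If trace32 isn't available, the same log file can also be followed live from a second PowerShell session; a small sketch reusing the path from the line above:
# Follow the log file as new lines are appended, similar to 'tail -f'.
Get-Content -Path "C:\Users\username\Documents\PS_log.log" -Wait -Tail 20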

Stop a process running longer than an hour

I posted a question a couple of days ago: I needed a PowerShell script that would start a service if it was stopped, stop the process and start it again if it had been running longer than an hour, and do nothing if it had been running less than an hour. I was given a great script that really helped, but I'm trying to convert it to work with a "process". I have the following code (below), but I am getting the following error.
Error
"cmdlet Start-Process at command pipeline position 3
Supply values for the following parameters:
FilePath: "
Powershell
# for debugging
$PSDefaultParameterValues['*Process:Verbose'] = $true

$str = Get-Process -Name "Chrome"

if ($str.Status -eq 'stopped') {
    $str | Start-Process
} elseif ($str.StartTime -lt (Get-Date).AddHours(-1)) {
    $str | Stop-Process -PassThru | Start-Process
} else {
    'Chrome is running and StartTime is within the past hour!'
}

# other logic goes here
Your $str is storing a list of all processes with the name "Chrome", so I imagine you want a single process. You'll need to specify an ID in Get-Process or use $str[0] to single out a specific process in the list.
When you store a single process in $str and try to print $str.Status, you'll see that it outputs nothing, because Status isn't a property of a process; a process is either running or it doesn't exist. That said, you may want your logic to instead check whether it can find the process and start it if it can't, in which case it needs the path to the executable to start the process. More info with examples can be found here: https://technet.microsoft.com/en-us/library/41a7e43c-9bb3-4dc2-8b0c-f6c32962e72c?f=255&MSPPError=-2147217396
If you're using the PowerShell ISE, try storing the process in a variable in the terminal and typing the variable followed by a dot; IntelliSense (if it's on) should list all of its available properties.
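Putting those points together, here is a hedged sketch of what such a check could look like. The chrome.exe path is an assumption, and the non-existent Status check is replaced with a simple existence test:
# Assumed install path; adjust it for your machine.
$chromePath = 'C:\Program Files\Google\Chrome\Application\chrome.exe'

# Suppress the error Get-Process raises when no process is found and test for $null instead.
$procs = Get-Process -Name 'chrome' -ErrorAction SilentlyContinue

if (-not $procs) {
    # No Chrome process is running: start one from its executable path.
    Start-Process -FilePath $chromePath
} elseif (($procs | Sort-Object StartTime | Select-Object -First 1).StartTime -lt (Get-Date).AddHours(-1)) {
    # The oldest Chrome process has been running for more than an hour: restart it.
    $procs | Stop-Process -Force
    Start-Process -FilePath $chromePath
} else {
    'Chrome is running and StartTime is within the past hour!'
}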

How do you get powershell script output in the deployment log for a vNext Release Template?

This blog post is the only thing I have found that comes close to the problem, but it doesn't explain how to configure the Deploy Using PS/DSC action to run with the verbose option:
http://nakedalm.com/create-log-entries-release-management/
I can get this Agent-based Release Template to run the script:
Write-Debug "debug"
Write-Output "output"
Write-Verbose "verbose"
Write-Warning "warning"
Drilling down into the deployment log for this release shows a log with the lines:
output
WARNING: warning
If I add -verbose to the Arguments field I also get a "VERBOSE: verbose" line in the log.
This is great, but I need access to the System Variables ($Stage, $BuildNumber, etc.). When I create a vNext template to run the same script (instructions are here: http://www.visualstudio.com/en-us/get-started/deploy-no-agents-vs.aspx), the log reports:
Copying recursively from \\vsalm\Drops2\TestBuild\TestBuild_20130710.3 to c:\Windows\DtlDownloads\my vnext component succeeded.
It is nice that this copying operation succeeded and all, but I'd like my script's output to be in this log as well. Does anyone have any idea about configuring a "Deploy Using PS/DSC" action so that the PowerShell script output is captured by Release Management?
For a vNext Release Template, try Write-Verbose with the -Verbose switch if you want to see the PowerShell script output in the logs.
E.g. Write-Verbose "Some text" -Verbose
Allow me to shamelessly plug my own blog article about this subject, because I found that it's not easy to get a script that does everything right.
The following script skeleton ensures that stdout output is logged without empty lines, and that processing is halted on the first error, in which case both the error details and the stdout output up to that point are visible in MSRM:
function Deploy()
{
    $ErrorActionPreference = "Stop"
    try
    {
        #
        # Deployment actions go here.
        #
    }
    catch
    {
        # PowerShell tracks all exceptions that occurred so far in $Error.
        Write-Output "$Error"
        # Signal failure to MSRM:
        $ErrorActionPreference = "Continue"
        Write-Error "Error: $Error"
    }
}
pushd $Global:ApplicationPath
Deploy | Out-String | Write-Verbose -Verbose
popd
This is just the final result, the explanation behind it can be found here.