How to determine if Write-Host will work for the current host - powershell

Is there any sane, reliable contract that dictates whether Write-Host is supported in a given PowerShell host implementation, in a script that could be run against any reasonable host implementation?
(Assume that I understand the difference between Write-Host and Write-Output/Write-Verbose and that I definitely do want Write-Host semantics, if supported, for this specific human-readable text.)
I thought about trying to interrogate the $Host variable, or $Host.UI/$Host.UI.RawUI, but the only pertinent differences I am spotting are:
in $Host.Name:
The Windows powershell.exe commandline has $Host.Name = 'ConsoleHost'
ISE has $Host.Name = 'Windows PowerShell ISE Host'
SQL Server Agent job steps have $Host.Name = 'Default Host'
I have none of the non-Windows versions installed, but I expect they are different
in $Host.UI.RawUI:
The Windows powershell.exe commandline returns values for all properties of $Host.UI.RawUI
ISE returns no value (or $null) for some properties of $Host.UI.RawUI, e.g. $Host.UI.RawUI.CursorSize
SQL Server Agent job steps return no values for all of $Host.UI.RawUI
Again, I can't check in any of the other platforms
Maintaining a list of $Host.Name values that support Write-Host seems like it would be a bit of a burden, especially with PowerShell being cross-platform now. I would reasonably want the script to be able to be called from any host and just do the right thing.
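For reference, the kind of interrogation I mean looks something like this (a sketch only; the property set is identical across hosts, but the values differ, and RawUI members may be $null or throw on non-interactive hosts):
# Sketch: inspect what the current host reports about itself.
[pscustomobject]@{
    HostName   = $Host.Name
    HasRawUI   = ($null -ne $Host.UI.RawUI)
    BufferSize = $(try { $Host.UI.RawUI.BufferSize } catch { $null })
}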
Background
I have written a script that can be reasonably run from within the PowerShell command prompt, from within the ISE or from within a SQL Server Agent job. The output of this script is entirely textual, for human reading. When run from the command prompt or ISE, the output is colorized using Write-Host.
SQL Server jobs can be set up in two different ways, and both support capturing the output into the SQL Server Agent log viewer:
via a CmdExec step, which is simple command-line execution, where the Job Step command text is an executable and its arguments, so you invoke the powershell.exe executable. Captured output is the stdout/stderr of the process:
powershell.exe -Command x:\pathto\script.ps1 -Arg1 -Arg2 -Etc
via a PowerShell step, where the Job Step command text is raw PS script interpreted by its own embedded PowerShell host implementation. Captured output is whatever is written via Write-Output or Write-Error:
#whatever
Do-WhateverPowershellCommandYouWant
x:\pathto\script.ps1 -Arg1 -Arg2 -Etc
Due to some other foibles of the SQL Server host implementation, I find that you can emit output using either Write-Output or Write-Error, but not both. If the job step fails (i.e. if you throw or Write-Error 'foo' -EA 'Stop'), you only get the error stream in the log and, if it succeeds, you only get the output stream in the log.
Additionally, the embedded PS implementation does not support Write-Host. Up to at least SQL Server 2016, Write-Host throws a System.Management.Automation.Host.HostException with the message A command that prompts the user failed because the host program or the command type does not support user interaction.
To support all of my use cases so far, I took to using a custom function, Write-Message, which was essentially set up like this (simplified):
$script:can_write_host = $true
$script:has_errors = $false
$script:message_stream = New-Object Text.StringBuilder

function Write-Message {
    Param($message, [Switch]$iserror)
    if ($script:can_write_host) {
        $private:color = if ($iserror) { 'Red' } else { 'White' }
        try { Write-Host $message -ForegroundColor $private:color }
        catch [Management.Automation.Host.HostException] { $script:can_write_host = $false }
    }
    if (-not $script:can_write_host) {
        $script:message_stream.AppendLine($message) | Out-Null
    }
    if ($iserror) { $script:has_errors = $true }
}

try {
    <# MAIN SCRIPT BODY RUNS HERE #>
}
catch {
    Write-Message -Message ("Unhandled error: " + ($_ | Format-List | Out-String)) -IsError
}
finally {
    if (-not $script:can_write_host) {
        if ($script:has_errors) { Write-Error ($script:message_stream.ToString()) -EA 'Stop' }
        else { Write-Output ($script:message_stream.ToString()) }
    }
}
As of SQL Server 2019 (perhaps earlier), it appears Write-Host no longer throws an exception in the embedded SQL Server Agent PS host, but is instead a no-op that emits nothing to either output or error streams. Since there is no exception, my script's Write-Message function can no longer reliably detect whether it should use Write-Host or StringBuilder.AppendLine.
The basic workaround for SQL Server Agent jobs is to use the more mature CmdExec step type (where Write-Output and Write-Host both get captured as stdout), but I do prefer the PowerShell step type for (among other reasons) its ability to split the command reliably across multiple lines. So I am keen to see if there is a more holistic, PowerShell-based approach to solve the problem of whether Write-Host does anything useful for the host I am in.

Just check whether your host is UserInteractive or a service-type environment.
$script:can_write_host = [Environment]::UserInteractive
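As a sketch, this could replace the exception-based detection in the question's Write-Message function, optionally combined with a RawUI probe; whether every non-interactive host reports UserInteractive as false is an assumption worth verifying in your environment:
# Assumption: non-interactive hosts report UserInteractive = $false.
$script:can_write_host = [Environment]::UserInteractive -and ($null -ne $Host.UI.RawUI)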

Another way to track the output of a script in real time is to push that output to a log file and then monitor it in real time using trace32. This is just a workaround, but it might work out for you.
Add-Content -Path "C:\Users\username\Documents\PS_log.log" -Value $variablewithvalue
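If you don't have trace32 handy, PowerShell itself can tail the file in real time (same hypothetical path as above):
# Tail the log from another console as new lines are appended:
Get-Content -Path "C:\Users\username\Documents\PS_log.log" -Wait -Tail 10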

Related

Parent Powershell script doesn't print messages from child script in Azure Pipeline

We have a Powershell script (A), which is executing another Powershell script (B) as another user (domain user). The child script (B) executes a range of SQL queries against a database on different servers in the domain. While the child script (B) is executing, it prints several status messages using Write-Host, which in turn are captured by the parent script (A) and printed to the console. This works perfectly when I am executing the parent script (A) manually from the Powershell prompt on my development machine (in the same domain), and I get all the output from the child script (B) as well.
Below is the part of the code in the parent script (A) that executes the child script (B).
try {
    $ProcessInfo = New-Object System.Diagnostics.ProcessStartInfo
    $ProcessInfo.FileName = "powershell.exe"
    $ProcessInfo.Domain = "jmres"
    $ProcessInfo.UserName = $username
    $ProcessInfo.Password = ConvertTo-SecureString $password -AsPlainText -Force
    $ProcessInfo.RedirectStandardError = $true
    $ProcessInfo.RedirectStandardOutput = $true
    $ProcessInfo.UseShellExecute = $false
    $ProcessInfo.Arguments = $file, $fileArguments
    $Process = New-Object System.Diagnostics.Process
    $Process.StartInfo = $ProcessInfo
    $Process.Start() | Out-Null
    $stdOutput = $Process.StandardOutput.ReadToEnd()
    $stdError = $Process.StandardError.ReadToEnd()
    Write-Host $stdOutput
    Write-Host $stdError
    $Process.WaitForExit()
}
catch {
    Write-Host "Could not execute script."
    Write-Error $Error[0]
    Exit
}
Now in our Azure Release Pipeline we are executing the parent script (A) automatically, via a Powershell task, when deploying our application. The child script (B) runs fine as the specified user and I can verify, that SQL queries are executed correctly against the given SQL server.
Okay. So the problem here is that status messages from the parent script (A) are printed to the Powershell console in the Pipeline, but status messages from the child script (B) are not. I have been struggling for days now trying to figure out why.
Things I have tried:
Replacing Write-Host with Echo in the child script (B) to redirect output to the pipeline instead of directly to the console as described here.
Tried to only print the StandardOutput stream, and omit the StandardError stream to avoid a potential deadlock as described on this site.
Tried turning off (setting $false) redirection of the StandardOutput and StandardError streams so the output is sent to the child script (B)'s own console window, and then hoped for it to show up in the Azure Pipeline console.
Nothing works. The output in the Azure Pipeline console is two empty lines where I would expect the output from the child script (B) to be shown. My finding thus far is that the problem lies in these two lines:
$stdOutput = $Process.StandardOutput.ReadToEnd()
$stdError = $Process.StandardError.ReadToEnd()
The two variables are empty. Why doesn't ReadToEnd() get anything from the streams? And why does it work when executed manually from the Powershell prompt but not in Azure Pipeline? Does anyone have any ideas what to try next?
UPDATE 1 (important):
So this is embarrassing. This morning I discovered a huge brain fart of mine. I thought I had verified that SQL queries were fired against the database. I was wrong. The child script (B) is not executed at all. Hence no SQL queries are fired against the database when the parent script (A) is executed from the Azure Pipeline. That answers why I get no output. The strange thing is that it works when I execute the parent script (A) manually from the Powershell prompt.
This is clearly an issue regarding user context while executing scripts and not, as I thought, capturing output from the child script (B). I don't know whether to delete this question. An admin must decide.
Please check Logging commands for printing to the console in a pipeline. For example:
Set the variables:
- pwsh: |
    Write-Host "##vso[task.setvariable variable=sauce;]crushed tomatoes"
    Write-Host "##vso[task.setvariable variable=secretSauce;issecret=true]crushed tomatoes with garlic"
    Write-Host "##vso[task.setvariable variable=outputSauce;isoutput=true]canned goods"
  name: SetVars
Read the variables:
- pwsh: |
    Write-Host "Non-secrets automatically mapped in, sauce is $env:SAUCE"
    Write-Host "Secrets are not automatically mapped in, secretSauce is $env:SECRETSAUCE"
    Write-Host "You can use macro replacement to get secrets, and they'll be masked in the log: $(secretSauce)"
    Write-Host "Future jobs can also see $env:SETVARS_OUTPUTSAUCE"
    Write-Host "Future jobs can also see $(SetVars.outputSauce)"
Console output:
Non-secrets automatically mapped in, sauce is crushed tomatoes
Secrets are not automatically mapped in, secretSauce is
You can use macro replacement to get secrets, and they'll be masked in the log: ***
Future jobs can also see canned goods
Future jobs can also see canned goods

azure cli not stopping on error in PS script

Boiled down to the minimum I have a Powershell script that looks like this:
$ErrorActionPreference='Stop'
az group deployment create -g ....
# Error in az group
# More az cli commands
Even though there is an error in the az group deployment create, it continues to execute beyond the error. How do I stop the script from executing on error?
Normally, the first thing to try is to wrap everything in a try...catch block.
try {
    $ErrorActionPreference = 'Stop'
    az group deployment create -g ....
    # Error in az group
    # More az cli commands
}
catch {
    Write-Host "ERROR: $Error"
}
Aaaaand it doesn't work.
This is when you scratch your head and realize that we are dealing with Azure CLI commands and not Azure PowerShell. They are not native PowerShell commands that honor $ErrorActionPreference; instead, we have to treat each Azure CLI command as if we were running an individual external program (behind the scenes, the Azure CLI is essentially a set of aliases that run Python commands).
Since the Azure CLI will not throw a terminating error, we have to treat it like a program and look at the return code (stored in the variable $LASTEXITCODE) to see whether it was successful. Once we evaluate that, we can then throw an error:
az group deployment create -g ....
if ($LASTEXITCODE) {
    Write-Host "ERROR: in Az Group"
    Throw "ERROR: in Az Group"
}
This then can be implemented into a try...catch block to stop the subsequent commands from running:
try {
    az group deployment create -g ....
    if ($LASTEXITCODE) {
        Write-Host "ERROR: in Az Group"
        Throw "ERROR: in Az Group"
    }
    # Error in az group
    # More az cli commands
}
catch {
    Write-Host "ERROR: $Error"
}
Unfortunately this means you have to evaluate $LASTEXITCODE every single time you execute an Azure CLI command.
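One way to cut down on that repetition is a small wrapper; the helper below is hypothetical (name included) and simply centralizes the $LASTEXITCODE check:
# Hypothetical helper: run a CLI command and throw on a nonzero exit code.
function Invoke-AzCli([scriptblock]$Command) {
    & $Command
    if ($LASTEXITCODE) {
        throw "Command {$Command} exited with code $LASTEXITCODE"
    }
}

# Usage:
# Invoke-AzCli { az group deployment create -g .... }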
You may use the automatic variable $?. This contains the result of the last execution, i.e. True if it succeeded or False if it failed: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_automatic_variables?view=powershell-5.1#section-1
Your code would look something like this:
az group deployment create -g ....
if (!$?) {
    Write-Error "Your error message"
    # Handle your error
}
Unfortunately, the same as with HAL9256's answer, you will need to add this code every single time you execute azure-cli.
Update - PS Version 7 error trapping for Az CLI (or kubectl)
This behavior changed significantly in PowerShell v7:
https://github.com/PowerShell/PowerShell/issues/4002
https://github.com/PowerShell/PowerShell/pull/13361
I ended up using the following solution instead, which is consistent across PowerShell v5/v7. It has the advantage of accepting pipeline input for commands like kubectl, where stdio can be used for applying configuration etc.
I used scriptblocks since this integrates nicely into existing scripts without breaking syntax:
# no error
# Invoke-Cli {cmd /c echo hi}

# throws terminating error
# (with psv7 you can now call Get-Error to see details)
# Invoke-Cli {az bad}

# outputs a json object
# Invoke-Cli {az account show} -AsJson

# applies input from pipeline to kubernetes cluster
# Read-Host "Enter some json" | Invoke-Cli {kubectl apply -f -}

function Invoke-Cli([scriptblock]$script, [switch]$AsJson)
{
    $ErrorActionPreference = "Continue"
    $jsonOutputArg = if ($AsJson)
    {
        "--output json"
    }
    $scriptBlock = [scriptblock]::Create("$script $jsonOutputArg 2>&1")

    if ($MyInvocation.ExpectingInput)
    {
        Write-Verbose "Invoking with input: $script"
        $output = $input | Invoke-Command $scriptBlock 2>&1
    }
    else
    {
        Write-Verbose "Invoking: $script"
        $output = Invoke-Command $scriptBlock
    }

    if ($LASTEXITCODE)
    {
        Write-Error "$Output" -ErrorAction Stop
    }
    else
    {
        if ($AsJson)
        {
            return $output | ConvertFrom-Json
        }
        else
        {
            return $output
        }
    }
}
Handling command shell errors in PowerShell <= v5
Use $ErrorActionPreference = 'Stop' and append 2>&1 to the end of the statement.
# this displays regular output:
az account show
# this also works as normal:
az account show 2>&1
# this demonstrates that regular output is unaffected / still works:
az account show -o json 2>&1 | ConvertFrom-Json
# this displays an error as normal console output (but unfortunately ignores $ErrorActionPreference):
az gibberish
# this throws a terminating error like the OP is asking:
$ErrorActionPreference = 'Stop'
az gibberish 2>&1
Background
PowerShell native and non-native streams, while similar, do not function identically. PowerShell offers extended functionality with streams and concepts that are not present in the Windows command shell (such as Write-Warning or Write-Progress).
Due to the way PowerShell handles Windows command shell output, the error stream of an external (non-PowerShell) process is unable (by itself) to throw a terminating error in PowerShell. It will appear in the PowerShell runspace as regular output, even though, in the context of the Windows command shell, it is indeed writing to the error stream.
This can be demonstrated:
# error is displayed but appears as normal text
cmd /c "asdf"
# nothing is displayed since the stdio error stream is redirected to nul
cmd /c "asdf 2>nul"
# error is displayed on the PowerShell error stream as red text
(cmd /c asdf) 2>&1
# error is displayed as red text, and the script will terminate at this line
$ErrorActionPreference = 'Stop'
(cmd /c asdf) 2>&1
Explanation of 2>&1 workaround
Unless otherwise specified, PowerShell will redirect Windows command shell stdio errors to the console output stream by default. This happens outside the scope of PowerShell. The redirection applies before any errors reach the PowerShell error stream, making $ErrorActionPreference irrelevant.
The behavior changes when explicitly specified to redirect the Windows command shell error stream to any other location in PowerShell context. As a result, PowerShell is forced to remove the stdio error redirection, and the output becomes visible to the PowerShell error stream.
Once the output is on the PowerShell error stream, the $ErrorActionPreference setting will determine the outcome of how error messages are handled.
Further info on redirection and PowerShell streams
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_redirection?view=powershell-7.2
The sample from HAL9256 didn't work for me either, but I did find a workaround:
az group deployment create -g ....
if ($error) {
    Write-Host "ERROR: in Az Group"
    # throw error or take corrective action
    $error.Clear() # optional - clear the error
}
# More az cli commands
I used this to try to deploy a keyvault, but in my environment, soft delete is enabled, so the old key vault is kept for a number of days. If the deployment failed, I'd run a key vault purge and then try the deployment again.
Building from HAL9256's and sam's answers, add this one-liner after each az cli or Powershell az command in a Powershell script to ensure that the catch block is hit when an error occurs:
if($error){ throw $error }
Then for the catch block, clear the error, e.g.
catch {
    $ErrorMessage = $_.Exception.Message
    $error.Clear()
    Write-Warning "-- Operation Failed: $ErrorMessage"
}

Stop a process running longer than an hour

I posted a question a while ago. I needed a powershell script that would start a service if it was stopped, stop the process if it had been running longer than an hour and then start it again, and do nothing if it had been running less than an hour. I was given a great script that really helped, but I'm trying to convert it to work with a "process". I have the following code (below) but am getting the following error:
Error
"cmdlet Start-Process at command pipeline position 3
Supply values for the following parameters:
FilePath: "
Powershell
# for debugging
$PSDefaultParameterValues['*Process:Verbose'] = $true

$str = Get-Process -Name "Chrome"
if ($str.Status -eq 'stopped') {
    $str | Start-Process
} elseif ($str.StartTime -lt (Get-Date).AddHours(-1)) {
    $str | Stop-Process -PassThru | Start-Process
} else {
    'Chrome is running and StartTime is within the past hour!'
}
# other logic goes here
Your $str is storing a list of all processes with the name "Chrome", so I imagine you want a single process. You'll need to specify an ID in Get-Process or use $str[0] to single out a specific process in the list.
When you store a single process in $str, if you try to print your $str.Status, you'll see that it would output nothing, because Status isn't a property of a process. A process is either running or it doesn't exist. That said, you may want to have your logic instead check if it can find the process and then start the process if it can't, in which case it needs the path to the executable to start the process. More info with examples can be found here: https://technet.microsoft.com/en-us/library/41a7e43c-9bb3-4dc2-8b0c-f6c32962e72c?f=255&MSPPError=-2147217396
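A sketch of that suggested logic might look like the following; the Chrome executable path is an assumption for illustration:
# Hypothetical install path - adjust for your machine.
$chromePath = 'C:\Program Files\Google\Chrome\Application\chrome.exe'
$procs = Get-Process -Name 'chrome' -ErrorAction SilentlyContinue
if (-not $procs) {
    # Not running at all: start it by executable path.
    Start-Process -FilePath $chromePath
} elseif (($procs | Sort-Object StartTime | Select-Object -First 1).StartTime -lt (Get-Date).AddHours(-1)) {
    # The oldest instance has been running over an hour: restart.
    $procs | Stop-Process -Force
    Start-Process -FilePath $chromePath
} else {
    'Chrome is running and StartTime is within the past hour!'
}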
If you're using Powershell ISE, try storing the process in a variable in the terminal, type the variable with a dot afterwards, and Intellisense (if it's on) should give a list of all its available properties.
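For example, listing the properties a process object actually exposes shows there is no Status among them:
# Inspect the real property list of a process object:
Get-Process -Id $PID | Get-Member -MemberType Properties | Select-Object -ExpandProperty Name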

Determining when machine is in good state for Powershell Remoting?

Update - the original question claimed that I was able to successfully perform an Invoke-Command and then shortly after was unable to; I thought it was due to processes going on during login after a windows upgrade.
It turns out the PC was actually starting, running a quick batch/cmd file, and then restarting. This is what led to PS Remoting working and then suddenly not. The restart came quickly enough after first boot that I didn't realize it was happening. Sorry for the bad question.
For the curious, the machine was restarting because of a remnant of the Microsoft Deployment Toolkit in-place upgrade process. The way MDT completes its task-sequence post-upgrade is problematic for many reasons, and now I've got another to count.
Old details (no longer relevant, with incorrect assumption that machine was not restarting after first successful Invoke-Command):
I'm automating various things with VMs in Hyper-V using powershell and powershell remoting. I'll start up a VM and then want to run some commands on it via powershell.
I'm struggling with determining when I can safely start running the remote commands via things like Invoke-Command. I can't start immediately as I need to let the machine start up.
Right now I poll the VM with a one second sleep between calls until the following function returns $true:
function VMIsReady {
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$True)][object]$VM
    )
    $heartbeat = $vm.Heartbeat
    Write-Host "vm heartbeat is $heartbeat"
    if (($heartbeat -eq 'OkApplicationsHealthy') -or ($heartbeat -eq 'OkApplicationsUnknown'))
    {
        try
        {
            Invoke-Command -VMName $vm.Name -Credential $(GetVMCredentials) {$env:computername} | out-null
        }
        catch [System.Management.Automation.RuntimeException]
        {
            Write-Host 'Caught expected automation runtime exception'
            return $false
        }
        Write-Host 'remoting ready'
        return $true
    }
}
This usually works well; however, after a windows upgrade has happened, there are issues. I'll get Hyper-V remoting errors of various sorts even after VMIsReady returns $true.
These errors are happening while the VM is in the process of first user login after upgrade (Windows going through "Hi;We've got some updates for your PC;This might take several minutes-Don't turn off your PC). VMIsReady returns true right as this sequence starts - I imagine I probably should be waiting until the sequence is done, but I've no idea how to know when that is.
Is there a better way of determining when the machine is in a state where I can expect remoting to work without issue? Perhaps a way to tell when a user is fully logged on?
You can use Test-WSMan.
Or run a script via the invoke that will receive a response from the server:
$Response = $false
try {
    $Response = Invoke-Command -ComputerName Test-Computer -ScriptBlock { return $true }
} catch {
    return $false
}
if ($Response -ne $true) {
    return $false
} else {
    return $true
}
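For the Test-WSMan route, a minimal polling sketch might look like this (the computer name is illustrative):
# Poll until WinRM answers; Test-WSMan emits nothing on failure
# when errors are suppressed, so the loop keeps waiting.
while (-not (Test-WSMan -ComputerName 'Test-Computer' -ErrorAction SilentlyContinue)) {
    Start-Sleep -Seconds 1
}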

Redirect Write-Host statements to a file

I have a PowerShell script that I am debugging and would like to redirect all Write-Host statements to a file. Is there an easy way to do that?
Until PowerShell 4.0, Write-Host sends the objects to the host. It does not return any objects.
Beginning with PowerShell 5.0, Write-Host is a wrapper for Write-Information, which allows output to the information stream and redirection of it with 6>> file_name.
http://technet.microsoft.com/en-us/library/hh849877.aspx
However, if you have a lot of Write-Host statements, you can replace them all with a custom Write-Log function, which lets you decide whether to output to the console, a file, the event log, or all three.
Check also:
Add-Content
redirection operators like >, >>, 2>, 2>>, 2>&1
Write-Log
Tee-Object
Start-Transcript.
You can create a proxy function for Write-Host which sends objects to the standard output stream instead of merely printing them. I wrote the below cmdlet for just this purpose. It will create a proxy on the fly which lasts only for the duration of the current pipeline.
A full writeup is on my blog here, but I've included the code below. Use the -Quiet switch to suppress the console write.
Usage:
PS> .\SomeScriptWithWriteHost.ps1 | Select-WriteHost | out-file .\data.log # Pipeline usage
PS> Select-WriteHost { .\SomeScriptWithWriteHost.ps1 } | out-file .\data.log # Scriptblock usage (safer)
function Select-WriteHost
{
    [CmdletBinding(DefaultParameterSetName = 'FromPipeline')]
    param(
        [Parameter(ValueFromPipeline = $true, ParameterSetName = 'FromPipeline')]
        [object] $InputObject,

        [Parameter(Mandatory = $true, ParameterSetName = 'FromScriptblock', Position = 0)]
        [ScriptBlock] $ScriptBlock,

        [switch] $Quiet
    )

    begin
    {
        function Cleanup
        {
            # Clear out our proxy version of write-host
            remove-item function:\write-host -ea 0
        }

        function ReplaceWriteHost([switch] $Quiet, [string] $Scope)
        {
            # Create a proxy for write-host
            $metaData = New-Object System.Management.Automation.CommandMetaData (Get-Command 'Microsoft.PowerShell.Utility\Write-Host')
            $proxy = [System.Management.Automation.ProxyCommand]::create($metaData)

            # Change its behavior
            $content = if($quiet)
            {
                # In quiet mode, whack the entire function body,
                # simply pass input directly to the pipeline
                $proxy -replace '(?s)\bbegin\b.+', '$Object'
            }
            else
            {
                # In noisy mode, pass input to the pipeline, but allow
                # real Write-Host to process as well
                $proxy -replace '(\$steppablePipeline\.Process)', '$Object; $1'
            }

            # Load our version into the specified scope
            Invoke-Expression "function ${scope}:Write-Host { $content }"
        }

        Cleanup

        # If we are running at the end of a pipeline, we need
        # to immediately inject our version into global
        # scope, so that everybody else in the pipeline
        # uses it. This works great, but it is dangerous
        # if we don't clean up properly.
        if($pscmdlet.ParameterSetName -eq 'FromPipeline')
        {
            ReplaceWriteHost -Quiet:$quiet -Scope 'global'
        }
    }

    process
    {
        # If a scriptblock was passed to us, then we can declare
        # our version as local scope and let the runtime take
        # it out of scope for us. It is much safer, but it
        # won't work in the pipeline scenario.
        #
        # The scriptblock will inherit our version automatically
        # as it's in a child scope.
        if($pscmdlet.ParameterSetName -eq 'FromScriptBlock')
        {
            . ReplaceWriteHost -Quiet:$quiet -Scope 'local'
            & $scriptblock
        }
        else
        {
            # In a pipeline scenario, just pass input along
            $InputObject
        }
    }

    end
    {
        Cleanup
    }
}
You can run your script in a secondary PowerShell shell and capture the output like this:
powershell -File 'Your-Script.ps1' > output.log
That worked for me.
Using redirection will cause Write-Host to hang. This is because Write-Host deals with various formatting issues that are specific to the current terminal being used. If you just want your script to have flexibility to output as normal (default to shell, with capability for >, 2>, etc.), use Write-Output.
Otherwise, if you really want to capture the peculiarities of the current terminal, Start-Transcript is a good place to start. Otherwise you'll have to hand-test or write some complicated test suites.
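A minimal transcript sketch (reusing the placeholder script name from the usage examples above):
# Everything the host displays, including Write-Host output, lands in the transcript.
Start-Transcript -Path .\data.log
.\SomeScriptWithWriteHost.ps1
Stop-Transcript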
Try adding an asterisk * before the angle bracket > to redirect all streams:
powershell -File Your-Script.ps1 *> output.log
When stream redirection is requested, if no specific stream is indicated, then by default only the Success stream (1>) is redirected. Write-Host writes via Write-Information to the Information stream (6>). To redirect all streams, use *> (see the sketch after the stream list below).
PowerShell 7.1 supports redirection of multiple output streams:
Success Stream (#1): PowerShell 2.0 Write-Output
Error Stream (#2): PowerShell 2.0 Write-Error
Warning Stream (#3): PowerShell 3.0 Write-Warning
Verbose Stream (#4): PowerShell 3.0 Write-Verbose
Debug Stream (#5): PowerShell 3.0 Write-Debug
Information Stream (#6): PowerShell 5.0 Write-Information
All Streams (*): PowerShell 3.0
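As a sketch of the list above, streams can also be split into separate files (script name reused from earlier in this answer):
# Send errors (2) and warnings (3) to their own logs while
# everything else still reaches the console:
.\Your-Script.ps1 2> errors.log 3> warnings.log

# Or fold every stream into one file:
.\Your-Script.ps1 *> output.log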
This worked for me in my first PowerShell script that I wrote a few days back:
function logMsg($msg)
{
    Write-Output $msg
    Write-Host $msg
}
Usage in a script:
logMsg("My error message")
logMsg("My info message")
PowerShell script execution call:
ps> .\myFirstScript.ps1 >> testOutputFile.txt
It's not exactly an answer to this question, but it might help someone trying to achieve both logging to the console and output to some log file, which is what I was after when I landed here :)
Define a function called Write-Host. Have it write to a file. You may have some trouble if some invocations use a weird set of arguments. Also, this will only work for invocations that are not Snapin qualified.
If you have just a few Write-Host statements, you can use the "6>>" redirector operator to a file:
Write-Host "Your message." 6>> file_path_or_file_name
This is the "Example 5: Suppress output from Write-Host" provided by Microsoft, modified accordingly to about_Operators.
I just added Start-Transcript at the top of the script and Stop-Transcript at the bottom.
The output file was intended to be named <folder where script resides>-<datestamp>.rtf, but for some reason the trace file was being put where I did not expect it — the desktop!
You should not use Write-Host if you wish to have the messages in a file. It is for writing to the host only.
Instead you should use a logging module, or Set/Add-Content.
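A one-line sketch of the Set/Add-Content approach (path and message are illustrative):
# Append a message to a log file instead of writing to the host:
Add-Content -Path .\script.log -Value "Something happened"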
I have found the best way to handle this is to have a logging function that detects whether there is a host UI and acts accordingly. When the script is executed in interactive mode, it shows the details in the host UI; but when it is run via WinRM or in non-interactive mode, it falls back on Write-Output so that you can capture it using the > or *> redirection operators.
function Log-Info ($msg, $color = "Blue") {
    if ($host.UI.RawUI.ForegroundColor -ne $null) {
        Write-Host "`n[$([datetime]::Now.ToLongTimeString())] $msg" -ForegroundColor $color -BackgroundColor "Gray"
    } else {
        Write-Output "`r`n[$([datetime]::Now.ToLongTimeString())] $msg"
    }
}
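Usage might look like this; the *> capture is the non-interactive case (the script name is illustrative):
Log-Info "Deployment started"
Log-Info "Something went wrong" -color "Red"

# Non-interactive capture (e.g. via WinRM):
# .\deploy.ps1 *> run.log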
In cases where you want to capture the full output with the Write-Host coloring, you can use the Get-ConsoleAsHtml.ps1 script to export the host's scrolling buffer to an HTML or RTF file.
Use Write-Output instead of Write-Host, and redirect it to a file like this:
Deploy.ps1 > mylog.log or Write-Output "Hello World!" > mylog.log
Try using Write-Output instead of Write-Host.
The output goes down the pipeline, but if this is the end of the pipe, it goes to the console.
> Write-Output "test"
test
> Write-Output "test" > foo.txt
> Get-Content foo.txt
test