In my PowerShell script, I am calling a cmdlet (say, "Connect-Database"). If the cmdlet needs additional parameters, it prompts the user to supply those values (say, "Connect-Database" cannot get credentials from the registry, so it prompts for "Username" and "Password" on the PowerShell console and expects the user to supply them).
I looked into the code of the PowerShell cmdlet. I found that it uses "CommandInvocationIntrinsics" to prompt the user (via the "NewScriptBlock" and "Invoke" methods of "CommandInvocationIntrinsics").
Since I am calling this cmdlet from a PowerShell script, I want any such prompting to be suppressed and an exception thrown instead.
The code is something like this -
try
{
$db = Connect-Database <databasename>
#if username / password is prompted, it should be converted into an error, but not shown in the PowerShell console
if($Error.Count -gt 0)
{
throw $Error[0]
}
}
catch
{
#do something
}
My way of doing that is to first enumerate the mandatory parameters, and then pass $null for each of them. Then there is no more interaction, and an error is thrown instead.
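A minimal sketch of that approach (the discovery logic is generic; Connect-Database stands in for the real cmdlet, and it assumes no parameter is marked [AllowNull()]):

# Discover the cmdlet's mandatory parameters via its metadata
$cmd = Get-Command Connect-Database
$mandatoryNames = $cmd.Parameters.Values |
    Where-Object {
        $_.Attributes | Where-Object {
            $_ -is [System.Management.Automation.ParameterAttribute] -and $_.Mandatory
        }
    } |
    ForEach-Object Name

# Bind $null to each mandatory parameter: binding fails with an error
# instead of prompting
$nullArgs = @{}
foreach ($name in $mandatoryNames) { $nullArgs[$name] = $null }
$db = Connect-Database @nullArgs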
Easy solution: don't prompt. I feel that you should either always prompt (by using mandatory parameters) or never prompt during the execution of your command. If you don't have enough information, throw an error. If your command is being used interactively, the user can rerun it with the correct information.
This is a part of the design of PowerShell and likely cannot be overcome.
If you are trying to be non-interactive, you should ensure that you have collected all the necessary information before calling the cmdlet (i.e., you should not have missing parameters in your non-interactive calls). You are likely following an anti-pattern and should rethink your approach. Prompting at the beginning of your non-interactive script (a short startup interaction), giving your non-interactive script its own parameters (which will, in turn, prompt if missing when marked mandatory), or reading from a config file (and verifying the information first) are a few approaches. Something you will notice they have in common is that they all involve failing fast/failing early, a very good design.
Related
When my PowerShell script tries, for example, to create a SQL Server object for a server that doesn't exist ("bla" in my case), PowerShell displays lots of PowerShell errors in red.
Since my script checks the value of $? after such calls, and displays and logs errors, I'd rather not have the several lines of PowerShell errors displayed as well.
How can I deactivate those being displayed for my script?
You have a couple of options. The easiest involve using the ErrorAction settings.
-ErrorAction is a common parameter available on all cmdlets. If there are specific commands whose errors you want to ignore, you can use -ErrorAction 'SilentlyContinue', which will suppress all error messages generated by that command. You can also use the Ignore value (in PowerShell 3+):
Unlike SilentlyContinue, Ignore does not add the error message to the $Error automatic variable.
If you want to ignore all errors in a script, you can use the preference variable $ErrorActionPreference and do the same thing: $ErrorActionPreference = 'SilentlyContinue'
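For instance (the path is a deliberately nonexistent example):

# Per command: the error is suppressed but still recorded in $Error
Get-Item 'C:\no-such-file.txt' -ErrorAction SilentlyContinue
# Per command, PowerShell 3+: suppressed and NOT recorded in $Error
Get-Item 'C:\no-such-file.txt' -ErrorAction Ignore
# Script-wide default for all non-terminating errors
$ErrorActionPreference = 'SilentlyContinue'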
See about_CommonParameters for more info about -ErrorAction.
See about_Preference_Variables for more info about $ErrorActionPreference.
Windows PowerShell provides two mechanisms for reporting errors: one mechanism for terminating errors and another mechanism for non-terminating errors.
Internal cmdlet code can call the ThrowTerminatingError method when an error occurs that does not or should not allow the cmdlet to continue processing its input objects. The script writer can then use try/catch to handle these errors.
Example:
try
{
    # Your database code
}
catch
{
    # Error reporting/logging
}
Internal cmdlet code can call the WriteError method to report non-terminating errors when the cmdlet can continue processing the input objects. The script writer can then use the -ErrorAction option to hide the messages, or use $ErrorActionPreference to set up the entire script's behaviour.
You can also append 2>$null to your command.
Example:
$rec = Resolve-DnsName $fqdn -Server $dns 2>$null
You're way off track here. Silencing errors is almost never a good idea, and manually checking $? explicitly after every single command is enormously cumbersome and easy to forget to do (error prone). Don't set yourself up to easily make a mistake. If you're getting lots and lots of red, that means your script kept going when it should have stopped instead. It can no longer do useful work if most of its commands are failing. Continuing a program when it and the system are in an unknown state will have unknown consequences; you could easily leave the system in a corrupt state.
The correct solution is to stop the algorithm on the first error. This principle is called "fail fast," and PowerShell has a built-in mechanism to enable that behavior: the error preference setting. Setting it to the highest level will make your script (and child scopes, if they don't override it) behave this way:
$ErrorActionPreference = 'Stop'
This will produce a nice, big error message for your consumption and prevent the following commands from executing the first time something goes wrong, without having to check $? every single time you run a command. This makes the code vastly simpler and more reliable. I put it at the top of every single script I ever write, and you almost certainly should as well.
In the rare cases where you can be absolutely certain that allowing the script to continue makes sense, you can use one of two mechanisms:
catch: This is the better and more flexible mechanism. You can wrap a try/catch block around multiple commands, allowing the first error to stop the sequence and jump into the handler, where you can log it and then otherwise recover from it or rethrow it to bubble the error up even further. You can also limit the catch to specific errors, meaning that it will only be invoked in specific situations you anticipated rather than on any error. (For example, failing to create a file because it already exists warrants a different response than a security failure; see the sketch after this list.)
The common -ErrorAction parameter: This parameter changes the error handling for one single function call, but you cannot limit it to specific types of errors. You should only use this if you can be certain that the script can continue on any error, not just the ones you can anticipate.
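A minimal sketch of a targeted catch (the path and the expected error type are illustrative assumptions):

try
{
    # -ErrorAction Stop turns a non-terminating error into a terminating one
    New-Item -Path 'C:\temp\report.txt' -ItemType File -ErrorAction Stop
}
catch [System.IO.IOException]
{
    # The anticipated case: the file already exists
    Write-Warning 'report.txt already exists; continuing.'
}
catch
{
    # Anything unanticipated bubbles up to the caller
    throw
}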
In your case, you probably want one big try/catch block around your entire program. Then your process will stop on the first error and the catch block can log it before exiting. This will remove a lot of duplicate code from your program in addition to cleaning up your log file and terminal output and making your program less likely to cause problems.
Do note that this doesn't handle the case when external executables fail (exit code nonzero, conventionally), so you do still need to check $LASTEXITCODE if you invoke any. Despite this limitation, the setting still saves a lot of code and effort.
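For example (using ping.exe against a deliberately unresolvable name):

& ping.exe 'no-such-host.invalid'
if ($LASTEXITCODE -ne 0)
{
    throw "ping.exe failed with exit code $LASTEXITCODE"
}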
Additional reliability
You might also want to consider using strict mode:
Set-StrictMode -Version Latest
This prevents PowerShell from silently proceeding when you use a non-existent variable and in other weird situations. (See the -Version parameter for details about what it restricts.)
Combining these two settings makes PowerShell much more of a fail-fast language, which makes programming in it vastly easier.
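A minimal illustration of the two settings together (each line would throw independently):

Set-StrictMode -Version Latest
$ErrorActionPreference = 'Stop'
Write-Output $notDefined    # strict mode: throws instead of silently emitting nothing
Get-Item 'C:\no-such-file'  # 'Stop' preference: throws instead of printing red and continuing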
I had a similar problem when trying to resolve host names using [System.Net.Dns]. If the name wasn't resolved, .NET threw a terminating error.
To prevent the terminating error and still retain control of the output, I created a function using TRAP.
E.g.:
Function Get-IP
{
    PARAM ([string]$HostName = "")
    PROCESS
    {
        TRAP { ""; continue }   # on error: emit an empty string, suppress the error, continue
        [System.Net.Dns]::GetHostAddresses($HostName)
    }
}
Add -ErrorAction SilentlyContinue to your script and you'll be good to go.
In some cases you can pipe the command to Out-Null:
command | Out-Null
To extend Mikkel's answer:
If you still want to capture the error, you can use "-ErrorAction Stop" combined with a try/catch.
"-ErrorAction SilentlyContinue" will ignore the error.
For instance:
try
{
New-Item -Path "/somepath" -Name "somename" -ErrorAction Stop | Out-Null
}
catch
{
echo "You must run this command in an elevated mode."
}
NOTE: There is no "silentlyStop" action; I believe Mikkel's answer refers to the "Stop" action. It is likely a typo.
The idea of using try/catch combined with the "Stop" action is not just to dismiss eventual errors, but to show something when they occur.
If you want the PowerShell error message for a cmdlet suppressed, but still want to catch the error, use "-ErrorAction 'silentlyStop'"
I am using PowerShell Core.
My PowerShell script is supposed to run unattended as part of a batch process. It must not stop and ask for user input; if that happens, the batch process stays on hold indefinitely.
I cannot test the script for every permutation of conditions that might cause user prompts.
Is there any way to force an exception whenever a script stops and asks for user input?
Here is an example of code that could cause a user prompt. I know I can change this code to throw an exception, but this is one example of many, and at this point I would rather not change all the code; instead I want an alternate approach that fails the script whenever there is a prompt for user input, like below:
function Show-Example
{
param
(
[Parameter(Mandatory)]
[ValidateNotNullOrEmpty()]
[String]
$Text
)
#Do something
}
Please note that in the above code, [Parameter(Mandatory)] will prompt the user for input when the caller does not pass the parameter. I have several instances of this situation.
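One approach that seems to fit without touching the functions (assuming PowerShell Core's pwsh is on the PATH; the script name is hypothetical): launch the script in non-interactive mode, where any attempt to prompt fails with an error instead of blocking.

# -NonInteractive makes prompts (including mandatory-parameter prompts)
# fail with an error rather than waiting for input
pwsh -NonInteractive -File .\Invoke-BatchJob.ps1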
I'm having the same issue!
In my case I'm executing a .exe from PowerShell, and I had to either make it receive a newline when it requested input or disable the input request in the executable itself.
I'm trying to use Putty's plink.exe as part of a Powershell script, and am having trouble teeing the output.
Some of the commands invoke an interactive response (e.g., entering a password). Specifically, I'm testing against an Isilon.
Example code:
$command = '&"C:\Program Files\Putty\plink.exe" root@10.0.0.141 -pw "password" -t -batch "isi auth users create testuser --set-password"'
iex $command
Expected result:
I get a prompt password:
I enter the password
I get a prompt confirm:
I enter the password again
Command ends
If I try to tee the output, using iex $command | tee-object -variable result or even just redirect with iex $command *>test.log, the prompt text doesn't show up until after I've responded to it. While still technically functional, if you don't know exactly what prompt to expect, it's useless.
I've tried using Start-Transcript, but that doesn't capture the output at all. I've also tried using plink's -sshlog argument, but that logs way too much, in a less than readable format.
Is there any way to have stdout be unbuffered in the console, and also have it stored in a variable?
To answer some potential questions:
- This is to be run in an environment that doesn't allow modules, so I can't use Posh-SSH.
- The PowerShell version available isn't new enough to use the built-in OpenSSH functionality.
This is all about redirecting streams.
When you use redirection, all output is taken from the streams and written to the file. When you execute:
Write-Host "Some Text" *>out.txt
You don't see any output and it is all redirected to the file.
Key Note: Redirection works (simplifying a little) on a line-by-line basis, because the redirection writes to the file one line at a time.
Similarly, when you use Tee-Object, all output is redirected from the stream and down the pipeline to the Tee-Object cmdlet. Tee-Object takes the input and writes it both to the variable/file you want and to the screen. This happens after the input has been gathered and processed.
This means that both redirection and Tee-Object work on a line-by-line basis. That makes sense: it is hard to deal with things like deleted characters, cursor movement, and dynamically edited text while trying to maintain an open file at the same time. They are designed for one-way output, emitted once the statement is complete.
In this case, when running it interactively, the password: prompt is written to the screen and you can respond.
When redirecting/teeing the output, the password: prompt text is redirected and buffered, awaiting your response. This makes sense because the statement has not completed yet: you don't want to send half a statement that could still change, or half an object, down the pipeline. Only after you complete the statement (e.g., entering the password + Enter) is the whole statement passed down the stream/pipeline. Once the whole statement is sent, it is redirected/teed and can be displayed.
@Bill_Stewart is correct, in the sense that you should pick either an interactive prompt or a fully automated solution.
Edit: To add some more information from comments.
If we use Tee-Object, it relies on the pipeline. Pipelines can only pass complete objects down the pipeline (e.g., complete strings, including the newline). Pipelines have to interact with other commands like ForEach-Object or Select-Object, and those can't handle receiving incomplete data. That's how the PowerShell console works, and you can't change it.
Similarly, redirection works line by line; I will explain the underlying reason in a moment.
So, if you want to interact with it character by character, then you are dealing with streams. And if you want to deal with streams directly, it's 100 times more complicated, because you can't use the convenience of the PowerShell console: you have to run the process manually and handle all the input and output yourself.
To start, you have to launch the process manually. To do this we use the System.Diagnostics.Process class. The pseudocode looks something like this:
$p = [System.Diagnostics.Process]::New()
$p.StartInfo.RedirectStandardOutput = $true
$p.StartInfo.RedirectStandardError = $true
$p.StartInfo.RedirectStandardInput = $true
$p.StartInfo.UseShellExecute = $false
#$p.StartInfo.CreateNoWindow = $true
$p.StartInfo.FileName = "plink.exe"
$p.StartInfo.Arguments = 'root@10.0.0.141 -pw "password" -t -batch "isi auth users create testuser --set-password"'
$p.EnableRaisingEvents = $true
....
We essentially create the process and specify that we are going to redirect stdout (StartInfo.RedirectStandardOutput = $true), as well as stdin, to something we handle ourselves. How do we know when to read the data? The class has the Process.OutputDataReceived event. You bind to this event to read incoming data. But:
The OutputDataReceived event indicates that the associated Process has
written a line, terminating with a newline character, to its
redirected StandardOutput stream.
So even the Process class revolves around newlines for streaming data. This is why even redirects (*>) work on a line-by-line basis. PowerShell, cmd, etc. all use the Process class as the basis for running processes. They all bind to this same event and these methods to do their processing. Hence, everything revolves around newlines and statement completion.
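For completeness, a hedged sketch of that event-based, line-at-a-time reading (continuing the $p object from the pseudocode above):

# Fires once per complete line written to the redirected stdout
Register-ObjectEvent -InputObject $p -EventName OutputDataReceived -Action {
    if ($null -ne $EventArgs.Data) {   # $null signals end of stream
        Write-Host $EventArgs.Data
    }
} | Out-Null
$p.Start() | Out-Null
$p.BeginOutputReadLine()   # begin asynchronous, line-based reads
$p.WaitForExit()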
(big breath) So. You still want to interactively work with things one character at a time? Well, then you can't use the convenience of events. You will have to fall back to using a stream reader and binding directly to the Process.StandardOutput property. Unfortunately, this is where I stop and say that accomplishing this is beyond the scope of SO, and will require much more research.
I am writing a script that presents the user with a menu of functions, but I also want to be able to run the script automatically from Task Scheduler, which would mean skipping the menu portion. Is there a way to do this with flags or arguments when starting the script (like "script.ps1 -auto" to skip the code containing the menu, or just "script.ps1" to start it normally)?
I've performed internet searches for this but have not yet found anything that I think is applicable. I'm not even sure whether this is possible, given the lack of information I've found (or not found).
script.ps1
script.ps1 -auto
You can use the [switch] parameter type in your param block.
param( [switch] $auto )

if ($auto) {
    # code to run when the -auto switch is passed (skip the menu)
}
else {
    # code to show the interactive menu
}
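The two invocations from the question would then behave differently:

.\script.ps1          # shows the interactive menu
.\script.ps1 -auto    # skips the menu, e.g. when run from Task Scheduler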
See also this answer on SO, on how to handle command-line parameters with PowerShell.