How to prevent a PowerShell script from prompting for user input

I am using PowerShell Core.
My PowerShell script is supposed to run unattended as part of a batch process. It must never stop and ask for user input; if that happens, the batch process hangs indefinitely.
I cannot test the script against every permutation of conditions that might cause a user prompt.
Is there any way to force an exception whenever the script stops and asks for user input?
Here is an example of code that could cause a user prompt. I know I can change this code to throw an exception, but this is one example of many, and at this point I would rather not change all the code. I am looking for an alternative approach that fails the script whenever it prompts for user input, as below:
function Show-Example
{
param
(
[Parameter(Mandatory)]
[ValidateNotNullOrEmpty()]
[String]
$Text
)
#Do something
}
Please note that in the code above, [Parameter(Mandatory)] will prompt the user for input when the caller does not pass the parameter. I have several situations like this.
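For example, a call that simply omits the argument reproduces the problem:
# With no -Text argument supplied, PowerShell prompts interactively for the
# mandatory parameter, and the unattended batch run then waits on that prompt.
Show-Example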

I'm having the same issue!
In my case I'm executing an .exe from PowerShell, and I had to either feed it a newline when it requested input or disable the input request in the executable itself.

Related

Can I tee unbuffered program output in Powershell?

I'm trying to use Putty's plink.exe as part of a Powershell script, and am having trouble teeing the output.
Some of the commands invoke an interactive response (eg: entering password). Specifically, I'm testing against an Isilon.
Example code:
$command = '&"C:\Program Files\Putty\plink.exe" root@10.0.0.141 -pw "password" -t -batch "isi auth users create testuser --set-password"'
iex $command
Expected result:
I get a prompt password:
I enter the password
I get a prompt confirm:
I enter the password again
Command ends
If I try to tee the output, using iex $command | tee-object -variable result or even just redirect with iex $command *>test.log, the prompt text doesn't show up until after I've responded to it. While still technically functional, if you don't know exactly what prompt to expect, it's useless.
I've tried using Start-Transcript, but that doesn't capture the output at all. I've also tried using plink's -sshlog argument, but that logs way too much, in a less than readable format.
Is there any way to have stdout be unbuffered in the console, and also have it stored in a variable?
To answer some potential questions:
-This is to be run in an environment that doesn't allow modules, so can't use Posh-SSH.
-The Powershell version available isn't new enough to use the built-in openssh functionality.
This is all about redirecting streams.
When you use redirection, all outputs are redirected from the streams, and passed to be written to file. When you execute:
Write-Host "Some Text" *>out.txt
You don't see any output and it is all redirected to the file.
Key note: redirection works (to simplify) on a line-by-line basis, because it writes to the file one line at a time.
Similarly, when you use Tee-Object, all output is redirected from the stream and sent down the pipeline to the Tee-Object cmdlet. Tee-Object takes that input and writes it both to the variable/file you want and to the screen. This happens after the input has been gathered and processed.
This means that both redirection and Tee-Object work on a line-by-line basis. It makes sense that they work this way, because it is hard to deal with things like deleting characters, moving around, and editing text dynamically while trying to maintain an open file at the same time. They are designed for one-way output, emitted once the statement is complete.
In this case, when running it interactively, the password: prompt is written to the screen and you can respond.
When redirecting or teeing the output, the password: prompt text is redirected and buffered, awaiting your response. This makes sense because the statement has not completed yet; you don't want to send half a statement that could still change, or half an object, down the pipeline. Only after you complete the statement (e.g. typing the password and pressing Enter) is the whole statement passed down the stream/pipeline. Once the whole statement is sent, it is redirected or teed and can be displayed.
@Bill_Stewart is correct, in the sense that you should pick either an interactive prompt, or a fully automated solution.
Edit: To add some more information from comments.
If we use Tee-Object, it relies on the pipeline, and the pipeline can only pass complete objects (e.g. complete strings, including the newline). Pipelines have to interact with other commands like ForEach-Object or Select-Object, which can't handle being passed incomplete data. That's how the PowerShell console works, and you can't change it.
Similarly, redirection works line by line; I will explain the underlying reason in a moment.
So, if you want to interact with the process character by character, you are dealing with streams. And if you want to deal with streams directly, it's 100 times more complicated, because you can't use the convenience of the PowerShell console: you have to run the process manually and handle all of the input and output yourself.
To start, you have to launch the process manually. To do this we use the System.Diagnostics.Process class. The pseudocode looks something like this:
$p = [System.Diagnostics.Process]::New()
$p.StartInfo.RedirectStandardOutput = $true
$p.StartInfo.RedirectStandardError = $true
$p.StartInfo.RedirectStandardInput = $true
$p.StartInfo.UseShellExecute = $false
#$p.StartInfo.CreateNoWindow = $true
$p.StartInfo.FileName = "plink.exe"
$p.StartInfo.Arguments = 'root@10.0.0.141 -pw "password" -t -batch "isi auth users create testuser --set-password"'
$p.EnableRaisingEvents = $true
....
We essentially create the process, specify that we are going to redirect the stdout (StartInfo.RedirectStandardOutput = $true), as well as the stdin to something else for us to handle. How do we know when to read the data? Well, the class has the Process.OutputDataReceived Event. You bind to this event to read in the additional data. But:
The OutputDataReceived event indicates that the associated Process has
written a line, terminating with a newline character, to its
redirected StandardOutput stream.
(from the Remarks section of the OutputDataReceived documentation)
So even the Process class revolves around newlines for streaming data. This is why even redirects (*>) work on a line-by-line basis. PowerShell, cmd, and so on all use the Process class as the basis for running processes; they all bind to these same events and methods to do their processing. Hence, everything revolves around newlines and statement completion.
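To make the line-oriented behaviour concrete, here is a rough sketch (reusing the hypothetical $p from the pseudocode above) of consuming that event from PowerShell with Register-ObjectEvent; note that the action block only fires once plink has emitted a complete, newline-terminated line:
$job = Register-ObjectEvent -InputObject $p -EventName OutputDataReceived -Action {
    if ($null -ne $EventArgs.Data) {   # Data is $null when the stream closes
        Write-Host $EventArgs.Data     # echo the completed line
    }
}
$null = $p.Start()
$p.BeginOutputReadLine()               # start the asynchronous, line-based reads
$p.WaitForExit()
Unregister-Event -SourceIdentifier $job.Name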
(big breath) So. You still want to work with things interactively, one character at a time? Well then, you can't use the convenience of events. You will have to fall back to using a stream reader and binding directly to the Process.StandardOutput property. Unfortunately, this is where I stop and say that accomplishing this is beyond the scope of SO, and will require much more research.

How to skip part of PowerShell script by starting script with flags or arguments

I am writing a script that presents the user with a menu of functions, but I also want to be able to run the script automatically from Task Scheduler, which would mean skipping the menu portion. Is there a way to do this with flags or arguments when starting the script (like "script.ps1 -auto" to skip the code containing the menu, or just "script.ps1" to start it normally)?
I've performed internet searches for this, but have not yet found anything that I think is applicable. I'm not even sure if this is possible given the lack of information I've found (or not found).
script.ps1
script.ps1 -auto
I haven't gotten to the point where error messages are applicable.
You can use the [switch] parameter type in your param block.
param( [switch] $auto )
if ($auto) {
# here goes the code if the parameter auto is set
}
else {
}
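As a rough illustration of how this might fit the scenario in the question (the function names here are placeholders, not from the asker's script):
# script.ps1 - run "script.ps1 -auto" from Task Scheduler, "script.ps1" interactively
param( [switch] $auto )

function Invoke-ScheduledWork {
    # the unattended work the scheduled task should perform
}

function Show-Menu {
    # present the menu of functions and act on the user's choice
}

if ($auto) {
    Invoke-ScheduledWork    # menu is skipped entirely
}
else {
    Show-Menu
}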
See also this answer on SO, on how to handle command-line parameters with PowerShell.

PowerShell " Parameter set cannot be resolved" When Calling Other Script

I'm struggling to call a second script in one of my PowerShell scripts at the moment using Invoke-Expression. It's currently producing an error:
"Parameter set cannot be resolved using the specified named parameters."
Annoyingly, it works fine for one switch (being -ServerDriveReport), but doesn't work for the other.
The first script (called DriveReport.ps1) is like:
[cmdletbinding()]
Param(
    [Parameter(ParameterSetName="ServerDriveReport")]
    [switch]$ServerDriveReport,
    [Parameter(ParameterSetName="VMDriveReport")]
    [switch]$VMDriveReport
)
If ($ServerDriveReport) {
    Invoke-Expression "& 'C:\Scripts\Drive Report\EmailDriveReport.ps1' -ServerDriveReport"
}
If ($VMDriveReport) {
    Invoke-Expression "& 'C:\Scripts\Drive Report\EmailDriveReport.ps1' -VMDriveReport"
}
The "EmailDriveReport.ps1" script is like:
[cmdletbinding()]
Param(
    [Parameter(ParameterSetName="ServerDriveReport")]
    [switch]$ServerDriveReport,
    [Parameter(ParameterSetName="VMDriveReport")]
    [switch]$VMDriveReport
)
If ($ServerDriveReport) {
    # Send an email containing the server drive report
}
If ($VMDriveReport) {
    # Send an email containing the VM drive report
}
When running "DriveReport.ps1 -ServerDriveReport" everything works as expected. But when running "DriveReport.ps1 -VMDriveReport", that's when I get the aforementioned error message.
Has anyone seen this before?
Any help would be greatly appreciated!
Without attempting to solve your immediate problem (which is not obvious to me from the code posted), consider using the automatic $PSBoundParameters variable via splatting to pass the parameters through to the 2nd script:
[cmdletbinding()]
Param(
    [Parameter(ParameterSetName="ServerDriveReport")]
    [switch]$ServerDriveReport,
    [Parameter(ParameterSetName="VMDriveReport")]
    [switch]$VMDriveReport
)
& 'C:\Scripts\Drive Report\EmailDriveReport.ps1' @PSBoundParameters
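With that change, a call such as .\DriveReport.ps1 -VMDriveReport forwards exactly the switches the caller supplied, because $PSBoundParameters contains only the parameters that were actually bound.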
Generally, Invoke-Expression should be avoided, because there are usually more robust solutions available and because it presents a security risk if invoked on untrusted strings.
Thanks for the help!
I managed to resolve this by carefully going over the script and finding that one of the Else statements was calling the file incorrectly. I've now changed this to:
& 'C:\Scripts\Drive Report\EmailDriveReport.ps1'
as suggested.

PowerShell - How to call a cmdlet in a non-interactive way in a PowerShell script?

In my PowerShell script, I am calling a cmdlet (say "Connect-Database"). If the cmdlet needs an additional parameter, it prompts the user to supply that value (say, "Connect-Database" cannot get credentials from the registry, so it prompts for "Username" and "Password" on the PowerShell console and expects the user to type them in).
I looked into the code of the PowerShell cmdlet. I found that it is using "CommandInvocationIntrinsics" to prompt the user (via the "NewScriptBlock" and then "Invoke" methods of "CommandInvocationIntrinsics").
Since I am calling this cmdlet from a PowerShell script, I want any such prompting to be suppressed and an exception thrown instead.
The code is something like this -
try
{
    $db = Connect-Database <databasename>
    # If a username/password prompt would occur, it should be turned into an error
    # instead of being shown on the PowerShell console.
    if ($Error.Count -gt 0)
    {
        throw $Error[0]
    }
}
catch
{
    # do something
}
catch
{
#do something
}
My way of doing that is to first enumerate the mandatory parameters, and then bind them explicitly to $null. There is then no interaction, and an error is thrown instead.
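A rough sketch of that approach, using the asker's hypothetical Connect-Database cmdlet (the parameter discovery below assumes a single parameter set; adjust as needed):
# Find the mandatory parameters of the command...
$mandatory = (Get-Command Connect-Database).Parameters.Values | Where-Object {
    $_.Attributes | Where-Object { $_ -is [System.Management.Automation.ParameterAttribute] -and $_.Mandatory }
}

# ...and bind $null to each one explicitly. An explicitly bound $null fails
# parameter binding with an error rather than triggering an interactive prompt.
$nullArgs = @{}
foreach ($p in $mandatory) { $nullArgs[$p.Name] = $null }

try {
    Connect-Database @nullArgs
}
catch {
    # handle the failure non-interactively
}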
Easy solution, don't prompt. I feel that you should either always prompt (by using mandatory parameters), or never prompt during the execution of your command. If you don't have enough information, throw an error. If your command is being used interactively, the user can rerun with the correct information.
This is a part of the design of PowerShell and likely cannot be overcome.
If you are trying to be non-interactive, you should be ensuring that you have collected all the necessary information before trying to call the cmdlet (aka, you should not have missing parameters for your non-interactive calls). You are likely following an anti-pattern and should re-think your approach. Prompting at the beginning of your non-interactive script (aka, short startup interaction), having parameters for your non-interactive script (those will, in turn, prompt if missing when set to mandatory), or reading from a config file (and verifying the information first) are a few approaches. Something common you will notice is that they all involve failing fast/failing early, a very good design.
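As a rough sketch of the config-file variant (the settings file, its property names, and the -Credential parameter on the asker's hypothetical Connect-Database are all made up for illustration):
# Fail fast: validate everything read from the config file before calling the cmdlet,
# so a missing value becomes an error instead of an interactive prompt.
$config = Get-Content -Raw 'C:\jobs\db-settings.json' | ConvertFrom-Json

foreach ($name in 'DatabaseName', 'UserName', 'Password') {
    if (-not $config.$name) {
        throw "db-settings.json is missing '$name'; refusing to continue non-interactively."
    }
}

# Plain-text password used purely for illustration; protect real credentials appropriately.
$secure = ConvertTo-SecureString $config.Password -AsPlainText -Force
$cred   = [pscredential]::new($config.UserName, $secure)
$db     = Connect-Database $config.DatabaseName -Credential $cred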

Using an answer file with a PowerShell script

I have a PowerShell script with a number of 'params' at the start:
param(
[switch] $whatif,
[string] $importPath = $(Read-Host "Full path to import tool"),
[string] $siteUrl = $(Read-Host "Enter URL to create or update"),
[int] $importCount = $(Read-Host "Import number")
)
Is there any way I can run this against an answer file to avoid entering the parameter values every time?
I am not getting the reason for the question. All you have to do to call your script is something like:
.\script.ps1 -whatif -importPath import_path -siteUrl google.com -importCount 1
The Read-Host calls are there as defaults, executed (reading a value and assigning it to the parameter) only if you don't specify the values. As long as you have the above command (saved in a file so that you can copy and paste it into the console, run it from another script, or whatever), you don't have to enter the values again and again.
Start by setting the function or script up to accept pipeline input.
[CmdletBinding(SupportsShouldProcess=$True,ConfirmImpact='Low')]
param(
[Parameter(Mandatory=$True,ValueFromPipelineByPropertyName=$True)]
[string] $importPath,
[Parameter(Mandatory=$True,ValueFromPipelineByPropertyName=$True)]
[string] $siteUrl,
[Parameter(Mandatory=$True,ValueFromPipelineByPropertyName=$True)]
[int] $importCount
)
Notice that I removed your manually-created -whatif. No need for it - I'll get to it in a second. Also note that Mandatory=$True will make PowerShell prompt for a value if it isn't provided, so I removed your Read-Host.
Given the above, you could create an "answer file" that is a CSV file. Make an importPath column, a siteURL column, and an importCount column in the CSV file:
importPath,siteURL,importCount
"data","data",1
"x","y",2
Then do this:
Import-CSV my-csv-file.csv | ./My-Script
Assuming your script is My-Script.ps1, of course.
Now, to -whatif. Within the body of your script, do this:
if ($pscmdlet.shouldprocess($target)) {
# do whatever your action is here
}
This assumes you're doing something to $target, which might be a path, a computer name, a URL, or whatever. It's the thing you're modifying in your script. Put your modification actions/commands inside that if construct. Doing this, along with the SupportsShouldProcess() declaration at the top of the script, will enable -whatif and -confirm support. You don't need to code those parameters yourself.
What you're building is called an "Advanced Function," or if it's just a script then I guess it'd be an "Advanced Script." Utilizing pipeline input parameters in this fashion is the "PowerShell way of doing things."
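One caveat worth spelling out, as a sketch under the same assumptions as above: for the pipeline-by-property-name binding to run once per CSV row, the work belongs in a process block, which is also a natural place for the ShouldProcess check:
[CmdletBinding(SupportsShouldProcess=$True,ConfirmImpact='Low')]
param(
    [Parameter(Mandatory=$True,ValueFromPipelineByPropertyName=$True)]
    [string] $importPath,
    [Parameter(Mandatory=$True,ValueFromPipelineByPropertyName=$True)]
    [string] $siteUrl,
    [Parameter(Mandatory=$True,ValueFromPipelineByPropertyName=$True)]
    [int] $importCount
)
process {
    # runs once for every object piped in, i.e. once per CSV row
    if ($PSCmdlet.ShouldProcess($siteUrl)) {
        # do the import for this row here
    }
}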
To my knowledge, Powershell doesn't have a built-in understanding of answer files. You'll have to pass them in somehow or read them yourself from the answer file.
Wrapper. You could write another script that calls this script with the same parameters you want to use every time. You could also make a wrapper script that reads the values from the answer file, then pass them in.
Optional Parameters. Or you could change the parameters to use defaults that indicate no parameters were passed, then check for a file of a specific name to read values from. If the file isn't found, then prompt for the values.
If the format of the answer file is flexible, (i.e., you're only going to be using it with this Powershell script), you could get much closer to the behavior of an actual answer file by writing it as a Powershell script itself and dot-sourcing it.
if (Test-Path 'myAnswerFile') {
    . 'myAnswerFile'
    # process whatever was sourced from the answer file, if necessary
}
else {
    # prompt for values
}
It still requires removing the Read-Host calls from the parameters of the script.
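For illustration, such a dot-sourced answer file could be nothing more than a script that assigns the expected variables (the values here are placeholders):
# myAnswerFile - dot-sourced by the main script
$importPath  = 'C:\Tools\import.exe'
$siteUrl     = 'https://example.com/site'
$importCount = 1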
Following on from Joel, you could set up a different parameter set based around a switch such as -answerfile.
If that's set, the function will look for an answer file and parse through it - as he said, you'll need to do that yourself. If it's not set and the other parameters are, then the function is used with the parameters given. A minor benefit I see is that you can still have the parameters mandatory when used that way.
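A rough sketch of that idea, keeping the question's parameter names (the answer-file handling itself is still up to you):
[CmdletBinding()]
param(
    [Parameter(ParameterSetName='AnswerFile', Mandatory=$true)]
    [string] $answerfile,

    [Parameter(ParameterSetName='Explicit', Mandatory=$true)]
    [string] $importPath,
    [Parameter(ParameterSetName='Explicit', Mandatory=$true)]
    [string] $siteUrl,
    [Parameter(ParameterSetName='Explicit', Mandatory=$true)]
    [int] $importCount
)

if ($PSCmdlet.ParameterSetName -eq 'AnswerFile') {
    . $answerfile    # parse/dot-source the answer file yourself, as described above
}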
Matt