catch error from param in Powershell without using try-catch - powershell

I'm writing a PowerShell script that calls a file-conversion function (to execute an ANYTRAN file).
My boss told me to put param() inside a try-catch, but that seems to cause an error.
How, then, can I catch the error from param()?
I think it may be possible to check with an if statement in the parent shell.
Please give me some advice.
Below is the code.
$ErrorActionPreference = "Stop"
try{
#-----------------------------------------------------------
# Initialization
#-----------------------------------------------------------
#---# Define common environment variables
#---& ".\commonEnv.ps1"
# Include common functions
. (Resolve-Path ".\commonFunc.ps1").path
# Get parameters
Param(
$inFile
,$outFile
,$flgZeroByte
)

Your issue is not in parameter passing. param(), unless its syntax is incorrect, logically cannot fail on its own, and even if it could, you could only handle that failure with error handling at a scope higher than the one where the parameters are defined. Note also that param() must be the first statement of a script or function body, which is why placing it inside a try block raises an error.
function sc1 {
param ( $v_f = '' )
process {
Write-Host $v_f
}
}
sc1 '& 2'
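Concretely, the usual approach is to declare param() as the first statement of the called script and catch binding errors in the parent script. A minimal sketch, assuming the conversion script is saved as convert.ps1 (file and parameter names are illustrative):
# convert.ps1 -- param() must be the first executable statement, so it sits outside any try block
param(
    [ValidateNotNullOrEmpty()][string]$inFile,
    [ValidateNotNullOrEmpty()][string]$outFile,
    [string]$flgZeroByte
)
$ErrorActionPreference = "Stop"
# ... conversion logic ...

# caller.ps1 -- parameter-binding failures thrown by convert.ps1 surface here, in the parent scope
try {
    .\convert.ps1 -inFile '' -outFile 'out.dat'   # empty value triggers a binding error
}
catch {
    Write-Host "Parameter error: $($_.Exception.Message)"
}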

Related

How can I ensure Dispose() is called on an advanced function's local variable on stop signal?

I have noticed that objects implementing IDisposable in advanced functions aren't reliably disposed of when a "stop" signal (e.g. pressing CTRL+C) is sent during execution. This is a pain when the object holds a handle to, for example, a file. If the stop signal is received at an inopportune time, the handle doesn't get closed and the file remains locked until the PowerShell session is closed.
Consider the following class and functions:
class f : System.IDisposable {
Dispose() { Write-Host 'disposed' }
}
function g {
param( [Parameter(ValueFromPipeline)]$InputObject )
begin { $f = [f]::new() }
process {
try {
$InputObject
}
catch {
$f.Dispose()
throw
}
}
end {$f.Dispose()}
}
function throws {
param ( [Parameter(ValueFromPipeline)] $InputObject )
process { throw 'something' }
}
function blocks {
param ( [Parameter(ValueFromPipeline)] $InputObject )
process { Wait-Event 'bogus' }
}
Imagine $f holds a handle to a file and releases it when its Dispose() method is called. My goal is that the lifetime of $f matches the lifetime of g. $f is disposed correctly when g is invoked in each of the following ways:
g
'o' | g
'o' | g | throws
I can tell as much because each of these outputs disposed.
When the stop signal is sent while execution is occurring downstream of g, however, $f is not disposed. To test that, I invoked
'o' | g | blocks
which blocks at the Wait-Event inside blocks, then I pressed Ctrl+C to stop execution. In that case, Dispose() does not seem to get called (or, at least disposed is not written to the console).
In C# implementations of such functions it is my understanding that StopProcessing() gets called on a stop signal to do such cleanup. However, there seems to be no analog to StopProcessing available for PowerShell implementations of advanced functions.
How can I ensure that $f is disposed in all cases including a stop signal?
You can't if the function accepts pipeline input.
I don't think a robust way of achieving this is possible if the function accepts pipeline input. The reason is that any of the following could occur while code is executing upstream in the pipeline:
break, continue, or throw
terminating error
stop signal received
When these occur upstream, no part of the function can be caused to intervene. The begin{} and process{} blocks have either run to completion or not run at all, and the end{} block may or may not be run. The closest to an on-point solution I have found is the following:
function g {
param (
[Parameter(ValueFromPipeline)]
$InputObject
)
begin { $f = [f]::new() } # The local IDisposable is created when the pipeline is established.
process {
try
{
# flags to keep track of why finally was run
$success = $false
$caught = $false
$InputObject # output an object to exercise the pipeline downstream
# if we get here, nothing unusual happened downstream
$success = $true
}
catch
{
# we get here if an exception was thrown
$caught = $true
# !!!
# This is bad news. It's possible the exception will be
# handled by an upstream process{} block. The pipeline would
# survive and the next invocation of process{} would occur
# after $f is disposed.
# !!!
$f.Dispose()
# rethrow the exception
throw
}
finally
{
# !!!
# This finally block is not invoked when the PowerShell instance receives
# a stop signal while executing code upstream in the pipeline. In that
# situation cleanup $f.Dispose() is not invoked.
# !!!
if ( -not $success -and -not $caught )
{
# dispose only if finally{} is the only block remaining to run
$f.Dispose()
}
}
}
end {$f.Dispose()}
}
However, per the comments there are still cases where $f.Dispose() is not invoked. You can step through this working example that includes such cases.
Consider a pattern like usingObject {} instead.
If we limit usage to the case where the function responsible for cleanup does not accept pipeline input, then we can factor out the lifetime-management logic into a helper function similar to C#'s using block; call it usingObject. With such a helper, g can be substantially simplified while still achieving robust invocation of .Dispose().
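A minimal sketch of what such a usingObject helper might look like, assuming it takes a factory script block and a body script block and exposes the created object to the body as $_ (parameter names are illustrative):
function usingObject {
    param(
        [Parameter(Mandatory)][scriptblock]$Factory,
        [Parameter(Mandatory)][scriptblock]$Body
    )
    $object = & $Factory
    try {
        # ForEach-Object binds the created object to $_ inside the body block
        $object | ForEach-Object -Process $Body
    }
    finally {
        # runs on normal completion, on terminating errors, and on a stop signal
        if ($object -is [System.IDisposable]) { $object.Dispose() }
    }
}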
# refactored function g
function g {
param (
[Parameter(ValueFromPipeline)]
$InputObject,
[Parameter(Mandatory)]
[f]
$f
)
process {
$InputObject
}
}
# usages of function g
usingObject { [f]::new() } {
g -f $_
}
usingObject { [f]::new() } {
'o' | g -f $_
}
try
{
usingObject { [f]::new() } {
'o' | g -f $_ | throws
}
}
catch {}
usingObject { [f]::new() } {
'o' | g -f $_ | blocks
}
Seems like you can just add a finally{} block and dispose it there. You might also want to consider setting your ErrorActionPreference, since you are doing custom error handling.
$ErrorActionPreference = 'SilentlyContinue'
try
{
try
{
1/0
}
catch
{
throw New-Object System.Exception("Exception!")
}
finally
{
"You can dispose here!"
}
}
catch
{
$_.Exception.Message | Write-Output
}

Multiple powershell switch parameters - can it be optimized?

I am trying to write a simple wrapper that accepts one parameter for the output.
This is how it looks now
function Get-data{
param (
[switch]$network,
[switch]$profile,
[switch]$server,
[switch]$devicebay
)
if ($network.IsPresent) { $item = "network"}
elseif ($profile.IsPresent) {$item = "profile"}
elseif ($server.IsPresent) {$item = "server"}
elseif ($devicebay.IsPresent){$item = "devicebay"}
$command = "show $item -output=script2"
}
Clearly this could be optimized, but I am struggling to wrap my head around how to achieve it. Is there some easy way to ensure that only a single parameter is accepted and used, without resorting to multiple elseif statements?
I would also like to provide an array of parameters instead of doing it the way it is done at the moment.
Another thing you could do instead of all those switch parameters is to use a [ValidateSet]
function Get-Data{
[cmdletbinding()]
param(
[Parameter(Mandatory=$true)]
[ValidateSet('Network','Profile','Server','DeviceBay')]
[string]$Item
)
Switch ($Item){
'network' {'Do network stuff'}
'profile' {'Do profile stuff'}
'server' {'Do server stuff'}
'devicebay' {'Do devicebay stuff'}
}
}
Probably not the most elegant solution, but using parameter sets makes PowerShell do some of the work for you:
#requires -version 2.0
function Get-data {
[cmdletbinding()]
param(
[parameter(parametersetname="network")]
[switch]$network,
[parameter(parametersetname="profile")]
[switch]$profile,
[parameter(parametersetname="server")]
[switch]$server,
[parameter(parametersetname="devicebay")]
[switch]$devicebay
)
$item = $PsCmdlet.ParameterSetName
$command = "show $item -output=script2"
}
This example will error out if you don't provide one of the switches, but you could probably provide an extra switch that does nothing or errors more gracefully if you want to account for that case (see the sketch below).
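One way to handle the no-switch case is to declare a default parameter set and fall through to a warning instead of a binding error. A sketch; the "none" set name is arbitrary:
function Get-data {
    [cmdletbinding(DefaultParameterSetName="none")]
    param(
        [parameter(parametersetname="network")]
        [switch]$network,
        [parameter(parametersetname="profile")]
        [switch]$profile,
        [parameter(parametersetname="server")]
        [switch]$server,
        [parameter(parametersetname="devicebay")]
        [switch]$devicebay
    )
    if ($PsCmdlet.ParameterSetName -eq "none") {
        Write-Warning "No option specified"
        return
    }
    $item = $PsCmdlet.ParameterSetName
    $command = "show $item -output=script2"
}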
You can add the [cmdletbinding()] attribute so you get $PSBoundParameters, and use that to drive a switch statement:
function Get-data{
[cmdletbinding()]
param (
[switch]$network,
[switch]$profile,
[switch]$server,
[switch]$devicebay
)
Switch ($PSBoundParameters.GetEnumerator().
Where({$_.Value -eq $true}).Key)
{
'network' { 'Do network stuff' }
'profile' { 'Do profile stuff' }
'server' { 'Do server stuff' }
'devicebay' { 'Do devicebay stuff' }
}
}
Since you want only one switch to be enabled, an enum might help you.
This way, you're not using a switch but a standard parameter - still, the user of the cmdlet can use TAB to autocomplete the values that may be entered.
Just set the type of the parameter to your enum.
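A minimal sketch of the enum approach, assuming PowerShell 5.0 or later for the enum keyword (type and parameter names are illustrative):
enum ItemType {
    Network
    Profile
    Server
    DeviceBay
}

function Get-Data {
    [cmdletbinding()]
    param(
        [Parameter(Mandatory)]
        [ItemType]$Item   # TAB cycles through the enum values
    )
    $command = "show $(([string]$Item).ToLower()) -output=script2"
    $command
}

Get-Data -Item Server   # -> show server -output=script2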

Pipeline parameter interferes with $args array

I'm trying to make use of the $args array with a pipeline parameter.
The function expects an arbitrary number of parameters (e.g. param0) following the first, pipelined parameter:
function rpt-params {
param (
[Parameter(ValueFromPipeline=$true,Position=0,Mandatory=$true)][CrystalDecisions.CrystalReports.Engine.ReportDocument]$reportDocument
)
try {
write-host "count: " $args.count
#TODO process args
}
catch [Exception] {
write-host $_.Exception
}
finally {
return $reportDocument
}
}
Attempts to call the function produce an error that reads "rpt-params : A parameter cannot be found that matches parameter name 'param0'.":
...
# syntax 0
rpt-params $rpt -param0 "mb-1" -param1 "me-1"
...
...
# syntax 1; explicitly naming the first parameter
rpt-params -reportDocument $rpt -param0 "mb-1" -param1 "me-1"
...
Is my syntax the issue or is it related to using a pipelined parameter?
Create another parameter, call it something like $rest, and decorate it with [Parameter(ValueFromRemainingArguments = $true)].
When you use [cmdletbinding()] or [Parameter()], which is the case here, your function turns into an advanced function. An advanced function can only take the arguments that are specified under param() and no more. To make your function act like before, as Keith recommends, you'll need to add [Parameter(ValueFromRemainingArguments = $true)].
For Example:
function rpt-params {
param (
[Parameter(ValueFromPipeline=$true,Position=0,Mandatory=$true)]
[CrystalDecisions.CrystalReports.Engine.ReportDocument]$reportDocument,
[Parameter(ValueFromRemainingArguments=$true)]$args
)
try {
write-host "count: " $args.count
#TODO Now args can have all remaining values
}
catch [Exception] {
write-host $_.Exception
}
finally {
return $reportDocument
}
}

Dot-sourcing functions from file to global scope inside of function

I want to import an external function from a file without converting it to a module (we have hundreds of file-per-function scripts, so treating them all as modules is overkill).
Here is an explanation with code. Please note that I have some additional logic in Import-Function, such as prepending the scripts' root folder and checking that the file exists (throwing a special error if not), to avoid duplicating that code in each script that requires this kind of import.
C:\Repository\Foo.ps1:
Function Foo {
Write-Host 'Hello world!'
}
C:\InvocationTest.ps1:
# Wrapper func
Function Import-Function ($Name) {
# Checks and exception throwing are omitted
. "C:\Repository\$name.ps1"
# Foo function can be invoked in this scope
}
# Wrapped import
Import-Function -Name 'Foo'
Foo # Exception: The term 'Foo' is not recognized
# Direct import
. "C:\Repository\Foo.ps1"
Foo # 'Hello world!'
Is there any trick to dot-source into the global scope?
You can't make the script run in a parent scope, but you can create a function in the global scope by explicitly scoping it.
Would something like this work for you?
# Wrapper func
Function Import-Function ($Path) {
# Checks and exception throwing are omitted
$script = Get-Content $Path -Raw
# Rewrite top-level function definitions so they are created in the global scope
$script = $script -replace '(?m)^function\s+((?!global[:]|local[:]|script[:]|private[:])[\w-]+)', 'function Global:$1'
. ([scriptblock]::Create($script))
}
The above regex only targets root functions (functions that are left-justified, with no white space to the left of the word function). In order to target all functions, regardless of spacing (including sub-functions), change the -replace line to:
$script = $script -replace '(?m)^\s*function\s+((?!global[:]|local[:]|script[:]|private[:])[\w-]+)', 'function Global:$1'
You can change the functions that are defined in the dot-sourced files so that they are defined in the global scope:
function Global:Get-SomeThing {
# ...
}
When you dot source that from within a function, the function defined in the dot sourced file will be global. Not saying this is best idea, just another possibility.
Just dot-source the function as well:
. Import-Function -Name 'Foo'
Foo # Hello world!
I can't remember a way to run a function in global scope right now. You could do something like this:
$name = "myscript"
$myimportcode= {
# Checks and exception throwing are omitted
. .\$name.ps1
# Foo function can be invoked in this scope
}
Invoke-Expression -Command $myimportcode.ToString()
When you convert the script block to a string with .ToString() and run it via Invoke-Expression, the variable will expand.

Calling a local function from a dot sourced file

I have a main script that I am running. It reads through a directory filled with other PowerShell scripts, dot-sources them all, and runs a predefined function in each, named after the first portion of the dot-delimited file name. Example:
Run master.ps1
Master.ps1 dot sources .\resource\sub.ps1
Sub.ps1 has defined a function called 'dosub'
Master.ps1 runs 'dosub' using Invoke-Expression
Also defined in sub.ps1 is the function 'saysomething'. Implemented in 'dosub' is a call to 'saysomething'.
My problem is I keep getting the error:
The term 'saysomething' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again.
Why can't the function 'dosub' find the function 'saysomething', which is defined in the same file?
master.ps1:
$handlersDir = "handlers"
$handlers = @(Get-ChildItem $handlersDir)
foreach ( $handler in $handlers ) {
. .\$handlersDir\$handler
$fnParts = $handler.Name.split(".")
$exp = "do" + $fnParts[0]
Invoke-Expression $exp
}
sub.ps1:
function saysomething() {
Write-Host "I'm here to say something!"
}
function dosub() {
saysomething
Write-Host "In dosub!"
}
Your code works on my system. However you can simplify it a bit:
$handlersDir = "handlers"
$handlers = @(Get-ChildItem $handlersDir)
foreach ( $handler in $handlers )
{
. .\$handlersDir\$handler
$exp = "do" + $handler.BaseName
Write-Host "Calling $exp"
& $exp
}
Note the availability of the BaseName property. You also don't need to use Invoke-Expression. You can just call the named command using the call (&) operator.
What you have given works as needed. You probably don't have the directories etc. set up properly on your machine, or you are running something else and posting different (working!) code here.
You can also make the following corrections:
. .\$handlersDir\$handler
Instead of the above you can do:
. $handler.fullname
Instead of splitting the filename you can do:
$exp = "do" + $handler.basename