I have a dynamically loaded PowerShell script block whose parameters carry various metadata (Mandatory, type, default value, ...). I also have a function with its own parameter sets that internally calls the script block. Since the script block is selected dynamically based on other parameters, I'd like to let the user pass its parameters to my function and have them forwarded to the script block. To do that, I want to read the available parameters from the script block, which I can do using $block.Ast.ParamBlock.Parameters, and then create matching parameters on my function using DynamicParam.
Example:
Script block:
$block = {
param([Parameter(Mandatory)][string]$test = 10)
echo $test
}
Function:
function Fn {
    param($FnParam1, $FnParam2)
    dynamicparam {
        # what to write here?
    }
    end {
        & $block # how to efficiently forward the parameters here?
    }
}
It should be possible to call Fn like this:
Fn -FnParam1 value -FnParam2 value -test 20
However, DynamicParam parameters are created using RuntimeDefinedParameter and related classes, while the script block's AST exposes System.Management.Automation.Language.ParameterAst instances and other types from that namespace. I could manually create the DynamicParam parameters by converting the AST node by node, but it seems like there should be a simpler way to convert these automatically.
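For concreteness, here's a sketch of the node-by-node conversion I mean; it copies only the parameter name and static type plus a bare [Parameter()] attribute, and translating attribute arguments such as Mandatory or default values from the AST is exactly the tedious part I'd like to avoid:
dynamicparam {
    $dict = [System.Management.Automation.RuntimeDefinedParameterDictionary]::new()
    foreach ($p in $block.Ast.ParamBlock.Parameters) {
        $name  = $p.Name.VariablePath.UserPath   # e.g. 'test'
        $attrs = [System.Collections.ObjectModel.Collection[System.Attribute]]::new()
        $attrs.Add([System.Management.Automation.ParameterAttribute]::new())   # attribute arguments not copied here
        $dict.Add($name, [System.Management.Automation.RuntimeDefinedParameter]::new($name, $p.StaticType, $attrs))
    }
    return $dict
}
Forwarding could then filter $PSBoundParameters down to the names in that dictionary and splat the result onto & $block.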
Related
I'm refactoring some of my older PS scripts to a) improve them, b) clean them up, and c) modularize them.
In the script I'm working on now, there are 10-15 functions that operate on a specific directory - let's call it the work directory. Currently it's defined globally and loaded from a config file; it never changes after initialization (does that make it a constant?).
I want to wrap some of the functions in a separate module. The question is: should I rewrite them so the variable is passed explicitly as a parameter, or can I leave it as is, with the assumption that every script I use this module (library?) in will have this variable initialized? If the latter, how do I make sure the module can "detect" that the variable is uninitialized and throw an error?
And, last but not least, currently it's just a variable - should I use some specific construct to make it obvious that it's global and not meant to be modified?
should I rewrite them so the variable is passed explicitly as a parameter
As long as there's no legitimate use case for overriding it in a single call, I wouldn't pass it as a parameter.
If your functions are packaged as a module, I'd strongly recommend utilizing module-scoped variables rather than globals.
Assuming you're talking about a script module, this is as simple as:
Set-Variable -Scope Script -Name ModuleTargetDirectory -Value $config.TargetDirectory
from inside the module file or a module function that runs during import (the script: scope is the same as the module scope inside a module), and then in the consuming functions:
function Get-Something
{
# ...
$targetDirectory = $script:ModuleTargetDirectory
# ...
}
Or wrap the entire config storage in a private helper function:
# don't export this function
function Get-MyModuleConfig
{
# assuming we stored a dictionary or custom object with the config options in a module-scoped variable named `config`
return $script:config
}
And then always just call $config = Get-MyModuleConfig in the begin block of functions that need access to the config data.
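If you also want the module to fail loudly when the config was never initialized (as asked above), the helper is a natural place for that check. A minimal sketch, assuming the config lives in $script:config and that Initialize-MyModule is whatever hypothetical setup function populates it:
# don't export this function
function Get-MyModuleConfig
{
    if ($null -eq $script:config) {
        # fail fast instead of letting consumers hit a $null config later
        throw "Module config is not initialized; call Initialize-MyModule first."
    }
    return $script:config
}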
I've made a PowerShell script which validates some parameters. In the process of validation I need to create some strings. I also need these strings later in the script.
To avoid rebuilding the same strings again, can I reuse variables defined within validation blocks? Perhaps I can use functions in validation blocks somehow? Or maybe global variables? I'm not sure what's possible here, or what's good practice.
Example:
Test.ps1
Function Test {
    param(
        [string]
        [Parameter(Mandatory=$true)]
        $thing1,

        [string]
        [Parameter(Mandatory=$true)]
        $thing2,

        [string]
        [Parameter(Mandatory=$true)]
        [ValidateScript({
            $a = Get-A $thing1
            $b = Get-B $thing2
            $c = $a + $b
            $d = Get-D $c
            if(-not($d -contains $_)) {
                throw "$_ is not a valid value for the thing3 parameter."
            }
            return $true
        })]
        $thing3
    )
    # Here I'd like to use $c
    # At worst, calling Get-A and Get-B again may be expensive
    # Or it could just be annoying duplication of code
}
Bonus question, if this is possible, could I reuse those variables in a subsequent validation block?
You could use a [ref] variable.
Passing by reference means the function modifies the very variable the caller handed in, so you can have both a return value and a parameter that is updated by the execution of your function.
About Ref
You can pass variables to functions by reference or by value.
When you pass a variable by value, you are passing a copy of the data.
In the following example, the function changes the value of the
variable passed to it. In PowerShell, integers are value types so they
are passed by value. Therefore, the value of $var is unchanged outside
the scope of the function.
Function Test {
    Param($thing1, $thing2, [ref]$c)
    $c.Value = New-Guid
    return $true
}

$ThisIsC = $null   # the variable must exist before you take a [ref] to it
Test -c ([ref]$ThisIsC)
Write-Host $ThisIsC -ForegroundColor Green
Alternatively, you can use the script: or the global: scope.
For a simple script, the script: scope will quickly expose your variable. If you intend to distribute your function, a [ref] parameter might be easier on the end user, since it makes explicit that a reference is being passed and modified.
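Applied to the question's example, a sketch of the script-scope approach (this assumes, as the question's own code does, that Get-A/Get-B and the earlier parameters are visible inside the validation block):
[ValidateScript({
    # stash the expensive intermediate at script scope so the function
    # body (and any later validation block) can reuse it
    $script:c = (Get-A $thing1) + (Get-B $thing2)
    (Get-D $script:c) -contains $_
})]
$thing3

# later, in the function body:
# $c = $script:c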
See About Scopes documentation.
Scopes in PowerShell have both names and numbers. The named scopes
specify an absolute scope. The numbers are relative and reflect the
relationship between scopes.
Global: The scope that is in effect when PowerShell starts. Variables
and functions that are present when PowerShell starts have been
created in the global scope, such as automatic variables and
preference variables. The variables, aliases, and functions in your
PowerShell profiles are also created in the global scope.
Local: The current scope. The local scope can be the global scope or
any other scope.
Script: The scope that is created while a script file runs. Only the
commands in the script run in the script scope. To the commands in a
script, the script scope is the local scope.
Private: Items in private scope cannot be seen outside of the current
scope. You can use private scope to create a private version of an
item with the same name in another scope.
Numbered Scopes: You can refer to scopes by name or by a number that
describes the relative position of one scope to another. Scope 0
represents the current, or local, scope. Scope 1 indicates the
immediate parent scope. Scope 2 indicates the parent of the parent
scope, and so on. Numbered scopes are useful if you have created many
recursive scopes.
Environmental note: I'm currently targeting PowerShell 5.1 because 6 has unrelated limitations I can't work around yet.
In the PowerShell module I'm writing, there is one main function that's sort of a conglomeration of a bunch of smaller functions. The main function has a superset of the smaller functions' parameters. The idea is that calling the main function will call each smaller function with the necessary parameters specified on the main one. So for example:
function Main { [CmdletBinding()] param($A,$B,$C,$D)
Sub1 -A $A -B $B
Sub2 -C $C -D $D
}
function Sub1 { [CmdletBinding()] param($A,$B)
"$A $B"
}
function Sub2 { [CmdletBinding()] param($C,$D)
"$C $D"
}
Explicitly specifying the sub-function parameters is both tedious and error prone particularly with things like [switch] parameters. So I wanted to use splatting to make things easier. Instead of specifying each parameter on the sub-function, I'll just splat $PSBoundParameters from the parent onto each sub-function like this:
function Main { [CmdletBinding()] param($A,$B,$C,$D)
Sub1 @PSBoundParameters
Sub2 @PSBoundParameters
}
The immediate problem with doing this is that the sub-functions then start throwing an error for any parameter they don't have defined such as, "Sub1 : A parameter cannot be found that matches parameter name 'C'." If I remove the [CmdletBinding()] declaration, things work but I lose all the benefits of those subs being advanced functions.
So my current workaround is to add an additional parameter on each sub-function that uses the ValueFromRemainingArguments parameter attribute, like this:
function Sub1 { [CmdletBinding()]
param($A,$B,[Parameter(ValueFromRemainingArguments)]$Extra)
"$A $B"
}
function Sub2 { [CmdletBinding()]
param($C,$D,[Parameter(ValueFromRemainingArguments)]$Extra)
"$C $D"
}
Technically, this works well enough. The sub-functions get their specific params and the extras just get ignored. If I was writing this just for me, I'd move on with my life and be done with it.
But for a module intended for public consumption, there's an annoyance factor with that -Extra parameter being there. Primarily, it shows up in Get-Help output which means I have to document it even if just to say, "Ignore this."
Is there an extra step I can take to make that extra parameter effectively invisible to end users? Or am I going about this all wrong and there's a better way to allow for extra parameters on an advanced function?
My usual approach is to export only "wrapper" functions that call internal (i.e., not user-facing) functions in the module.
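A minimal sketch of that layout, assuming a script module (only Main is exported, so the catch-all -Extra parameter on the internal subs never shows up in the module's user-facing Get-Help output):
# MyModule.psm1
function Main { [CmdletBinding()] param($A,$B,$C,$D)
    Sub1 @PSBoundParameters
    Sub2 @PSBoundParameters
}

# internal: -Extra swallows whatever splatted parameters Sub1 doesn't declare
function Sub1 { [CmdletBinding()]
    param($A,$B,[Parameter(ValueFromRemainingArguments)]$Extra)
    "$A $B"
}

function Sub2 { [CmdletBinding()]
    param($C,$D,[Parameter(ValueFromRemainingArguments)]$Extra)
    "$C $D"
}

Export-ModuleMember -Function Main   # Sub1/Sub2 stay module-private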
Suppose I have a function into which a dependency and some parameters are injected like the following:
function Invoke-ACommandLaterOn
{
param
(
# ...
[string] $CommandName,
[object] $PipelineParams,
[object[]] $PositionalParams,
[hashtable]$NamedParams
# ...
)
Assert-ParameterBinding @PSBoundParameters
# ...
# Some complicated long-running call tree that eventually invokes
# something like
# $PipelineParams | & $CommandName @PositionalParams @NamedParams
# ...
}
I would like to immediately assert that binding of the parameters to $CommandName succeeds. That's what Assert-ParameterBinding is meant to do. I'm not exactly sure how to implement Assert-ParameterBinding, however.
Of course I could try to invoke $CommandName immediately, but in this case doing so has side-effects that cannot occur until a bunch of other long-running things are completed first.
How can I assert parameter binding to a function will succeed without invoking the function?
What if you did something like this (inside the Assert- function):
$cmd = Get-Command $CommandName
$meta = [System.Management.Automation.CommandMetadata]::new($cmd)
$proxy = [System.Management.Automation.ProxyCommand]::Create($meta)
$code = $proxy -ireplace '(?sm)(?:begin|process|end)\s*\{.*','begin{}process{}end{}'
$sb = [scriptblock]::Create($code)
$PipelineParams | & $sb @PositionalParams @NamedParams
I'm actually not sure if it will work with the positional params or with splatting two different sets, off the top of my head (and I didn't do much testing).
Explanation
I had a few thoughts. For one, parameter binding can be very complex. And in the case of a pipeline call, binding happens differently as different blocks are hit.
So it's probably a good idea to let PowerShell handle this, by essentially recreating the same function but with a body that does nothing.
So I went with the built-in way to generate a proxy function, since it takes care of all that messy work, then brutally replaced the body so that it doesn't actually call the original.
Ideally then, you'll be making a call that follows all the regular parameter binding process but in the end accomplishes nothing.
Wrapping that in a try/catch or otherwise testing for errors should be a pretty good test of whether this was a successful call or not.
This even handles dynamic parameters.
There are probably edge cases where this won't quite work, but I think they will be rare.
Additionally, ValidateScript attributes and dynamic parameters could conceivably create side effects.
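A hedged sketch of how a caller might wrap that call so a binding failure surfaces as the assertion error the question wants:
try {
    $PipelineParams | & $sb @PositionalParams @NamedParams
}
catch {
    # any parameter-binding error lands here; rethrow with context
    throw "Parameter binding for '$CommandName' would fail: $($_.Exception.Message)"
}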
Problem Statement: I am trying to write MATLAB code for a main caller function (like run_experiment below) that specifies which computations to execute, where the computations are carried out sequentially by other MATLAB functions. Those functions are evaluated based on parameters passed to the main caller function, and they are specified by the names of the scripts they are written in.
Example Desired Code Behavior: For example, a command like the following should run the preprocess_data, initialise_model and train_model scripts.
>> run_experiment('dataset_1_options', '|preprocess_data|initialise_model|train_model|');
And this command should run only the train_model script, but also evaluate its performance:
>> run_experiment('dataset_1_options', '|train_model|evaluate_model|');
In the above examples "|" is used as a delimiter to specify separate function names to be evaluated. Those functions use the options specified with dataset_1_options. Please do not focus on how to separate that part of the input into meaningful function names; I know how to do it with strsplit.
Constraints and Specifications: The function names passed as input to the main caller function are NOT anonymous functions. The purpose is to be able to pass multiple such function names as input AND to evaluate them with the options, as in the example above. They return output to be used in other parts of the research code (i.e., passing data matrices to other functions within the research code as results of the computations carried out within them).
Question: Given the desired behavior and constraints mentioned above, can anybody help in how to pass the separate function names from another caller function along with options/parameter to those functions? How should the main caller function evaluate the function names passed in as input with the options specified during the call?
Thank you in advance.
You can pass functions to functions in MATLAB; you just need the @ sign to create a function handle when you pass them. In your case it would be run_experiment('dataset_1_options', @train_model) inside a script. You could keep your options in a cell array or something. The run_experiment function would just be a regular function:
function [output] = run_experiment(options, train_model)
    train_model(options{1});
    % ...
end
What you need to do is create a cell array with your function handles and another cell array with the corresponding options, as below:
% Function handle array
fn_array = {@fn_1, @fn_2};   % add more handles as needed
% Option array: one cell of options per function
option_array = {{fn1_opt1, fn1_opt2}; {fn2_opt1, fn2_opt2}};
These two need to be passed to your run_experiment function, which will evaluate them as below:
function run_experiment(fn_array, option_array)
    num_fn = length(fn_array);    % number of functions to evaluate
    for ii = 1:num_fn             % evaluate each function with its options
        fn_array{ii}(option_array{ii}{:});
    end
end
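For instance, a hedged usage sketch in terms of the question's own step names (assuming each function takes the dataset options as its only argument and is on the MATLAB path):
% build the arrays for a '|preprocess_data|train_model|' style run
fn_array = {@preprocess_data, @train_model};
option_array = {{'dataset_1_options'}; {'dataset_1_options'}};
run_experiment(fn_array, option_array);   % runs each step in order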