While using runspaces, I would like to pass predictable arguments to my script block from outside. To do this without relying on $args (which makes the code less readable), I am storing my argument in a variable called $var and adding that variable to the InitialSessionState of the RunspacePool, using the Add method of System.Management.Automation.Runspaces.InitialSessionState.Create().Variables.
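Roughly, the approach looks like this (a minimal sketch; $myValue and the variable/pool names are just illustrative):
$iss = [System.Management.Automation.Runspaces.InitialSessionState]::CreateDefault()
# arguments are: name, value, description
$entry = New-Object System.Management.Automation.Runspaces.SessionStateVariableEntry -ArgumentList 'var', $myValue, 'predictable argument for the script block'
$iss.Variables.Add($entry)
$pool = [runspacefactory]::CreateRunspacePool($iss)
$pool.Open()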
This works for my purposes, but I also noticed the PowerShell.AddParameter method of [PowerShell]::Create(), which would add my parameter to each PowerShell instance as it is created.
And finally there is PowerShell.AddArgument, which I could use like AddParameter together with a param() block. This would work; however, each argument is passed in positional order, so it ends up no more readable than $args.
So my question is: what is the recommended way of passing arguments/parameters/variables into a script block for the purposes I have described? Is there a performance advantage of one over the other? Can you expand on when you might prefer one over the others?
You can declare parameters with a param() block inside the script block definition.
$ScriptBlock = {
Param (
[int]$NumberofHours,
[string]$ClientName
)
$LogPath = 'd:\Program Files\Exchange\Logging\RPC Client Access\*.LOG'
$today = (Get-Date).AddHours($NumberofHours)
}
Then supply values for these parameters when you execute the script block:
Invoke-Command -ComputerName $servers -ScriptBlock $ScriptBlock -ArgumentList -1,$cn -ErrorAction SilentlyContinue
Here's my $.02. I know this thread is old but I came across it unanswered while searching for something else.
This example uses a runspace pool to demonstrate how to pass parameters to the script block while using a runspace. The function this comes from passes a datatable to a script block for processing. Some variable names were changed to help with out-of-context readability.
$maxThreads = 5
$jobs = New-Object System.Collections.ArrayList
$rsp = [runspacefactory]::CreateRunspacePool(1,$maxThreads)
$rsp.open()
$params = New-Object 'System.Collections.Generic.Dictionary[string,object]'
$params.Add('Data', $myDataTable)
$params.Add('TableName', $myTableName)
$PS = [powershell]::Create()
$PS.Runspacepool = $rsp
$PS.AddScript($myScriptBlockCode).AddParameters($params)
Another option would be to forgo the IDictionary step and simply call .AddParameter repeatedly.
$PS.AddScript($myScriptBlockCode).AddParameter('Data',$myDataTable).AddParameter('TableName',$myTableName)
A third way is to pass the parameters as arguments, in the order they are defined in your script block:
$PS.AddScript($myScriptBlockCode).AddArgument($myDataTable).AddArgument($myTableName)
I hope this helps someone out. I welcome comments as to ways to do this better. As long as the comment doesn't suggest turning it into some amateurish one-liner that no one can read....
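For completeness, here is a rough sketch (not part of the original function) of how the pipeline built above might be queued and collected later; it assumes the $PS, $rsp and $jobs variables from the snippet above:
$handle = $PS.BeginInvoke()
[void]$jobs.Add([pscustomobject]@{ PowerShell = $PS; Handle = $handle })
# later: wait for each pipeline, collect its output, and clean up
foreach ($job in $jobs) {
    $job.PowerShell.EndInvoke($job.Handle)
    $job.PowerShell.Dispose()
}
$rsp.Close()
$rsp.Dispose()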
I'm trying to execute the Invoke-Sqlcmd command (from the SqlServer module) to run a query as a different AD user. I know there's the -Credential argument, but that doesn't seem to work.
Thus, I thought using Start-Job might be an option, as shown in the snippet below.
$username = 'dummy_domain\dummy_user'
$userpassword = 'dummy_pwd' | ConvertTo-SecureString -AsPlainText -Force
$credential = New-Object -TypeName System.Management.Automation.PSCredential ($username, $userpassword)
$job = Start-Job -ScriptBlock {Import-Module SqlServer; Invoke-Sqlcmd -query "exec sp_who" -ServerInstance 'dummy_mssql_server' -As DataSet} -Credential $credential
$data = Receive-Job -Job $job -Wait -AutoRemoveJob
However, when looking at the variable type that the job returned, it isn't what I expected.
> $data.GetType().FullName
System.Management.Automation.PSObject
> $data.Tables[0].GetType().FullName
System.Collections.ArrayList
If I run the code in the ScriptBlock directly, these are the variable types that PS returns:
> $data.GetType().FullName
System.Data.DataSet
> $data.Tables[0].GetType().FullName
System.Data.DataTable
I tried casting the $data variable to [System.Data.DataSet], which resulted in the following error message:
Cannot convert value "System.Data.DataSet" to type "System.Data.DataSet".
Error: "Cannot convert the "System.Data.DataSet" value of type
"Deserialized.System.Data.DataSet" to type "System.Data.DataSet"."
Questions:
Is there a better way to run SQL queries under a different AD account, using the Invoke-Sqlcmd command?
Is there a way to get the correct/expected variable type to be returned when calling Receive-Job?
Update
When I run $data.Tables | Get-Member, one of the properties returned is:
Tables Property Deserialized.System.Data.DataTableCollection {get;set;}
Is there a way to get the correct/expected variable type to be returned when calling Receive-Job?
Due to using a background job, you lose type fidelity: the objects you're getting back are method-less emulations of the original types.
Manually recreating the original types is not worth the effort and may not even be possible - though perhaps working with the emulations is enough.
Update: As per your own answer, switching from working with System.Data.DataSet to System.Data.DataTable resulted in serviceable emulations for you.[1]
See the bottom section for more information.
Is there a better way to run SQL queries under a different AD account, using the Invoke-Sqlcmd command?
You need an in-process invocation method in order to maintain type fidelity, but I don't think that is possible with arbitrary commands if you want to impersonate another user.
For instance, the in-process (thread-based) alternative to Start-Job - Start-ThreadJob - doesn't have a -Credential parameter.
Your best bet is therefore to try to make Invoke-SqlCmd's -Credential parameter work for you or find a different in-process way of running your queries with a given user's credentials.
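For reference, the -Credential route mentioned above looks roughly like this (the server name and query are the placeholders from the question); whether it actually authenticates as the intended AD account is exactly what the asker found problematic:
$cred = Get-Credential   # account to connect as
Invoke-Sqlcmd -ServerInstance 'dummy_mssql_server' -Query 'exec sp_who' -Credential $cred -As DataSet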
Serialization and deserialization of objects in background jobs / remoting / mini-shells:
Whenever PowerShell marshals objects across process boundaries, it employs XML-based serialization at the source, and deserialization at the destination, using a format known as CLI XML (Common Language Infrastructure XML).
This happens in the context of PowerShell remoting (e.g., Invoke-Command calls with the
-ComputerName parameter) as well as in background jobs (Start-Job) and so-called mini-shells (which are implicitly used when you call the PowerShell CLI from inside PowerShell itself with a script block; e.g., powershell.exe { Get-Item / }).
This deserialization maintains type fidelity only for a limited set of known types, as specified in MS-PSRP, the PowerShell Remoting Protocol Specification. That is, only instances of a fixed set of types are deserialized as their original type.
Instances of all other types are emulated: list-like types become [System.Collections.ArrayList] instances, dictionary types become [hashtable] instances, and other types become method-less (properties-only) custom objects ([pscustomobject] instances), whose .pstypenames property contains the original type name prefixed with Deserialized. (e.g., Deserialized.System.Data.DataTable), as well as the equally prefixed names of the type's base types (its inheritance hierarchy).
Additionally, the recursion depth for object graphs of non-[pscustomobject] instances is limited to 1 level; note that this includes instances of PowerShell custom classes, created with the class keyword. That is, if an input object's property values aren't themselves instances of well-known types (which include single-value-only types such as .NET primitives like [int], as opposed to types composed of multiple properties), they are replaced by their .ToString() representations. For example, type System.IO.DirectoryInfo has a .Parent property that is another System.IO.DirectoryInfo instance, so the .Parent property value serializes as that instance's .ToString() representation, which is its full path string. In short: non-custom (scalar) objects serialize such that property values that aren't themselves instances of well-known types are replaced by their .ToString() representation; see this answer for a concrete example.
By contrast, explicit use of CLI XML serialization via Export-Clixml defaults to a depth of 2 (you can specify a custom depth via -Depth and you can similarly control the depth if you use the underlying System.Management.Automation.PSSerializer type directly).
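As a small illustration of the explicit route (the file path is just an example):
Get-Item / | Export-Clixml -Path .\item.clixml -Depth 4   # explicit CLIXML serialization with a custom depth
$roundTripped = Import-Clixml -Path .\item.clixml          # yields a Deserialized.* emulation, as described above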
Depending on the original type, you may be able to reconstruct instances of the original type manually, but that is not guaranteed.
(You can get the original type's full name by calling .pstypenames[0] -replace '^Deserialized\.' on a given custom object.)
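A quick, illustrative demonstration of the loss of type fidelity across a background job, and of recovering the original type name:
$local  = Get-Item /                                       # a live System.IO.DirectoryInfo
$remote = Start-Job { Get-Item / } | Receive-Job -Wait -AutoRemoveJob
$local.GetType().FullName                                  # System.IO.DirectoryInfo
$remote.GetType().FullName                                 # a method-less emulation, not DirectoryInfo
$remote.pstypenames[0]                                     # Deserialized.System.IO.DirectoryInfo
$remote.pstypenames[0] -replace '^Deserialized\.'          # the original type's full name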
Depending on your processing needs, however, the emulations of the original objects may be sufficient.
[1] Using System.Data.DataTable results in usable emulated objects, because you get a System.Collections.ArrayList instance that emulates the table, and custom objects with the original property values for its System.Data.DataRow instances. The reason this works is that PowerShell has built-in logic to treat System.Data.DataTable implicitly as an array of its data rows, whereas the same doesn't apply to System.Data.DataSet.
I can't speak to question 2, as I've never used the job commands, but when it comes to running Invoke-Sqlcmd I always make sure that the account running the script has the correct access to run the SQL.
The plus side is that you don't need to store credentials inside the script, though that's usually a moot point since the scripts are stored out of reach of most folks, although some bosses can be nit-picky!
Out of curiosity, how do the results compare if you pipe them to Get-Member?
For those interested, below is the code I implemented. Depending on whether or not $credential is passed, Invoke-Sqlcmd will either run directly, or using a background job.
I had to use -As DataTables instead of -As DataSet, as the latter seems to have issues with serialisation/deserialisation (see accepted answer for more info).
function Exec-SQL($server, $database, $query, $credential) {
$sqlData = @()
$scriptBlock = {
Param($params)
Import-Module SqlServer
return Invoke-Sqlcmd -ServerInstance $params.server -Database $params.database -query $params.query -As DataTables -OutputSqlErrors $true
}
if ($PSBoundParameters.ContainsKey("credential")) {
$job = Start-Job -ScriptBlock $scriptBlock -Credential $credential -ArgumentList $PSBoundParameters
$sqlData = Receive-Job -Job $job -Wait -AutoRemoveJob
} else {
$sqlData = & $scriptBlock -params $PSBoundParameters
}
return $sqlData
}
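The function can then be called like this (the server, database and query values are placeholders):
$query = 'exec sp_who'
# runs in-process under the current account:
$data = Exec-SQL -server 'dummy_mssql_server' -database 'master' -query $query
# runs in a background job under the supplied credentials:
$data = Exec-SQL -server 'dummy_mssql_server' -database 'master' -query $query -credential (Get-Credential)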
Given a properly defined variable
$test = New-Object System.Collections.ArrayList
.Add pollutes the pipeline with the count of items in the array, while .AddRange does not.
$test.Add('Single') will dump the count to the console. $test.AddRange(@('Single2')) will be clean with no extra effort. Why the different behavior? Is it just an oversight, or is there some intentional behavior I am not understanding?
Given that .AddRange requires coercing to an array when not using a variable (that is already an array) I am tending towards using [void]$variable.Add('String') when I know I need to only add one item, and [void]$test.AddRange($variable) when I am adding an array to an array, even when $variable only contains, or could only contain, a single item. The [void] here isn't required, but I wonder if it's just best practice to have it, depending of course on the answer above. Or am I missing something there too?
Why the different behavior? Is it just an oversight, or is there some intentional behavior I am not understanding?
Because many years ago, someone decided that's how ArrayList should behave!
Add() returns the index at which the argument was inserted into the list, which may indeed be useful and makes sense.
With AddRange() on the other hand, it's not immediately clear why it should return anything, and if yes, what? The index of the first item in the input arguments? The last? Or should it return a variable-sized array with all the insert indices? That would be awkward! So whoever implemented ArrayList decided not to return anything at all.
In C# or VB.NET, for which ArrayList was originally designed, "polluting the pipeline" doesn't really exist as a concept; the return value of .Add() is simply discarded if the caller doesn't assign it to a variable.
The [void] here isn't required, but I wonder if it's just best practice to have it, depending of course on the answer above. Or am I missing something there too?
No, it's completely unnecessary. AddRange() isn't magically going to start returning anything one day.
If you don't ever need to know the insert index, use a [System.Collections.Generic.List[psobject]] instead:
$list = [System.Collections.Generic.List[psobject]]::new()
# this won't return anything, no need for `[void]`
$list.Add(123)
If for some reason you must use an ArrayList, you can "silence" it by overriding the Add() method:
function New-SilentArrayList {
# Create a new ArrayList
$newList = [System.Collections.ArrayList]::new()
# Create a new `Add()` method, then return the list
$newAdd = @{
InputObject = $newList
MemberType = 'ScriptMethod'
Name = 'Add'
Value = {param($obj) $this.AddRange(@($obj))}
}
Write-Output $(
Add-Member @newAdd -Force -PassThru
) -NoEnumerate
}
Now your ArrayList's Add() will never make a peep again!
PS C:\> $list = New-SilentArrayList
PS C:\> $list.Add(123)
PS C:\> $list
123
Apparently I didn't quite understand where you were heading.
".Add pollutes the pipeline" is, on second thought, a correct statement, but .NET methods like $variable.Add('String') do not use the PowerShell pipeline by themselves (until the moment you output the result, which happens via Write-Output by default if you do not assign it to a variable).
The Write-Output cmdlet is typically used in scripts to display
strings and other objects on the console. However, because the default
behavior is to display the objects at the end of a pipeline, it is
generally not necessary to use the cmdlet.
The point is that the Add method of ArrayList returns an [Int32] ("The ArrayList index at which the value has been added") while AddRange doesn't return anything. Meaning that if you don't assign the result to something (which includes $Null = $test.Add('Single')), it will indeed be output to the PowerShell pipeline.
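To illustrate the usual ways of suppressing that return value:
$test = New-Object System.Collections.ArrayList
$Null = $test.Add('one')        # assign the returned index to $Null
[void]$test.Add('two')          # cast the call to [void]
$test.Add('three') | Out-Null   # pipe the output away (generally the slowest option)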
Instead, you might consider using the Add method of the List class, which also doesn't return anything; see also: ArrayList vs List<> in C#.
But in general, I recommend using native PowerShell commands that do use the pipeline.
(I can't give you a good example as it is not clear what output you expect, but I noticed another question you removed, and from that question I presume that this Why should I avoid using the increase assignment operator (+=) to create a collection answer might help you further.)
I've made a PowerShell script which validates some parameters. In the process of validation I need to create some strings. I also need these strings later in the script.
To avoid rebuilding the same strings again, can I reuse variables defined within validation blocks? Perhaps I can use functions in validation blocks somehow? Or maybe global variables? I'm not sure what's possible here, or what's good practice.
Example:
Test.ps1
Function Test {
param(
[string]
[Parameter(Mandatory=$true)]
$thing1,
[string]
[Parameter(Mandatory=$true)]
$thing2,
[string]
[Parameter(Mandatory=$true)]
[ValidateScript({
$a = Get-A $thing1
$b = Get-B $thing2
$c = $a + $b
$d = Get-D $c
if(-not($d -contains $_)) {
throw "$_ is not a valid value for the thing3 parameter."
}
return $true
})]
$thing3
)
# Here I'd like to use $c
# At worst, calling Get-A and Get-B again may be expensive
# Or it could just be annoying duplication of code
}
Bonus question, if this is possible, could I reuse those variables in a subsequent validation block?
You could use a by-ref variable.
This affects the variable passed in, so you can have both a return value and a parameter that is modified by the execution of your function.
About Ref
You can pass variables to functions by reference or by value.
When you pass a variable by value, you are passing a copy of the data.
In the following example, the function changes the value of the
variable passed to it. In PowerShell, integers are value types so they
are passed by value. Therefore, the value of $var is unchanged outside
the scope of the function.
Function Test{
Param($thing1,$thing2,[ref]$c)
$c.Value = new-guid
return $true
}
#$ThisIsC = $null
test -c ([ref] $ThisIsC)
Write-Host $ThisIsC -ForegroundColor Green
Alternatively, you can use the $script: or the $global: scope.
For a simple script, quickly exposing your variable via the $script: scope will do just that. A by-ref parameter might be easier on the end user if you intend to distribute your function, as it makes it clear that you need to pass a reference parameter.
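As a minimal sketch of the $script: idea (the names here are hypothetical), a value assigned to the script scope inside a function remains available to the rest of the script after the call:
Function Test {
    param([string]$thing1)
    $script:c = "expensive result derived from $thing1"   # computed once, stashed in the script scope
    return $true
}
Test -thing1 'foo'
$c   # 'expensive result derived from foo'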
See About Scopes documentation.
Scopes in PowerShell have both names and numbers. The named scopes
specify an absolute scope. The numbers are relative and reflect the
relationship between scopes.
Global: The scope that is in effect when PowerShell starts. Variables
and functions that are present when PowerShell starts have been
created in the global scope, such as automatic variables and
preference variables. The variables, aliases, and functions in your
PowerShell profiles are also created in the global scope.
Local: The current scope. The local scope can be the global scope or
any other scope.
Script: The scope that is created while a script file runs. Only the
commands in the script run in the script scope. To the commands in a
script, the script scope is the local scope.
Private: Items in private scope cannot be seen outside of the current
scope. You can use private scope to create a private version of an
item with the same name in another scope.
Numbered Scopes: You can refer to scopes by name or by a number that
describes the relative position of one scope to another. Scope 0
represents the current, or local, scope. Scope 1 indicates the
immediate parent scope. Scope 2 indicates the parent of the parent
scope, and so on. Numbered scopes are useful if you have created many
recursive scopes.
Environmental note: I'm currently targeting PowerShell 5.1 because 6 has unrelated limitations I can't work around yet.
In the PowerShell module I'm writing, there is one main function that's sort of a conglomeration of a bunch of the smaller functions. The main function has a superset of the smaller functions' parameters. The idea is that calling the main function will call each smaller function with the necessary parameters specified on the main. So for example:
function Main { [CmdletBinding()] param($A,$B,$C,$D)
Sub1 -A $A -B $B
Sub2 -C $C -D $D
}
function Sub1 { [CmdletBinding()] param($A,$B)
"$A $B"
}
function Sub2 { [CmdletBinding()] param($C,$D)
"$C $D"
}
Explicitly specifying the sub-function parameters is both tedious and error prone particularly with things like [switch] parameters. So I wanted to use splatting to make things easier. Instead of specifying each parameter on the sub-function, I'll just splat $PSBoundParameters from the parent onto each sub-function like this:
function Main { [CmdletBinding()] param($A,$B,$C,$D)
Sub1 @PSBoundParameters
Sub2 @PSBoundParameters
}
The immediate problem with doing this is that the sub-functions then start throwing an error for any parameter they don't have defined such as, "Sub1 : A parameter cannot be found that matches parameter name 'C'." If I remove the [CmdletBinding()] declaration, things work but I lose all the benefits of those subs being advanced functions.
So my current workaround is to add an additional parameter to each sub-function that uses the ValueFromRemainingArguments parameter attribute, like this:
function Sub1 { [CmdletBinding()]
param($A,$B,[Parameter(ValueFromRemainingArguments)]$Extra)
"$A $B"
}
function Sub2 { [CmdletBinding()]
param($C,$D,[Parameter(ValueFromRemainingArguments)]$Extra)
"$C $D"
}
Technically, this works well enough. The sub-functions get their specific params and the extras just get ignored. If I was writing this just for me, I'd move on with my life and be done with it.
But for a module intended for public consumption, there's an annoyance factor with that -Extra parameter being there. Primarily, it shows up in Get-Help output which means I have to document it even if just to say, "Ignore this."
Is there an extra step I can take to make that extra parameter effectively invisible to end users? Or am I going about this all wrong and there's a better way to allow for extra parameters on an advanced function?
My usual approach is to export only "wrapper" functions that call internal (i.e., not user-facing) functions in the module.
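As a rough sketch of that layout using the functions from the question: only the public function is exported, so the catch-all parameter on the helpers never appears in Get-Help for module users.
# in the module's .psm1 (illustrative)
function Main { [CmdletBinding()] param($A,$B,$C,$D)
    Sub1 @PSBoundParameters
    Sub2 @PSBoundParameters
}
function Sub1 { [CmdletBinding()] param($A,$B,[Parameter(ValueFromRemainingArguments)]$Extra) "$A $B" }
function Sub2 { [CmdletBinding()] param($C,$D,[Parameter(ValueFromRemainingArguments)]$Extra) "$C $D" }
Export-ModuleMember -Function Main   # Sub1/Sub2 stay internal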
Suppose I have a function into which a dependency and some parameters are injected like the following:
function Invoke-ACommandLaterOn
{
param
(
# ...
[string] $CommandName,
[object] $PipelineParams,
[object[]] $PositionalParams,
[hashtable]$NamedParams
# ...
)
Assert-ParameterBinding @PSBoundParameters
# ...
# Some complicated long-running call tree that eventually invokes
# something like
# $PipelineParams | & $CommandName @PositionalParams @NamedParams
# ...
}
I would like to immediately assert that binding of the parameters to $CommandName succeeds. That's what Assert-ParameterBinding is meant to do. I'm not exactly sure how to implement Assert-ParameterBinding, however.
Of course I could try to invoke $CommandName immediately, but in this case doing so has side-effects that cannot occur until a bunch of other long-running things are completed first.
How can I assert parameter binding to a function will succeed without invoking the function?
What if you did something like this (inside the Assert- function):
$cmd = Get-Command $CommandName
$meta = [System.Management.Automation.CommandMetadata]::new($cmd)
$proxy = [System.Management.Automation.ProxyCommand]::Create($meta)
$code = $proxy -ireplace '(?sm)(?:begin|process|end)\s*\{.*','begin{}process{}end{}'
$sb = [scriptblock]::Create($code)
$PipelineParams | & $sb @PositionalParams @NamedParams
I'm actually not sure if it will work with the positional params or with splatting two different sets, off the top of my head (and I didn't do much testing).
Explanation
I had a few thoughts. For one, parameter binding can be very complex. And in the case of a pipeline call, binding happens differently as different blocks are hit.
So it's probably a good idea to let PowerShell handle this, by essentially recreating the same function but with a body that does nothing.
So I went with the built-in way to generate a proxy function since it takes care of all that messy work, then brutally replaced the body so that it doesn't actually call the original.
Ideally then, you'll be making a call that follows all the regular parameter binding process but in the end accomplishes nothing.
Wrapping that in a try/catch or otherwise testing for errors should be a pretty good test of whether this was a successful call or not.
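That last step could look roughly like this, continuing from the snippet above:
try {
    $PipelineParams | & $sb @PositionalParams @NamedParams
    $true    # binding (and any validation attributes) succeeded
} catch {
    throw "Parameter binding for '$CommandName' would fail: $($_.Exception.Message)"
}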
This even handles dynamic parameters.
There are probably edge cases where this won't quite work, but I think they will be rare.
Additionally, ValidateScript attributes and dynamic parameters could conceivably create side effects.