Passing an object from one script to another - PowerShell

I am having an issue passing an array member to another script. I have a VM build script that pulls from a CSV; I end up with a $VM object with .Name, .CPU, .RAM, .IP, etc. properties. I want to pass that VM object to another script (inside the new server), which can then act on it, but I am unable to do so. I have been testing the correct syntax just to pass a simple array, like below, but am still not successful:
CSV:
Name,NumCPU,MemoryGB,IPAddress
JLTest01,2,4,172.24.16.25
Script1:
Function TestMe {
    [CmdLetBinding()]
    Param (
        [Parameter(Mandatory, Position=1)]
        [array]$arr
    )
    $arr | Out-GridView
}
TestMe
Calling Script:
$aVMs = Import-Csv -Path "PathToCsv"
foreach($VM in $aVMs) {
    $command = "<path>\TestMe.ps1 " + "-arr $($VM)"
    Invoke-Expression $command
}
This produces an error, which seems to be on parsing the array. The error states:
The term 'JLTest01' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:48 + ... \Desktop\TestMe.ps1 -arr @{Name=JLTest01; NumCPU ...
Just trying to figure out what I am doing wrong exactly, and what I need to do to pass the object to the second script.

Don't use Invoke-Expression (which is rarely the right tool and should generally be avoided for security reasons):
The stringification of the custom objects output by Import-Csv performed by $($VM) does not preserve the original objects; it results in a hashtable-like representation that isn't suitable for programmatic processing and breaks the syntax of the command line you're passing to Invoke-Expression.
Instead, just invoke your script directly:
$aVMs = Import-Csv -Path "PathToCsv"
.\TestMe.ps1 -arr $aVMs
Note that I'm passing $aVMs as a whole to your script, given that your -arr parameter is array-typed.
If you'd rather process the objects one by one, stick with the foreach approach (but then you should declare the type of your $arr parameter as [pscustomobject] rather than [array]):
$aVMs = Import-Csv -Path "PathToCsv"
foreach ($VM in $aVMs) {
    .\TestMe.ps1 -arr $VM
}
Another option is to declare $arr as accepting pipeline input, add a process block to your script, and then pipe $aVMs to your script ($aVMs | .\TestMe.ps1).
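A minimal sketch of that pipeline-enabled variant of TestMe.ps1 (keeping the -arr name from the original; collecting the objects in an end block is just one way to end up with a single grid view):
# TestMe.ps1 - pipeline-enabled sketch
[CmdletBinding()]
param (
    [Parameter(Mandatory, ValueFromPipeline)]
    [pscustomobject] $arr
)
begin   { $all = [System.Collections.Generic.List[object]]::new() }
process { $all.Add($arr) }   # runs once per pipeline object
end     { $all | Out-GridView }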
Also, don't wrap the body of your .ps1 script in a same-named function that you then call from within the script, especially not without passing the script's arguments through; scripts can declare parameters directly, just like functions:
[CmdletBinding()]
Param (
    [Parameter(Mandatory, Position=1)]
    [array]$arr
)
$arr | Out-GridView
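With the parameters declared at the script level like this, the direct invocation shown earlier works unchanged, e.g.:
.\TestMe.ps1 -arr (Import-Csv -Path "PathToCsv")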

Related

Correct param type for piping from Get-Content (PowerShell)?

What is the correct type for piping all of the content from Get-Content?
My script:
param(
    [Parameter(ValueFromPipeline)]
    [???]$content
)
Write-Output $content
According to the docs for PowerShell 5.1, Get-Content returns "a collection of objects", but I'm not sure how to specify that in PowerShell. Without specifying a type for the [???], only the last line of the file is output.
Regarding your last comment:
When I specify [string] or [string[]], it is still only printing out the last line of the file. Is this related to missing the process block?
This is correct; without a process block, your function, script block, or script runs in the end block. If you want it to process pipeline input item by item, you must put your logic in a process block.
Note that this assumes you want to use ValueFromPipeline, which, in effect, turns your script into an advanced function.
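To make that concrete, here is a sketch of the failing pattern; without a process block, the body runs once, in the implicit end block, by which point the parameter holds only the last pipeline item:
param(
    [Parameter(ValueFromPipeline)]
    [string[]] $Content
)
# No process block: this statement runs once, after all pipeline input
# has been consumed, with $Content bound to the last item only.
Write-Output $Content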
If you want it to be able to process pipeline input but also be compatible with positional binding and named parameters, use [string[]] and a loop:
param(
    [Parameter(ValueFromPipeline)]
    [string[]] $Content
)
process {
    foreach($line in $Content) {
        $line
    }
}
Then, assuming the above is called myScript.ps1, you would be able to:
Get-Content .\test.txt | .\myScript.ps1
And also:
.\myScript.ps1 (Get-Content .\test.txt)

How to pass a custom function inside a ForEach-Object -Parallel

I can't find a way to pass the function. Just variables.
Any ideas without putting the function inside the ForEach loop?
function CustomFunction {
    Param (
        $A
    )
    Write-Host $A
}

$List = "Apple", "Banana", "Grape"
$List | ForEach-Object -Parallel {
    Write-Host $using:CustomFunction $_
}
The solution isn't quite as straightforward as one would hope:
# Sample custom function.
function Get-Custom {
    Param ($A)
    "[$A]"
}

# Get the function's definition *as a string*
$funcDef = ${function:Get-Custom}.ToString()

"Apple", "Banana", "Grape" | ForEach-Object -Parallel {
    # Define the function inside this thread...
    ${function:Get-Custom} = $using:funcDef
    # ... and call it.
    Get-Custom $_
}
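For reference, the output under these definitions (order may vary, since the threads complete independently):
[Apple]
[Banana]
[Grape]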
Note: An analogous solution exists for using a script block from the caller's scope in a ForEach-Object -Parallel script block.
Note: If your function were defined in a module that is placed in one of the locations known to the module-autoloading feature, your function calls would work as-is with ForEach-Object -Parallel, without extra effort - but each thread would incur the cost of (implicitly) importing the module.
The above approach is necessary because, aside from the current location (working directory) and environment variables (which apply process-wide), the threads that ForEach-Object -Parallel creates do not see the caller's state: neither its variables nor its functions (nor its custom PS drives and imported modules).
As of PowerShell 7.2.x, an enhancement is being discussed in GitHub issue #12240 to support copying the caller's state to the parallel threads on demand, which would make the caller's functions automatically available.
Note that redefining the function in each thread via a string is crucial: an attempt to make do without the auxiliary $funcDef variable and redefine the function directly with ${function:Get-Custom} = ${using:function:Get-Custom} fails, because ${function:Get-Custom} is a script block, and the use of script blocks with the $using: scope specifier is explicitly disallowed in order to avoid cross-thread (cross-runspace) issues.
However, ${function:Get-Custom} = ${using:function:Get-Custom} would work with Start-Job (see below).
It would not work with Start-ThreadJob, however. Start-ThreadJob currently lets you write & ${using:function:Get-Custom} $_, because ${using:function:Get-Custom} is preserved as a script block (unlike with Start-Job, where it is deserialized as a string, which is itself surprising behavior; see GitHub issue #11698), even though it shouldn't be: direct cross-thread use of [scriptblock] instances causes obscure failures, which is why ForEach-Object -Parallel prevents it in the first place.
A similar loophole that leads to cross-thread issues even with ForEach-Object -Parallel is using, via the $using: scope, a command-info object obtained in the caller's scope with Get-Command as the function body in each thread: this too should be prevented, but isn't as of PowerShell 7.2.7 (see GitHub issue #16461).
${function:Get-Custom} is an instance of namespace variable notation, which allows you to both get a function (its body as a [scriptblock] instance) and to set (define) it, by assigning either a [scriptblock] or a string containing the function body.
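A quick illustration of both directions, using a hypothetical Greet function:
# Set: define (or redefine) a function by assigning a script block...
${function:Greet} = { "Hello, $args!" }
# ...or a string containing the function body.
${function:Greet} = 'Write-Output "Hello, $args!"'
# Get: retrieve the body as a [scriptblock] and invoke it.
& ${function:Greet} 'World'   # -> Hello, World!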
I just figured out another way using get-command, which works with the call operator. $a ends up being a FunctionInfo object.
EDIT: I'm told this isn't thread safe, but I don't understand why.
function hi { 'hi' }
$a = Get-Command hi
1..3 | foreach -parallel { & $using:a }

Output:
hi
hi
hi
So I figured out another little trick that may be useful for people trying to add functions dynamically, particularly if you might not know their names beforehand, such as when the functions are in an array.
# Store the current function list in a variable
$initialFunctions = Get-ChildItem Function:

# Source all .ps1 files in the current folder and all subfolders
Get-ChildItem . -Recurse | Where-Object { $_.Name -like '*.ps1' } |
    ForEach-Object { . "$($_.FullName)" }

# Get only the functions that were added above, and store them in an array
$functions = @()
Compare-Object $initialFunctions (Get-ChildItem Function:) -PassThru |
    ForEach-Object { $functions = @($functions) + @($_) }

1..3 | ForEach-Object -Parallel {
    # Pull the $functions array from the outer scope and set each function
    # to its definition
    $using:functions | ForEach-Object {
        Set-Content "Function:$($_.Name)" -Value $_.Definition
    }
    # Call one of the functions in the sourced .ps1 files by name
    SourcedFunction $_
}
The main "trick" of this is using Set-Content with Function: plus the function name, since PowerShell essentially treats each entry of Function: as a path.
This makes sense when you consider the output of Get-PSDrive: each of those entries can be used as a "drive" in the same way (i.e., with the colon).
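A small self-contained illustration of that idea (the function name Say-Hi is made up for the example):
# Define a function by writing to the Function: drive...
Set-Content -Path Function:Say-Hi -Value { 'hi' }
# ...and call it like any other function.
Say-Hi   # -> hi
# The Function: drive appears alongside the filesystem drives:
Get-PSDrive Function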

Why Doesn't the Command Line Argument Work When Put Directly Into a Command in a Script?

I am struggling to understand why using $args[$i] directly in this command doesn't work. It gives a completely wrong answer.
$memory=$(Get-Process | Where-Object {$_.id -eq "$($args[$i])"} | select -expand VirtualMemorySize64)
However, putting the command line argument into another variable and using that one works.
$id=($args[$i])
$memory=$(Get-Process | Where-Object {$_.id -eq "$id"} | select -expand VirtualMemorySize64)
An explanation on why this is would be great.
Every script block ({ ... }) in PowerShell has its own copy of the automatic $args array in which positionally passed arguments are automatically collected.
Therefore, $args inside {$_.id -eq "$($args[$i])"} is not the same as $args at the script level, so you indeed need to save the script-level value in an auxiliary variable first, as in your 2nd snippet, which can be streamlined as follows:
# Must use an aux. variable to access the script-level $args inside
# the Where-Object script block.
$id = $args[$i]
$memory = Get-Process | Where-Object { $_.Id -eq $id } |
    Select-Object -ExpandProperty VirtualMemorySize64
Note the absence of the superfluous (...) and $(...), and the removal of the quoting around "$id", given that the .Id property of a process object is a number (type [int]).
Taking a step back, I suggest declaring parameters in your script, which is preferable to using $args - the variables holding the values of such parameters can be used without a problem in Where-Object script blocks.
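For example, a sketch of the parameter-based version (the parameter name $Id is an arbitrary choice):
param(
    [Parameter(Mandatory)]
    [int] $Id
)
# A script-level parameter variable is visible inside the Where-Object
# script block, so no auxiliary copy is needed.
$memory = Get-Process | Where-Object { $_.Id -eq $Id } |
    Select-Object -ExpandProperty VirtualMemorySize64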
Generally:
It is only meaningful to access $args inside a script block that you've invoked with arguments. That is not the case for a script block passed to Where-Object, where the input comes (only) from the pipeline, via the automatic $_ variable.
By contrast, you can pass arguments to a script block if you invoke it with &, the call operator; for instance, & { "[$args]" } 'foo' yields [foo].

Why does PowerShell code behave different when called from a function?

Experienced C# developer learning PowerShell here, so I am sure I am just missing something silly. What I am trying to do is write a function that simply writes its input to a temporary file in JSON format. I have code that works fine if I run it 'inline', but that same code writes an empty file when called in a function.
Here is the code:
function Dump-File {
    param (
        [Parameter(Mandatory=$true)]
        $Input
    )
    $tmp = New-TemporaryFile
    $Input | ConvertTo-Json | Out-File $tmp.FullName
    Write-Output "Dump file written: $($tmp.FullName)"
}
$args = @{}
$args.Add('a', 1)
$args.Add('b', 2)
$args.Add('c', 3)
$args.Add('d', 4)
# results in json written to temp file
$tmp = New-TemporaryFile
$args | ConvertTo-Json | Out-File $tmp.FullName
Write-Output "args dumped: $($tmp.FullName)"
# results in empty temp file
Dump-File $args
Can anyone help me understand why the code called inline works but the same code does not work when I wrap it up as a function?
$Input is an automatic variable.
Changing the name of your Dump-File parameter to $somethingelse will resolve your problem. Never use $input as a parameter or variable name.
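For example, a minimal fix that only renames the parameter ($Data is an arbitrary choice):
function Dump-File {
    param (
        [Parameter(Mandatory=$true)]
        $Data   # any name other than the automatic $Input works
    )
    $tmp = New-TemporaryFile
    $Data | ConvertTo-Json | Out-File $tmp.FullName
    Write-Output "Dump file written: $($tmp.FullName)"
}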
Automatic variables are to be considered read-only.
About Automatic Variables
SHORT DESCRIPTION
Describes variables that store state information for
PowerShell. These variables are created and maintained by PowerShell.
LONG DESCRIPTION
Conceptually, these variables are considered to be
read-only. Even though they can be written to, for backward
compatibility they should not be written to.
Here is a list of the automatic variables in PowerShell:
...
$INPUT
Contains an enumerator that enumerates all input that is passed to a function. The $input variable is available only to functions and script blocks (which are unnamed functions). In the Process block of a function, the $input variable enumerates the object that is currently in the pipeline. When the Process block completes, there are no objects left in the pipeline, so the $input variable enumerates an empty collection. If the function does not have a Process block, then in the End block, the $input variable enumerates the collection of all input to the function.
Source: About_Automatic_Variables
This information is also available through the Get-Help command:
Get-Help about_Automatic_Variables

Pass a variable to another script

I have two PowerShell scripts.
The first script has the following code:
$var = "abc"
$DIR = "C:\"
$SCRIPT_NAME = "abc.ps1"
&"${DIR}\${SCRIPT_NAME}" #execute the second script
If I want to pass the variable $var to the second script, how do I achieve that? What code do I need to put in both the first and the second script?
Parameters (Recommended): Use parameters to pass values to the second script.
Step2.ps1:
param ($myparameter)
write-host $myparameter
Step1.ps1:
$var = "abc"
$DIR = "C:\"
$SCRIPT_NAME = "step2.ps1"
&"${DIR}\${SCRIPT_NAME}" -myparameter $var
Alternative: You could also have used arguments via $args (extra values not bound to any parameter). You can access the first argument as $args[0]. I would, however, always recommend parameters, as arguments need to be passed in a specific order (when multiple arguments are passed), etc.
Step2.ps1:
write-host $args[0]
Step1.ps1:
$var = "abc"
$DIR = "C:\"
$SCRIPT_NAME = "step2.ps1"
&"${DIR}\${SCRIPT_NAME}" $var
There are several ways to do what you want, two of which have already been suggested by @FrodeF.
Pass the variable as a (named) parameter:
# script1.ps1
$var = 'foo'
$dir = 'C:\some\folder'
$scriptname = "script2.ps1"
& "${dir}\${scriptname}" -Foo $var
# script2.ps1
Param($foo)
Write-Output $foo
This is the cleanest solution. You have a well-defined interface and pass the variable in a clear-cut way from one script to another.
Parameter definitions will also allow you to make a parameter mandatory (so that the script will ask the user to provide input if the parameter was omitted), require a particular data type, easily incorporate validation routines, or add comment-based help.
# script2.ps1
<#
.SYNOPSIS
Short description of the script or function.
.DESCRIPTION
Longer description of what the script or function actually does.
.PARAMETER Foo
Description of the parameter Foo.
#>
[CmdletBinding()]
Param(
    [Parameter(Mandatory=$true, Position=0, ValueFromPipeline=$true)]
    [ValidateRange(2,42)]
    [int]$foo
)
Write-Output $foo
See Get-Help about_Function_Advanced_Parameters for more information.
Pass the variable as an unnamed argument:
# script1.ps1
$var = 'foo'
$dir = 'C:\some\folder'
$scriptname = "script2.ps1"
& "${dir}\${scriptname}" $var
# script2.ps1
Write-Output $args[0]
This is the second best approach, because you still pass the variable in a clear-cut way, but the interface isn't as well defined as before.
Define the variable as an environment variable:
# script1.ps1
$env:var = 'foo'
$dir = 'C:\some\folder'
$scriptname = "script2.ps1"
& "${dir}\${scriptname}"
# script2.ps1
Write-Output $env:var
This is a less clean approach than the argument-based ones, as the variable is passed using a "side-channel" (the process environment, which is inherited by child processes).
Just define the variable in the first script and use it in the second one:
# script1.ps1
$var = 'foo'
$dir = 'C:\some\folder'
$scriptname = "script2.ps1"
& "${dir}\${scriptname}"
# script2.ps1
Write-Output $var
This will work as well, because by using the call operator (&) the second script is run in the same context as the first script and thus has access to the same variables. However, "passing" a variable like this will easily break if someone runs the second script in a different context/scope or modifies it without being aware of the implicit dependency.
If you want to go this route it's usually better to use the first script for variable (and function) definitions only, and dot-source it in the second script, so that the definitions are imported into the scope of the second script:
# script1.ps1
$var = 'foo'
# script2.ps1
. 'C:\path\to\script1.ps1'
Write-Output $var
Technically, passing values via a file would be another option. However, I would recommend against using this approach for several reasons:
it's prone to errors due to improper permissions (could be mitigated by creating the file in the $env:TEMP folder),
it's prone to littering the filesystem if you don't clean up the file afterwards,
it needlessly generates disk I/O when simple in-memory operations provided by the language would suffice.