How do I share variables among scripts in PowerShell? - powershell

I have difficulty trying to find an answer that solves this issue online.
I have a script which runs all my other scripts in a particular order.
$x=0;
$z=0;
cd *file path* ;
.\filename1.ps1 ; write-host "$x text $z text";
.\filename2.ps1 ; write-host "$x text $z text";
In each of these scripts I have options that will add 1 to either variable $x or variable $z
$a=Read-Host
if ($a -eq "Option One") {$x = $x+1}
elseif ($a -eq "Option Two") {$z = $z+1}
else {Write-Host "Not a valid option" ; .\filenameX.ps1}
The issue is that the script that runs all these scripts won't recognise the change in variable. How do I fix this?

The naïve answer is to "dot-source" these scripts, i.e. to invoke them with operator . 
Using . executes the scripts in the caller's variable scope, so that top-level modifications of $x and $z will be visible even after .\filename1.ps1 and .\filename2.ps1 have completed.
# Note the `. ` preceding the file path - the space after "." is mandatory
. .\filename1.ps1 ; "$x text $z text"
. .\filename2.ps1 ; "$x text $z text"
Note, however, that all top-level variables created or modified in . -invoked scripts will be visible to the caller.
For more on variable scopes in PowerShell, see this answer.
Better-encapsulated options are to either (a) output the modified values or, less commonly, (b) use [ref] parameters to pass variables to the scripts by reference - the scripts' parameters must be declared and assigned to accordingly.
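As a sketch of option (b) - using hypothetical parameter names, since the question's scripts don't declare any - the child script would declare [ref] parameters and assign through their .Value property:

```powershell
# --- filename1.ps1 (hypothetical sketch) ---
param([ref]$Counter1, [ref]$Counter2)

$a = Read-Host
if ($a -eq "Option One")     { $Counter1.Value++ }   # modifies the caller's variable
elseif ($a -eq "Option Two") { $Counter2.Value++ }
else { Write-Host "Not a valid option" }

# --- caller ---
$x = 0; $z = 0
.\filename1.ps1 -Counter1 ([ref]$x) -Counter2 ([ref]$z)
"$x text $z text"   # reflects any increment made inside the script
```

Note the parentheses around ([ref]$x) - without them the argument won't parse as a [ref] cast.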

If you define your $x (and $z) variables with global scope outside your scripts, like this:
$global:x = 0
You can increment them inside your scripts like this:
$global:x = $global:x + 1
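Putting both halves together - a sketch with a made-up script name - the change survives the child script because both sides address the same global variable:

```powershell
# --- increment.ps1 (hypothetical) ---
$global:x = $global:x + 1

# --- caller ---
$global:x = 0
.\increment.ps1
.\increment.ps1
$global:x        # 2 - visible in the caller, no dot-sourcing needed
```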

Related

Why does the scope of variables change depending on if it's a .ps1 or .psm1 file, and how can this be mitigated?

I have a function that executes a script block. For convenience, the script block does not need to have explicitly defined parameters, but instead can use $_ and $A to refer to the inputs.
In the code, this is done as such:
$_ = $Value
$A = $Value2
& $ScriptBlock
This whole thing is wrapped in a function. Minimal example:
function F {
param(
[ScriptBlock]$ScriptBlock,
[Object]$Value,
[Object]$Value2
)
$_ = $Value
$A = $Value2
& $ScriptBlock
}
If this function is written in a PowerShell script file (.ps1), but imported using Import-Module, the behaviour of F is as expected:
PS> F -Value 7 -Value2 1 -ScriptBlock {$_ * 2 + $A}
15
PS>
However, when the function is written in a PowerShell module file (.psm1) and imported using Import-Module, the behaviour is unexpected:
PS> F -Value 7 -Value2 1 -ScriptBlock {$_ * 2 + $A}
PS>
Using {$_ + 1} instead gives 1. It seems that $_ has a value of $null instead. Presumably, some security measure restricts the scope of the $_ variable or otherwise protects it. Or, possibly, the $_ variable is assigned by some automatic process. Regardless, if only the $_ variable was affected, the first unsuccessful example would return 1.
Ideally, the solution would involve the ability to explicitly specify the environment in which a script block is run. Something like:
Invoke-ScriptBlock -Variables @{"_" = $Value; "A" = $Value2} -InputObject $ScriptBlock
In conclusion, the questions are:
Why can't script blocks in module files access variables defined in functions from which they were called?
Is there a method for explicitly specifying the variables accessible by a script block when invoking it?
Is there some other way of solving this that does not involve including an explicit parameter declaration in the script block?
Out of order:
Is there some other way of solving this that does not involve including an explicit parameter declaration in the script block?
Yes, if you just want to populate $_, use ForEach-Object!
ForEach-Object executes in the caller's local scope, which helps you work around the issue - except you won't have to, because it also automatically binds input to $_/$PSItem:
# this will work both in module-exported commands and standalone functions
function F {
param(
[ScriptBlock]$ScriptBlock,
[Object]$Value
)
ForEach-Object -InputObject $Value -Process $ScriptBlock
}
Now F will work as expected:
PS C:\> F -Value 7 -ScriptBlock {$_ * 2}
14
Ideally, the solution would involve the ability to explicitly specify the environment in which a script block is run. Something like:
Invoke-ScriptBlock -Variables @{"_" = $Value; "A" = $Value2} -InputObject $ScriptBlock
Execute the scriptblock using ScriptBlock.InvokeWithContext():
$functionsToDefine = @{
'Do-Stuff' = {
param($a,$b)
Write-Host "$a - $b"
}
}
$variablesToDefine = @(
[PSVariable]::new("var1", "one")
[PSVariable]::new("var2", "two")
)
$argumentList = @()
{Do-Stuff -a $var1 -b two}.InvokeWithContext($functionsToDefine, $variablesToDefine, $argumentList)
Or, wrapped in a function like your original example:
function F
{
param(
[scriptblock]$ScriptBlock,
[object]$Value
)
$ScriptBlock.InvokeWithContext(@{}, @([PSVariable]::new('_',$Value)), @())
}
Now that you know how to solve your problem, let's get back to the question(s) about module scoping.
First, it's worth noting that you could actually achieve the above using modules, but sort of in reverse.
(In the following I use in-memory modules defined with New-Module, but the module scope resolution behavior described is the same as when you import a script module from disk.)
While module scoping "bypasses" normal scope resolution rules (see below for explanation), PowerShell actually supports the inverse - explicit execution in a specific module's scope.
Simply pass a module reference as the first argument to the & call operator, and PowerShell will treat the subsequent arguments as a command to be invoked in said module:
# Our non-module test function
$twoPlusTwo = { return $two + $two }
$two = 2
& $twoPlusTwo # yields 4
# let's try it with explicit module-scoped execution
$myEnv = New-Module {
$two = 2.5
}
& $myEnv $twoPlusTwo # Hell froze over, 2+2=5 (returns 5)
Why can't script blocks in module files access variables defined in functions from which they were called?
If they can, why can't the $_ automatic variable?
Because loaded modules maintain state, and the implementers of PowerShell wanted to isolate module state from the caller's environment.
Why might that be useful, and why might one preclude the other, you ask?
Consider the following example, a non-module function to test for odd numbers:
$two = 2
function Test-IsOdd
{
param([int]$n)
return $n % $two -ne 0
}
If we run the above statements in a script or an interactive prompt, subsequently invoking Test-IsOdd should yield the expected result:
PS C:\> Test-IsOdd 123
True
So far, so great - but relying on the non-local $two variable bears a flaw in this scenario: if, somewhere in our script or in the shell, we accidentally reassign $two, we might break Test-IsOdd completely:
PS C:\> $two = 1 # oops!
PS C:\> Test-IsOdd 123
False
This is expected since, by default, variable scope resolution just wanders up the call stack until it reaches the global scope.
But sometimes you might require state to be kept across executions of one or more functions, like in our example above.
Modules solve this by following slightly different scope resolution rules - module-exported functions defer to something we call module scope (before reaching the global scope).
To illustrate how this solves our problem from before, consider this module-exported version of the same function:
$oddModule = New-Module {
function Test-IsOdd
{
param([int]$n)
return $n % $two -ne 0
}
$two = 2
}
Now, if we invoke our new module-exported Test-IsOdd, we predictably get the expected result, regardless of "contamination" in the caller's scope:
PS C:\> Test-IsOdd 123
True
PS C:\> $two = 1
PS C:\> Test-IsOdd 123 # still works
True
This behavior, while maybe surprising, basically serves to solidify the implicit contract between the module author and the user - the module author doesn't need to worry too much about what's "out there" (the caller's session state), and the user can expect whatever is going on "in there" (the loaded module's state) to work correctly without worrying about what they assign to variables in the local scope.
Module scoping behavior is poorly documented in the help files, but is explained in some depth in chapter 8 of Bruce Payette's "PowerShell in Action" (ISBN 9781633430297).

What is the "correct" way of passing parameters to PowerShell functions

Has anyone written the "PowerShell Gotchas for VBA Coders" guide? I am attempting to teach myself PowerShell; I have some experience in VBA. What is the "correct" method of defining and passing parameters to functions?
Here is my test code:
Function Pass-Parameters1
{param( $s1, $s2)
write-host Pass-Parameters1 s1: $s1
Write-Host Pass-Parameters1 s2: $s2
return $s1 + $s2
}
Function Pass-Parameters2($ss1, $ss2){
Write-Host Pass-Parameters2 ss1: $ss1
Write-Host Pass-Parameters2 ss2: $ss2
return $ss1 + $ss2
}
$x = "Hello "
$y = "There!!"
$z = Pass-Parameters1 -s1 $x -s2 $y
$zz = Pass-Parameters2 $x, $y
$zzz = Pass-Parameters2 $x $y
Write-Host 1..Z = $z
write-host 1.ZZ = $zz
Write-Host 1ZZZ = $zzz
Here are the results:
Pass-Parameters1 s1: Hello
Pass-Parameters1 s2: There!!
Pass-Parameters2 ss1: Hello There!!
Pass-Parameters2 ss2:
Pass-Parameters2 ss1: Hello
Pass-Parameters2 ss2: There!!
1..Z = Hello There!!
1.ZZ = Hello There!!
1ZZZ = Hello There!!
Which is the recommended method, example 1 or example 2? I have a lot to learn about PowerShell, as $zz = Pass-Parameters2 $x, $y did not do what I expected - that is the way I would call the function in VBA. I am assuming $z = Pass-Parameters1 -s1 $x -s2 $y is the recommended method of calling the function, as there is no ambiguity.
Any comments or suggestions welcome!
For specifying the parameters of a function I would choose form 2, because to me form 1 is overly verbose; I'm well used to .NET languages, where it's functionname(argument1, argument2), and the majority of C-like programming languages don't have a separate line inside the function that declares the parameters. But this is personal preference.
You can provide a name of an argument, prefixed by hyphen, and provide the arguments in any order:
$z = Pass-Parameters1 -s1 $x -s2 $y
$z = Pass-Parameters1 -s2 $y -s1 $x
You can separate the arguments with spaces and provide the arguments in order:
$zzz = Pass-Parameters2 $x $y
Either of these is correct, and most languages have positional and named arguments. The advantage of the named-argument approach is when you don't want to specify all parameters. Also consider that PowerShell developers can force some arguments to be positional and others to be named, so sometimes you'll need to specify names.
For example, Write-Host takes an array of things to output as its first (positional) argument, and has a -Separator parameter - named only - for what to separate them with. If you want to pass the array and set the separator, you have to write Write-Host $someArray -Separator ":" - mixing positional and named arguments. It needs to be written that way because of how the positional/named parameters were declared by Microsoft; anything else passed positionally would just be treated as more things to output.
If you're specifying all arguments, or only need to specify e.g. the first 3 of a 5-argument function, use whichever is more terse but readable. If you have a (terrible) habit of naming your string variables s1, s2, s3, then calling Add-Person -name $s1 -address $s2 -ssn $s3 keeps things readable. Or use good variable names, and Add-Person $name $address $ssn, because it's redundant to specify parameter names when the variable already does a pretty good job of describing the data. Consider using names if you're passing string literals: Add-Person "Markus Crescent" "Lee Street" "12345" - which is the name and which is the address? (OK, it's a stretch, but consider if these strings were just paths like "file to keep" and "file to delete".)
This one turned your $x and $y into an array, passed it into the first parameter, and passed nothing for the second - which Write-Host then duly printed as the array contents on one line:
$zz = Pass-Parameters2 $x, $y
Your PowerShell code looks very unusual to PowerShell users imho. Here is an example, how I would format your functions:
function Pass-Parameters1 {
Param(
$s1,
$s2
)
Write-Host "Pass-Parameters1 s1: $s1"
Write-Host "Pass-Parameters1 s2: $s2"
return $s1 + $s2
}
function Pass-Parameters2($ss1, $ss2) {
Write-Host "Pass-Parameters2 ss1: $ss1"
Write-Host "Pass-Parameters2 ss2: $ss2"
return $ss1 + $ss2
}
Both versions are valid. In my experience, the first method is more common, because it is clearer when you have many parameters with type definitions, switches, mandatory tags, default values, constraints, positional info or other additional info. Check out this documentation to see how complex the definition of even one parameter can be.
You may be used to syntax like function Pass-Parameters2($ss1, $ss2) from other languages, but since you will call functions in PowerShell like Pass-Parameters2 -ss1 "String1" -ss2 "String2", you won't stick to the call syntax used in other languages, like Pass-Parameters2("String1", "String2"), anyway.
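To make the comma pitfall concrete - a minimal sketch with a made-up function:

```powershell
function Add-Two($a, $b) { return $a + $b }

Add-Two 1 2          # positional: $a = 1, $b = 2 -> 3
Add-Two -b 2 -a 1    # named, any order -> 3
Add-Two 1, 2         # comma builds the array @(1, 2), which all binds to $a; $b stays $null
Add-Two(1, 2)        # same pitfall: the parentheses don't make this a .NET-style call
```

The last two calls don't error - they silently bind the whole array to the first parameter, which is exactly the surprise described above.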
Thank you both, Caius Jard and Thomas, for your input. The vital piece of information I have discovered is that Pass-Parameters2 $x, $y passes $x and $y as an array to the first parameter of the function, whereas Pass-Parameters2 $x $y passes the two variables separately.
I need to read a few more basic tutorials on PowerShell to learn the finer points of syntax and stop thinking in VBA mode.
The arguments that you pass to a function are collected in an automatic array called $args. This $args array can be used as below:
Function TestFunction {
# access your arguments via the $args array
Write-Host $args[0]
}
TestFunction $param1 $param2 $param3
Or you can explicitly name the arguments:
Function TestFunction($test1, $test2, $test3){
# access your arguments by their declared parameter names
Write-Host $test1
}
TestFunction $param1 $param2 $param3

Environment variable names with parentheses, like %ProgramFiles(x86)%, in PowerShell?

In a PowerShell script, how does one get the value of an environment variable whose name contains parentheses?
To complicate matters, some variables' names contains parentheses while others have similar names without parentheses. For example (using cmd.exe):
C:\>set | find "ProgramFiles"
CommonProgramFiles=C:\Program Files\Common Files
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
ProgramFiles=C:\Program Files
ProgramFiles(x86)=C:\Program Files (x86)
We see that %ProgramFiles% is not the same as %ProgramFiles(x86)%.
My PowerShell code is failing in a weird way because it's ignoring the part of the environment variable name after the parentheses. Since this happens to match the name of a different, but existing, environment variable, I don't fail - I just get the right value of the wrong variable.
Here's a test function in the PowerShell scripting language to illustrate my problem:
function Do-Test
{
$ok = "C:\Program Files (x86)" # note space between 's' and '('
$bad = "$Env:ProgramFiles" + "(x86)" # uses %ProgramFiles%
$d = "${ Env:ProgramFiles(x86) }" # fail (2), LINE 6
# $d = "$Env:ProgramFiles(x86)" # fail (1)
if ( $d -eq $ok ) {
Write-Output "Pass"
} elseif ( $d -eq $bad ) {
Write-Output "Fail: (1) %ProgramFiles% used instead of %ProgramFiles(x86)%"
} else {
Write-Output "Fail: (2) some other reason"
}
}
And here's the output:
PS> Do-Test
Fail: (2) some other reason
Is there a simple change I can make to line 6 above to get the correct value of %ProgramFiles(x86)%?
NOTE: In the text of this post I am using batch file syntax for environment variables as a convenient shorthand. For example %SOME_VARIABLE% means "the value of the environment variable whose name is SOME_VARIABLE". If I knew the properly escaped syntax in PowerShell, I wouldn't need to ask this question.
Simple. Change line 6 to remove the spaces inside the brackets:
$d = "${Env:ProgramFiles(x86)}" # LINE 6 (NO spaces inside brackets)
You just have to wrap the variable that contains () with {}. No spaces inside the brackets.
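As an aside - not required for the fix - the Env: provider drive offers another way to read such variables; quoting the provider path handles the parentheses:

```powershell
# Read via the Env: drive (the quotes protect the parentheses from the parser)
(Get-Item "Env:ProgramFiles(x86)").Value

# Or list all similarly-named environment variables
Get-ChildItem Env: | Where-Object { $_.Name -like "ProgramFiles*" }
```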

What does $script: do in PowerShell?

I've seen this syntax on a variable before and not quite sure exactly what it is:
$script:Foo = "Bar"
The syntax $script:Foo is most commonly used to modify a script-level variable, in this case $Foo. When used to read the variable, usually $Foo is sufficient. For example rather than write this:
verbose-script.ps1
$script:foo = ''
function f { $script:foo }
I would write this (less verbose and functionally equivalent):
script.ps1
$foo = ''
function f { $foo }
Where $script:Foo is crucial is when you want to modify a script-level variable from within another scope such as a function or an anonymous scriptblock e.g.:
PS> $f = 'hi'
PS> & { $f; $f = 'bye';$f }
hi
bye
PS> $f
hi
Notice that $f outside the scriptblock did not change even though we modified it to bye within the scriptblock. What happened is that we only modified a local copy of $f. When you don't apply a modifier like script: (or global:), PowerShell performs a copy-on-write of the higher-scoped variable into a local variable with the same name.
Given the example above, if we really wanted to make a permanent change to $f, we would then use a modifier like script: or global: e.g.:
PS> $f = 'hi'
PS> & { $f; $global:f = 'bye';$f }
hi
bye
PS> $f
bye
The script: prefix causes the name on the right hand side to be looked up in the script scope. Essentially data which is local to the script itself. Other valid scopes include global, local and private.
The help section for scope contains a bit of detail on this subject.
help about_Scopes
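As a small illustration - Get-Variable and Set-Variable also accept a -Scope argument, so the script-scope write could equally be done with cmdlets (a sketch, assuming this runs inside a .ps1 file so the top-level variable lives in the script scope):

```powershell
$foo = 'original'               # script scope when run from a .ps1 file

function f {
    # reads resolve up the scope chain, so this sees 'original'
    Get-Variable foo -ValueOnly

    # writing with an explicit scope modifies the script-level
    # variable instead of creating a local copy
    Set-Variable -Name foo -Value 'changed' -Scope script
}
```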

PowerShell - How do I test for a variable value with "Set-StrictMode -version latest"?

I've just started doing some PowerShell scripting, and I'm running into a problem testing variables for a value. I try to run everything with all warnings enabled, especially while I'm learning, in order to catch dumb mistakes. So, I'm using CTPV3 and setting strict mode on with "set-strictmode -version latest". But I'm running into a road block with checking incoming variables for a value. These variables may or may not already be set.
# all FAIL if $var is undefined under "Set-StrictMode -version latest"
if ( !$var ) { $var = "new-value"; }
if ( $var -eq $null ) { $var = "new-value"; }
I can't find a way to test if a variable has a value that doesn't cause warnings when the variable is missing unless I turn off strict mode. And I don't want to turn strict mode on and off all over the place just to test the variables. I'm sure I'd forget to turn it back on somewhere and it looks terribly cluttered. That can't be right. What am I missing?
You're really testing for two things here, existence and value. And the existence test is the one causing the warnings under the strict mode operation. So, separate the tests. Remembering that PowerShell sees variables as just another provider (just like a file or registry provider) and that all PowerShell variables exist as files in the root folder of the drive called 'variable:', it becomes obvious that you can use the same mechanism that you would ordinarily use to test for any other file existence. Hence, use test-path:
if (!(test-path variable:\var)) {$var = $null} # test for EXISTENCE & create
if ( !$var ) { $var = "new-value"; } # test the VALUE
Note that the current strict mode can be changed in child scopes without affecting the parent scope (e.g., in script blocks). So, you could write a script block that encapsulates removing strict mode and setting the variable, without affecting the surrounding program's strictness. It's a bit tricky because of variable scoping. Two possibilities I can think of:
#1 - return the value from the script block
$var = & { Set-StrictMode -off; switch( $var ) { $null { "new-value" } default { $var } }}
or #2 - use scope modifiers
& { Set-StrictMode -off; if (!$var) { set-variable -scope 1 var "new-value" }}
Probably the worst part about these is the repetitive use of $var (both with and without the leading $); it seems very error-prone. So, instead, I'd use a subroutine:
function set-Variable-IfMissingOrNull ($name, $value)
{
$isMissingOrNull = !(test-path ('variable:'+$name)) -or ((get-variable $name -value) -eq $null)
if ($isMissingOrNull) { set-variable -scope 1 $name $value }
}
set-alias ?? set-Variable-IfMissingOrNull
#...
## in use, `var` must not have a leading $ or the shell attempts to read the possibly non-existent $var
set-Variable-IfMissingOrNull var "new-value"
?? varX 1
This last is probably the way I'd script it.
EDIT: After thinking about your question for a bit longer, I came up with a simpler function that more closely matches your coding style. Try this function:
function test-variable
{# return $false if variable:\$name is missing or $null
param( [string]$name )
$isMissingOrNull = (!(test-path ('variable:'+$name)) -or ((get-variable -name $name -value) -eq $null))
return !$isMissingOrNull
}
set-alias ?-var test-variable
if (!(?-var var)) {$var = "default-value"}
Hope that helps.
Firstly, I love Roy's answer - complete and succinct. I just wanted to mention that it seems like you're trying to set a variable only if it hasn't been set already. This seems like a job for read-only variables, or constants.
To make a variable read-only, a constant, use
Set-Variable
From get-help Set-Variable -full
-- ReadOnly: Cannot be deleted or changed without the Force parameter.
-- Constant: Cannot be deleted or changed. Constant is valid only when
creating a new variable. You cannot set the Constant option on an
existing variable.
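A minimal sketch of the difference between the two options:

```powershell
Set-Variable -Name answer -Option ReadOnly -Value 42
# $answer = 43                              # error: the variable is read-only
Set-Variable -Name answer -Value 43 -Force  # allowed: -Force overrides ReadOnly

Set-Variable -Name pi -Option Constant -Value 3.14159
# Set-Variable -Name pi -Value 3 -Force     # error: constants can never be changed or removed
```

So for the question's scenario - "assign a default only if not already assigned" - a ReadOnly variable gives set-once semantics while still allowing a deliberate override with -Force.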