In a PowerShell script, how does one get the value of an environment variable whose name contains parentheses?
To complicate matters, some variables' names contain parentheses, while others have similar names without them. For example (using cmd.exe):
C:\>set | find "ProgramFiles"
CommonProgramFiles=C:\Program Files\Common Files
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
ProgramFiles=C:\Program Files
ProgramFiles(x86)=C:\Program Files (x86)
We see that %ProgramFiles% is not the same as %ProgramFiles(x86)%.
My PowerShell code is failing in a weird way because it stops reading the environment variable name at the opening parenthesis. Since the truncated name happens to match a different, existing environment variable, nothing fails outright - I just get the right value of the wrong variable.
Here's a test function in the PowerShell scripting language to illustrate my problem:
function Do-Test
{
    $ok  = "C:\Program Files (x86)"        # note space between 's' and '('
    $bad = "$Env:ProgramFiles" + "(x86)"   # uses %ProgramFiles%
    $d   = "${ Env:ProgramFiles(x86) }"    # fail (2), LINE 6
    # $d = "$Env:ProgramFiles(x86)"        # fail (1)
    if ( $d -eq $ok ) {
        Write-Output "Pass"
    } elseif ( $d -eq $bad ) {
        Write-Output "Fail: (1) %ProgramFiles% used instead of %ProgramFiles(x86)%"
    } else {
        Write-Output "Fail: (2) some other reason"
    }
}
And here's the output:
PS> Do-Test
Fail: (2) some other reason
Is there a simple change I can make to line 6 above to get the correct value of %ProgramFiles(x86)%?
NOTE: In the text of this post I am using batch file syntax for environment variables as a convenient shorthand. For example %SOME_VARIABLE% means "the value of the environment variable whose name is SOME_VARIABLE". If I knew the properly escaped syntax in PowerShell, I wouldn't need to ask this question.
Simple. Change line 6 to remove the spaces inside the braces:
$d = "${Env:ProgramFiles(x86)}" # LINE 6 (NO spaces inside the braces)
You just have to wrap a variable name that contains parentheses in {} - with no spaces inside the braces.
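For reference, the same braced syntax also works outside of strings, and there are other ways to read a name that contains parentheses. A couple of equivalent sketches (the Env:-drive and .NET approaches shown here are alternatives I'm adding for illustration, not part of the original answer):
${Env:ProgramFiles(x86)}                                     # brace syntax, also works outside a string
(Get-Item 'Env:ProgramFiles(x86)').Value                     # via the Env: drive
[Environment]::GetEnvironmentVariable('ProgramFiles(x86)')   # via .NET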
I have a folder with multiple PDFs I need to print to different printers. I've created a variable for each shared printer, and depending on the first 2 characters of the PDF's file name, the job goes to the matching printer.
I'm having trouble concatenating 2 strings to form the name of an existing variable so I can use it later in the printing call.
This is what I have now (all PDFs in the directory start with 01 for now):
# SumatraPDF path
$SumatraExe = "C:\Users\Administrador.WIN-FPFTEJASDVR\AppData\Local\SumatraPDF\SumatraPDF.exe"
# PDFs to print path
$PDF = "C:\Program Files (x86)\CarrascocreditosPrueba2\CarrascocreditosPrueba2\DTE\BOL"
# Shared printers list
$01 = '\\192.168.1.70\epson'
$02 = '\\192.168.1.113\EPSON1050'
cd $PDF
While ($true) {
    Get-ChildItem | Where {!$_.PsIsContainer} | Select-Object Name | %{
        $Boleta = $_.Name
        $CodSucursal = $Boleta.Substring(0,2)
        $CodImpresora = '$' + $CodSucursal
        Write-Host $CodImpresora   # shows the literal text $01 in PowerShell ISE
        Write-Host $01             # shows the shared printer path
    }
    Start-Sleep -Seconds 5
}
# Actual PDF printing...
#& $SumatraExe -print-to $CodImpresora $PDF
So basically I need to call an existing variable based on 2 strings. This could probably be done with a switch statement, but that would get too verbose.
concatenating 2 strings to form an existing variable
That won't work in PowerShell; variable tokens are always treated literally.
I'd suggest you use a hashtable instead:
# Shared printers table
$Impresoras = @{
'01' = '\\192.168.1.70\epson'
'02' = '\\192.168.1.113\EPSON1050'
}
Then inside the loop:
$Boleta = $_.Name
$CodSucursal = $Boleta.Substring(0,2)
$Impresora = $Impresoras[$CodSucursal]
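From there, the resolved printer path can feed straight into the printing call - a sketch only, reusing $SumatraExe, $PDF and the -print-to switch from the code in the question:
if ($Impresora) {
    # print the current PDF to the shared printer that matches the 2-character prefix
    & $SumatraExe -print-to $Impresora (Join-Path $PDF $Boleta)
} else {
    Write-Warning "No printer configured for prefix '$CodSucursal'"
}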
Although the language syntax doesn't support variable variable names, you can resolve a variable by name using either the Get-Variable cmdlet:
# Returns a PSVariable object describing the variable $01
Get-Variable '01'
# Returns the raw value currently assigned to $01
Get-Variable '01' -ValueOnly
... or by querying the Variable: PSDrive:
# Same effect as `Get-Variable 01`
Get-Item Variable:\01
While these alternatives exist, I'd strongly suggest steering clear of them in scripts - they're slow, they make the code harder to read, and I don't think I've ever encountered a situation in which using a hashtable or an array wasn't ultimately easier :)
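For completeness, this is roughly what the Get-Variable route would look like inside the loop from the question (a sketch only - the hashtable shown above remains the better choice):
$CodSucursal = $Boleta.Substring(0,2)                       # e.g. '01'
$Impresora   = Get-Variable -Name $CodSucursal -ValueOnly   # resolves the variable named '01' by name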
I have a function that executes a script block. For convenience, the script block does not need to have explicitly defined parameters, but instead can use $_ and $A to refer to the inputs.
In the code, this is done as such:
$_ = $Value
$A = $Value2
& $ScriptBlock
This whole thing is wrapped in a function. Minimal example:
function F {
    param(
        [ScriptBlock]$ScriptBlock,
        [Object]$Value,
        [Object]$Value2
    )
    $_ = $Value
    $A = $Value2
    & $ScriptBlock
}
If this function is written in a PowerShell script file (.ps1), but imported using Import-Module, the behaviour of F is as expected:
PS> F -Value 7 -Value2 1 -ScriptBlock {$_ * 2 + $A}
15
PS>
However, when the function is written in a PowerShell module file (.psm1) and imported using Import-Module, the behaviour is unexpected:
PS> F -Value 7 -Value2 1 -ScriptBlock {$_ * 2 + $A}
PS>
Using {$_ + 1} instead gives 1. It seems that $_ has a value of $null instead. Presumably, some security measure restricts the scope of the $_ variable or otherwise protects it. Or, possibly, the $_ variable is assigned by some automatic process. Regardless, if only the $_ variable were affected, the first failing example would have returned 1 rather than nothing.
Ideally, the solution would involve the ability to explicitly specify the environment in which a script block is run. Something like:
Invoke-ScriptBlock -Variables @{"_" = $Value; "A" = $Value2} -InputObject $ScriptBlock
In conclusion, the questions are:
Why can't script blocks in module files access variables defined in functions from which they were called?
Is there a method for explicitly specifying the variables accessible by a script block when invoking it?
Is there some other way of solving this that does not involve including an explicit parameter declaration in the script block?
Out of order:
Is there some other way of solving this that does not involve including an explicit parameter declaration in the script block?
Yes, if you just want to populate $_, use ForEach-Object!
ForEach-Object executes in the caller's local scope, which helps you work around the issue - except you won't have to, because it also automatically binds input to $_/$PSItem:
# this will work both in module-exported commands and standalone functions
function F {
    param(
        [ScriptBlock]$ScriptBlock,
        [Object]$Value
    )
    ForEach-Object -InputObject $Value -Process $ScriptBlock
}
Now F will work as expected:
PS C:\> F -Value 7 -ScriptBlock {$_ * 2}
14
Ideally, the solution would involve the ability to explicitly specify the environment in which a script block is run. Something like:
Invoke-ScriptBlock -Variables @{"_" = $Value; "A" = $Value2} -InputObject $ScriptBlock
Execute the script block using ScriptBlock.InvokeWithContext():
$functionsToDefine = @{
    'Do-Stuff' = {
        param($a,$b)
        Write-Host "$a - $b"
    }
}
$variablesToDefine = @(
    [PSVariable]::new("var1", "one")
    [PSVariable]::new("var2", "two")
)
$argumentList = @()
{Do-Stuff -a $var1 -b two}.InvokeWithContext($functionsToDefine, $variablesToDefine, $argumentList)
Or, wrapped in a function like your original example:
function F
{
    param(
        [scriptblock]$ScriptBlock,
        [object]$Value
    )
    $ScriptBlock.InvokeWithContext(@{}, @([PSVariable]::new('_', $Value)), @())
}
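A quick check of this wrapper (note that InvokeWithContext() returns a collection, which PowerShell unrolls on output; with $_ bound to 7, the script block evaluates to 14):
PS C:\> F -Value 7 -ScriptBlock {$_ * 2}
14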
Now that you know how to solve your problem, let's get back to the question(s) about module scoping.
First, it's worth noting that you could actually achieve the above using modules, but sort of in reverse.
(In the following, I use in-memory modules defined with New-Module, but the module scope resolution behavior described is the same when you import a script module from disk.)
While module scoping "bypasses" normal scope resolution rules (see below for explanation), PowerShell actually supports the inverse - explicit execution in a specific module's scope.
Simply pass a module reference as the first argument to the & call operator, and PowerShell will treat the subsequent arguments as a command to be invoked in said module:
# Our non-module test function
$twoPlusTwo = { return $two + $two }
$two = 2
& $twoPlusTwo # yields 4
# let's try it with explicit module-scoped execution
$myEnv = New-Module {
    $two = 2.5
}
& $myEnv $twoPlusTwo # Hell froze over, 2+2=5 (returns 5)
Why can't script blocks in module files access variables defined in functions from which they were called?
If they can, why can't the $_ automatic variable?
Because loaded modules maintain state, and the implementers of PowerShell wanted to isolate module state from the caller's environment.
Why might that be useful, and why might one preclude the other, you ask?
Consider the following example, a non-module function to test for odd numbers:
$two = 2
function Test-IsOdd
{
    param([int]$n)
    return $n % $two -ne 0
}
If we run the above statements in a script or at an interactive prompt, subsequently invoking Test-IsOdd should yield the expected result:
PS C:\> Test-IsOdd 123
True
So far, so good - but relying on the non-local $two variable carries a flaw in this scenario: if, somewhere in our script or in the shell, we accidentally reassign $two, we might break Test-IsOdd completely:
PS C:\> $two = 1 # oops!
PS C:\> Test-IsOdd 123
False
This is expected since, by default, variable scope resolution just wanders up the call stack until it reaches the global scope.
But sometimes you might require state to be kept across executions of one or more functions, like in our example above.
Modules solve this by following slightly different scope resolution rules - module-exported functions defer to something we call module scope (before reaching the global scope).
To illustrate how this solves our problem from before, consider this module-exported version of the same function:
$oddModule = New-Module {
    function Test-IsOdd
    {
        param([int]$n)
        return $n % $two -ne 0
    }
    $two = 2
}
Now, if we invoke our new module-exported Test-IsOdd, we predictably get the expected result, regardless of "contamination" in the caller's scope:
PS C:\> Test-IsOdd 123
True
PS C:\> $two = 1
PS C:\> Test-IsOdd 123 # still works
True
This behavior, while maybe surprising, basically serves to solidify the implicit contract between the module author and the user - the module author doesn't need to worry too much about what's "out there" (the caller's session state), and the user can expect whatever is going on "in there" (the loaded module's state) to work correctly without worrying about what they assign to variables in the local scope.
Module scoping behavior is poorly documented in the help files, but it is explained in some depth in chapter 8 of Bruce Payette's "PowerShell in Action" (ISBN 9781633430297).
I am porting a script from bash to PowerShell, and I would like to keep the same support for argument parsing in both. In the bash version, one of the possible arguments is --, and I want to detect that argument in PowerShell as well. However, nothing I've tried so far has worked. I cannot define it as a parameter like param($-), as that causes a compile error. Also, if I forego PowerShell argument processing entirely and just use $args, everything appears fine at first - but when I run the function, the -- argument is missing.
Function Test-Function {
    Write-Host $args
}
Test-Function -- -args go -here # Prints "-args go -here"
I know about $PSBoundParameters as well, but the value isn't there, because I can't bind a parameter named $-. Are there any other mechanisms here that I can try, or any solution?
For a bit more context, note that my using PowerShell here is a side effect. This isn't expected to be used as a normal PowerShell command; I have also written a batch wrapper around it, but the logic of the wrapper is more complex than I wanted to write in batch, so the batch wrapper just calls the PowerShell function, which then does the more complex processing.
I found a way to do it, but instead of a double hyphen you have to pass three of them.
This is a simple function, you can change the code as you want:
function Test-Hyphen {
    param(
        ${-}
    )
    if (${-}) {
        Write-Host "You used triple-hyphen"
    } else {
        Write-Host "You didn't use triple-hyphen"
    }
}
Sample 1
Test-Hyphen
Output
You didn't use triple-hyphen
Sample 2
Test-Hyphen ---
Output
You used triple-hyphen
As an aside: PowerShell allows a surprising range of variable names, but you have to enclose them in {...} in order for them to be recognized; that is, ${-} technically works, but it doesn't solve your problem.
The challenge is that PowerShell quietly strips -- from the list of arguments - and the only way to preserve that token is to precede it with the PSv3+ stop-parsing symbol, --%, which, however, fundamentally changes how the arguments are passed and is an extra requirement - exactly what you're trying to avoid.
Your best bet is to try - suboptimal - workarounds:
Option A: In your batch-file wrapper, translate -- to a special argument that PowerShell does preserve and pass that instead; the PowerShell script then has to re-translate that special argument to -- (see the sketch at the end of this answer).
Option B: Perform custom argument parsing in PowerShell:
You can analyze $MyInvocation.Line, which contains the raw command line that invoked your script, and look for the presence of -- there.
Getting this right and making it robust is nontrivial, however.
Here's a reasonably robust approach:
# Don't use `param()` or `$args` - instead, do your own argument parsing:
# Extract the argument list from the invocation command line.
$argList = ($MyInvocation.Line -replace ('^.*' + [regex]::Escape($MyInvocation.InvocationName)) -split '[;|]')[0].Trim()
# Use Invoke-Expression with a Write-Output call to parse the raw argument list,
# performing evaluation and splitting it into an array:
$customArgs = if ($argList) { @(Invoke-Expression "Write-Output -- $argList") } else { @() }
# Print the resulting arguments array for verification:
$i = 0
$customArgs | % { "Arg #$((++$i)): [$_]" }
Note:
There are undoubtedly edge cases where the argument list may not be correctly extracted or where the re-evaluation of the raw arguments causes side effects, but for the majority of cases - especially when called from outside PowerShell - this should do.
While useful here, Invoke-Expression should generally be avoided.
If your script is named foo.ps1 and you invoked it as ./foo.ps1 -- -args go -here, you'd see the following output:
Arg #1: [--]
Arg #2: [-args]
Arg #3: [go]
Arg #4: [-here]
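For completeness, here is what the PowerShell side of Option A could look like. This is only a sketch: '--SENTINEL--' is a made-up token that the batch-file wrapper would be expected to pass in place of --.
# Re-translate the wrapper's sentinel token back to '--';
# once '--' exists as string data, it is no longer stripped.
$translatedArgs = foreach ($a in $args) {
    if ($a -eq '--SENTINEL--') { '--' } else { $a }
}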
I came up with the following solution, which also works well inside pipelines and multi-line expressions. I am using the PowerShell parser to parse the invocation expression string (while ignoring any incomplete tokens, which might be present at the end of the $MyInvocation.Line value) and then Invoke-Expression with Write-Output to get the actual argument values:
# Parse the whole invocation line
$code = [System.Management.Automation.Language.Parser]::ParseInput($MyInvocation.Line.Substring($MyInvocation.OffsetInLine - 1), [ref]$null, [ref]$null)
# Find our invocation expression without redirections
$myline = $code.Find({$args[0].CommandElements}, $true).CommandElements | % { $_.ToString() } | Join-String -Separator ' '
# Get the argument values
$command, $arguments = Invoke-Expression ('Write-Output -- ' + $myline)
# Fine-tune arguments to be always an array
if ( $arguments -is [string] ) { $arguments = @($arguments) }
if ( $arguments -eq $null ) { $arguments = @() }
Please be aware that the original values in the function call are reevaluated in Invoke-Expression, so any local variables might shadow values of the actual arguments. Because of that, you can also use this (almost) one-liner at the top of your function, which prevents the pollution of local variables:
# Parse arguments
$command, $arguments = Invoke-Expression ('Write-Output -- ' + ([System.Management.Automation.Language.Parser]::ParseInput($MyInvocation.Line.Substring($MyInvocation.OffsetInLine - 1), [ref]$null, [ref]$null).Find({$args[0].CommandElements}, $true).CommandElements | % { $_.ToString() } | Join-String -Separator ' '))
# Fine-tune arguments to be always an array
if ( $arguments -is [string] ) { $arguments = @($arguments) }
if ( $arguments -eq $null ) { $arguments = @() }
I'm having difficulty finding an answer online that solves this issue.
I have a script which runs all my other scripts in a particular order.
$x=0;
$z=0;
cd *file path* ;
.\filename1.ps1 ; write-host "$x text $z text";
.\filename2.ps1 ; write-host "$x text $z text";
In each of these scripts I have options that add 1 to either variable $x or variable $z:
$a=Read-Host
if ($a -eq "Option One") {$x = $x+1}
elseif ($a -eq "Option Two") {$z = $z+1}
else {Write-Host "Not a valid option" ; .\filenameX.ps1}
The issue is that the script that runs all these scripts won't recognise the changes to the variables. How do I fix this?
The naïve answer is to "dot-source" these scripts, i.e. to invoke them with the . operator.
Using . executes the scripts in the caller's variable scope, so that top-level modifications of $x and $z will be visible even after .\filename1.ps1 and .\filename2.ps1 have completed.
# Note the `. ` preceding the file path - the space after "." is mandatory
. .\filename1.ps1 ; "$x text $z text"
. .\filename2.ps1 ; "$x text $z text"
Note, however, that all top-level variables created or modified in . -invoked scripts will be visible to the caller.
For more on variable scopes in PowerShell, see this answer.
Better-encapsulated options are to either (a) output the modified values or, less commonly, (b) use [ref] parameters to pass variables to the scripts by reference - whose parameters must then be declared and assigned to accordingly, as sketched below.
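Here is a minimal sketch of variant (b), the [ref] parameter approach; the parameter name Counter is made up for illustration:
# filename1.ps1
param([ref]$Counter)
$a = Read-Host
if ($a -eq "Option One") { $Counter.Value = $Counter.Value + 1 }

# calling script
$x = 0
.\filename1.ps1 -Counter ([ref]$x)
"$x text"   # reflects any increment made inside filename1.ps1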
If you define your x (and z) variables with global scope outside your scripts, like this:
$global:x = 0
You can increment it inside your scripts like this:
$global:x = $global:x + 1
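Put together, that looks something like the following (a minimal sketch; filename1.ps1 is taken from the question, and the controlling script's name is arbitrary):
# controlling script
$global:x = 0
$global:z = 0
.\filename1.ps1
Write-Host "$global:x text $global:z text"

# filename1.ps1
$a = Read-Host
if ($a -eq "Option One") { $global:x = $global:x + 1 }
elseif ($a -eq "Option Two") { $global:z = $global:z + 1 }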
I have the following code
$scriptpath = "C:\Test"
$scriptname = "mount.bat"
$myimage = Read-Host 'Enter the file name of your image'
if (Test-Path $scriptpath\$scriptname) {
    Remove-Item $scriptpath\$scriptname
}
Add-Content $scriptpath\$scriptname ':Loop 'n "C:\Program
Files\file.exe" -f \\host\"Shared Folders"\$myimage -m V: `n if not
%errorlevel% equal 0 goto :Loop'
I can't get PowerShell to output the variable correctly in the resulting batch file; it just says "$myimage" and not the file name. I have tried using the backtick ` and quote ' characters, but no luck. I also cannot get PowerShell to write the output onto separate lines. If anyone could help, that would be great.
Since you're using (a) variable references ($myimage) and (b) escape sequences such as `n (to represent a newline) in your string, you must use double quotes to get the expected result.
(Single-quoted strings treat their contents literally - no interpolation takes place.[1])
Furthermore, since your string has embedded double quotes, they must be escaped as `" (a backtick followed by the double quote).
Here's a fixed version; note that I've used actual line breaks for readability (rather than `n escape sequences in a single-line string):
$myImage = 'test.png'
":Loop
`"C:\ProgramFiles\file.exe`" -f \\host\`"Shared Folders`"\$myimage -m V:
if not %errorlevel% equal 0 goto :Loop"
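To actually write the batch file, such a string can then be passed to Set-Content (or Add-Content) - a sketch, reusing the $scriptpath, $scriptname and $myimage variables from the question:
$batchContent = ":Loop
`"C:\Program Files\file.exe`" -f \\host\`"Shared Folders`"\$myimage -m V:
if not %errorlevel% equal 0 goto :Loop"

# Set-Content overwrites any existing file, so the Remove-Item step becomes unnecessary
Set-Content -Path "$scriptpath\$scriptname" -Value $batchContent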
[1] The only interpretation that takes place in a single-quoted string is that the escape sequence '' is recognized as an embedded '.