I have come across the strangest behaviour that has been driving me nuts when writing scripts. Sometimes it is impossible to remove the value of a variable in PowerShell. I have tried:
Remove-Variable -Force
I also tried setting it to an empty string or to $null, but the variable's value and type remain.
Anyone have an idea how this can happen?
I am using PowerShell version 5 on Windows Server 2016.
Here are some screenshots:
To remove a variable, pass its name without the $ sigil to the Remove-Variable cmdlet's
-Name parameter (which is positionally implied); using the example of a variable $date:
Using an argument:
# Note the required absence of $ in the name; quoting the var. name is
# optional in this case.
Remove-Variable -Force -Name date
Using the pipeline would require you to specify objects whose .Name property contains the name of the variable to delete, because these property values implicitly bind to Remove-Variable's -Name parameter; the simplest way to achieve that is to use the Get-Variable cmdlet, which too requires specifying the name without the $:
# Works, but is inefficient.
Get-Variable -Name date | Remove-Variable -Force
However, this is both more verbose and less efficient than directly passing the name(s) as an argument.
As for what you tried:
Your variable-removal command is conceptually flawed:
$date | Remove-Variable -Force
Except as the LHS of an assignment ($date = ...), referring to a variable with the $ sigil returns its value, not the variable itself.
That is, since your $date variable contains a [datetime] instance, it is that instance that is sent through the pipeline, and since only objects with a .Name property containing a variable name are supported as input, the command fails.
In effect, your call is equivalent to the following, which predictably fails:
PS> Get-Date | Remove-Variable -Force
Remove-Variable : The input object cannot be bound to any parameters for the command
either because the command does not take pipeline input
or the input and its properties do not match any of the parameters that take pipeline input.
What the somewhat verbose, general error message is implying in this case is that the input object was of the wrong type (because only objects with a .Name property are accepted, which [datetime] doesn't have).
Contexts in which you need to refer to a variable itself rather than to its value:
What these contexts have in common is that you need to specify the variable name without the $ sigil.
Two notable examples:
All *-Variable cmdlets expect the names of variables to operate on, such as the Get-Variable cmdlet that returns objects representing variables, of type System.Management.Automation.PSVariable; these objects include the name, value, and other attributes of a PowerShell variable.
# Gets an object describing variable $date
$varObject = Get-Variable date # -Name parameter implied
When you pass the name of an output variable to a -*Variable common parameter
# Prints Get-Date's output while also capturing the output
# in variable $date.
Get-Date -OutVariable date
As implied above, assigning to a variable with = is the only exception: there you do use the $ sigil, e.g. $date = Get-Date.
Note that this differs from POSIX-compatible shells such as bash, where you do not use $ in assignments (and must not have whitespace around =); e.g., date=$(date).
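To recap the sigil rules with a compact sketch (repeating the examples from above):
$date = Get-Date                  # assignment: the $ sigil *is* used
Get-Variable -Name date           # *-Variable cmdlets: variable *name*, no $
Remove-Variable -Name date -Force
Get-Date -OutVariable date        # -*Variable common parameters: variable *name*, no $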
Related
Can anyone tell me why this command works fine in the PowerShell console, returning a single thumbprint, but when run as a script it just returns all the certificates' thumbprints:
$crt = (Get-ChildItem -Path Cert:\LocalMachine\WebHosting\ | Where-Object {$_.Subject.Contains($certcn)}).thumbprint
$certcn is a string containing a domain, e.g. "www.test.com"
I figured it out. $certcn was derived from $args[0]. It turns out $args[0] is not a string, and even though PS would quite happily use it as a string in other commands, it would not do this with Where-Object.
Not sure what type $args[0] actually is, but doing $certcn = $args[0].tostring() fixed it.
The only explanation for your symptom is that the value of variable $certcn:
either: is the empty string ('') because 'someString'.Contains('') returns $true for any input string.
or: is implicitly converted to the empty string, though that wouldn't happen often in practice; here are some examples (see GitHub issue # for the [pscustomobject] stringification bug mentioned below):
# An empty array stringifies to ''
'someString'.Contains(@()) # -> $true
# A single-element array containing $null stringifies to ''
'someString'.Contains(@($null)) # -> $true
# Due to a longstanding bug, [pscustomobject] instances, when
# stringified via .ToString(), convert to the empty string.
# This makes the command equivalent to `.Contains(@(''))`, which is again
# the same as `.Contains('')`
'someString'.Contains(@([pscustomobject] @{ foo=1 })) # -> $true
$args[0] is not a string
The automatic $args variable is an array that contains all positional arguments that weren't bound to declared parameters, if any.
$args can contain elements of any data type, and what that type is is solely determined by the caller.
However, if you formally declare a parameter, you can type it, which means that if the caller passes an argument of a different data type, an attempt is made to convert the argument to the parameter's type (which may fail, but at least the failure will be "loud", and the reason obvious).
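A quick illustration of that difference, using hypothetical throwaway script blocks (not part of the original script):
# $args: the element keeps whatever type the caller passed (here, [datetime]).
& { $args[0].GetType().Name } (Get-Date)                          # -> DateTime
# Typed parameter: the argument is converted to [string] on binding.
& { param([string] $CertCN) $CertCN.GetType().Name } (Get-Date)   # -> String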
A robust solution for your script:
param(
[Parameter(Mandatory)] # Ensure that the caller passes a value.
[string] $CertCN # Type-constrain to a string.
# , ... declare other parameters as needed
)
# $CertCN is now guaranteed to be a *string* that is *non-empty*.
$crt =
(Get-ChildItem -Path Cert:\LocalMachine\WebHosting |
Where-Object { $_.Subject.Contains($CertCN) }).thumbprint
Note:
The use of the [Parameter()] attribute in the parameter declaration block (param(...)) makes your script an advanced one, which means that $args isn't supported, requiring all arguments to bind to explicitly declared parameters; however, you can define a catch-all parameter with [Parameter(ValueFromRemainingArguments)], if needed. (The other thing that makes a script or function an advanced one is use of the [CmdletBinding()] attribute above the param(...) block as a whole.)
[Parameter(Mandatory)], in addition to ensuring that the caller passes a value for the parameter, implicitly also prevents passing the empty string (or $null) - though you could explicitly allow that with [AllowEmptyString()]
Additionally, advanced scripts and functions automatically prevent passing arrays to [string]-typed parameters, which is desirable. (By contrast, simple functions and scripts simply stringify arrays, as would happen in expandable strings (string interpolation); e.g., & { param([string] $foo) $foo } 1, 2 binds '1 2', which is also what you'd get with "$(1, 2)")
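A small sketch contrasting the two behaviors just described (the throwaway script blocks are only for illustration):
# Simple function: the array is stringified, as in an expandable string.
& { param([string] $foo) $foo } 1, 2                      # -> '1 2'
# Advanced function: binding an array to a [string] parameter fails loudly.
& { [CmdletBinding()] param([string] $foo) $foo } 1, 2    # -> binding error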
Caveat:
When passing a value to a [string]-typed parameter, PowerShell accepts a scalar (non-collection) of any type, and any non-string type is automatically converted to a string, via .ToString(). This is usually desirable and convenient, but can result in useless stringifications; e.g.:
& { param([string] $str) $str } @{} # -> 'System.Collections.Hashtable'
Instances of hashtables (@{ ... }) stringify to their type name, which is unhelpful, and this is the default behavior for any type that doesn't explicitly implement a meaningful string representation by overriding the .ToString() method.
If that is a concern, you can modify your script to ensure that the argument value being passed already is a string, using a [ValidateScript()] attribute.
param(
[Parameter(Mandatory)]
# Ensure that the argument value is a [string] to begin with.
# Note: The `ErrorMessage` property requires PowerShell (Core) 7+
[ValidateScript({ $_ -is [string] }, ErrorMessage='Please pass a *string* argument.')]
$CertCN # Do not type-constrain, so that the original type can be inspected.
# , ... declare other parameters as needed
)
# ...
As stated in the code comments, use of the ErrorMessage property unfortunately requires PowerShell (Core) 7+. In Windows PowerShell a standard error message is invariably shown, which isn't user-friendly at all.
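If you also need a custom message in Windows PowerShell, one possible workaround - a sketch, not something the ErrorMessage property supports there - is to throw from inside the [ValidateScript()] script block; the thrown message is then included in the resulting binding error:
param(
    [Parameter(Mandatory)]
    [ValidateScript({
        if ($_ -is [string]) { $true }                       # validation passes
        else { throw 'Please pass a *string* argument.' }    # custom message, also works in Windows PowerShell
    })]
    $CertCN
)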
I am using Get-WindowsAutopilotInfo to generate a computer's serial number and a hash code and export that info as a CSV.
Here is the code I usually use:
new-item "c:\Autopilot_Export" -type directory -force
Set-Location "C:\Autopilot_Export"
Get-WindowsAutopilotInfo.ps1 -OutputFile Autopilot_CSV.csv
Robocopy C:\Autopilot_Export \\Zapp\pc\Hash_Exports /copyall
This outputs a CSV file named "Autopilot_CSV.csv" to the C:\Autopilot_Export directory and then Robocopy copies it to the Network Share drive inside of the Hash_Exports folder listed above. In the above code, if I manually type in "test", "123", "ABC", etc. and replace "Autopilot_CSV" it will output a CSV under all of those names as well. So it looks like Get-WindowsAutopilotInfo will create a CSV file and save the file name with whatever string you pass into it. Great.
However, I am running this script on multiple different computers and I would like the exported CSV to be named something unique to the machine it is running on so I can easily identify it once it's copied. I am trying to pass the value of the environmental variable $env:computername as a string input for the CSV and it isn't working.
Here's the code I'm trying to use:
new-item "c:\Autopilot_Export" -type directory -force
Set-Location "C:\Autopilot_Export"
Get-WindowsAutopilotInfo.ps1 -OutputFile $env:computername.csv
Robocopy C:\Autopilot_Export C:\Users\danieln\Desktop /copyall
This code does not generate the CSV file at all. Why not?
Do I just have a basic misunderstanding of how environmental variables are used in PowerShell? $env:computername appears to return a string of the computer's name, but I cannot pass it into Get-WindowsAutopilotInfo and have it save, but it will work if I manually type a string in as the input.
I have also tried setting it to a variable $computername = [string]$env:computername and just passing $computername in before the .CSV and that doesn't appear to work either. And per the documentation, environmental variables are apparently already strings.
Am I missing something?
Thanks!
js2010's answer shows the effective solution: use "..."-quoting, i.e. an expandable string, explicitly.
It is a good habit to use "..." explicitly around command arguments that are strings containing variable references (e.g. "$HOME/projects") or subexpressions (e.g., "./folder/$(Get-Date -Format yyyy-MM)").
While such compound string arguments generally do not require double-quoting[1] - because they are implicitly treated as if they were "..."-enclosed - in certain cases they do, and when they do is not obvious and therefore hard to remember:
This answer details the surprisingly complex rules, but here are two notable pitfalls if you do not use "...":
If a compound argument starts with a subexpression ($(...)), its output is passed as a separate argument; e.g. Write-Output $(Get-Location)/folder passes two arguments to Write-Output: the result of $(Get-Location) and literal /folder
If a compound argument starts with a variable reference and is followed by what syntactically looks like either (a) a property access (e.g., $PSVersionTable.PsVersion) or (b) a method call (e.g., $PSHome.Tolower()) it is executed as just that, i.e. as an expression (rather than being considered a variable reference followed by a literal part).
Aside #1: Such an argument then isn't necessarily a string, but whatever data type the property value or method-call return value happens to be.
Aside #2: A compound string that starts with a quoted string (whether single- or double-quoted) does not work, because the quoted string at the start is also considered an expression, like property access and method calls, so that what comes after is again passed as separate argument(s). Therefore, you can only have a compound string consisting of a mix of quoted and unquoted substrings if the compound string starts with an unquoted substring or a non-expression variable reference.[2]
The latter is what happened in this case:
Unquoted $env:computername.csv was interpreted as an attempt to access a property named csv on the object stored in (environment) variable $env:computername, and since the [string] instance stored there has no csv property, the expression evaluated to $null.
By forcing PowerShell to interpret this value as an expandable string via "...", the usual interpolation rules apply, which means that the .csv is indeed treated as a literal (because property access requires use of $(...) in expandable strings).
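A quick way to see the difference from the command line (COMP001 is just an assumed computer name):
Write-Output $env:COMPUTERNAME.csv     # property access -> $null, i.e. no output
Write-Output "$env:COMPUTERNAME.csv"   # expandable string -> COMP001.csv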
[1] Quoting is required for strings that contain metacharacters such as spaces.
For values to be treated verbatim, '...' quoting is the better choice (see the bottom section of this answer for an overview of PowerShell string literals).
Also, using neither double- nor single-quoting and individually `-escaping metacharacters is another option (e.g. Write-Output C:\Program` Files).
[2] For instance, Write-Output a"b`t"'`c' and Write-Output $HOME"b`t"'`c' work as intended, whereas Write-Output "b`t"'`c' and Write-Output $HOME.Length"b`t"'`c' do not (the latter two each pass 3 arguments). The workaround is either to use a single "..." string with internal `-escaping as necessary (e.g. Write-Output "${HOME}b`t``c") or to use a string-concatenation expression with + enclosed in (...) (e.g. Write-Output ($HOME + "b`t" + '`c')).
Double-quoting seems to work. The colon is special in PowerShell parameters.
echo hi | set-content "$env:computername.csv"
dir
Directory: C:\users\me\foo
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 2/11/2021 1:02 PM 4 COMP001.csv
The colon is special. For example in switch parameters:
get-childitem -force:$true
Actually, it's trying to find a property named csv on the string; compare with .length, a property that does exist:
echo hi | Set-Content $env:COMPUTERNAME.length
dir
Directory: C:\Users\me\foo
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 2/11/2021 3:04 PM 4 8
Basically pass it a string rather than the variable:
write-host $env:computername.csv
# output: (no output - PowerShell looks for a .csv property on $env:computername, which doesn't exist)
Try the following syntax:
$filename = "$($env:computername).csv"
write-host $filename
# output: MYPCNAME.csv
I would like for the resulting value in $s to be "now is then for today"
PS H:\> $s = "now is $({if (1 -eq 1){'then'}}) for today"
PS H:\> $s
now is if (1 -eq 1){'then'} for today
It's definitely possible, and pretty easy, with subexpressions.
You were close; you just need to remove the outer set of curly braces:
$s = "now is $(if (1 -eq 1){'then'}) for today"
$s
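Any statement works inside $(...); for example, a small variation with an else branch:
$s = "now is $(if (1 -eq 2) {'then'} else {'never'}) for today"
$s   # -> now is never for today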
Delay-bind script-block arguments are an implicit feature that:
only works with parameters that are designed to take pipeline input,
of any type except the following, in which case regular parameter binding happens[1]:
[scriptblock]
[object] ([psobject], however, does work, and therefore [pscustomobject] too)
(no type specified), which is effectively the same as [object]
whether such parameters accept pipeline input by value (ValueFromPipeline) or by property name (ValueFromPipelineByPropertyName) is irrelevant.
enables per-input-object transformations via a script block passed instead of a type-appropriate argument; the script block is evaluated for each pipeline object, which is accessible inside the script block as $_, as usual, and the script block's output - which is assumed to be type-appropriate for the parameter - is used as the argument.
Since such ad-hoc script blocks by definition do not match the type of the parameter you're targeting, you must always use the parameter name explicitly when passing them.
Delay-bind script blocks unconditionally provide access to the pipeline input objects, even if the parameter would ordinarily not be bound by a given pipeline object, if it is defined as ValueFromPipelineByPropertyName and the object lacks a property by that name.
This enables techniques such as the following call to Rename-Item, where the pipeline input from Get-Item is - as usual - bound to the -LiteralPath parameter, but passing a script block to -NewName - which would ordinarily only bind to input objects with a .NewName property - enables access to the same pipeline object and thus deriving the destination filename from the input filename:
Get-Item file | Rename-Item -NewName { $_.Name + '1' } # renames 'file' to 'file1'; input binds to both -LiteralPath (implicitly) and the -NewName script block.
Note: Unlike script blocks passed to ForEach-Object or Where-Object, for example, delay-bind script blocks run in a child variable scope[2], which means that you cannot directly modify the caller's variables, such as incrementing a counter across input objects.
As a workaround, use a [ref]-typed variable declared in the caller's scope and access its .Value property inside the script block - see this answer for an example.
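A minimal sketch of that [ref] workaround, using a counter to prefix file names (the naming pattern is hypothetical; -WhatIf previews only):
$i = [ref] 0   # [ref]-typed counter declared in the caller's scope
Get-ChildItem *.txt |
  Rename-Item -WhatIf -NewName {
    $i.Value++                            # updates the caller's counter via the [ref] object
    '{0:d2}_{1}' -f $i.Value, $_.Name     # e.g. '01_a.txt'
  }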
I'm looking at the documentation of PowerShell's Rename-Item cmdlet and there is an example like this.
Get-ChildItem *.txt | Rename-Item -NewName { $_.name -Replace '\.txt','.log' }
This example shows how to use the Replace operator to rename multiple
files, even though the NewName parameter does not accept wildcard
characters.
This command renames all of the .txt files in the current directory to
.log.
The command uses the Get-ChildItem cmdlet to get all of the files in
the current folder that have a .txt file name extension. Then, it
uses the pipeline operator (|) to send those files to Rename-Item .
The value of NewName is a script block that runs before the value is
submitted to the NewName parameter.
Note the last sentence:
The value of NewName is a script block that runs before the value is submitted to the NewName parameter.
Actually NewName is a string:
[-NewName] <String>
So does that mean I can always use a script block when the required parameter type is a string?
# Delay-bind script-block argument:
# The code inside { ... } is executed for each input object ($_) and
# the output is passed to the -NewName parameter.
... | Rename-Item -NewName { $_.Name -replace '\.txt$','.log' }
The call above shows an application of a delay-bind script-block ({ ... }) argument, which is an implicit feature that:
only works with parameters that are designed to take pipeline input,
of any type except the following, in which case regular parameter binding happens[1]:
[scriptblock]
[object] ([psobject], however, does work, and therefore the equivalent [pscustomobject] too)
(no type specified), which is effectively the same as [object]
Whether such parameters accept pipeline input by value (ValueFromPipeline) or by property name (ValueFromPipelineByPropertyName) is irrelevant.
See this answer for how to discover a given cmdlet's pipeline-binding parameters; in the simplest case, e.g.:
Get-Help Rename-Item -Parameter * | Where pipelineInput -like True*
enables per-input-object transformations via a script block passed instead of a type-appropriate argument; the script block is evaluated for each pipeline object, which is accessible inside the script block as $_, as usual, and the script block's output - which is assumed to be type-appropriate for the parameter - is used as the argument.
Since such ad-hoc script blocks by definition do not match the type of the parameter you're targeting, you must always use the parameter name explicitly when passing them.
Delay-bind script blocks unconditionally provide access to the pipeline input objects, even if the parameter would ordinarily not be bound by a given pipeline object, if it is defined as ValueFromPipelineByPropertyName and the object lacks a property by that name.
This enables techniques such as the following call to Rename-Item, where the pipeline input from Get-Item is - as usual - bound to the -LiteralPath parameter, but passing a script block to -NewName - which would ordinarily only bind to input objects with a .NewName property - enables access to the same pipeline object and thus deriving the destination filename from the input filename:
Get-Item file | Rename-Item -NewName { $_.Name + '1' } # renames 'file' to 'file1'; input binds to both -LiteralPath (implicitly) and the -NewName script block.
Note: Unlike script blocks passed to ForEach-Object or Where-Object, for example, delay-bind script blocks run in a child variable scope[2], which means that you cannot directly modify the caller's variables, such as incrementing a counter across input objects.
As a workaround, use Get-Variable to gain access to a caller's variable and access its .Value property inside the script block - see this answer for an example.
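A minimal sketch of that Get-Variable-based workaround, again using a counter across input objects (the naming pattern is hypothetical; -WhatIf previews only):
$i = 0
Get-ChildItem *.txt |
  Rename-Item -WhatIf -NewName {
    $iVar = Get-Variable i               # gets the caller's $i as a PSVariable object
    $iVar.Value++                        # modifying .Value updates the caller's variable
    '{0:d2}_{1}' -f $iVar.Value, $_.Name
  }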
[1] Error conditions:
If you mistakenly attempt to pass a script block to a parameter that is either not pipeline-binding or is [scriptblock]- or [object]-typed (untyped), regular parameter-binding occurs:
The script block is passed once, before pipeline-input processing begins, if any.
That is, the script block is passed as a (possibly converted) value, and no evaluation happens.
For parameters of type [object] or [scriptblock] / a delegate type such as System.Func that is convertible to a script block, the script block will bind as-is.
In the case of a (non-pipeline-binding) [string]-typed parameter, the script block's literal content is passed as the string value.
For all other types, parameter binding - and therefore the command as a whole - will simply fail, since conversion from a script block is not possible.
If you neglect to provide pipeline input while passing a delay-bind script block to a pipeline-binding parameter that does support them, you'll get the following error:
Cannot evaluate parameter '<name>' because its argument is specified as a script block and there is no input. A script block cannot be evaluated without input.
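To illustrate the [string] case described above with a hypothetical call: the script block is not evaluated; its source text is what binds.
& { param([string] $s) $s } { $_.Name + '1' }   # -> " $_.Name + '1' " (the block's literal text)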
[2] This discrepancy is being discussed in GitHub issue #7157.
So does that mean I can always use a script block when the required parameter type is a string? No.
The technique used here is called delay binding, and it is very useful in this scenario.
What happens when you do delay binding?
PowerShell's parameter binder recognizes the use of delay binding and executes the script block first; its output is then converted to the respective parameter's expected type, which here is [string].
Below is an example.
#Working one
'Path'|Join-Path -Path {$_} -ChildPath 'File'
#Not working one
Join-Path -Path {'path'} -ChildPath 'File'
Join-Path : Cannot evaluate parameter 'Path' because its argument is specified as a script block and there is no input. A script block cannot be evaluated without input.
To know more about ParameterBinding, you can do a Trace-Command like below.
Trace-Command ParameterBinding -Expression {'Path'|Join-Path -Path {$_} -ChildPath 'File'} -PSHost
With Delay Binding, the parameter can receive a value from the pipeline using a scriptblock instead of the actual data type of the parameter.
In the scriptblock, $_ stands for the piped value.
It is only available when there is input coming on the pipeline.
Here's a test script I'm trying to use, and I'm calling it from a separate process and attempting to pass parameters to it. The idea is that I have a user interface that allows a user to select a CmdLet and then populate another dropdown with the properties/methods of that CmdLet.
My problem seems to be that the script is rendering the input parameter as a string, and is thus creating a text file with the methods and properties of an arbitrary string to which you've applied Get-Member, such as "Clone" or "CompareTo". The only property as such is "Length".
Is there any way to have that input parameter be brought over as a usable CmdLet instead of a string? Perhaps I'm missing something, or perhaps what I'm attempting to do isn't possible.
param([string]$inputCmdLet = "Get-NetAdapter");
$wrkgDir = "D:\Distribution\Operational";
# Get Properties and Methods for CmdLet Input Parameter
$propertyNames = $inputCmdLet | Get-Member -MemberType Property;
$methodNames = $inputCmdLet | Get-Member -MemberType Method;
# Sort Arrays
$propertyNames = $propertyNames | Sort-Object Name;
$methodNames = $methodNames | Sort-Object Name;
# Output Results to Text Files
$propertyNames.Name | Out-File $wrkgDir\$inputCmdLet.Properties.txt;
$methodNames.Name | Out-File $wrkgDir\$inputCmdLet.Methods.txt;
EDIT FOR MORE INFO:
The output I'm hoping for, in the example of Get-NetAdapter, is the list of properties in one output file and methods in the other. What I'm getting now is this:
Left list is expected (partial) result, right list is actual result.
I'm uncertain how to achieve the result list on the left (in the image) programmatically. I'm able to get the proper output by typing it out statically:
$mbrNameStatic = Get-NetAdapter | Get-Member;
$mbrNameStatic.Name | Out-File $wrkgDir\$inputCmdLet.Strings.txt;
But when I use the input parameter, it merges the value in as a string, so it seems the actual runtime code looks more like this:
$propertyNames = "Get-NetAdapter" | Get-Member -MemberType Property;
So the addition of the quotes renders the cmdlet as a string (makes sense, I suppose, since my input parameter is a string), which returns the properties and methods of a string instead of the cmdlet. Is there any way to have the cmdlet render out without the quotes?
Please do let me know if I'm not making sense with this, either with my description, or with the idea altogether.
Thanks!
In order to execute a command whose name (only) is stored in a variable or whose name is specified in single or double quotes, you must use &, the call operator.
# WRONG: The token is interpreted as an *expression* that outputs a *string*
"Get-NetAdapter" # outputs the [string] literal
# WRONG: ditto, via a variable
$name = "Get-NetAdapter"
$name # outputs the contents of the [string] variable
# OK: Use of & tells PowerShell to interpret the next token as a *command* to *invoke*.
& "Get-NetAdapter"
& $name
As for your general approach:
Note that not all cmdlets produce output when invoked without parameters, so your current code (even with &) won't work with all cmdlets.
Conversely, those cmdlets that do produce output when given no parameters may produce a lot of output, which is unnecessary, so consider something like & $inputCmdlet | Select-Object -First 1.
Generically, you can use something like (Get-Command Get-NetAdapter).OutputType to obtain a cmdlet's output type(s), but note that:
Declaring output types is optional, so this information may be missing even for cmdlets that do produce output.
If you start with a type rather than an instance of that type, you cannot use Get-Member to discover the instance members (you can only obtain the static members via -Static).
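Putting it together, here is a minimal sketch of how the original script might be adapted, assuming (as noted above) that the target cmdlet produces output when called without parameters:
param([string] $inputCmdLet = "Get-NetAdapter")
$wrkgDir = "D:\Distribution\Operational"

# Invoke the *command* named by the string via &, and inspect only the first output object.
$sampleObject = & $inputCmdLet | Select-Object -First 1

# Get the properties and methods of the sample object, sorted by name.
$propertyNames = $sampleObject | Get-Member -MemberType Property | Sort-Object Name
$methodNames   = $sampleObject | Get-Member -MemberType Method   | Sort-Object Name

# Output the results to text files; note the explicit "..." around the compound paths.
$propertyNames.Name | Out-File "$wrkgDir\$inputCmdLet.Properties.txt"
$methodNames.Name   | Out-File "$wrkgDir\$inputCmdLet.Methods.txt"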