PowerShell bizarre variable processing

I've been banging my head against this...
I created this simple PowerShell script:
test.ps1:
Write-Host $args[0]
Write-Host $($args[0])
Write-Host "$($args[0])"
Now, run the script twice, with these parameters:
powershell .\test.ps1 0E2C
powershell .\test.ps1 0E2D
The first run returns:
0E2C
0E2C
0E2C
The second run returns:
0E2D
0
0
Why is that? Why is PowerShell converting 0E2D to 0? Is it treating 0E2D as a hexadecimal value and trying to convert it somehow? But why, and why doesn't it do the same with the value 0E2C?
EDIT: Thank you all.
I solved the issue by using a variable and forcing it to [string].
So, changing my test.ps1:
Write-Host $args[0]
Write-Host $($args[0])
Write-Host "$($args[0])"
[string] $myvar = $args[0]
Write-Host "$myvar"
Now, when I run:
powershell .\test.ps1 0E2D
I now get:
0E2D
0
0
0E2D

PowerShell interprets 0E2D as the [decimal] literal 0E2 (with the D type suffix), i.e. 0 * 10^2, which is 0.
The d suffix is documented here:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_numeric_literals?view=powershell-7.2#real-literals

tl;dr
An unquoted argument such as 0E2D is inherently ambiguous - it can be interpreted as either a string or a number. By default it is parsed as a number, but the original representation is cached - see next section.
To avoid ambiguity:
Either: quote an argument you want to be interpreted as a string, e.g. .\test.ps1 '0E2D'
This puts the responsibility on the caller to know when there's ambiguity.
Or, preferably: explicitly declare typed parameters for your script instead of relying on the automatic $args variable, such as param([string] $StringValue) or param([decimal] $Value)
This is preferable, because the caller is then free to pass the argument unquoted (unless it is a string and contains shell metacharacters), without having to worry about potential ambiguity.
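For instance, test.ps1 could be rewritten as follows (a minimal sketch; a [string] parameter receives the argument with its original text, as the [string] cast in the question's EDIT also shows):
param([string] $StringValue)
Write-Host $StringValue
# Called the same way as before - powershell .\test.ps1 0E2D - this should print 0E2D.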
The existing, helpful answers explain that 0E2D is interpreted as a number literal, namely a decimal value in scientific (exponential) notation, typed as a [decimal], due to suffix D (d would work too, as type-specifier suffixes are generally case-insensitive).
Let me complement them by explaining why $args[0] still showed the argument's original string representation, at least when used as-is.
In argument parsing mode, i.e. when arguments are passed to a command, simple string values (ones that contain neither spaces nor other shell metacharacters) need not be quoted.
This creates ambiguity, such as in the case at hand: is 0E2D meant to be a number, or meant to be a string ('0E2D')?
PowerShell's parameter (argument) parser handles this ambiguity as follows:
if the unquoted argument can be parsed as a number, it is.
if so, and if the default stringification of the number doesn't equal the argument as specified, the number is wrapped in a (mostly invisible) [psobject] instance that caches the original (string) representation, which can be recalled via .psobject.ToString()
if the unquoted argument binds to an explicitly declared typed parameter, the originally parsed form is ultimately irrelevant, but it does matter in unbound argument-passing, i.e. when arguments are passed positionally in the absence of predeclared parameters, via the automatic $args variable.
Here's an explicit illustration using 1L as a positional, unbound argument, which is parsed as [long] value 1, with the original representation, 1L, cached in the [psobject] wrapper:
PS> & { $args[0].ToString(), $args[0].psobject.ToString() } 1L
1 # default stringification of the [long] value that 1L was parsed as
1L # original representation
The default display representation implicitly calls .psobject.ToString():
PS> & { $args[0] } 1L
1L # original representation - even though it was parsed as [long]
This - commendably - also applies when passing the argument to external programs, as PowerShell should make no assumptions as to whether the argument represents a number or not - that is up to the target program:
PS> cmd /c echo 1L
1L # original representation
Unfortunately, however - as your question shows - both Windows PowerShell and PowerShell (Core) as of v7.2.6 are inconsistent with respect to when they honor the cached string representation:
& {
$args[0] # ditto for ($args[0])
$($args[0]) # ditto for @($args[0])
"$($args[0])"
} 1L
Arguably, all these commands should honor the original representation but only some of them do:
1L
1L
1
With 0E2D as the argument, as in your question, the two PowerShell editions even exhibit differences: $($args[0]) prints 0 in Windows PowerShell vs. 0E2D in PowerShell (Core) 7.2.6.
Another inconsistency is that if you pass such an argument to -f, the format operator, it is only the cached string representation that is ever honored, not the numeric type that the argument was actually parsed as:
PS> & { '{0:N1}' -f $args[0] } 1.234e0 # Argument is parsed as a [double]
1.234e0 # !! Formatting the [double] as a number with 1 decimal place failed.
However, this behavior was declared to be by design - see GitHub issue #17199.

It is interpreting 0E2D as a decimal number rather than a string due to the suffix d (see https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_numeric_literals?view=powershell-7.2).
So it's 0 x 10^2 (0E2), which is 0.
But the C at the end of 0E2C is not a valid numeric-type suffix, so that token is just a string.
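A quick way to check how each token was parsed as an unbound argument (a sketch; per the wrapper behavior described in the answer above, .GetType() reports the underlying type):
PS> & { $args[0].GetType().Name } 0E2D
Decimal # parsed as a number, thanks to the D suffix
PS> & { $args[0].GetType().Name } 0E2C
String # C is not a type suffix, so the token stays a string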

Related


Use a variable in PowerShell to pass multiple arguments to an external program

I downloaded the npm package for merging JUnit reports - https://www.npmjs.com/package/junit-merge.
The problem is that I have multiple files to merge, and I am trying to use a string variable to hold the file names to merge.
When I write the command myself, like:
junit-merge a.xml b.xml c.xml
This works, the merged file is being created, but when I do it like
$command = "a.xml b.xml c.xml"
junit-merge $command
This does not work. The error is
Error: File not found
Has anyone faced similar issues?
# WRONG
$command = "a.xml b.xml c.xml"; junit-merge $command
results in command line junit-merge "a.xml b.xml c.xml"[1], i.e. it passes a string with verbatim value a.xml b.xml c.xml as a single argument to junit-merge, which is not the intent.
PowerShell does not act the way POSIX-like shells such as bash do in this regard: in bash, the value of variable $command - due to being referenced unquoted - would be subject to word splitting (one of the so-called shell expansions) and would indeed result in 3 distinct arguments (though even there an array-based invocation would be preferable).
PowerShell supports no bash-like shell expansions[2]; it has different, generally more flexible constructs, such as the splatting technique discussed below.
Instead, define your arguments as individual elements of an array, as justnotme advises:
# Define the *array* of *individual* arguments.
$command = "a.xml", "b.xml", "c.xml"
# Pass the array to junit-merge, which causes PowerShell
# to pass its elements as *individual arguments*; it is the equivalent of:
# junit-merge a.xml b.xml c.xml
junit-merge $command
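If you want to see the difference without involving junit-merge, you can echo the arguments via cmd.exe on Windows (a rough check; the exact quoting is explained in footnote [1] below):
$single = "a.xml b.xml c.xml"
cmd /c echo $single # -> "a.xml b.xml c.xml" - ONE double-quoted argument
$individual = "a.xml", "b.xml", "c.xml"
cmd /c echo $individual # -> a.xml b.xml c.xml - THREE separate arguments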
This is an application of a PowerShell technique called splatting, where you specify arguments to pass to a command via a variable:
Either (typically only used for external programs, as in your case):
As an array of arguments to pass individually as positional arguments, as shown above.
Or (more typically when calling PowerShell commands):
As a hashtable to pass named parameter values, in which case you must replace the $ sigil in the variable reference with @ (e.g., @command); for instance, the following is the equivalent of calling Get-ChildItem C:\ -Directory:
$paramVals = @{ LiteralPath = 'C:\'; Directory = $true }; Get-ChildItem @paramVals
Caveat re array-based splatting:
Due to a bug detailed in GitHub issue #6280, PowerShell doesn't pass empty arguments through to external programs (this applies to all Windows PowerShell versions and to PowerShell (Core) as of 7.2.x; a fix may be coming in 7.3, via the $PSNativeCommandArgumentPassing preference variable, which in 7.2.x relies on an explicitly activated experimental feature).
E.g., foo.exe "" unexpectedly results in just foo.exe being called.
This problem equally affects array-based splatting, so that
$cmdArgs = "", "other"; foo.exe $cmdArgs results in foo.exe other rather than the expected foo.exe "" other.
Optional use of @ with array-based splatting:
You can use the @ sigil also with arrays, so this would work too:
junit-merge @command
There is a subtle distinction, however.
While it will rarely matter in practice,
the safer choice is to use $, because it guards against the (however hypothetical) accidental misinterpretation of a --% array element you intend to be a literal.
Only the @ syntax recognizes an array element --% as the special stop-parsing symbol.
Said symbol tells PowerShell not to parse the remaining arguments as it normally would and instead pass them through as-is - unexpanded, except for expanding cmd.exe-style variable references such as %USERNAME%.
This is normally only useful when not using splatting, typically in the context of being able to use command lines that were written for cmd.exe from PowerShell as-is, without having to account for PowerShell's syntactical differences.
In the context of splatting, however, the behavior resulting from --% is non-obvious and best avoided:
As in direct argument passing, the --% is removed from the resulting command line.
Argument boundaries are lost, so that a single array element foo bar, which normally gets placed as "foo bar" on the command line, is placed as foo bar, i.e. effectively as 2 arguments.
[1] Your call implies the intent to pass the value of variable $command as a single argument, so when PowerShell builds the command line behind the scenes, it double-quotes the verbatim a.xml b.xml c.xml string contained in $command to ensure that. Note that these double quotes are unrelated to how you originally assigned a value to $command.
Unfortunately, this automatic quoting is broken for values with embedded " chars. - see this answer, for instance.
[2] As a nod to POSIX-like shells, PowerShell does perform one kind of shell expansion, but (a) only on Unix-like platforms (macOS, Linux) and (b) only when calling external programs: unquoted wildcard patterns such as *.txt are indeed expanded to their matching filenames when you call an external program (e.g., /bin/echo *.txt), which is a feature that PowerShell calls native globbing.
I had a similar problem. This technique from PowerShell worked for me:
Invoke-Expression "junit-merge $command"
I also tried the following (from a PowerShell script) and it works:
cmd /c "junit-merge $command"

What's different between "$myvariable =" and Set-Variable in PowerShell?

When I study PowerShell scripting language, I try to use "Write-Output" command to display variable.
I use a different method to create variables.
Example:
$myvariable = 0x5555
Set-Variable -Name myvariable2 -Value 0x5555
The data type of these two variables is Int32.
When I use the command as below,
Write-Output $myvariable $myvariable2
the result is 21845 and 0x5555.
What's different between these two variables?
How can I display format result like printf %d %x?
PetSerAl, as many times before, has given the crucial pointer in a comment (and later helped improve this answer):
Written as of PowerShell Core 6.2.0.
PowerShell parses an unquoted literal argument that looks like a number as a number and wraps it in a "half-transparent" [psobject] instance, whose purpose is to also preserve the exact argument as specified as a string.
By half-transparent I mean that the resulting $myVariable2:
primarily is a number - a regular (unwrapped) [int] instance - for the purpose of calculations; e.g., $myVariable2 + 1 correctly returns 21846
additionally, it shows that it is a number when you ask for its type with .GetType() or via the Get-Member cmdlet; in other words: in this case PowerShell pretends that the wrapper isn't there (see below for a workaround).
situationally behaves like a string - returning the original argument literal exactly as specified - in the context of:
output formatting, both when printing directly to the console and generally with Out-* cmdlets such as Out-File (but not Set-Content) and Format-* cmdlets.
string formatting with -f, PowerShell's format operator (which is based on .NET's String.Format() method; e.g., 'The answer is {0}' -f 42 is equivalent to [string]::Format('The answer is {0}', 42)).
Surprisingly, it does not behave like a string inside an expandable string ("$myVariable2") and when you call the .ToString() method ($myVariable2.ToString()) and (therefore also) with Set-Content.
However, the original string representation can be retrieved with $myVariable2.psobject.ToString()
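That is, assuming the assignments from the question:
PS> $myvariable2.ToString()
21845 # the number's own stringification, despite the wrapper
PS> $myvariable2.psobject.ToString()
0x5555 # the original string representation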
Note that specifying number literals as command arguments is inherently ambiguous, because even string arguments generally don't need quoting (unless they contain special characters), so that, for instance, an argument such as 1.0 could be interpreted as a version string or as a floating-point number.
PowerShell's approach to resolving the ambiguity is to parse such a token as a number, which, however, situationally acts as a string[1], as shown above.
The ambiguity can be avoided altogether by typing parameters so as to indicate whether an argument bound to it is a string or a number.
However, the -Value parameter of the Set-Variable and New-Variable cmdlets is - of necessity - [object] typed, because it must be able to accept values of any type, and these cmdlets don't have a parameter that would let you indicate the intended data type.
The solution is to force the -Value argument to be treated as the result of an expression rather than as an unquoted literal argument, by enclosing it in (...):
# Due to enclosing in (...), the value that is stored in $myvariable2
# is *not* wrapped in [psobject] and therefore behaves the same as
# $myvariable = 0x5555
Set-Variable -Name myvariable2 -Value (0x5555)
Conversely, if you don't apply the above solution, you have two choices for unwrapping $myvariable2's value on demand:
# OK: $myvariable isn't wrapped in [psobject], so formatting it as a
# hex. number works as expected:
PS> 'hex: 0x{0:x}' -f $myvariable
hex: 0x5555 # OK: Literal '0x' followed by hex. representation of the [int]
# !! Does NOT work as expected, because $myvariable2 is treated as a *string*
# !! That is, {0:x} is effectively treated as just {0}, and the string
# !! representation stored in the [psobject] wrapper is used as-is.
PS> 'hex: 0x{0:x}' -f $myvariable2
hex: 0x0x5555 # !! Note the extra '0x'
# Workaround 1: Use a *cast* (with the same type) to force creation of
# a new, *unwrapped* [int] instance:
PS> 'hex: 0x{0:x}' -f [int] $myvariable2
hex: 0x5555 # OK
# Workaround 2: Access the *wrapped* object via .psobject.BaseObject.
# The result is an [int] that behaves as expected.
PS> 'hex: 0x{0:x}' -f $myvariable2.psobject.BaseObject
hex: 0x5555 # OK
Note: That -f, the format operator, unexpectedly treats a [psobject]-wrapped number as a string is the subject of GitHub issue #17199; sadly, the behavior was declared to be by design.
Detecting a [psobject]-wrapped value:
The simplest solution is to use -is [psobject]:
PS> $myvariable -is [psobject]
False # NO wrapper object
PS> $myvariable2 -is [psobject]
True # !! wrapper object
(PetSerAl offers the following, less obvious alternative: [Type]::GetTypeArray((, $myvariable2)), which bypasses PowerShell's hiding-of-the-wrapper trickery.)
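For example (a sketch; the point is which type is reported for each variable):
PS> [Type]::GetTypeArray((, $myvariable))[0].FullName
System.Int32 # unwrapped [int]
PS> [Type]::GetTypeArray((, $myvariable2))[0].FullName
System.Management.Automation.PSObject # the wrapper is revealed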
[1] Preserving the input string representation in implicitly typed numbers passed as command arguments:
Unlike traditional shells, PowerShell uses rich types, so that an argument literal such as 01.2 is instantly parsed as a number - a [double] in this case, and if it were used as-is, it would result in a different representation on output, because - once parsed as a number - default output formatting is applied on output (where the number must again be turned into a string):
PS> 01.2
1.2 # !! representation differs (and is culture-sensitive)
However, the intent of the target command may ultimately be to treat the argument as a string and in that case you do not want the output representation to change.
(Note that while you can disambiguate numbers from strings by using quoting (01.2 vs. '01.2'), this is not generally required in command arguments, the same way it isn't required in traditional shells.)
It is for that reason that a [psobject] wrapper is used to capture the original string representation and use it on output.
Note: Arguably, a more consistent approach would have been to always treat unquoted literal arguments as strings, except when bound to explicitly numerically typed parameters in PowerShell commands.
This is a necessity for invoking external programs, to which arguments can only ever be passed as strings.
That is, after initial parsing as a number, PowerShell must use the original string representation when building the command line (Windows) / passing the argument (Unix-like platforms) as part of the invocation of the external program.
If it didn't do that, arguments could inadvertently be changed, as shown above (in the example above, the external program would receive string 1.2 instead of the originally passed 01.2).
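For instance, on Windows:
PS> cmd /c echo 01.2
01.2 # the external program receives the original string representation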
You can also demonstrate the behavior using PowerShell code, with an untyped parameter - though note that it is generally preferable to explicitly type your parameters:
PS> & { param($foo) $foo.GetType().Name; $foo } -foo 01.2
Double # parsed as number - a [double]
01.2 # !! original string representation, because $foo wasn't typed
$foo is an untyped parameter, which means that the type that PowerShell inferred during initial parsing of literal 01.2 is used.
Yet, given that the command (a script block ({ ... }) in this case) didn't declare a parameter type for $foo, the [psobject] wrapper that is implicitly used shows the original string representation on output.
The first question is already answered by @PetSerAl in the comments. Your second question:
How can I display format result like printf %d %x
Use PowerShell string formatting to obtain the desired results.
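For example, with -f and the standard .NET numeric format specifiers (d for decimal, x for hex); note that, per the answer above, the Set-Variable-created $myvariable2 must be unwrapped (e.g., with an [int] cast) for the numeric formatting to apply:
PS> '{0:d} 0x{0:x}' -f $myvariable
21845 0x5555
PS> '{0:d} 0x{0:x}' -f [int] $myvariable2 # the cast unwraps the [psobject], as shown above
21845 0x5555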

Unquoted tokens in argument mode involving variable references and subexpressions: why are they sometimes split into multiple arguments?

Note: A summary of this question has since been posted at the PowerShell GitHub repository, since superseded by this more comprehensive issue.
Arguments passed to a command in PowerShell are parsed in argument mode (as opposed to expression mode - see Get-Help about_Parsing).
Conveniently, (double-)quoting arguments that do not contain whitespace or metacharacters is usually optional, even when these arguments involve variable references (e.g., $HOME\sub) or subexpressions (e.g., version=$($PsVersionTable.PsVersion)).
For the most part, such unquoted arguments are treated as if they were double-quoted strings, and the usual string-interpolation rules apply (except that metacharacters such as , need escaping).
I've tried to summarize the parsing rules for unquoted tokens in argument mode in this answer, but there are curious edge cases:
Specifically (as of Windows PowerShell v5.1), why is the unquoted argument token in each of the following commands NOT recognized as a single, expandable string, but instead split into 2 arguments (with the variable reference / subexpression retaining its type)?
$(...) at the start of a token:
Write-Output $(Get-Date)/today # -> 2 arguments: [datetime] obj. and string '/today'
Note that the following work as expected:
Write-Output $HOME/sub - simple var. reference at the start
Write-Output today/$(Get-Date) - subexpression not at the start
.$ at the start of a token:
Write-Output .$HOME # -> 2 arguments: string '.' and value of $HOME
Note that the following work as expected:
Write-Output /$HOME - different initial char. preceding $
Write-Output .-$HOME - initial . not directly followed by $
Write-Output a.$HOME - . is not the initial char.
As an aside: As of PowerShell Core v6.0.0-alpha.15, a = following a simple var. reference at the start of a token also seems to break the token into 2 arguments, which does not happen in Windows PowerShell v5.1; e.g., Write-Output $HOME=dir.
Note:
I'm primarily looking for a design rationale for the described behavior, or, as the case may be, confirmation that it is a bug. If it's not a bug, I want something to help me conceptualize the behavior, so I can remember it and avoid its pitfalls.
All these edge cases can be avoided with explicit double-quoting, which, given the non-obvious behavior above, may be the safest choice to use routinely.
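For instance, counting the arguments each form produces (using $args in a script block):
PS> & { $args.Count } $(Get-Date)/today # unquoted: the token is split
2
PS> & { $args.Count } "$(Get-Date)/today" # double-quoted: a single expandable string
1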
Optional reading: The state of the documentation and design musings
As of this writing, the v5.1 Get-Help about_Parsing page:
incompletely describes the rules
uses terms that are neither defined in the topic nor generally in common use in the world of PowerShell ("expandable string", "value expression" - though one can guess their meaning)
From the linked page (emphasis added):
In argument mode, each value is treated as an expandable string unless it begins with one of the following special characters: dollar sign ($), at sign (@), single quotation mark ('), double quotation mark ("), or an opening parenthesis (().
If preceded by one of these characters, the value is treated as a value expression.
As an aside: A token that starts with " is, of course, by definition, also an expandable string (interpolating string).
Curiously, the conceptual help topic about quoting, Get-Help about_Quoting_Rules, manages to avoid both the terms "expand" and "interpolate".
Note how the passage does not state what happens when (non-meta)characters directly follow a token that starts with these special characters, notably $.
However, the page contains an example that shows that a token that starts with a variable reference is interpreted as an expandable string too:
With $a containing 4, Write-Output $a/H evaluates to (single string argument) 4/H.
Note that the passage does imply that variable references / subexpressions in the interior of an unquoted token (that doesn't start with a special char.) are expanded as if inside a double-quoted string ("treated as an expandable string").
If these work:
$a = 4
Write-Output $a/H # -> '4/H'
Write-Output H/$a # -> 'H/4'
Write-Output H/$(2 + 2) # -> 'H/4'
why shouldn't Write-Output $(2 + 2)/H expand to '4/H' too (instead of being treated as 2 arguments)?
Why is a subexpression at the start treated differently than a variable reference?
Such subtle distinctions are hard to remember, especially in the absence of a justification.
A rule that would make more sense to me is to unconditionally treat a token that starts with $ and has additional characters following the variable reference / subexpression as an expandable string as well.
(By contrast, it makes sense for a standalone variable reference / subexpression to retain its type, as it does now.)
Note that the case of a token that starts with .$ getting split into 2 arguments is not covered in the help topic at all.
Even more optional reading: following a token that starts with one of the other special characters with additional characters.
Among the other special token-starting characters, the following unconditionally treat any characters that follow the end of the construct as a separate argument (which makes sense):
( ' "
Write-Output (2 + 2)/H # -> 2 arguments: 4 and '/H'
Write-Output "2 + $a"/H # -> 2 arguments: '2 + 4' and '/H', assuming $a equals 4
Write-Output '2 + 2'/H # -> 2 arguments: '2 + 2' and '/H'
As an aside: This shows that bash-style string concatenation - placing any mix of quoted and unquoted tokens right next to each other - is not generally supported in PowerShell; it only works if the 1st substring / variable reference happens to be unquoted. E.g., Write-Output H/'2 + 2', unlike the substrings-reversed example above, produces only a single argument.
The exception is @: while @ does have special meaning (see Get-Help about_Splatting) when followed by just a syntactically valid variable name (e.g., @parms), anything else causes the token to be treated as an expandable string again:
Write-Output @parms # splatting (results in no arguments if $parms is undefined)
Write-Output @parms$a # *expandable string*: '@parms4', if $a equals 4
I think what you're sort of hitting here is more the type "hinting" than anything else.
You're using Write-Output, which specifies in its Synopsis that it
Sends the specified objects to the next command in the pipeline.
This command is designed to take in an array. When the first item is a string like today/, it treats the token like a string. When the first item ends up being the result of a function call, which may or may not be a string, it starts an array.
It's telling that if you run the same command with Write-Host (which is designed to take in a string to output), it works as you'd expect it to:
Write-Host $(Get-Date)/today
Outputs
7/25/2018 1:30:43 PM /today
So I think the edge cases you're running up against are less about the parsing, and more about the typing that PowerShell uses (and tries to hide).

PowerShell outputting array items when interpolating within double quotes

I found some strange behavior in PowerShell surrounding arrays and double quotes. If I create and print the first element in an array, such as:
$test = @('testing')
echo $test[0]
Output:
testing
Everything works fine. But if I put double quotes around it:
echo "$test[0]"
Output:
testing[0]
Only the $test variable was evaluated and the array marker [0] was treated literally as a string. The easy fix is to just avoid interpolating array variables in double quotes, or assign them to another variable first. But is this behavior by design?
So when you are using interpolation, by default it interpolates just the next variable in toto. So when you do this:
"$test[0]"
It sees $test as the next variable and expands just that variable (an array is stringified by joining its elements), while the [0] that follows is treated as literal text. The solution is to explicitly tell PowerShell where the bit to interpolate starts and where it stops:
"$($test[0])"
Note that this behavior is one of my main reasons for using formatted strings instead of relying on interpolation:
"{0}" -f $test[0]
EBGreen's helpful answer contains effective solutions, but only a cursory explanation of PowerShell's string expansion (string interpolation):
Only variables by themselves can be embedded directly inside double-quoted strings ("...") (by contrast, single-quoted strings ('...'), as in many other languages, are for literal contents).
This applies to both regular variables and variables referencing a specific namespace; e.g.:
"var contains: $var", "Path: $env:PATH"
If the first character after the variable name can be mistaken for part of the name - which notably includes : - use {...} around the variable name to disambiguate; e.g.:
"${var}", "${env:PATH}"
To use a $ as a literal, you must escape it with `, PowerShell's escape character; e.g.:
"Variable `$var"
Any character after the variable name - including [ and . - is treated as a literal part of the string, so in order to index into embedded variables ($var[0]) or to access a property ($var.Count), you need $(...), the subexpression operator (in fact, $(...) allows you to embed entire statements); e.g.:
"1st element: $($var[0])"
"Element count: $($var.Count)"
"Today's date: $((Get-Date -DisplayHint Date | Out-String).Trim())"
Stringification (to-string conversion) is applied to any variable value / evaluation result that isn't already a string:
Caveat: Where culture-specific formatting can be applied, PowerShell chooses the invariant culture, which largely coincides with the US-English date and number formatting; that is, dates and numbers will be represented in US-like format (e.g., month-first date format and . as the decimal mark).
In essence, the .ToString() method is called on any resulting non-string object or collection (strictly speaking, it is .psobject.ToString(), which overrides .ToString() in some cases, notably for arrays / collections and PS custom objects)
Note that this is not the same representation you get when you output a variable or expression directly, and many types have no meaningful default string representations - they just return their full type name.
However, you can embed $(... | Out-String) in order to explicitly apply PowerShell's default output formatting.
For a more comprehensive discussion of stringification, see this answer.
As stated, using -f, the string-formatting operator (<format-string> -f <arg>[, ...]) is an alternative to string interpolation that separates the literal parts of a string from the variable parts:
'1st element: {0}; count: {1:x}' -f $var[0], $var.Count
Note the use of '...' on the LHS, because the format string (the template) is itself a literal. Using '...' in this case is a good habit to form, both to signal the intent of using literal contents and for the ability to embed $ characters without escaping.
In addition to simple positional placeholders ({0} for the 1st argument, {1} for the 2nd, ...), you may optionally exercise more formatting control over the to-string conversion; in the example above, x requests a hex representation of the number.
For available formats, see the documentation of the .NET framework's String.Format method, which the -f operator is based on.
Pitfall: -f has high precedence, so be sure to enclose RHS expressions other than simple index or property access in (...); e.g., '{0:N2}' -f 1/3 won't work as intended, only '{0:N2}' -f (1/3) will.
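A quick illustration (output assumes a culture that uses . as the decimal mark - see the culture caveat below):
PS> '{0:N2}' -f (1/3) # (...) ensures the division happens before -f is applied
0.33
# By contrast, '{0:N2}' -f 1/3 is parsed as ('{0:N2}' -f 1) / 3, which is not what was intended.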
Caveats: There are important differences between string interpolation and -f:
Unlike expansion inside "...", the -f operator is culture-sensitive:
Therefore, the following two seemingly equivalent statements do not
yield the same result:
PS> [cultureinfo]::CurrentCulture='fr'; $n=1.2; "expanded: $n"; '-f: {0}' -f $n
expanded: 1.2
-f: 1,2
Note how only the -f-formatted command respected the French (fr) decimal mark (,).
Again, see the previously linked answer for a comprehensive look at when PowerShell is and isn't culture-sensitive.
Unlike expansion inside "...", -f stringifies arrays as <type-name>[]:
PS> $arr = 1, 2, 3; "`$arr: $arr"; '$arr: {0}' -f (, $arr)
$arr: 1 2 3
$arr: System.Object[]
Note how "..." interpolation created a space-separated list of the stringification of all array elements, whereas -f-formatting only printed the array's type name.
(As discussed, $arr inside "..." is equivalent to:
(1, 2, 3).psobject.ToString() and it is the generally invisible helper type [psobject] that provides the friendly representation.)
Also note how (, ...) was used to wrap array $arr in a helper array that ensures that -f sees the expression as a single operand; by default, the array's elements would be treated as individual operands.
In such cases you have to do:
echo "$($test[0])"
Another alternative is to use string formatting:
echo ("this is {0}" -f $test[0])
Note that this will be the case when you are accessing properties in strings as well; e.g., "$a.Foo" should be written as "$($a.Foo)".
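A quick illustration of the property case, using a hypothetical custom object:
$a = [pscustomobject] @{ Foo = 'bar' }
"$a.Foo" # -> '@{Foo=bar}.Foo' - only $a was expanded; .Foo is literal text
"$($a.Foo)" # -> 'bar'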