Here's a script to list directories / files passed on the command line -- recursively or not:
param( [switch] $r )
$gci_args = @{
    Recurse = $r
    ErrorAction = Ignore
}
$args | gci @gci_args
Now this does not work because Ignore is interpreted as a literal. What's the canonical way to pass an ErrorAction?
I see that both "Ignore" and (in PS7) { Ignore } work as values, but neither seems to make a difference for my use case (bad file name created under Linux, which stops PS5 regardless of the ErrorAction, but does not bother PS7 at all). So I'm not even sure the parameter has any effect.
because Ignore is interpreted as a literal
No, Ignore is interpreted as a command to execute, because it is parsed in argument mode (command invocation, like a shell) rather than in expression mode (like a traditional programming language) - see this answer for more information.
While using a [System.Management.Automation.ActionPreference] enumeration value explicitly, as in filimonic's helpful answer, is definitely an option, you can take advantage of the fact that PowerShell automatically converts back and forth between enum values and their symbolic string representations.
Therefore, you can use string 'Ignore' as a more convenient alternative to [System.Management.Automation.ActionPreference]::Ignore:[1]
$gci_args = @{
    # ...
    ErrorAction = 'Ignore'
}
Note that it is the quoting ('...') that signals to PowerShell that expression-mode parsing should be used, i.e. that the token is a string literal rather than a command.
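Putting it together, a minimal version of the original script using the string form might look like this (a sketch; `gci` is the built-in alias for `Get-ChildItem`):

```powershell
param( [switch] $r )

# Build a hashtable of common parameters to splat onto Get-ChildItem.
$gci_args = @{
    Recurse     = $r
    ErrorAction = 'Ignore'   # quoted string; auto-converted to the ActionPreference enum
}

# Splat the hashtable; the paths passed on the command line arrive via $args.
$args | Get-ChildItem @gci_args
```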
Also note that -ErrorAction only operates on non-terminating errors (which are the typical kind, however) - see this answer for more information.
As for discovery of the permissible -ErrorAction values:
The conceptual about_CommonParameters help topic covers all common parameters, of which -ErrorAction is one.
Many common parameters have corresponding preference variables (which accept the same values), covered in about_Preference_Variables, which allow you to preset common parameters.
Interactively, you can use tab-completion to see the permissible values (as unquoted symbolic names, which you simply need to wrap in quotes); e.g.:
# Pressing the Tab key repeatedly where indicated
# cycles through the acceptable arguments.
Get-ChildItem -ErrorAction <tab>
[1] Note that using a string does not mean giving up type safety, if the context unambiguously calls for a specific enum type, such as in this case. Validation only happens at runtime either way, given that PowerShell is an interpreted language.
However, it is possible for a PowerShell-aware editor - such as Visual Studio Code with the PowerShell extension - to flag incorrect values at design time. As of version 2020.6.0, however, that does not yet appear to be the case. Fortunately, however, tab-completion and IntelliSense work as expected, so the problem may not arise.
That said, as zett42 points out, in the context of defining a hashtable entry for later splatting the expected type is not (yet) known, so explicit use of [System.Management.Automation.ActionPreference] does have advantages: (a) IntelliSense in the editor can guide you, and (b) - assuming that Set-StrictMode -Version 2 or higher is in effect - an invalid value will be reported earlier at runtime, namely at the point of assignment, which makes troubleshooting easier. As of PowerShell 7.1, a caveat regarding Set-StrictMode -Version 2 or higher is that you will not be able to use the intrinsic (PowerShell-supplied) .Count property on objects that don't have it type-natively, due to the bug described in GitHub issue #2798.
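A sketch of that trade-off (the typo `Ignroe` is deliberate):

```powershell
Set-StrictMode -Version 2

# With the explicit enum type, a typo in the member name surfaces
# immediately, at the point of assignment:
$gci_args = @{
    ErrorAction = [System.Management.Automation.ActionPreference]::Ignroe  # error here
}

# With a string, the same typo only surfaces later, when the hashtable
# is actually splatted onto a cmdlet:
$gci_args = @{ ErrorAction = 'Ignroe' }   # no error yet
Get-ChildItem @gci_args                   # error only here
```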
I think the best way is to use native type.
$ErrorActionPreference.GetType().FullName # System.Management.Automation.ActionPreference
So, use
$gci_args = @{
    Recurse = $r
    ErrorAction = [System.Management.Automation.ActionPreference]::Ignore
}
Related
These things drive me nuts:
Is there an easy way to have Powershell just show me the empty string and the list with an empty string in it?
For a while I have been maintaining a ConvertTo-Expression function, which converts a (complex) object into a PowerShell expression that can eventually be used to rebuild most objects. It might be useful in situations such as comparing test results, but also to reveal objects. For details see: readme.
Source and installation
The ConvertTo-Expression script can be installed from the PowerShell Gallery:
Install-Script -Name ConvertTo-Expression
As it concerns a standalone script, installation isn't really required. If you don't have administrator rights, you might just download the script (or copy it) to the required location. You might then simply invoke the script using PowerShell dot sourcing:
. .\ConvertTo-Expression.ps1
Example
The following command outputs the same expression as used to build the object:
$Object = [Ordered]@{
    'Null' = $Null
    'EmptyString' = ''
    'EmptyArray' = @()
    'SingleItem' = ,''
    'EmptyHashtable' = @{}
}
ConvertTo-Expression $Object
Note the comment from @Mathias: there's no functional difference between "one string" and "an array of one string"; the pipeline consumes 1 string either way. PowerShell is not Node, as described here: PowerShell enumerate an array that contains only one inner array. Some objects might be really different than you expect.
See also: Save hash table in PowerShell object notation (PSON)
This is PowerShell, not Node. So it's not JavaScript or JSON. Also, PowerShell is not Bash or CMD or any other regular text-based shell. PowerShell works with objects - .NET objects, in particular. And how objects are represented as text is ... quite a matter of taste. How to represent null? Of course: nothing. How to represent an empty string? Nothing, either. An empty array ... you get my point.
All pipeline output is by default sent to Out-Default. In general, the way objects are represented can be controlled by format files: about_Format.ps1xml and about_Types.ps1xml. From PowerShell 6 upwards, the default formats are compiled into the source code, but you can extend them. How you do so depends on your personal taste. Some options were already mentioned (ConvertTo-Json "", ConvertTo-Json @("")), but this would be very JSON-specific.
tl;dr Don't care too much about how objects are represented textually. As you see, there are many possible ways to do so, and also some others. Just make sure your scripts are always object-oriented.
You mean like Python's repr() function? A serialization? "Give me a canonical representation of the object that, when properly evaluated, will return an object of the type originally passed?" No, that's not possible unless you write it yourself or you serialize it to XML, JSON, or similar. That's what the *-CliXml and ConvertTo-*/ConvertFrom-* commands are for.
On Powershell 7:
PS C:\> ''.Split(',') | ConvertTo-Json -Compress -AsArray
[""]
PS C:\> '1,2,3,,5'.Split(',') | ConvertTo-Json -Compress -AsArray
["1","2","3","","5"]
The closest would be the ToString() common method. However, that's intended for formatting output and typecasting, not canonical representations or serializations. Not every object even has a string representation that can be converted back into that object. The language isn't designed with that in mind. It's based on C#, a compiled language that favors binary serializations. JavaScript requires essentially everything to have a string serialization in order to fulfill its original design. Python, too, has string representations as fundamental to its design, which is why the repr() function is foundational for serialization, introspection, and so on.
C# and .NET go the other way. In a .NET application, if you want to specify a literal empty string you're encouraged to use String.Empty ([String]::Empty in PowerShell) because it's easier to see that it's explicit. Why would you ever want a compiled application to tell the user what the canonical representation of an object in memory is? You see why that's not a useful thing for C#, even if it might be for PowerShell?
I'm writing a function that wraps a cmdlet using ValueFromRemainingArguments (as discussed here).
The following simple code demonstrates the problem:
works
function Test-WrapperArgs {
Set-Location @args
}
Test-WrapperArgs -Path C:\
does not work
function Test-WrapperUnbound {
Param(
[Parameter(ValueFromRemainingArguments)] $UnboundArgs
)
Set-Location @UnboundArgs
}
Test-WrapperUnbound -Path C:\
Set-Location: F:\cygwin\home\thorsten\.config\powershell\test.ps1:69
Line |
69 | Set-Location @UnboundArgs
| ~~~~~~~~~~~~~~~~~~~~~~~~~
| A positional parameter cannot be found that accepts argument 'C:\'.
I tried getting to the issue with GetType and EchoArgs from the PowerShell Community Extensions, to no avail. At the moment I'm almost considering this a bug (maybe related to this ticket?).
The best solution for an advanced function (one that uses a [CmdletBinding()] attribute and/or a [Parameter()] attribute) is to scaffold a proxy (wrapper) function via the PowerShell SDK, as shown in this answer.
This involves essentially duplicating the target command's parameter declarations (albeit in an automatic, but static fashion).
If you do not want to use this approach, your only option is to perform your own parsing of the $UnboundArgs array (technically, it is an instance of [System.Collections.Generic.List[object]]), which is cumbersome, however, and not foolproof:
function Test-WrapperUnbound {
Param(
[Parameter(ValueFromRemainingArguments)] $UnboundArgs
)
# (Incompletely) emulate PowerShell's own argument parsing by
# building a hashtable of parameter-argument pairs to pass through
# to Set-Location via splatting.
$htPassThruArgs = @{}; $key = $null
switch -regex ($UnboundArgs) {
'^-(.+)' { if ($key) { $htPassThruArgs[$key] = $true } $key = $Matches[1] }
default { $htPassThruArgs[$key] = $_; $key = $null }
}
if ($key) { $htPassThruArgs[$key] = $true } # trailing switch param.
# Pass the resulting hashtable via splatting.
Set-Location @htPassThruArgs
}
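With this emulation in place, the failing call from the question should now work (a sketch; only simple `-Name value` pairs and trailing switch-style arguments are handled):

```powershell
Test-WrapperUnbound -Path C:\   # '-Path' and 'C:\' are re-paired into
                                # @{ Path = 'C:\' } and splatted

Test-WrapperUnbound C:\         # NOT handled: a leading positional
                                # argument has no preceding parameter name
```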
Note:
This isn't foolproof in that your function won't be able to distinguish between an actual parameter name (e.g., -Path) and a string literal that happens to look like a parameter name (e.g., '-Path').
Also, unlike with the scaffolding-based proxy-function approach mentioned at the top, you won't get tab-completion for any pass-through parameters and the pass-through parameters won't be listed with -? / Get-Help / Get-Command -Syntax.
If you don't mind having neither tab-completion nor syntax help and/or your wrapper function must support pass-through to multiple or not-known-in-advance target commands, using a simple (non-advanced) function with @args (as in your working example; see also below) is the simplest option, assuming your function doesn't itself need to support common parameters (which requires an advanced function).
Using a simple function also implies that common parameters are passed through to the wrapped command only (whereas an advanced function would interpret them as meant for itself, though their effect usually propagates to calls inside the function; with a common parameter such as -OutVariable, however, the distinction matters).
As for what you tried:
While PowerShell does support splatting via arrays (or array-like collections such as [System.Collections.Generic.List[object]]) in principle, this only works as intended if all elements are to be passed as positional arguments and/or if the target command is an external program (about whose parameter structure PowerShell knows nothing, and always passes arguments as a list/array of tokens).
In order to pass arguments with named parameters to other PowerShell commands, you must use hashtable-based splatting, where each entry's key identifies the target parameter and the value the parameter value (argument).
Even though the automatic $args variable is technically also an array ([object[]]), PowerShell has built-in magic that allows splatting with @args to also work with named parameters - this does not work with any custom array or collection.
Note that the automatic $args variable, which collects all arguments for which no parameter was declared - is only available in simple (non-advanced) functions and scripts; advanced functions and scripts - those that use the [CmdletBinding()] attribute and/or [Parameter()] attributes - require that all potential parameters be declared.
I want all of the functions in my module to default to $ErrorActionPreference = 'Stop'. Is it possible without modifying all the functions?
I have a file per function.
Assuming that your module is a script module, i.e., implemented in PowerShell code:
Important:
Modules have their own stack of scopes that is independent of the scopes of non-module code (and other modules'). While this provides isolation from the caller's environment that is generally useful, it also means that the caller's $ErrorActionPreference value never takes effect for script-module functions (unless you directly run from the global scope, which modules also see) - but it does so for compiled cmdlets. This highly problematic behavior is discussed in this GitHub issue.
Even though you therefore currently cannot control a script module's error behavior from the caller by way of $ErrorActionPreference, by setting (overriding) $ErrorActionPreference in your module you're closing that door permanently.
However, using the -ErrorAction common parameter for a specific call instead of the $ErrorActionPreference preference variable will still override your module-global $ErrorActionPreference value, because, behind the scenes, PowerShell translates the -ErrorAction argument to a function-local $ErrorActionPreference variable with that value.
The -ErrorAction and $ErrorActionPreference mechanisms are plagued by inconsistencies and obscure behaviors - this GitHub docs issue provides a comprehensive overview of PowerShell's error handling.
I want all of the functions in my module to default to $ErrorActionPreference = 'Stop'. Is it possible without modifying all the functions?
Yes - simply place $ErrorActionPreference = 'Stop' in your RootModule *.psm1 file's top-level code. (The RootModule entry of a module's manifest file (*.psd1) specifies a module's main module - see the docs).
If you don't have a RootModule entry that references a .psm1 file, simply create a .psm1 file in your module folder (typically named for the enclosing module) with content $ErrorActionPreference = 'Stop', and reference it from the RootModule entry - see this answer for background information.
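Assuming a one-file-per-function layout like the one described in the question, the root module might look something like this (the `Functions` folder name and the dot-sourcing pattern are just assumptions about your layout):

```powershell
# MyModule.psm1 (referenced by the RootModule entry in MyModule.psd1)

# Takes effect for all functions defined in this module:
$ErrorActionPreference = 'Stop'

# Dot-source the individual per-function files:
Get-ChildItem -Path $PSScriptRoot\Functions\*.ps1 | ForEach-Object {
    . $_.FullName
}
```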
Unless overridden by the caller by using the common -ErrorAction parameter when calling your module's functions (assuming they are advanced functions), your module's top-level $ErrorActionPreference value will be in effect for all of your module's functions, except if your function directly emits a statement-terminating error[1], in which case it is the caller's $ErrorActionPreference value that matters.
If your module is a binary module, i.e., exports compiled cmdlets (typically implemented in C#):
Compiled cmdlets don't have their own scope - they run in the caller's scope. It is therefore the caller's $ErrorActionPreference that matters, which can be overridden on a per-call basis with common parameter -ErrorAction, but only for non-terminating errors.
As with advanced functions in script modules, directly emitted statement-terminating errors[1] are always subject to the caller's $ErrorActionPreference value, even if -ErrorAction is used. (Note that binary cmdlets do not emit script-terminating errors).
[1] Statement-terminating errors occur in the following scenarios:
Directly, to abort execution of the enclosing cmdlet or advanced function/script:
When a binary cmdlet encounters a severe error that prevents it from continuing, it reports such an error with the ThrowTerminatingError() method (or just throws an exception).
An advanced PowerShell function/script would similarly have to use $PSCmdlet.ThrowTerminatingError(), which, however, is rare in practice; the typically used Throw statement creates a script-terminating error instead, which by default also terminates the entire thread, i.e., also terminates the caller and all its callers.
Indirectly, in PowerShell code:
When an expression causes a runtime error, such as 1 / 0 or 'a,b' -split ',', 'NotAnInt'.
When a .NET method call throws an exception, such as [int]::Parse('NotAnInt')
When, inside an advanced function/script, another cmdlet or advanced function / script is called that itself directly emits a statement-terminating error.
Note that advanced functions/scripts cannot relay statement-terminating errors as such:
By default (with $ErrorActionPreference containing 'Continue', possibly just in the local scope) the expression's / other command's terminating error effectively becomes a non-terminating error from the caller's perspective.
With $ErrorActionPreference set to 'Stop', the originally statement-terminating error is promoted to a script-terminating error.
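The last two points can be sketched with a hypothetical advanced function (behavior as described above):

```powershell
function Get-Failure {
    [CmdletBinding()] param()
    1 / 0   # expression runtime error -> statement-terminating error
}

$ErrorActionPreference = 'Continue'
Get-Failure              # error is reported, but from the caller's
'still running'          # perspective it behaves as non-terminating

$ErrorActionPreference = 'Stop'
Get-Failure              # promoted to a script-terminating error
'never reached'
```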
It looks like you can do this with explicit default parameter setting in the $PSDefaultParameterValues session-wide preference variable.
This includes the ability to set the ErrorAction common parameter default for specific functions. Keep in mind, you would want to append to the current value of $PSDefaultParameterValues, since it is a hash table and other functions may already have defaults set in the current session. This also means that [CmdletBinding()] needs to be included in every function that is being given this default value.
$PSDefaultParameterValues is a hash table, so you would want to modify in a fashion like so:
$PSDefaultParameterValues += @{
    "Get-Function:ErrorAction" = "Stop"
    "Get-Command:ErrorAction" = "Stop"
    "Get-MyFunction*:ErrorAction" = "Stop"
}
or
$PSDefaultParameterValues.add("Get-Function:ErrorAction","Stop")
Wildcards are accepted in function/cmdlet names, which may make it easier to get all of your functions in one line if you have a unique noun prefix naming scheme for imports from your module (though, this would include any other functions/cmdlets if any are imported into a session that include the same naming prefix).
I am trying to create an alias (named which) of the Get-Command cmdlet in a way that it doesn't run if I'm not sending any arguments (because if it's run without arguments it outputs all the commands that are available).
I know this can be done using a function but I would like to keep the tab completion functionality without having to write a sizeable function that is to be placed into my $PROFILE.
In short, I only want the alias to work if it is being passed arguments.
You can't do it with an alias, because PowerShell aliases can only refer to another command name or path, and can therefore neither include arguments nor custom logic.
Therefore you do need a function, but it can be a short and simple one:
function which { if ($args.count) { Get-Command #args } else { Throw "Missing command name." } }
Note that while passing -? for showing Get-Command's help does work, tab completion of arguments does not.
In order to get tab completion as well, you'll need to write a wrapper (proxy) function or at least replicate Get-Command's parameter declarations - which then does make the function definition sizable.
If the concern is just the size of the $PROFILE file itself, you can write a proxy script instead - which.ps1 - which you can invoke with just which as well, assuming you place it in one of the directories listed in $env:Path[1]; see next section.
Defining a wrapper (proxy) script or function:
Defining a wrapper (proxy) function or script is a nontrivial undertaking, but allows you to implement a robust wrapper that supports tab completion and even forwarding to the original command's help.
Note:
Bug alert: As zett42 points out, as of PowerShell [Core] 7.1, System.Management.Automation.ProxyCommand.Create neglects to include dynamic parameters if the target command is an (advanced) function or script; however, compiled cmdlets are not affected; see GitHub issue #4792 and this answer for a workaround.
For simplicity, the following creates a wrapper script, which.ps1, and saves it in the current directory. As stated, if you place it in one of the directories listed in $env:PATH, you'll be able to invoke it as just which.
The code below can easily be adapted to create a wrapper function instead: simply take the contents of the $wrapperCmdSource variable below and enclose it in function which { ... }.
As of PowerShell Core 7.0.0-preview.5, there are some problems with the auto-generated code, which may or may not affect you; they will be fixed at some point; to learn more and to learn how to manually correct them, see GitHub issue #10863.
# Create the wrapper scaffolding as source code (outputs a single [string])
$wrapperCmdSource =
[System.Management.Automation.ProxyCommand]::Create((Get-Command Get-Command))
# Write the auto-generated source code to a script file
$wrapperCmdSource > which.ps1
Note:
Even though System.Management.Automation.ProxyCommand.Create requires a System.Management.Automation.CommandMetadata instance to identify the target command, the System.Management.Automation.CommandInfo instances output by Get-Command can be used as-is.
Re comment-based help: By default, the proxy function simply forwards to the original cmdlet's help; however, you can optionally pass a string to serve as the comment-based help as the 2nd argument.
By using [System.Management.Automation.ProxyCommand]::GetHelpComments() in combination with output from Get-Help, you could start with a copy of the original command's help and modify it:
[System.Management.Automation.ProxyCommand]::GetHelpComments((Get-Help Get-Command))
You now have a fully functional which.ps1 wrapper script that behaves like Get-Command itself.
You can invoke it as follows:
./which # Same as: Get-Command; tab completion of parameters supported.
./which -? # Shows Get-Command's help.
You can now edit the script file to perform the desired customization.
Note: The auto-generated source code contains a lot of boilerplate code; however, typically only one or two places need tweaking to implement the custom functionality.
Specifically, place the following command as the first statement inside the begin { ... } block:
if (-not $MyInvocation.ExpectingInput -and -not ($Name -or $CommandType -or $Module -or $FullyQualifiedModule)) {
Throw "Missing command name or filter."
}
This causes the script to throw an error if the caller didn't provide some way of targeting a specific command or group of commands, either by direct argument or via the pipeline.
If you invoke the modified script without arguments now, you should see the desired error:
PS> ./which.ps1
Missing command name or filter.
...
Other common types of customizations are:
Removing parameters from the wrapper, by simply removing the parameter declaration.
Adding additional parameters to the invocation of the wrapped command, by modifying the following line in the begin block:
# Add parameters, as needed.
$scriptCmd = { & $wrappedCmd @PSBoundParameters }
Preprocessing pipeline input before passing it to the wrapped command, by customizing the process block and replacing $_ with your preprocessed input in the following line:
# Replace $_ with a preprocessed version of it, as needed.
$steppablePipeline.Process($_)
For an example of a complete implementation of a proxy function, see this answer.
[1] Caveat for Linux users: since the Linux file-system is case-sensitive, invocation of your script won't work case-insensitively, the way commands normally work in PowerShell. E.g., if your script file name is Get-Foo.ps1, only Get-Foo - using the exact same case - will work, not also get-foo, for instance.
It's possible in PowerShell to define an output type on scripts. Consider myScript.ps1:
[OutputType([String])]
param(
[string]$name
)
The following returns String:
(Get-Command .\myScript.ps1).OutputType.Name
But I would like to specify that a script returns text/json or text/xml. What would be a good way of doing that?
Inventing types for OutputType (e.g. [String.JSON]) does not work.
There are two independent, but complementary mechanisms for declaring output types:
Important: Both ways of declaring output types are informative only and aren't enforced by PowerShell at runtime (but the [OutputType] information is used for tab-completion).
Mechanism A: Using the [OutputType] attribute above the param() declaration in a script or function, as in the question:
Always use this mechanism, and, if necessary, supplement with mechanism B.
This mechanism only recognizes full type names of .NET types or the names of PowerShell type accelerators, and no up-front validation is performed; however, if an unrecognized type is encountered at invocation time:
if originally specified as a string (e.g., 'System.Text.Encoding'): it is quietly ignored.
if originally specified as a type literal (e.g., [System.Text.Encoding]): the function call breaks on invocation
This strict, type-based integration enables tab completion / IntelliSense (command line / Visual Studio Code)
As for when you may want to use mechanism B too:
If the .NET type names don't tell the full story, as in the question.
If a function outputs multiple types and you want to verbally describe what types are output when.
[Supplemental] Mechanism B: Using the .OUTPUTS section of comment-based help:
Accepts free-form descriptions; while referencing actual type names makes sense, doing so is not enforced.
While you can use this mechanism alone, doing so forgoes the advantages of tab completion and IntelliSense.
Therefore, use it to supplement mechanism A, if needed, but note that:
If both mechanisms are used, Get-Help only shows the mechanism B definitions.
Since the two mechanisms are independent, you must manually ensure that the free-form information specified is complete and consistent with the mechanism A declarations.
To examine the output-type information with human eyeballs, use (Get-Help <cmd>).returnvalues (caveat: requires help to be defined, such as via comment-based help) or read the OUTPUTS section of the output from Get-Help -Full <cmd>.
This will either show the free-form .OUTPUTS content or, in its absence, the full type names of the [OutputType[]]-declared types.
For programmatic access, use (Get-Command <cmd>).OutputType, which returns [System.Management.Automation.PSTypeName] instances whose .Type property contains the actual type.
Details below, starting with the answer to the original question.
Mechanism A: Using OutputType attribute(s) above param():
You may only specify .NET types as arguments to the OutputType attribute, so strings such as text/json or text/xml that reflect MIME types won't work.
If you want string output, you've already chosen the closest approximation of your MIME types in terms of .NET types: [OutputType([String])]
You may specify multiple types in a single [OutputType()] attribute, or you may use individual attributes.
You must use individual attributes if you want to map output types to specific parameter sets (e.g., [OutputType([string], ParameterSetName='NameOnly')]).
As of Windows PowerShell v5.1 / PowerShell Core v6.0.1, however, this information is neither used by tab completion / IntelliSense nor reflected in the output from Get-Help -Full.
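A sketch of both forms, using a hypothetical function with two parameter sets:

```powershell
function Get-Thing {
    # Single attribute listing multiple possible output types:
    #   [OutputType([string], [System.IO.FileInfo])]
    # Individual attributes, mapped to specific parameter sets:
    [OutputType([string], ParameterSetName = 'NameOnly')]
    [OutputType([System.IO.FileInfo], ParameterSetName = 'Full')]
    param(
        [Parameter(ParameterSetName = 'NameOnly')] [switch] $NameOnly,
        [Parameter(ParameterSetName = 'Full')]     [switch] $Full
    )
}

(Get-Command Get-Thing).OutputType.Name   # lists both declared types
```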
Note: For a type to be recognized by the OutputType attribute at invocation time,
use either the full type name (e.g., [System.Text.RegularExpressions.Match] rather than just [Match]) or the name of a PowerShell type accelerator, such as [regex].
When in doubt, type [<fullTypeName>] at the prompt and see if it is recognized as a type; additionally, you may choose between specifying the type as a string (e.g., 'System.Text.Encoding') or as a type literal (e.g., [System.Text.Encoding]), which has behavioral implications - see below.
if the specified type isn't present at invocation time, e.g., because the assembly containing the type hasn't been loaded, the behavior depends on how the output was declared:
if originally specified as a string: it is quietly ignored.
if originally specified as a type literal: the function call breaks on invocation
Beyond that,
either: simply describe the specific types of strings that your cmdlet outputs in its help text, such as via mechanism B described below,
or: create custom .NET types whose names reflect the desired conceptual type, and specify them in the OutputType attribute - see below.
As stated, despite its constrained nature, the OutputType attribute is purely informative at runtime - it is, however, used for tab completion and IntelliSense (Visual Studio Code).
Example of using a custom type:
# Define an empty custom type for the sole purpose of being able to use
# it with the OutputType attribute.
# Note: On first call, this may take a second or two, as the code is being
# compiled.
Add-Type @'
namespace org.example {
    public class text_json {}
}
'@
function foo {
# Reference the custom type defined above; full type name required.
[OutputType([org.example.text_json])]
param(
[string]$name
)
}
You then get:
PS> (Get-Command foo).OutputType.Name
org.example.text_json
Note that the [System.Management.Automation.PSTypeName] instances that .OutputType outputs are not the same as the [type] instances you get when you inspect a type directly:
The .Name property of [System.Management.Automation.PSTypeName] corresponds to the .FullName property of [type], so you get the full type name if the type is recognized (available in the session); otherwise, it is the name as originally specified.
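For instance (a sketch):

```powershell
function foo { [OutputType([datetime])] param() }

$ot = (Get-Command foo).OutputType[0]   # a [System.Management.Automation.PSTypeName]
$ot.Name   # 'System.DateTime' - the *full* name, since the type is recognized
$ot.Type   # the underlying [type] instance ($null if the type isn't recognized)
```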
Mechanism B: Using the .OUTPUTS section in comment-based help:
Conceptual help topic Get-Help about_Comment_Based_Help describes how section .OUTPUTS inside comment-based help for scripts and functions can be used to list and describe output types.
Note: Similarly, the .INPUTS section can be used to describe supported input types, though that is arguably less interesting, given that specifying input types is an integral part of parameter declaration and documentation. By and large, .INPUTS functions analogously to .OUTPUTS, with only differences mentioned below.
An .OUTPUTS section uses the following format suggested by the examples in the help topic, but note that the text is ultimately free-form, and no structure is enforced.
<type-name> <optional-description>
Even though the help topic (as of PSv5) doesn't mention it, it seems that in the event of multiple output types, each should be described in its own .OUTPUTS section.
That said, the free-form format allows you to describe multiple output-type descriptions in a single section.
Example, taking advantage of the free-form format to describe the output in terms of MIME types:
<#
.SYNOPSIS
Does stuff.
.OUTPUTS
text/json. In case of x, returns a JSON [string].
.OUTPUTS
text/xml. In case of y, returns an XML [string].
#>
function foo {
param()
}
Note that when using Get-Help to view the help as a whole, the (aggregated) .OUTPUTS (and .INPUTS) sections are only shown with Get-Help -Full.
Querying that information programmatically essentially yields the OUTPUTS section from
Get-Help -Full verbatim (with individual .OUTPUTS sections in the source concatenated with an empty line in between, and extra trailing empty lines):
PS> (Get-Help foo).returnvalues
text/json. In case of x, returns a JSON [string].
text/xml. In case of y, returns an XML [string].
To access the descriptions individually, by index:
PS> (Get-Help foo).returnvalues.returnvalue[0].type.name
text/json. In case of x, returns a JSON [string].
However, given the free-form nature of the descriptions and that they're intended for human consumption, this granular access may not be needed.
That said, using this form returns the text without extra whitespace, so (Get-Help foo).returnvalues.returnvalue.type.name can be used to return all text without empty lines.
This works analogously for the .INPUTS sections:
(Get-Help foo).inputTypes.inputType.type.name
One neat way is using the PowerShell help comment syntax.
<#
.Synopsis
Returns a list of files.
.Description
...
.Inputs
text/csv
.Outputs
text/json[]
#>
You can access this information from the Get-Help object:
$cmd = Get-Help -Name .\components\FileList.ps1
"Input Type: " + $cmd.inputTypes.inputType[0].type.name
"Output Type: " + $cmd.returnValues.returnValue[0].type.name
Results in:
Input Type: text/csv
Output Type: text/json[]
Also works on standard cmdlets:
(Get-Help Get-Date).returnValues[0].returnValue[0].type.name
Returns:
System.DateTime or System.String