So I am using the somewhat buggy SAPIEN PowerShell Studio to build a PowerShell-driven GUI application, and I am attempting to perform an ADSI query.
$nameOfDeviceInput is a System.Windows.Forms.TextBox
On one form, I have the following function:
$buttonPerformAction_Click={
if (FindInAD($nameOfDeviceInput.Text).Count -gt 0)
{
$buttonPerformAction.BackColor = 'Red'
$buttonPerformAction.Text = "System already exists in AD with that name. Try another name"
return
}
.....
}
On the "main" form, I have the function FindInAD
function FindInAd($nameOfSystem)
{
Write-Host "seeking system" $nameOfSystem
([adsisearcher]"(CN=$nameOfSystem)").FindAll()
}
FindInAd() is failing because for whatever reason, $nameOfSystem is set to 1, and if I don't explicitly cast it as a string, it gets implicitly cast to Int32 (obviously)
I have tried the following:
Fully qualifying the textbox input by prefixing it with the form it belongs to ($adObjectModifier)
$buttonPerformAction_Click={
if (FindInAD($adObjectModifier.$nameOfDeviceInput.Text).Count -gt 0)
{
$buttonPerformAction.BackColor = 'Red'
$buttonPerformAction.Text = "System already exists in AD with that name. Try another name"
return
}
.....
}
Explicitly casting the $nameOfSystem parameter as a type of [string]
function FindInAd([string]$nameOfSystem)
{
Write-Host "seeking system" $nameOfSystem
([adsisearcher]"(CN=$nameOfSystem)").FindAll()
}
Passing a raw string into FindInAD from the AdObjectModifier form.
....
if (FindInAD("Test").Count -gt 0)
....
There is nothing else on the output pipeline at the time (at least not from me) between the invocations. It is EventHandler > function call with a string parameter
Why are the strings I'm passing getting changed to a digit???
EDIT: I think my passed parameter is being automatically replaced with the resulting boolean somehow, but this doesn't make any sense to me....
You have a syntax problem:
FindInAD($nameOfDeviceInput.Text).Count # WRONG
Note: Wrong in this context means: the syntax is formally valid, but doesn't do what you expect - see the bottom section.
It should be:
(FindInAD $nameOfDeviceInput.Text).Count
PowerShell commands - functions, cmdlets, scripts and external programs - are invoked like shell commands - foo arg1 arg2 - and not like C# methods - foo('arg1', 'arg2').
That is:
Do not put (...) around the list of arguments.
However, you do need (...) around the call as a whole if you want a command call to participate in an expression, as shown above with the access to property .Count - see this answer for more information.
Separate arguments with spaces, both from each other and from the command name - do not use ,
A , between arguments functions differently: it constructs an array that is passed as a single argument - see below.
You may pass simple strings (ones that contain neither spaces nor PowerShell metacharacters such as ; or &) as barewords; that is, quoting them is optional; e.g., instead of foo 'bar', you can call foo bar - see this answer for how PowerShell parses unquoted command arguments.
Also, if a target function or script has explicitly declared parameters (which binary cmdlets invariably do), such as -bar and -baz, you can pass your values as named arguments, i.e. by prepending them with the target parameter name; doing so is good practice in scripts: foo -bar arg1 -baz arg2
By contrast, calling methods of objects uses the syntax familiar from regular programming languages such as C# ($obj.foo('arg1', 'arg2'))
This difference relates to PowerShell's two fundamental parsing modes, explained in detail in this answer:
Commands are parsed in argument mode - as in shells.
Method calls and operator-based expressions are parsed in expression mode - as in regular programming languages.
These modes are required in order to allow PowerShell to serve double duty: as a shell on the one hand, and as a scripting (programming) language on the other.
PowerShell can help you avoid this syntax problem:
Note that the problem isn't that using method syntax to call a command is invalid syntax, but that it doesn't work as intended, which can be difficult to diagnose.
In short: When you call command foo as foo('foo', 'bar'), ('foo', 'bar') is a 2-element array, which is then passed to foo as a single argument.
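A quick way to see this for yourself (foo here is a hypothetical demo function):
# Hypothetical function that reports what was bound to each of its two parameters.
function foo { param($a, $b) "a=[$a] b=[$b]" }
foo('x', 'y')  # method-style: the 2-element array binds to $a -> a=[x y] b=[]
foo 'x' 'y'    # command-style: each value binds separately    -> a=[x] b=[y]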
To prevent the problem to begin with, you can run Set-StrictMode -Version 2 (or higher), which makes PowerShell report an error if you accidentally use method syntax when calling a command:
# Turn on the check for accidental method syntax.
# Note: This also turns on ADDITIONAL checks - see below.
Set-StrictMode -Version 2
# This call now produces an ERROR, because the proper syntax would be:
# foo 'a' 'b'
foo('a', 'b')
Caveats:
Set-StrictMode -Version 2 comprises additional strictness checks that you must then also conform to, notably:
You must not reference non-existent variables.
You must not reference non-existent properties; see GitHub issue #2798 for an associated pitfall in connection with PowerShell's unified handling of scalars and collections.
An error is reported only for pseudo method calls with multiple arguments (e.g., foo('bar', 'baz')), not with only one; e.g., foo('bar') is accepted, because the single-argument case generally still (accidentally) works.
The errors reported for strictness violations are statement-terminating errors: that is, they only terminate the statement at hand, but by default the script continues; to ensure that overall execution aborts - on any type of error - you'd have to set $ErrorActionPreference = 'Stop' at the start of your code. See this answer for more information.
As for what you tried:
FindInAD($nameOfDeviceInput.Text).Count
is the same as:
FindInAD ($nameOfDeviceInput.Text).Count
That is, the result of expression ($nameOfDeviceInput.Text).Count is passed as an argument to function FindInAD.
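This also explains why $nameOfSystem appeared to be set to 1: in PowerShell v3+, even scalars have a .Count property, which is 1 for any non-null scalar, so a minimal repro is simply:
# .Count on a (non-null) scalar - such as the textbox's string - returns 1 in PSv3+.
('some device name').Count   # -> 1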
Related
Trying to get param(...) to have some basic error checking... one thing that puzzles me is how to detect invalid switches and flags that are not in the param list?
function abc {
param(
[switch]$one,
[switch]$two
)
}
When I use it:
PS> abc -One -Two
# ok... i like this
PS> abc -One -Two -NotAValidSwitch
# No Error here for -NotAValidSwitch? How to make it have an error for invalid switches?
As Santiago Squarzon, Abraham Zinala, and zett42 point out in comments, all you need to do is make your function (or script) an advanced (cmdlet-like) one:
explicitly, by decorating the param(...) block with a [CmdletBinding()] attribute.
and/or implicitly, by decorating at least one parameter variable with a [Parameter()] attribute.
function abc {
[CmdletBinding()] # Make function an advanced one.
param(
[switch]$one,
[switch]$two
)
}
An advanced function automatically ensures that only arguments that bind to explicitly declared parameters may be passed.
If unexpected arguments are passed, the invocation fails with a statement-terminating error.
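With the advanced version of abc above, calling it with an undeclared switch should now fail along these lines:
PS> abc -One -Two -NotAValidSwitch
# -> error: A parameter cannot be found that matches parameter name 'NotAValidSwitch'.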
Switching to an advanced script / function has side effects, but mostly beneficial ones:
You gain automatic support for common parameters, such as -OutVariable or -Verbose.
You lose the ability to receive unbound arguments, via the automatic $args variable (which is desired here); however, you can declare a catch-all parameter for any remaining positional arguments via [Parameter(ValueFromRemainingArguments)]
To accept pipeline input in an advanced function or script, a parameter must explicitly be declared as pipeline-binding, via [Parameter(ValueFromPipeline)] (objects as a whole) or [Parameter(ValueFromPipelineByPropertyName)] (value of the property of input objects that matches the parameter name) attributes.
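For instance, a minimal sketch of a pipeline-binding parameter (Measure-NameLength is a hypothetical function name):
function Measure-NameLength {
  [CmdletBinding()]
  param(
    [Parameter(Mandatory, ValueFromPipeline)]
    [string] $Name
  )
  process {  # runs once for each pipeline input object
    $Name.Length
  }
}
'foo', 'quux' | Measure-NameLength   # -> 3, 4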
For a juxtaposition of simple (non-advanced) and advanced functions, as well as binary cmdlets, see this answer.
If you do not want to make your function an advanced one:
Check if the automatic $args variable - reflecting any unbound arguments (a simpler alternative to $MyInvocation.UnboundArguments) - is empty (an empty array) and, if not, throw an error:
function abc {
param(
[switch]$one,
[switch]$two
)
if ($args.Count) { throw "Unexpected arguments passed: $args" }
}
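Calling this non-advanced version with an undeclared switch then surfaces the problem explicitly:
PS> abc -One -NotAValidSwitch
# -> throws: Unexpected arguments passed: -NotAValidSwitch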
Potential reasons for keeping a function a simple (non-advanced) one:
To "cut down on ceremony" in the parameter declarations, e.g. for pipeline-input processing via the automatic $input variable alone.
Generally, for simple helper functions, such as those for module- or script-internal use, that don't need support for common parameters.
When a function acts as a wrapper for an external program to which arguments are to be passed through and whose parameters (options) conflict with the names and aliases of PowerShell's common parameters, such as -verbose or -ov (-OutVariable).
What isn't a good reason:
When your function is exported from a module and has an irregular name (not adhering to PowerShell's <Verb>-<Noun> naming convention based on approved verbs) and you want to avoid the warning that is emitted when you import that module.
First and foremost, this isn't an issue of simple vs. advanced functions, but relates solely to exporting a function from a module; that is, even an irregularly named simple function will trigger the warning. And the warning exists for a good reason: Functions exported from modules are typically "public", i.e. (also) for use by other users, who justifiably expect command names to follow PowerShell's naming conventions, which greatly facilitates command discovery. Similarly, users will expect cmdlet-like behavior from functions exported by a module, so it's best to only export advanced functions.
If you still want to use an irregular name while avoiding a warning, you have two options:
Disregard the naming conventions altogether (not advisable) and choose a name that contains no - character, e.g. doStuff - PowerShell will then not warn. A better option is to choose a regular name and define the irregular name as an alias for it (see below), but note that even aliases have a (less strictly adhered-to) naming convention, based on an official one- or two-letter prefix defined for each approved verb, such as g for Get- and sa for Start- (see the approved-verbs doc link above).
If you do want to use the <Verb>-<Noun> convention but use an unapproved verb (token before -), define the function with a regular name (using an approved verb) and also define and export an alias for it that uses the irregular name (aliases aren't subject to the warning). E.g., if you want a command named Ensure-Foo, name the function Set-Foo, for instance, and define Set-Alias Ensure-Foo Set-Foo. Do note that both commands need to be exported and are therefore visible to the importer.
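A minimal sketch of what that could look like inside a module's .psm1 file (Set-Foo's body is hypothetical):
# Regularly named function, with the irregular name defined as an alias.
function Set-Foo { [CmdletBinding()] param() 'doing stuff' }
Set-Alias Ensure-Foo Set-Foo
# Export both, so the importer can call either name.
Export-ModuleMember -Function Set-Foo -Alias Ensure-Foo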
Finally, note that the warning can also be suppressed on import, namely via Import-Module -DisableNameChecking. The downside of this approach - aside from placing the burden of silencing the warning on the importer - is that custom classes exported by a module can't be imported this way, because importing such classes requires a using module statement, which has no silencing option (as of PowerShell 7.2.1; see GitHub issue #2449 for background information).
I downloaded the npm package for merge junit reports - https://www.npmjs.com/package/junit-merge.
The problem is that I have multiple files to merge, and I am trying to use a string variable to hold the file names to merge.
When I write the command myself like:
junit-merge a.xml b.xml c.xml
This works, the merged file is being created, but when I do it like
$command = "a.xml b.xml c.xml"
junit-merge $command
This does not work. The error is
Error: File not found
Has anyone faced similar issues?
# WRONG
$command = "a.xml b.xml c.xml"; junit-merge $command
results in command line junit-merge "a.xml b.xml c.xml"[1], i.e. it passes a string with verbatim value a.xml b.xml c.xml as a single argument to junit-merge, which is not the intent.
PowerShell does not act like POSIX-like shells such as bash do in this regard: In bash, the value of variable $command - due to being referenced unquoted - would be subject to word splitting (one of the so-called shell expansions) and would indeed result in 3 distinct arguments (though even there an array-based invocation would be preferable).
PowerShell supports no bash-like shell expansions[2]; it has different, generally more flexible constructs, such as the splatting technique discussed below.
Instead, define your arguments as individual elements of an array, as justnotme advises:
# Define the *array* of *individual* arguments.
$command = "a.xml", "b.xml", "c.xml"
# Pass the array to junit-merge, which causes PowerShell
# to pass its elements as *individual arguments*; it is the equivalent of:
# junit-merge a.xml b.xml c.xml
junit-merge $command
This is an application of a PowerShell technique called splatting, where you specify arguments to pass to a command via a variable:
Either (typically only used for external programs, as in your case):
As an array of arguments to pass individually as positional arguments, as shown above.
Or (more typically when calling PowerShell commands):
As a hashtable to pass named parameter values, in which you must replace the $ sigil in the variable reference with @ (in your case, @command); e.g., the following is the equivalent of calling Get-ChildItem C:\ -Directory:
$paramVals = @{ LiteralPath = 'C:\'; Directory = $true }; Get-ChildItem @paramVals
Caveat re array-based splatting:
Due to a bug detailed in GitHub issue #6280, PowerShell doesn't pass empty arguments through to external programs (this applies to all Windows PowerShell versions and to PowerShell (Core) as of 7.2.x; a fix may be coming in 7.3, via the $PSNativeCommandArgumentPassing preference variable, which in 7.2.x relies on an explicitly activated experimental feature).
E.g., foo.exe "" unexpectedly results in just foo.exe being called.
This problem equally affects array-based splatting, so that $cmdArgs = "", "other"; foo.exe $cmdArgs results in foo.exe other rather than the expected foo.exe "" other.
Optional use of @ with array-based splatting:
You can use the @ sigil with arrays too, so this would work as well:
junit-merge @command
There is a subtle distinction, however.
While it will rarely matter in practice, the safer choice is to use $, because it guards against the (however hypothetical) accidental misinterpretation of a --% array element you intend to be a literal:
Only the @ syntax recognizes an array element --% as the special stop-parsing symbol.
Said symbol tells PowerShell not to parse the remaining arguments as it normally would and instead pass them through as-is - unexpanded, except for expanding cmd.exe-style variable references such as %USERNAME%.
This is normally only useful when not using splatting, typically in the context of being able to use command lines that were written for cmd.exe from PowerShell as-is, without having to account for PowerShell's syntactical differences.
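For reference, a typical non-splatting use looks something like this (foo.exe is a hypothetical external program):
# Everything after --% is passed as-is, except that cmd.exe-style
# variable references such as %USERNAME% are expanded.
foo.exe --% /option:"a b" %USERNAME%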
In the context of splatting, however, the behavior resulting from --% is non-obvious and best avoided:
As in direct argument passing, the --% is removed from the resulting command line.
Argument boundaries are lost, so that a single array element foo bar, which normally gets placed as "foo bar" on the command line, is placed as foo bar, i.e. effectively as 2 arguments.
[1] Your call implies the intent to pass the value of variable $command as a single argument, so when PowerShell builds the command line behind the scenes, it double-quotes the verbatim a.xml b.xml c.xml string contained in $command to ensure that. Note that these double quotes are unrelated to how you originally assigned a value to $command.
Unfortunately, this automatic quoting is broken for values with embedded " chars. - see this answer, for instance.
[2] As a nod to POSIX-like shells, PowerShell does perform one kind of shell expansion, but (a) only on Unix-like platforms (macOS, Linux) and (b) only when calling external programs: Unquoted wildcard patterns such as *.txt are indeed expanded to their matching filenames when you call an external program (e.g., /bin/echo *.txt), which is a feature that PowerShell calls native globbing.
I had a similar problem. This technique from powershell worked for me:
Invoke-Expression "junit-merge $command"
I also tried the following (from a powershell script) and it works:
cmd /c "junit-merge $command"
TIL that all of a function's arguments are optional by default.
function f([int] $x) {
if (!$x) {
echo "Why so null?"
}
}
f
Et voilà! The forgotten $x argument just became a $null
> .\script.ps1
Why so null?
For $x to be mandatory, its declaration needs to be updated to [parameter(Mandatory=$true)][int] $x, which doesn't appear to be a sane solution if there's more than one or two parameters. It'd be nice to have this behavior as the default, because otherwise a huge codebase containing lots of functions looks a little bit verbose and oversaturated with attributes.
At first glance, Set-StrictMode sounds like a magic word that should make all of the function's arguments mandatory by default, but unfortunately it doesn't behave this way.
What are the best practices for making all of the function's arguments in the scope mandatory?
The best practice is to mark all mandatory arguments as mandatory.
In PowerShell, all mandatory parameters must be explicitly marked as such, or else they will be considered optional; at this time, no catch-all exists.
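That is, something along these lines (Mandatory by itself suffices in PowerShell v3+; older versions require Mandatory=$true):
function f {
  param(
    [Parameter(Mandatory)] [int] $x,
    [Parameter(Mandatory)] [int] $y
  )
  $x + $y
}
f   # now prompts for $x and $y instead of silently defaulting them to $null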
You can find more on PowerShell argument properties here.
I got excited about PowerShell script blocks at first, but I was recently confused by the execution order inside blocks. For example:
$test_block = {
write-host "show 1"
ps
write-host "show 2"
Get-Date
}
The output from calling $test_block.Invoke():
show 1
show 2
<result of command 'ps'>
<result of command 'get-date'>
Do commands that output something run first?
This behaviour is because write-host doesn't put its output on the pipeline. The other commands place their output on the pipeline, so it is not written to the screen until the invocation returns.
To get the behaviour I believe you were expecting, use write-output instead; the results of all the commands will then be returned in the pipeline.
$test_block = {
write-output "show 1"
ps
write-output "show 2"
Get-Date
}
$test_block.Invoke()
To complement David Martin's helpful answer:
Avoiding Write-Host (which is often the wrong tool to use) in favor of Write-Output, i.e. outputting to the success output stream - rather than printing to the display with Write-Host[1] - solves your immediate problem.
However, you could have avoided the problem by using &, the call operator, instead of the .Invoke() method:
# Invokes the script block and *streams* its output.
& $test_block
Using & is generally preferable, not just to avoid .Invoke()'s collect-all-success-output-stream-first behavior - see next section.
As an aside:
This answer describes a common problem with similar symptoms (success output (pipeline output) appearing out of order relative to other streams), although it is technically unrelated and also occurs with &:
# !! The Write-Host output prints FIRST, due to implicit, asynchronous
# !! table formatting of the [pscustomobject] instance.
& { [pscustomobject] @{ Foo = 'Bar' }; Write-Host 'Why do I print first?' }
Why &, not .Invoke(), should be used to invoke script blocks ({ ... }):
Script blocks ({ ... }) are normally invoked with &, the call operator, in argument (parsing) mode (like cmdlets and external programs), not via their .Invoke() method, which allows for more familiar syntax; e.g.:
& $test_block rather than $test_block.Invoke()
with arguments: & $test_block arg1 ... rather than $test_block.Invoke(arg1, ...)
Perhaps more importantly, using this operator-based invocation syntax (& { ... } ... or . { ... } ...) has the following advantages:
It preserves normal streaming semantics, meaning that success output (too) is emitted from the script block as it is being produced, whereas .Invoke() collects all success output first, in a [System.Collections.ObjectModel.Collection[psobject]] instance, which it then returns - by contrast, the Write-Host output goes straight to the display in both cases. (A short demonstration follows after this list.)
As a beneficial side effect, your specific output-ordering problem goes away, but note that in PSv5+ there can generally still be an output-ordering problem, although its cause is unrelated:
Implicit tabular output (implied Format-Table) for output types without predefined format data is asynchronous in an effort to determine suitable column widths (your specific code happens to only use cmdlets with predefined format data).
See this answer for more information; a simple repro:
[pscustomobject] @{ foo = 1 }; Write-Host 'Should print after, but prints first.'
It allows you to pass named arguments (e.g. -Foo Bar), whereas .Invoke() supports only positional (unnamed) ones (e.g. Bar).
It preserves normal semantics for script-terminating errors:
# OK: & throw aborts the entire script; 'after' never prints.
& { throw 'fatal' }; 'after'
# !! .Invoke() "eats" the script-terminating error and
# !! effectively converts it to a *statement*-terminating one.
# !! Therefore, execution continues, and 'after' prints.
{ throw 'fatal' }.Invoke(); 'after'
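A minimal demonstration of the streaming difference mentioned in the first point above:
# &: each number appears as it is produced, roughly one per second.
& { 1..3 | ForEach-Object { $_; Start-Sleep 1 } }
# .Invoke(): nothing appears until the block finishes; 1, 2, 3 then print together.
{ 1..3 | ForEach-Object { $_; Start-Sleep 1 } }.Invoke()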
Additionally, using operator-based invocation gives you the option to use ., the dot-sourcing operator, in lieu of &, so as to run a script block directly in the caller's scope, whereas an .Invoke() method call only runs in a child scope (as & does):
# Dot-sourcing the script block runs it in the caller's scope.
. { $foo='bar' }; $foo # -> 'bar'
Use of .Invoke() is best limited to PowerShell SDK projects (which are typically C#-based, where use of PowerShell operators isn't an option).
[1] Technically, since PowerShell v5 Write-Host outputs to the information stream (stream number 6), which, however, prints to the display by default, while getting ignored in the pipeline and redirections. See this answer for a juxtaposition of Write-Host and Write-Output, and why the latter is typically not even needed.
This is a weird one. Normally when I execute an external command from powershell I use the & operator like this:
& somecommand.exe -p somearguments
However, today I came across the . operator used like this:
.$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd install notepadplusplus
What purpose does the period serve in this scenario? I don't get it.
The "." dot sourcing operator will send AND receive variables from other scripts you have called. The "&" call operator will ONLY send variables.
For instance, considering the following:
Script 1 (call-operator.ps1):
clear
$funny = "laughing"
$scriptpath = split-path -parent $MyInvocation.MyCommand.Definition
$filename = "laughing.ps1"
"Example 1:" # Call another script. Variables are passed only forward.
& $scriptpath\$filename
"Example 2:" # Call another script. Variables are passed backwards and forwards.
. $scriptpath\$filename
$variableDefinedInOtherScript
Script 2 (laughing.ps1):
# This is to test the passing of variables from call-operator.ps1
"I am $funny so hard. Passing variables is so hilarious."
$variableDefinedInOtherScript = "Hello World!"
Create both scripts and ONLY run the first one. You'll see that the "." dot sourcing operator sends and receives variables.
Both have their uses, so be creative. For instance, the "&" call operator would be useful if you wanted to modify the value(s) of variables in another script while preserving the original value(s) in your current script. Kind of a safeguard. ;)
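For reference, running call-operator.ps1 should produce output along these lines - note that $funny is visible to laughing.ps1 in both cases (child scopes can read the caller's variables); only the variable flowing back differs:
Example 1:
I am laughing so hard. Passing variables is so hilarious.
Example 2:
I am laughing so hard. Passing variables is so hilarious.
Hello World!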
The Short:
It is a Special Operator used to achieve what regular operators cannot achieve. This particular operator . actually has two distinctly different Special Operator use cases.
The Long:
As with any other language, scripting or otherwise, PowerShell supports many different types of operators to help manipulate values. These regular operators include:
Arithmetic
Assignment
Comparison
Logical
Redirection
Split and Join
Type
Unary
However, PowerShell also supports what's known as Special Operators, which are used to perform tasks that cannot be performed by the other types of operators.
These Special Operators Include:
@( ) Array subexpression operator
& Call operator
[ ] Cast operator
, Comma operator
. Dot sourcing operator
-f Format operator
[ ] Index operator
| Pipeline operator
. Property dereference operator
.. Range operator
:: Static member operator
$( ) Subexpression operator
. Dot sourcing operator: used in this context to allow a script to run in the current scope, essentially allowing any functions, aliases, and variables created by the script to be added to the current session.
Example:
. c:\scripts\sample.ps1
Note that this application of the . Special Operator is followed by a space to distinguish it from the (.) symbol that represents the current directory.
Example:
. .\sample.ps1
. Property dereference operator: Allows access to the properties and methods of an object which follows the . by indicating that the expression on the left side of the . character is an object and the expression on the right side of the . is an object member (a property or method).
Example:
$myProcess.peakWorkingSet
(get-process PowerShell).kill()
Disclaimer & Sources:
I had the same question while looking at a PowerShell script whose feature set I was trying to expand, and landed here when doing my research for the answer. However, I managed to find my answer using this magnificent write-up on the Microsoft Development Network, supplemented with this further expansion of the same ideas from IT Pro.
Cheers.
The dot is a call operator:
$a = "Get-ChildItem"
. $a # (executes Get-ChildItem in the current scope)
In your case, however, I don't see what it does.
Period or full stop for an object's properties; like
$CompSys.TotalPhysicalMemory
See here: http://www.computerperformance.co.uk/powershell/powershell_syntax.htm#Operators_
This answer is to expand slightly upon those already provided by David Brabant and his commenters. While those remarks are all true and pertinent, there is something that has been missed.
The OP's use of & when invoking external commands is unnecessary. Omitting the & would have no effect (in the example of his usage). The purpose of & is to allow the invocation of commands whose names are the values of a (string) expression. By using the & above, PowerShell then (essentially) treats the subsequent arguments as strings, the first of which is the command name that & duly invokes. If the & were omitted, PowerShell would take the first item on the line as the command to execute.
However, the . in the second example is necessary (although, as noted by others, & would work just as well in this case). Without it, the command line would begin with a variable access ($env:systemdrive) and so PowerShell would be expecting an expression of some form. However, immediately following the variable reference is a bare file path, which is not a valid expression and will generate an error. By using the . (or &) at the beginning of the line, it is now treated as a command (because the beginning doesn't look like a valid expression) and the arguments are processed as expandable strings (" "). Thus, the command line is treated as
. "$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd" "install" "notepadplusplus"
The first argument has $env:systemdrive substituted into it and then . invokes the program thus named.
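A minimal illustration of why the leading . (or &) is needed when the command path starts with a variable reference:
# Starts with a variable, so PowerShell expects an expression -> parse error:
$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd install notepadplusplus
# A leading . (or &) forces command parsing -> works:
. $env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd install notepadplusplus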
Note: the full description of how PowerShell processes command line arguments is way more complicated than that given here. This version was cut down to just the essential bits needed to answer the question. Take a look at about_Parsing for a comprehensive description. It is not complete but should cover most normal usage. There are other posts on Stack Overflow and GitHub (where PowerShell now resides) that cover some of the seemingly quirky behaviour not listed in the official documentation. Another useful resource is about_Operators, though again this isn't quite complete. One example is that the equivalence of . and & when invoking something other than a PowerShell script/cmdlet gets no mention at all.