What are good guidelines for naming PowerShell verbs?

I'm early on in my PowerShell learning, and I'm wondering if there are some good guidelines for verbs in Posh for cmdlets (or advanced functions, whatever they're called in CTP3).
If I do a get-verb I can see the lot of them. But I'm still not sure how I should lay out my modules.
Here's the example I'm running into right now. I have a little script that asks Perforce: if I were to sync, what files would change and how big are they? It outputs a summary of sizes and a mini-tree of folders for where the changes will occur (as well as how many would need resolving).
Is that a query-p4sync? Or is it a 'sync-p4 -whatif'? Or something else?
Before I start writing a lot of these scripts I want to make sure I name them right.

You can find a list of common verbs on MSDN, along with a description of what they should be used for.

Here's an updated list of approved verbs on the Windows PowerShell Blog, as of July 15.
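You can also query the approved list directly from the console. For example, to check whether Sync (which is relevant to your Perforce example) is on the list, and which group it belongs to:

```powershell
# Check whether a specific verb is approved; Sync is listed in the Data group
Get-Verb | Where-Object { $_.Verb -eq 'Sync' }
```

Since Sync is approved, a name like Sync-P4 or Sync-PerforceRepository will not trigger the unapproved-verb warning on import.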

From your use of the word "modules", I'm going to guess you are using V2 of PowerShell, which allows you to take advantage of advanced functions.
Advanced functions let you attribute your function so that it gets native support for -WhatIf and -Confirm:
function Sync-PerforceRepository
{
    [CmdletBinding(SupportsShouldProcess = $true)]
    param (...) #add your parameters
    Begin
    {
        #setup code here
    }
    Process
    {
        if ($PSCmdlet.ShouldProcess($ObjectBeingProcessed, "String describing the action happening"))
        {
            #Process logic here
        }
    }
    End
    {
        #Cleanup code
    }
}
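With SupportsShouldProcess in place, the common risk-mitigation parameters come for free, so a dry run of the hypothetical function above is exactly the '-whatif' idea from the question:

```powershell
# Preview what would be synced without performing the sync;
# -WhatIf (and -Confirm) are added automatically by SupportsShouldProcess
Sync-PerforceRepository -WhatIf
```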

Related

Documenting Powershell modules and scripts

With PowerShell 5 introducing OOP class support, the traditional comment-based PowerShell documentation methods for functions, scripts and modules are no longer a good fit. Get-Help does not present any help for classes, methods or properties, and it looks like it will stay that way. Beyond that, Get-Help is not much help when trying to find information on a specific function without actually having the module or PowerShell script in question.
As classes are especially useful for more complex PowerShell projects, the need for up-to-date documentation is more pressing than ever. Projects like Doxygen and the Sandcastle Help File Builder do support help generation for a number of OO languages, but do not seem to be able to handle PowerShell code. A quick look at the PoshBuild project reveals that it, too, is targeted at .NET language projects and needs to be integrated into the Visual Studio build process, which pure-PowerShell code does not have.
There is also PSDoc, which is capable of generating documentation for modules in HTML or Markdown format based on Get-Help output; it would have been pretty much what I want if it supported classes.
So how do I auto-generate sensible documentation if I have
.ps1 scripts
.psm1 modules
classes in my Powershell code
using the comment-based help documentation syntax?
@trebleCode still deserves the answer; I'm just posting this for anyone interested.
I started trying to answer this question a while ago but got distracted and never finished. If I recall correctly, there was some discussion I found on GitHub where they said they didn't plan on supporting comment-annotated classes, which is sad because I like PowerShell comments.
My thought was that, by calling the built-in help methods, you could create a helper function that detects these non-standard comments above the class keyword and converts them to comment objects without invoking Get-Help. These comments could also be stored in external files.
Below is the code I found for parsing comments into objects and creating comment objects in code.
# References:
# https://learn-powershell.net/2015/08/07/invoking-private-static-methods-using-powershell/
# https://stackoverflow.com/questions/1259222/how-to-access-internal-class-using-reflection
# https://stackoverflow.com/questions/15652656/get-return-value-after-invoking-a-method-from-dll-using-reflection
# https://github.com/PowerShell/PowerShell/blob/a8627b83e5cea71c3576871eacad7f2b19826d53/src/System.Management.Automation/help/HelpCommentsParser.cs
$ExampleComment = @"
<#
.SYNOPSIS
This was a triumph
#>
"@
$CommentLines = [Collections.Generic.List`1[String]]::new()
$InvokeArgs = @($ExampleComment, $CommentLines)
# GetMethod Filter
$BindingFlags = 'static','nonpublic','instance'
# GetMethod Filter: We need to specify overloaded methods by their parameters
$ParamTypes = [Type]::GetTypeArray($InvokeArgs)
$ParamCount = [System.Reflection.ParameterModifier]::new(2)
$HelpParser = [psobject].Assembly.GetType('System.Management.Automation.HelpCommentsParser')
$CollectCommentText = $HelpParser.GetMethod('CollectCommentText', $BindingFlags, $null, $ParamTypes, $ParamCount)
# Static methods aren't bound to an instance, so $null is passed as the invocation target.
# TODO: Figure out return value
$CollectCommentText.Invoke($Null,$InvokeArgs)
$InvokeArgs
# Comment object but properties are read only.
$CommentHelp = [System.Management.Automation.Language.CommentHelpInfo]::new()
$CommentHelp.Synopsis
$CommentHelp.Description
$CommentHelp.Examples
$CommentHelp

Why automation framework require proxy function for every powershell cmdlet?

In my new project team, they have written a proxy function for each PowerShell cmdlet. When I asked the reason for this practice, they said that it is the normal way an automation framework is written. They also said that if a PowerShell cmdlet changes, we do not need to worry; we can just change one function.
I have never seen PowerShell cmdlets' functionality or names change.
For example, the SQL PowerShell module previously used a snap-in and then changed to a module, but the cmdlets stayed the same. No change in cmdlet signatures; at most, extra arguments may have been added.
Because of these proxy functions, even small tasks take a long time. Is their fear baseless or correct? Is there any incident where a PowerShell cmdlet's name or parameters changed?
I guess they want to be extra safe. PowerShell does have breaking changes here and there, but I doubt that what your team is doing would be impacted by those (given the rare nature of these events). For instance, my several-years-old scripts continue to function properly to the present day (and they were mostly developed against PS 2-3).
I would say that this is overengineering, but I can't really blame them for it.
4c74356b41 makes some good points, but I wonder if there's a simpler approach.
Bear with me while I restate the situation, just to ensure I understand it.
My understanding of the issue is that usage of a certain cmdlet may be strewn about the code base of your automation framework.
One day, in a new release of PowerShell or that module, the implementation changes; could be internal only, could be parameters (signature) or even cmdlet name that changes.
The problem then, is you would have to change the implementation all throughout your code.
So with proxy functions, you don't prevent this issue; a breaking change will break your framework, but the idea is that fixing it would be simpler because you can fix up your own proxy function implementation, in one place, and then all of the code will be fixed.
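For context, a pass-thru proxy in this style can be as small as the following sketch (the function name and parameters are illustrative, not from the framework in question):

```powershell
# Hypothetical pass-thru proxy: if Get-Service ever changed its name or
# signature, only this one body would need fixing, not every call site.
function Get-FrameworkService {
    param (
        [string[]] $Name = '*'
    )
    Get-Service -Name $Name
}
```

The cost, as noted above, is writing and maintaining one of these for every cmdlet the framework touches, even though most of them never change.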
Other Options
Because of the way command discovery works in PowerShell, you can override existing commands by defining functions or aliases with the same name.
So for example let's say that Get-Service had a breaking change and you used it all over (no proxy functions).
Instead of changing all your code, you can define your own Get-Service function, and the code will use that instead. It's basically the same thing you're doing now, except you don't have to implement hundreds of "empty" proxy functions.
For better naming, you can name your function Get-FrameworkService (or something) and then just define an alias for Get-Service to Get-FrameworkService. It's a bit easier to test that way.
One disadvantage with this is that reading the code could be unclear, because when you see Get-Service somewhere it's not immediately obvious that it could have been overwritten, which makes it a bit less straightforward if you really wanted to call the current original version.
For that, I recommend importing all of the modules you'll be using with -Prefix and then making all (potentially) overridable calls use the prefix, so there's a clear demarcation.
This even works with a lot of the "built-in" commands, so you could re-import the module with a prefix:
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
TL;DR
So the short answer:
avoid making lots and lots of pass-thru proxy functions
import all modules with prefix
when needed, create a new function to override the functionality of another
then add an alias from the prefixed name to the override function
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
Compare-OverridableObject $a $b
No need for a proxy here; later when you want to override it:
function Compare-CanonicalObject { <# Stuff #> }
New-Alias Compare-OverridableObject Compare-CanonicalObject
Anywhere in the code that you see a direct call like:
Compare-Object $c $d
Then you know: either this intentionally calls the current implementation of that command (which in other places could be overridden), or this command should never be overridden.
Advantages:
Clarity: looking at the code tells you whether an override could exist.
Testability: writing tests is clearer and easier for overridden commands because they have their own unique name
Discoverability: all overridden commands can be discovered by searching for aliases with the right name pattern i.e. Get-Alias *-Overridable*
Much less code
All overrides and their aliases can be packaged into modules

Alternative to "Sort" as a PowerShell verb?

I have a PowerShell function Sort-VersionLabels. When I add this function to a module, Import-Module complains:
WARNING: Some imported command names include unapproved verbs which might make
them less discoverable. Use the Verbose parameter for more detail or type
Get-Verb to see the list of approved verbs.
According to this, Sort is a "reserved verb".
What could be a good (and approved) alternative?
Update
The function takes an array of version numbers in the form: <major>.<minor>.<revision>[-<milestone[nr]>]. Milestone can be dev, alpha, beta or stable (in that order). So the standard Sort-Object function won't work.
It outputs the sorted array to the pipeline.
I think something like ConvertTo-SortedVersionLabels, while a little bit awkward, uses an approved and non-reserved verb but is still clear.
You could also make sorting a parameter to a different function, like Get-VersionLabels -Sorted.
How you would work that in depends on your module as a whole and whether you have such a function to modify. It's unclear from your current post, but if you edit it with more details we might be able to provide more suggestions.
The core of this issue will generate opinionated results. This creates a conundrum, since you are looking for something specific that the current answers have been unable to address. I understand that you are looking for a solution that logically fits your function while being in the standard verb list, which is admirable. To continue from an earlier comment I made, I am going to try to state a case for all the approved verbs that might fit your situation. I will refer frequently to the Approved Verbs List linked in your question and will use "AVL" for brevity going forward.
Group: The comments on the AVL refer to using this in place of Arrange. Arrange, being a synonym for Sort, would be a good fit; sticking with the recommendation, then, we should use Group.
Set: It is a synonym for Sort. However, in the AVL it is associated with Write, Reset, Assign, or Configure, which are not related to your cmdlet. Still, it is in the list and could fit if you are willing to put aside the discombobulation it creates with existing PowerShell cmdlets.
I don't really have a number 3.
Update: This is a weak case, but the AVL describes its use as a way to maintain "[a cmdlet's] state [and] accuracy".
Order/Organize: Not in the AVL, but I find these very fitting, and they don't currently conflict with any existing verbs.
Ultimately, AVL be damned and do whatever you want. Sort is a very good fit for what you are trying to do. You can also just use -DisableNameChecking when importing your module. It is only a warning after all. Briatist's answer is also good in my opinion.
Bonus from comments
Not that you asked for it, but when you said we have to enable name checking I thought about this. Just for fun!
$reservedVerbs = "ForEach","Format","Group","Sort","Tee"
$approvedVerbList = (Get-Verb).Verb
Get-Command -Module Microsoft.WSMan.Management | ForEach-Object {
    If ($approvedVerbList -notcontains ($_.Name -split "-")[0]) {
        Write-Warning "$($_.Name) does not use an approved verb."
    }
    If ($reservedVerbs -contains ($_.Name -split "-")[0]) {
        Write-Warning "$($_.Name) is using a reserved verb."
    }
}
Whenever I need a verb that is not an approved PowerShell verb, I use Invoke-* instead. So in your case, you could name it Invoke-SortVersionLabels
You shouldn't need a special cmdlet at all. If a VersionLabel is an object, just take the collection and pipe it to Sort-Object using the property(ies) you need.
# Assuming a versionlabel has a 'Name' Property...
$VersionLabelCollection | Sort-Object -Property:Name
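If the milestone suffix is what breaks a plain Sort-Object (as the question's update suggests), a calculated sort key can encode the dev < alpha < beta < stable ordering. A sketch, under the assumption that a label without a milestone counts as stable:

```powershell
# Rank the milestones; a missing suffix is treated as stable (assumption)
$rank = @{ dev = 0; alpha = 1; beta = 2; stable = 3 }
'1.0.0-beta','1.0.0-alpha2','1.0.0','0.9.0-dev' |
    Sort-Object { [version] ($_ -split '-')[0] },
                { $m = ($_ -split '-')[1]
                  if ($m) { $rank[($m -replace '\d+$')] } else { 3 } }
```

The first key sorts on the numeric version, the second breaks ties by milestone rank, so no custom cmdlet (and no verb-naming dilemma) is needed.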

How do you avoid repeating code in a PowerShell module? For example, getting a list of hostnames from a file, or storing user credentials

I am writing a PowerShell module with a list of utilities that I use on a daily basis. However, my question is: how can I avoid repeating so much code?
For example, if I have a function that gets a list of hostnames from a file, I have to create that parameter in every single function. How can I just create it once, and then have each function prompt for it, or grab it?
function CopyFiles {
    param (
        [parameter(Mandatory = $true, HelpMessage = "Enter the path to the machine list file (UNC path or local).")]
        [ValidateScript({ $_ -ne "" })]
        [string] $MachineListFilename
    )
    # ...sometime later in the script...
    $MachineList = Get-Content $MachineListFilename
}
function DoSomeOtherTask {
    param (
        [parameter(Mandatory = $true, HelpMessage = "Enter the path to the machine list file (UNC path or local).")]
        [ValidateScript({ $_ -ne "" })]
        [string] $MachineListFilename
    )
    # ...sometime later in the script...
    $MachineList = Get-Content $MachineListFilename
}
It just seems really inefficient to cut and paste the same code over and over again, especially for something like domain name, username, password, etc.
Ultimately, I'm trying to get to a point to where I just write wrapper scripts for these functions once I import the module. Then I can just pass parameters via the command line. However, with the current way I'm doing it, the module is going to be littered with a lot of repetitive code, like parameters for username and password, etc.
Is there a better way?
Make your cmdlets/functions as independent and flexible as you can. Sometimes a wrapper function is the way to go, other times consolidating things into one function and calling it differently is more workable.
In the example you've given here, give the caller two options - you can pass in the filename for the list of machines, or pass in the list of machines. That way, you can read the file once in the calling script, and pass the array of machine names into each function. This will be much more efficient as you're only reading from disk one time.
I strongly recommend reading up on advanced functions and parametersets to simplify things (you'll need this for my suggestion above).
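As a sketch of that suggestion (the function and parameter names are hypothetical), parameter sets let one function accept either the file path or the already-read list:

```powershell
function Copy-FrameworkFiles {
    [CmdletBinding(DefaultParameterSetName = 'ByList')]
    param (
        # Caller passes a path; the function reads the file itself
        [Parameter(Mandatory, ParameterSetName = 'ByFile')]
        [ValidateNotNullOrEmpty()]
        [string] $MachineListFilename,

        # Caller passes the machine names directly, read once elsewhere
        [Parameter(Mandatory, ParameterSetName = 'ByList')]
        [string[]] $MachineList
    )
    if ($PSCmdlet.ParameterSetName -eq 'ByFile') {
        $MachineList = Get-Content $MachineListFilename
    }
    # ... operate on $MachineList ...
}
```

The calling script can then do `$machines = Get-Content $path` once and pass `-MachineList $machines` to every function, touching the disk a single time.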
As for "repetitive code" - as soon as you find yourself copying/pasting code, stop. Find a way to make that code generic and move it into its own function, then call that function wherever it's needed. This isn't a PowerShell notion - this is standard programming, the DRY Principle.
Even then, you'll still find yourself with some modicum of copypasta. It's going to happen just because of the nature of the PowerShell environment. Look at Microsoft's own cmdlets - you'll see evidence of it there too. The key is to minimize it.
Having 3 cmdlets that all take username & password (why not take a Credential object instead/as another option, BTW?) will result in copying & pasting those parameters in the function definition. You're not going to avoid that, and it's not necessarily a bad thing. You can create code snippets in most good editors (PowerShell ISE included) to automatically "generate" it for you if that makes it easier/faster.
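To illustrate the Credential suggestion (function name hypothetical): accepting a [PSCredential] collapses the username/password pair into a single parameter, and the caller can build it once and hand it to every function:

```powershell
function Connect-FrameworkServer {
    param (
        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]
        $Credential
    )
    # ... use $Credential.UserName, or $Credential.GetNetworkCredential()
    #     when a plain-text password is unavoidable ...
}

$cred = Get-Credential            # prompt once
Connect-FrameworkServer -Credential $cred
```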
I personally like to create intermediary functions that call my functions with specific parameters for things I do a lot of times. I manage these with a switch statement. This way, the backend driver does not change, and I have a nice interface I can give to others who want to use, but not develop on, the code I made.
function frontEnd {
    intermediary -CallType 'TypeA'
}
function intermediary {
    param ([string] $CallType)
    switch ($CallType) {
        # backEnd is the real worker function, defined elsewhere
        'TypeA' { backEnd -Param1 "get dns" -Param2 "domain1" -Param3 $true }
        'TypeB' { backEnd -Param1 "add to dns" -Param2 "domain" -Param3 $false }
        default { backEnd @args }
    }
}
Depending on what functionality you are looking for, this could help you. This is a very crude way of doing it, and I highly suggest making it more robust and stable if you aren't going to be the only one using it.

Does powershell have a method_missing()?

I have been playing around with the dynamic abilities of PowerShell, and I was wondering something:
Is there anything in PowerShell analogous to Ruby's method_missing(), where you can set up a 'catch-all method' to dynamically handle calls to non-existent methods on your objects?
No, not really. I suspect that the next version of PowerShell will become more in line with the dynamic dispatch capabilities added to .NET 4 but for the time being, this would not be possible in pure PowerShell.
Although I do recall that there is a component model similar to that found in .NET's TypeDescriptor for creating objects that provide properties and methods dynamically to PowerShell. This is how XML elements are able to be treated like objects, for example. But this is poorly documented if at all and in my experience, a lot of the types/methods needed to integrate are marked as internal.
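The closest thing available in plain PowerShell is attaching members dynamically with Add-Member. It is not a catch-all handler, but it covers the "add behavior at runtime" half of the use case:

```powershell
# Not method_missing, but methods can be bolted onto an object at runtime
$obj = New-Object PSObject
$obj | Add-Member -MemberType ScriptMethod -Name Greet -Value { "hello, $($args[0])" }
$obj.Greet('world')   # returns "hello, world"
```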
You can emulate it, but it's tricky. The technique is described in Lee Holmes' book and boils down to two scripts - Add-RelativePathCapture http://poshcode.org/2131 and New-CommandWrapper http://poshcode.org/2197.
The essence is: you can override any cmdlet via New-CommandWrapper. Thus you can redefine Out-Default, which is implicitly invoked at the end of almost every command (excluding commands with explicit formatters like Format-Table at the end). In the new Out-Default you check whether the last command threw an exception saying that no method / property was found, and there you insert your method_missing logic.
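For the detection step inside the replacement Out-Default, the failure you are looking for surfaces as a MethodNotFound error; a sketch of the check (assuming the wrapper has access to the error stream):

```powershell
# Sketch: did the last command fail because a method didn't exist?
$lastError = $Error[0]
if ($lastError -and $lastError.FullyQualifiedErrorId -eq 'MethodNotFound') {
    # method_missing logic goes here, e.g. re-dispatch to a handler function
}
```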
You could use try/catch in PowerShell 2.0:
http://blogs.technet.com/b/heyscriptingguy/archive/2010/03/11/hey-scripting-guy-march-11-2010.aspx