Documenting PowerShell modules and scripts - powershell

With PowerShell 5 introducing support for OOP classes, the traditional comment-based PowerShell documentation methods for functions, scripts and modules are no longer a good fit. Get-Help does not present any help for classes, methods or properties, and it looks like it will stay that way. Beyond that, Get-Help is not much help when trying to find information on a specific function without actually having the module or PowerShell script in question.
As classes are especially useful for more complex PowerShell projects, the need for up-to-date documentation is more pressing than ever. Projects like Doxygen and the Sandcastle Help File Builder do support help generation for a number of OO languages, but do not seem to be able to handle PowerShell code. A quick look at the PoshBuild project reveals that it is targeted at .NET language projects too, and needs to be integrated into the Visual Studio build process, which pure-PowerShell code does not have.
There is also PSDoc, which is capable of generating documentation for modules in HTML or Markdown format based on Get-Help output; that would have been pretty much what I want, had it supported classes.
So how do I auto-generate sensible documentation if I have
.ps1 scripts
.psm1 modules
classes in my PowerShell code
using the comment-based help documentation syntax?

@trebleCode still deserves the answer, I'm just posting this for anyone interested.
I started trying to answer this question a while ago but got distracted and never finished. If I recall correctly, there was some discussion I found on GitHub where they said they didn't plan on supporting comment-annotated classes, which is sad, because I like PowerShell comments.
My thought here was that, by calling the built-in help methods, you could create a helper function that detects these non-standard comments above the class keyword and converts them to comment objects without invoking Get-Help. These comments could also be stored in external files.
Below is the code I found for parsing comments into objects and for creating comment objects in code.
# References:
# https://learn-powershell.net/2015/08/07/invoking-private-static-methods-using-powershell/
# https://stackoverflow.com/questions/1259222/how-to-access-internal-class-using-reflection
# https://stackoverflow.com/questions/15652656/get-return-value-after-invoking-a-method-from-dll-using-reflection
# https://github.com/PowerShell/PowerShell/blob/a8627b83e5cea71c3576871eacad7f2b19826d53/src/System.Management.Automation/help/HelpCommentsParser.cs
$ExampleComment = @"
<#
.SYNOPSIS
This was a triumph
#>
"@
$CommentLines = [Collections.Generic.List`1[String]]::new()
$InvokeArgs = @($ExampleComment, $CommentLines)
# GetMethod filter
$BindingFlags = 'static','nonpublic','instance'
# GetMethod filter: overloaded methods must be identified by their parameter types
$ParamTypes = [Type]::GetTypeArray($InvokeArgs)
$ParamCount = [System.Reflection.ParameterModifier]::new(2)
$HelpParser = [psobject].Assembly.GetType('System.Management.Automation.HelpCommentsParser')
$CollectCommentText = $HelpParser.GetMethod('CollectCommentText', $BindingFlags, $null, $ParamTypes, $ParamCount)
# Static methods have no instance, so $null is passed as the invocation target.
# TODO: Figure out return value
$CollectCommentText.Invoke($null, $InvokeArgs)
$InvokeArgs
# CommentHelpInfo is the comment object type, but its properties are read-only.
$CommentHelp = [System.Management.Automation.Language.CommentHelpInfo]::new()
$CommentHelp.Synopsis
$CommentHelp.Description
$CommentHelp.Examples
$CommentHelp
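As a complement to the reflection approach above, the comments sitting above a class keyword can also be located with the public parser API; below is a minimal sketch (the Widget class and its synopsis are made up for illustration):
# Parse a script and collect its tokens; comment tokens survive parsing.
$code = @'
<#
.SYNOPSIS
Describes the Widget class.
#>
class Widget { }
'@
$tokens = $errors = $null
$ast = [System.Management.Automation.Language.Parser]::ParseInput($code, [ref]$tokens, [ref]$errors)
# Find the class definition, then the last comment token ending above it.
$classAst = $ast.Find({ param($node) $node -is [System.Management.Automation.Language.TypeDefinitionAst] }, $true)
$comment = $tokens | Where-Object { $_.Kind -eq 'Comment' -and $_.Extent.EndLineNumber -lt $classAst.Extent.StartLineNumber } | Select-Object -Last 1
$comment.Text # the raw <# ... #> block, ready to feed to a parser like the one above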

Related

Why does an automation framework require a proxy function for every PowerShell cmdlet?

In my new project team, they have written a proxy function for each PowerShell cmdlet. When I asked the reason for this practice, they said it is the normal way an automation framework is written. They also said that if a PowerShell cmdlet changes, we do not need to worry; we can just change the one function.
I have never seen PowerShell cmdlets' functionality or names change.
For example, the SQL PowerShell module previously shipped as a snap-in and later changed to a module, but the cmdlets stayed the same: no change in cmdlet signatures, though extra arguments may have been added.
Because of these proxy functions, even small tasks take a long time. Is their fear baseless or justified? Is there any incident where a PowerShell cmdlet's name or parameters changed?
I guess they want to be extra safe. PowerShell does have breaking changes here and there, but I doubt that what your team is doing would be impacted by them, given how rare such events are. For instance, my several-years-old scripts continue to function properly to the present day (and they were mostly developed against PS 2-3).
I would say that this is overengineering, but I can't really blame them for it.
4c74356b41 makes some good points, but I wonder if there's a simpler approach.
Bear with me while I restate the situation, just to ensure I understand it.
My understanding of the issue is that usage of a certain cmdlet may be strewn about the code base of your automation framework.
One day, in a new release of PowerShell or of that module, the implementation changes; it could be internal only, or the parameters (signature), or even the cmdlet name.
The problem then is that you would have to change the implementation all throughout your code.
So proxy functions don't prevent this issue; a breaking change will still break your framework, but the idea is that fixing it is simpler, because you fix your own proxy function implementation in one place and all of the code is fixed with it.
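For reference, the simplest form of such a pass-through proxy might look like the sketch below (Get-FrameworkDate is a hypothetical name; real proxies would declare the full parameter set and splat it through):
function Get-FrameworkDate {
    # Pass-through proxy: forward every bound parameter to the real cmdlet,
    # calling it by its module-qualified name so this function can't shadow it.
    [CmdletBinding()]
    param([string]$Format)
    Microsoft.PowerShell.Utility\Get-Date @PSBoundParameters
}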
Other Options
Because of the way command discovery works in PowerShell, you can override existing commands by defining functions or aliases with the same name.
So for example let's say that Get-Service had a breaking change and you used it all over (no proxy functions).
Instead of changing all your code, you can define your own Get-Service function, and the code will use that instead. It's basically the same thing you're doing now, except you don't have to implement hundreds of "empty" proxy functions.
For better naming, you can name your function Get-FrameworkService (or something) and then just define an alias for Get-Service to Get-FrameworkService. It's a bit easier to test that way.
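To make that concrete, here is a sketch of the pattern, using the hypothetical Get-FrameworkService from above:
function Get-FrameworkService {
    # Absorb the (hypothetical) breaking change here, then call the real
    # cmdlet via its module-qualified name to avoid calling ourselves.
    [CmdletBinding()]
    param([string[]]$Name = '*')
    Microsoft.PowerShell.Management\Get-Service -Name $Name
}
Set-Alias -Name Get-Service -Value Get-FrameworkService
Because aliases take precedence over cmdlets in command resolution, every plain Get-Service call in the framework now goes through the override.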
One disadvantage of this is that reading the code can be unclear: when you see Get-Service somewhere, it's not immediately obvious that it could have been overridden, which also makes it less straightforward to call the original version when you really want it.
For that, I recommend importing all of the modules you'll be using with -Prefix, and then making all (potentially) overridable calls use the prefix, so there's a clear demarcation.
This even works with a lot of the "built-in" commands, so you could re-import the module with a prefix:
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
TL;DR
So the short answer:
avoid making lots and lots of pass-thru proxy functions
import all modules with prefix
when needed, create a new function to override the functionality of another
then add an alias from the prefixed name to the override function
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
Compare-OverridableObject $a $b
No need for a proxy here; later when you want to override it:
function Compare-CanonicalObject { <# Stuff #> }
New-Alias Compare-OverridableObject Compare-CanonicalObject
Anywhere in the code that you see a direct call like:
Compare-Object $c $d
Then you know: either this intentionally calls the current implementation of that command (which in other places could be overridden), or this command should never be overridden.
Advantages:
Clarity: looking at the code tells you whether an override could exist.
Testability: writing tests is clearer and easier for overridden commands because they have their own unique name.
Discoverability: all overridden commands can be discovered by searching for aliases with the right name pattern, e.g. Get-Alias *-Overridable*
Much less code
All overrides and their aliases can be packaged into modules

Looking for best practices on PS using module, Import-Module, import using dot-sourcing and Add-PSSnapin

Background: I am looking for best practices for building a PowerShell framework of my own. As a former .NET programmer, I like to keep source files small and organize code into classes and libraries.
Question: I am totally confused by using module, Import-Module, sourcing via dot-import, and Add-PSSnapin. Sometimes it works; sometimes it does not. Some includes work when running from the ISE/VS2015 but fail when running via cmd powershell -command "& './myscript.ps1'". I want to include/import classes and functions. I would also like to use type and namespace aliases. Using them with includes produces even weirder results, but sometimes, somehow, they work.
Edit: let me be more specific:
Local project case (all files in one dir): main.ps1, common_autorun.ps1, _common.psm1, _specific.psm1.
How do I include these modules in the main script using relative paths?
_specific.psm1 also relies on _common.psm1.
There are ScriptBlocks passed between modules that may contain calls to classes defined in parent context.
common_autorun.ps1 contains solely type accelerators and namespace imports as described here.
Modules contain mainly classes with static methods, as I am not yet used to the PowerShell style of programming where functions do not have predictable return values.
As I understand it, my problems are related to context and scope. Unfortunately, these PowerShell concepts are not well documented for v5 classes.
Edit2: Simplified sample:
_common.psm1 contains watch
_specific.psm1 contains getDiskSpaceInfoUNC
main.ps1 contains just:
watch 'getDiskSpaceInfoUNC "\\localhost\d$"'
What includes/imports should I put into these files in order this code to work both in ISE and powershell.exe -command "& './main.ps1'"?
Of course, this works perfectly when both functions are defined in main.ps1.
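For what it's worth, here is a sketch of one arrangement that should behave the same in the ISE and under powershell.exe, assuming watch and getDiskSpaceInfoUNC are plain functions ($PSScriptRoot anchors the relative paths to the script's own folder, which is exactly what differs between hosts):
# main.ps1
. "$PSScriptRoot\common_autorun.ps1"
Import-Module "$PSScriptRoot\_common.psm1" -Force
Import-Module "$PSScriptRoot\_specific.psm1" -Force
watch 'getDiskSpaceInfoUNC "\\localhost\d$"'
# inside _specific.psm1, so it can stand alone:
Import-Module "$PSScriptRoot\_common.psm1"
Two caveats: Import-Module exports functions but not classes (class definitions only become visible to a caller via a using module statement), and if watch evaluates its string argument inside _common.psm1's module scope, getDiskSpaceInfoUNC may still not be visible there; passing a real [scriptblock] created in main.ps1, which stays bound to the caller's scope, is one way around that.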

PowerShell - Missing System.Collections.Generic in C:\windows\assembly\GAC_MSIL

I'm trying to use this code in Powershell:
Add-Type -AssemblyName "System.Collections.Generic"
However I get this error:
Add-Type : Cannot add type. One or more required assemblies are missing.
I've looked in C:\windows\assembly\GAC_MSIL and can see that there is no folder named System.Collections.Generic.
Do I need to download this library, if so where?
There is no System.Collections.Generic assembly; that is a namespace. A large portion of the types in that namespace live in the core library assembly, mscorlib.dll, which is already available in PowerShell, since PowerShell is .NET.
Go to MSDN for the namespace, find the type you are trying to use, and you can see the assembly it is in; remember that there is not necessarily a one-to-one relationship between assemblies and namespaces.
Using generic types is a bit involved in PowerShell and can depend on using reflection or formatting complex type names.
See this stackoverflow question for some more details.
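In the simple cases, though, a closed generic type name can be written inline; two quick sketches:
# Both forms construct generic types from mscorlib without any Add-Type call.
$list = New-Object 'System.Collections.Generic.List[string]'
$list.Add('hello')
$dict = [System.Collections.Generic.Dictionary[string,int]]::new() # PowerShell 5+
$dict['answer'] = 42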
Never mind; I'm not sure why that doesn't work, but it turns out I don't need it anyway.
I had this code, which for some reason wasn't working initially because it couldn't find [Collections.Generic.List[String]], but now it seems to work:
[string[]] $csvUserInfo = @([IO.File]::ReadAllLines($script:EmailListCsvFile))
[Collections.Generic.List[String]]$x = $csvUserInfo
I would answer with another question: why do you need this assembly?
Can you try replacing your code with:
$x = [IO.File]::ReadAllLines($script:EmailListCsvFile)
In PowerShell it should work.

Does powershell have a method_missing()?

I have been playing around with the dynamic abilities of PowerShell, and I was wondering something.
Is there anything in PowerShell analogous to Ruby's method_missing(), where you can set up a 'catch-all' method to dynamically handle calls to non-existent methods on your objects?
No, not really. I suspect that the next version of PowerShell will fall more in line with the dynamic dispatch capabilities added to .NET 4, but for the time being this is not possible in pure PowerShell.
I do recall, though, that there is a component model similar to .NET's TypeDescriptor for creating objects that provide properties and methods dynamically to PowerShell; this is how XML elements are able to be treated like objects, for example. But it is poorly documented, if documented at all, and in my experience a lot of the types/methods needed to integrate with it are marked internal.
You can emulate it, but it's tricky. The technique is described in Lee Holmes' book and boils down to two scripts: Add-RelativePathCapture http://poshcode.org/2131 and New-CommandWrapper http://poshcode.org/2197.
The essence is this: you can override any cmdlet via New-CommandWrapper. Thus you can redefine Out-Default, which is implicitly invoked at the end of almost every command (excluding commands with an explicit formatter like Format-Table at the end). In the new Out-Default you check whether the last command threw an exception saying that no method/property was found, and that is where you insert your method_missing logic.
You could use try/catch within PowerShell 2.0:
http://blogs.technet.com/b/heyscriptingguy/archive/2010/03/11/hey-scripting-guy-march-11-2010.aspx
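A minimal sketch of that idea: catch the runtime error a missing method raises and handle it yourself (the fallback message is just illustrative):
$obj = [pscustomobject]@{ Name = 'demo' }
try {
    $obj.NoSuchMethod()
}
catch {
    # The error message names the missing member; a wrapper such as
    # New-CommandWrapper above could parse it and dispatch a fallback.
    "method_missing fallback invoked: $($_.Exception.Message)"
}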

What are good guidelines for naming PowerShell verbs?

I'm early on in my PowerShell learning, and I'm wondering if there are some good guidelines for verbs in Posh for cmdlets (or advanced functions, whatever they're called in CTP3).
If I do a get-verb I can see the lot of them. But I'm still not sure how I should lay out my modules.
Here's the example I'm running into right now. I have a little script that asks Perforce: if I were to sync, what files would change and how big are they? It outputs a summary of sizes and a mini-tree of the folders where the changes would occur (as well as how many files would need resolving).
Is that a Query-P4Sync? Or a Sync-P4 -WhatIf? Or something else?
Before I start writing a lot of these scripts I want to make sure I name them right.
You can find a list of common verbs on MSDN, along with a description of what each should be used for.
Here's an updated list of approved verbs on the Windows PowerShell Blog, as of July 15.
From your use of the word "modules", I'm going to guess you are using V2 of PowerShell, which allows you to take advantage of Advanced Functions.
Advanced functions provide a way to attribute your function to gain native support for -WhatIf and -Confirm:
function Sync-PerforceRepository
{
    [CmdletBinding(SupportsShouldProcess=$true)]
    param (...) # add your parameters
    Begin
    {
        # setup code here
    }
    Process
    {
        if ($PSCmdlet.ShouldProcess($ObjectBeingProcessed, "String Describing Action Happening"))
        {
            # process logic here
        }
    }
    End
    {
        # cleanup code
    }
}
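With SupportsShouldProcess wired up, PowerShell adds -WhatIf and -Confirm automatically, so the "what would a sync change?" question from above becomes:
Sync-PerforceRepository -WhatIf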