Auto-Loading Cmdlets as part of a Module dynamically - PowerShell

So, I was thinking: I have a lot (!) of custom cmdlets, but I don't want to load all of them in my profile, because, naturally, that would take a lot of time. (I work fast, so my tools need to be fast.) But I also don't want to load them manually every time, because, well, that's annoying. (And again ... I work fast!)
Luckily, there's a neat functionality called "Auto-Loading" which will totally solve my problem ... kind of. I just have to put my cmdlets into script modules, and they will be loaded automatically, right?
It seems, though, that some requirements need to be met for PowerShell to "detect" the cmdlets belonging to a module. The easiest way I know of so far is to simply put each cmdlet in its own ps1 file and then create a manifest somewhat like this:
@{
    ModuleVersion = "1.0"
    NestedModules = @(
        "Get-Something.ps1",
        "Remove-Something.ps1",
        "Test-Something.ps1",
        "Run-Something.ps1",
        "Invoke-Something.ps1",
        "Start-Something.ps1",
        "Stop-Something.ps1"
    )
    CmdletsToExport = @(
        "Get-Something",
        "Remove-Something",
        "Test-Something",
        "Run-Something",
        "Invoke-Something",
        "Start-Something",
        "Stop-Something"
    )
}
This does work. But as I said ... I work fast, and I'm lazy. It would be much easier if the module could discover its members dynamically: basically, all ps1 files in the same folder. This would be pretty easy with some code in the psm1 file, but then the auto-loading would not work. I understand that PowerShell needs to know the exported cmdlets beforehand.
Is there any other, dynamic way to do that, other than specifying them explicitly in the psd1 module manifest?

If you are worried about the size or organization of your module, create different modules for the different common subjects of your cmdlets, and put your functions into the psm1 for each module. Then make sure you add the cmdlet definitions to CmdletsToExport in the module manifest (psd1). If you have a centralized feed, distributing your modules becomes easy, and if you simply want all of them at a given time you can create a top level module that lists your other modules as dependencies so they get installed when you install the top level module. Unless you have hundreds to thousands of cmdlets to export as part of a single module, you shouldn't need to worry about performance problems.
For each module you have on the PSModulePath, you can use wildcards in the CmdletsToExport and FunctionsToExport sections of your manifest for auto-complete to work, if you don't want to export them name by name.
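For what it's worth, here is a minimal sketch of that combination, using placeholder names (MyTools, *-Something): the psm1 dot-sources every ps1 file next to it, and the manifest exports by wildcard as suggested above.
# MyTools.psm1 - load every function file that sits next to this module file
foreach ($file in Get-ChildItem -Path $PSScriptRoot -Filter *.ps1) {
    . $file.FullName
}

# MyTools.psd1 - export by wildcard instead of listing every name
@{
    ModuleVersion     = '1.0'
    RootModule        = 'MyTools.psm1'
    FunctionsToExport = '*-Something'   # wildcard pattern; adjust to your own naming scheme
}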
Edit: Technically, you can call Set-PSReadLineKeyHandler and Register-ArgumentCompleter yourself; however, you then have the same problem, in that you need to add this to the $profile of each user or machine where you want the autocomplete to work, in addition to shipping your code there in the first place. So it doesn't really solve your issue.
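For reference, such a registration call would look something like this (the command name, parameter name, and completion values here are hypothetical):
Register-ArgumentCompleter -CommandName Get-Something -ParameterName Name -ScriptBlock {
    param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameters)
    # return completion results that match what has been typed so far
    'Alpha', 'Beta' | Where-Object { $_ -like "$wordToComplete*" } |
        ForEach-Object { [System.Management.Automation.CompletionResult]::new($_) }
}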

Related

Classes loaded as dependency in imported module are not imported into script

I have some PowerShell modules and scripts that look something like this:
MyClassModule.psm1
class MyClass {
#...
}
MyUtilityModule.psm1
using module MyClassModule
# Some code here
MyScript.ps1
using module MyUtilityModule
[MyClass]::new()
My questions are:
Why does this script fail if the class is imported as a dependency in MyUtilityModule?
Are imports always only local to the direct script they are imported into?
Is there any way to make them global for a using statement?
If it makes any difference, I need to maintain backwards compatibility with at least PowerShell version 5.1.
This is all about scope. The user experience is directly influenced by how one loads a class. We have two ways to do so:
Import-Module
Is the command that allows loading the contents of a module into the session.
...
It must be called before the function you want to call that is located in that specific module, but not necessarily at the beginning/top of your script.
using module
...
The using statement must be located at the very top of your script. It also must be the very first statement of your script (except for comments). This makes loading the module 'conditionally' impossible.
Command type  | Can be called anywhere in script | Internal functions | Public functions | Enums | Classes
Import-Module | Yes                              | No                 | Yes              | No    | No
using module  | No                               | No                 | Yes              | Yes   | Yes
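In practice, for the layout in the question, this means the script has to reference the class-defining module directly, because the class does not flow through transitively. A minimal sketch, assuming the file names from the question:
# MyScript.ps1
using module MyClassModule      # the class only becomes visible through a direct 'using module'
using module MyUtilityModule    # still gives access to the utility functions

[MyClass]::new()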
See these references for further details
How to write Powershell modules with classes
https://stephanevg.github.io/powershell/class/module/DATA-How-To-Write-powershell-Modules-with-classes
What is this Module Scope in PowerShell that you Speak of?
https://mikefrobbins.com/2017/06/08/what-is-this-module-scope-in-powershell-that-you-speak-of

Why does an automation framework require a proxy function for every PowerShell cmdlet?

In my new project team, they have written a proxy function for each PowerShell cmdlet. When I asked the reason for this practice, they said it is the normal way an automation framework is written. They also said that if a PowerShell cmdlet changes, we don't need to worry; we can just change the one function.
I have never seen a PowerShell cmdlet's functionality or name change.
For example, the SQL PowerShell module previously shipped as a snap-in and later changed to a module, but the cmdlets are still the same. There was no change to the cmdlet signatures; at most, extra arguments may have been added.
Because of these proxy functions, even small tasks take a long time. Is their fear baseless or justified? Is there any incident where a PowerShell cmdlet's name or parameters changed?
I guess they want to be extra safe. PowerShell does have breaking changes here and there, but I doubt that what your team is doing would be impacted by them, given how rare those events are. For instance, my several-years-old scripts continue to function properly to the present day (and they were mostly developed against PS 2-3).
I would say that this is overengineering, but I can't really blame them for it.
4c74356b41 makes some good points, but I wonder if there's a simpler approach.
Bear with me while I restate the situation, just to ensure I understand it.
My understanding of the issue is that usage of a certain cmdlet may be strewn about the code base of your automation framework.
One day, in a new release of PowerShell or that module, the implementation changes; could be internal only, could be parameters (signature) or even cmdlet name that changes.
The problem then, is you would have to change the implementation all throughout your code.
So with proxy functions, you don't prevent this issue; a breaking change will break your framework, but the idea is that fixing it would be simpler because you can fix up your own proxy function implementation, in one place, and then all of the code will be fixed.
Other Options
Because of the way command discovery works in PowerShell, you can override existing commands by defining functions or aliases with the same name.
So for example let's say that Get-Service had a breaking change and you used it all over (no proxy functions).
Instead of changing all your code, you can define your own Get-Service function, and the code will use that instead. It's basically the same thing you're doing now, except you don't have to implement hundreds of "empty" proxy functions.
For better naming, you can call your function Get-FrameworkService (or something) and then just define an alias from Get-Service to Get-FrameworkService. It's a bit easier to test that way.
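As a rough sketch of that idea (Get-FrameworkService is the hypothetical name from above, and the body is only a placeholder, not a fix for any particular breaking change):
function Get-FrameworkService {
    param([string[]] $Name = '*')
    # absorb any signature change here, then call the real cmdlet module-qualified so the alias isn't hit again
    Microsoft.PowerShell.Management\Get-Service -Name $Name
}
Set-Alias -Name Get-Service -Value Get-FrameworkService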
One disadvantage with this is that reading the code could be unclear, because when you see Get-Service somewhere it's not immediately obvious that it could have been overwritten, which makes it a bit less straightforward if you really wanted to call the current original version.
For that, I recommend importing all of the modules you'll be using with -Prefix and then making all (potentially) overridable calls use the prefix, so there's a clear demarcation.
This even works with a lot of the "built-in" commands, so you could re-import the module with a prefix:
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
TL;DR
So the short answer:
- avoid making lots and lots of pass-thru proxy functions
- import all modules with a prefix
- when needed, create a new function to override the functionality of another
- then add an alias for prefixed_name -> override_function
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
Compare-OverridableObject $a $b
No need for a proxy here; later when you want to override it:
function Compare-CanonicalObject { <# Stuff #> }
New-Alias Compare-OverridableObject Compare-CanonicalObject
Anywhere in the code that you see a direct call like:
Compare-Object $c $d
Then you know: either this intentionally calls the current implementation of that command (which in other places could be overridden), or this command should never be overridden.
Advantages:
- Clarity: looking at the code tells you whether an override could exist.
- Testability: writing tests is clearer and easier for overridden commands because they have their own unique name.
- Discoverability: all overridden commands can be discovered by searching for aliases with the right name pattern, i.e. Get-Alias *-Overridable*.
- Much less code.
- All overrides and their aliases can be packaged into modules (a sketch follows below).
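A minimal sketch of such a packaging module, with illustrative names (Overrides.psm1, Compare-CanonicalObject):
# Overrides.psm1
function Compare-CanonicalObject { <# adjusted implementation goes here #> }
New-Alias -Name Compare-OverridableObject -Value Compare-CanonicalObject
Export-ModuleMember -Function Compare-CanonicalObject -Alias Compare-OverridableObject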

PowerShell naming conventions/scoping

I want to create a PowerShell function/cmdlet which installs (and one that uninstalls) a web application: copies files, creates an app pool, creates the web application, sets up all kinds of IIS properties, does some web.config modifications, etc. I'm confused about how I should name it. PowerShell has this verb-noun naming convention, and it's all nice, but the names I want to use (New-WebApplication, etc.) are already taken by the WebAdministration module (which this new function will use internally). Is there a nice way to scope my functions to make it clear that they belong to a different module? Like mymodule.New-WebApplication, My-New-WebApplication, or New-MyWebApplication? Or I could call it Install-WebApplication, but that could lead to confusion because of reusing the same name.
I just ran into this recently with a similar issue. This could have many opinionated answers, but this addresses the part about scoping your functions to make it clear that they belong to a different module.
You could use the -Prefix parameter of Import-Module
Import-Module mymodule -Prefix Super
So when you go to use your cmdlet you would call it with
New-SuperWebApplication
Alternatively, you can also explicitly call the cmdlet with the module path
mymodule\New-WebApplication
I agree with Matt's answer, but I wanted to offer another perspective.
I wrote a module where the intention was specifically to recreate the functionality of an existing cmdlet. I named my function differently, but I also exported functions from the module that let the caller override the existing cmdlet with mine (using an alias, which is resolved first), and then undo that process again.
This allows someone to explicitly call the function, without needing -Prefix or the \ syntax, by using the new name in new code, but it also allows my function to be used as a drop-in replacement for existing code by calling a single new command.
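That pattern looks roughly like the following sketch (the function and cmdlet names here are placeholders, not the actual DnsCmdletFixes code):
function Enable-MyWebApplicationOverride {
    # alias the built-in name to my implementation; aliases are resolved before cmdlets
    Set-Alias -Name New-WebApplication -Value New-MyWebApplication -Scope Global
}
function Disable-MyWebApplicationOverride {
    # undo the override and fall back to the original cmdlet
    Remove-Item -Path Alias:\New-WebApplication -ErrorAction SilentlyContinue
}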
Here's that module if you want to take a look:
DnsCmdletFixes

How to protect a PowerShell file, and call a single function

I've been having this problem for a while now, and Google has its limits.
I'm writing a PowerShell file that contains several generic functions.
I use the functions in various scripts, and now I want to let other personnel at my work use them as well.
The problem is that, due to sensitive operations, I want to lock and protect the script (compile it to a DLL, EXE, etc.).
How do I create a PowerShell library like a C# DLL?
One option I tried, but could not figure out how to take further, was to compile the script to an executable (.exe) using PowerGUI, but then I cannot access the functions in it, let alone pass parameters to them.
Hope you understood me :)
Thank you.
You don't. Rather than trying to obscure this information (if you compile the scripts, they can be decompiled and your "protected" resources will no longer be protected), remove the sensitive values entirely and make them parameters of your functions. This both protects your sensitive data and makes the code much more reusable.
You can then package your functions into a module.
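For example, instead of embedding a credential or server name in the script, let the caller supply it; a minimal sketch (the function name and parameters are made up for illustration):
function Invoke-SensitiveOperation {
    param(
        [Parameter(Mandatory = $true)]
        [pscredential] $Credential,   # the caller supplies the secret; nothing sensitive ships with the module
        [Parameter(Mandatory = $true)]
        [string] $Target
    )
    # ... perform the operation against $Target using $Credential ...
}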

How can I have a parent script but maintain separation of concerns with PowerShell scripts?

So I am working on some IIS management scripts for a specific IIS site setup, exclusive to a product, to complete tasks such as:
- Create Site
- Create App Pool
- Create Virtual directories
The problem is that I would like to keep separate scripts for each concern and reference them in a parent script. The parent script could be run to do a full deployment/setup, or you could run the individual scripts for a specific task. The problem is that they are interactive, so they will prompt the user for the information relevant to completing the task.
I am not sure how to approach the problem where each script has a script body that acquires information from the user, yet, when it is loaded into the parent script, that specific script's body should not prompt the user.
NOTE: I know I could put them into modules and fire off the individual exported functions, but this script is going to be moved around to whatever environment needs setup, and having to manually put module (psm1) files into the proper PowerShell module folders just to run the scripts is a route I am not particularly fond of.
I am new to scripting with PowerShell; any thoughts or recommendations?
Possible answer
This might be a solution: I found I could Import-Module from the working directory and from there have access to those exported functions.
I am interested in any other suggestions as well.
The way I would address it would be to implement a param block at the top of each sub-script that collects the information it needs to run. If a sub-script is run individually, the param block prompts the user for the data needed to run that individual script. This also allows the parent script to pass the data needed to run the sub-scripts as it calls them. The data needed can be hard-coded in the parent script, prompted for, or some mixture thereof. That way you can make the sub-scripts run either silently or with user interaction; you get the user interaction for free from PowerShell's parameter-handling mechanism. In the sub-scripts, add a parameter attribute to indicate that PowerShell should request those particular parameter values from the user if they are not already provided by the calling script.
At the top of your sub-scripts, use a param block to collect the needed data:
param
(
    [Parameter(Mandatory=$true, HelpMessage="This is required, please enter a value.")]
    [string] $SomeParameter
)
You can have a deploy.ps1 script which dot sources the individual scripts and then calls the necessary functions within them:
. $scriptDir\create-site.ps1
. $scriptDir\create-apppool.ps1
. $scriptDir\create-virtualdirectories.ps1
Prompt-Values
Create-Site -site test
Create-AppPool -pool test
Create-VirtualDirectories -vd test
In the individual functions, you can check whether the needed values were passed in from the caller (deploy.ps1 or the command line).
For example, create-site.ps1 will be like:
function Create-Site($site){
    if(-not $site){
        Prompt-Values
    }
}
The ideal is to make the module take care of storing its own settings, depending on the distribution concerns of that module, and to provide commands to help work with those settings.
I.e.
Write a Set-SiteInfo -Name -Pool -VirtualDirectory command and have it store the values in the registry or in the local directory of the module ($psScriptRoot), then have the other commands in the module use this.
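A rough sketch of the file-based variant (the function names, parameters, and JSON storage format here are my own illustration, not part of the original answer); this would live inside the module's psm1:
$script:SettingsPath = Join-Path $PSScriptRoot 'siteinfo.json'

function Set-SiteInfo {
    param([string] $Name, [string] $Pool, [string] $VirtualDirectory)
    # persist the settings next to the module so other commands can pick them up
    [pscustomobject]@{ Name = $Name; Pool = $Pool; VirtualDirectory = $VirtualDirectory } |
        ConvertTo-Json | Set-Content -Path $script:SettingsPath
}

function Get-SiteInfo {
    # returns nothing if Set-SiteInfo has never been run
    if (Test-Path $script:SettingsPath) {
        Get-Content -Path $script:SettingsPath -Raw | ConvertFrom-Json
    }
}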
If the module is being put in a location where there's no file write access for low-rights users (i.e. a web site directory, or $psHome), then it's a notch better to store the values in the registry.
Hope this helps.