I'm currently learning about PowerShell modules. If you're using a .psd1 manifest file, you have the option to reference .ps1 script files as well as .psm1 script module files. Why do you need both?
I created a module with both, with the .psm1 set as RootModule and the .ps1 set in ScriptsToProcess and I've noted some differences, but I'm not sure what they add up to.
If I add Write-Output statements to both, on import the output is displayed for the .ps1 but suppressed for the .psm1. Write-Warning output is displayed for the .psm1.
If I run Get-Command for the module, functions from the .psm1 are listed with the module name, whereas functions from the .ps1 file are listed with a blank module name.
The section of your manifest in which you place the references to the .ps1 files determines how they are executed.
In your case:
The ScriptsToProcess will execute the listed PowerShell scripts in the caller's environment prior to importing the module. This makes me think of them as prep scripts.
This is because files listed here are not meant to contain functions; each is meant to be a script that runs. If you want additional functions accessible from your module you have a few options (see the manifest sketch after this list):
List them in NestedModules
Include them in your module
List them in the FunctionsToExport section of the manifest. (I have not tried this method, but my understanding is that it will work the way you want regardless of where the function is located.)
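A minimal manifest sketch tying these options together (the module name MyTools and the file names below are assumptions for illustration, not taken from the question):

    # MyTools.psd1 - hypothetical manifest showing where each file type goes
    @{
        ModuleVersion     = '1.0.0'

        # The .psm1 runs in the module's own scope; its functions carry the module name.
        RootModule        = 'MyTools.psm1'

        # These .ps1 files run in the caller's environment *before* the import (prep scripts).
        ScriptsToProcess  = @('Initialize-Environment.ps1')

        # Additional .psm1/.ps1/.dll files imported into the module's session state.
        NestedModules     = @('Helpers.psm1')

        # Controls which functions the module exposes to the caller.
        FunctionsToExport = @('Get-Thing', 'Set-Thing')
    }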
Related
We learn that PowerShell introduced module auto-loading in 3.0:
... PowerShell imports modules automatically the first time that you
run any command in an installed module. You can now use the
commands in a module without any set-up or profile configuration, ...
And this is done via PSModulePath.
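To see which directories are searched, you can inspect the variable directly; for example:

    # List the directories PowerShell scans for modules (Windows uses ';' as the separator).
    $env:PSModulePath -split ';'

    # List every module discoverable in those paths without importing any of them.
    Get-Module -ListAvailable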
What the docs fail to explain is how PowerShell can detect which commands are in a module without first loading the module.
That is, when I use Import-Module, I "know" that PowerShell will execute the code in my .psm1 file, exporting all functions, or whatever I specify with Export-ModuleMember.
However, the auto-load feature has to know beforehand that a certain command is available via a certain module, without actually loading the module.
Since we had some misbehaving third-party modules in PSModulePath, and since we have a few modules that we wrote ourselves that we like to anchor in PSModulePath, I would very much like to understand how the files in PSModulePath are processed.
This is a partial answer.
This is implemented via Get-Command, which appears to be able to "parse" module files without actually executing the PowerShell code in them. See below.
From the PowerShell docs:
Implicitly Importing a Module
... works on any module in a directory that is included in the value
of the PSModulePath environment variable ...
To support automatic importing of modules, the Get-Command cmdlet
gets all cmdlets and functions in all installed modules, even if the
module is not imported into the session. ...
And then:
Get-Command
The Get-Command cmdlet gets all commands that are installed on the
computer ...
Get-Command that uses the exact name of the command, without
wildcard characters, automatically imports the module that contains
the command so that you can use the command immediately. ...
Get-Command gets its data directly from the command code, unlike
...
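A quick way to observe this, assuming some module (here Microsoft.PowerShell.Archive) is installed but not yet imported into the session:

    Get-Module                   # the module does not appear yet
    Get-Command Expand-Archive   # exact name, no wildcards: triggers the auto-import
    Get-Module                   # Microsoft.PowerShell.Archive now shows up in the list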
The docs do not explain how this is implemented, but one could possibly look up how it's done in the source code. (I haven't been able to find it so far, despite browsing the sources for a while.)
Incidentally, I found it mentioned that PowerShell 3 was the first version to expose the AST, so it stands to reason that PowerShell does exactly that: parse the scripts and inspect their AST in some way to determine whether the command is provided.
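As a sketch of that idea (my illustration using the public parser API, not the engine's actual implementation; the path is hypothetical), one can enumerate the functions a .psm1 defines without ever executing it:

    # Parse the file into an AST and list the function names it defines.
    $tokens = $null; $errors = $null
    $ast = [System.Management.Automation.Language.Parser]::ParseFile(
        'C:\Modules\MyModule\MyModule.psm1', [ref]$tokens, [ref]$errors)

    $ast.FindAll({ param($node)
            $node -is [System.Management.Automation.Language.FunctionDefinitionAst]
        }, $true) |
        ForEach-Object Name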
I've written a PowerShell cmdlet in C#.
Where do I copy the library at this point?
And how do I import it into PowerShell so that I can use it?
There are two ways to load your new cmdlet.
Import Cmdlets Using Modules. Here you either put your cmdlet DLL into a system-recognized path that will allow you to load a module with a simple name (e.g. Import-Module MyModule), or you can put it in an arbitrary directory for which you need to specify a complete path (e.g. Import-Module C:\code\MyModule.dll). If you have only a single DLL and no dependencies, you can actually give the DLL as shown. Typically, though, you will also want to create a manifest using New-ModuleManifest (creating, e.g., a MyModule.psd1 file) and then pass that .psd1 file rather than the DLL to Import-Module. (See the sketch after these two options.)
Create a Windows PowerShell Snap-in. This requires writing one additional C# class, quite small, that provides the glue necessary to treat your cmdlet as a snap-in. Then you have to register the snap-in with the installutil program and finally load the snap-in with Add-PSSnapin. (See also How to Register Snap-ins...)
Curiously, almost all articles that talk about writing cmdlets suggest the snap-in approach, but this is simply because that technique has been available since PowerShell version 1, while modules did not come along until version 2. Everything I have read, though, suggests that the snap-in approach is essentially deprecated in favor of the simpler, and more flexible, module approach.
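A rough sketch of the module route for a compiled cmdlet DLL (the paths, the module name, and the Get-Widget cmdlet are placeholders; -RootModule assumes PowerShell 3 or later, where it replaced -ModuleToProcess):

    # Load the DLL directly for quick testing...
    Import-Module C:\code\MyModule\MyModule.dll

    # ...or wrap it in a manifest so it can be imported by name later.
    New-ModuleManifest -Path C:\code\MyModule\MyModule.psd1 `
        -RootModule 'MyModule.dll' `
        -CmdletsToExport 'Get-Widget'

    Import-Module C:\code\MyModule\MyModule.psd1
    Get-Command -Module MyModule    # verify the cmdlet is now available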
Right now I have a collection of .ps1 PowerShell script cmdlets (they can take parameters) that are related to each other, but each is fairly involved. I'd like to organize them into a module, preferably while keeping them in separate files.
What is the best way to do that? Can I keep them in separate .ps1 files and use a module manifest to say they are part of the module? Do I need to dot-source the files into a .psm1 file in order to keep the files separated? Or is it unwise to separate them into separate files?
Ultimately you will need to have at least one .PSM1 file that either contains the variable and function definitions you want to export from your module OR dot sources in those definitions from .PS1 files. By default, variables are not exported while all functions are exported. If you want to modify that behavior, then use Export-ModuleMember -Variable MyExportedVariable -Function *-* at the end of the PSM1 file.
If much of the code in your .ps1 files is internal implementation detail, it should be fine to keep it in .ps1 files. Just remember that the .psm1 should export the "public"-facing interface of your module.
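A minimal sketch of such a .psm1 (the file and function names are made up for illustration):

    # MyTools.psm1 - dot-source the individual .ps1 files that sit next to it.
    Get-ChildItem -Path $PSScriptRoot\*.ps1 |
        ForEach-Object { . $_.FullName }

    # Export only the public surface; helper functions defined in the .ps1 files stay private.
    Export-ModuleMember -Function 'Get-Thing', 'Set-Thing'

Note that $PSScriptRoot resolves to the module's folder when the .psm1 is imported.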
I have written all my PowerShell functions in a .ps1 file.
In another .ps1 file it is dot-sourced and the functions are called.
When I looked for better methods, I learned that putting all the functions into a module (.psm1) is the better option.
But a .ps1 file I can simply put in a folder and ship.
With a .psm1 file, it says I have to add it to a particular location so that it can be imported.
How do we provide the .psm1 file to the customer then? Should we instruct them to copy it to the mentioned location before using it (if we don't ship via an MSI)?
Technically you can import .psm1 files by path, but that isn't the best user experience. If you put the file in a folder under either $home\Documents\WindowsPowerShell\Modules or $pshome\Modules, then the user can import it based on just the name of the .psm1 file. Finally, you can put the .psm1 file in any location you want, and if you modify the PSModulePath environment variable to include that directory, PowerShell will search for modules in that directory.
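For example, either of these lets the user run Import-Module MyTools by name (the MyTools module name and the C:\OurCompany\Modules path are assumptions):

    # Option 1: copy the module folder into the per-user module path.
    Copy-Item -Recurse .\MyTools "$home\Documents\WindowsPowerShell\Modules\MyTools"

    # Option 2: extend PSModulePath to include a custom directory (for the current session).
    $env:PSModulePath += ';C:\OurCompany\Modules'

    Import-Module MyTools

The folder name should match the .psm1 name so that the module can be found by name.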
I want to organize functions into multiple .psm1 files and have them loaded by a single module manifest file (.psd1), such that only the .psd1 file would need to have the same name as the module.
I think it should be possible. Can anyone help me out, please?
Launch the PowerShell ISE
Use the New-ModuleManifest command
Follow the instructions here - How to Write a Module Manifest. When asked for nested modules, key in the module as Modulepath\Modulename.psm1
Finally, once the .psd1 file is created, load / import it using Import-Module <<module-name>>
You can load them manually in your main module .psm1 file using Import-Module calls, or by specifying them in the NestedModules key in the manifest file (.psd1).
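A minimal sketch of such a manifest, assuming a layout like MyModule\MyModule.psd1 with the function files in a SubModules subfolder (all names here are placeholders):

    # MyModule.psd1 - only this file needs to carry the module's name.
    @{
        ModuleVersion     = '1.0.0'

        # Each nested .psm1 is loaded when the manifest is imported.
        NestedModules     = @(
            'SubModules\Networking.psm1',
            'SubModules\Storage.psm1'
        )

        # Export everything the nested modules define (or list the names explicitly).
        FunctionsToExport = '*'
    }

Import it with Import-Module .\MyModule\MyModule.psd1, or simply Import-Module MyModule once the folder sits under a PSModulePath location.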