I want to make my life easier when writing scripts. I'm starting a little framework that will have a hierarchy of include files. The problem is that dot-sourcing a .ps1 script that itself dot-sources other files breaks the relative paths used in the original calling script.
It looks like this:
\config\loadvariables.ps1:
    $var = "shpc0001"

\config\config.ps1:
    . '.\loadvariables.ps1'

\test.ps1:
    . '.\config\config.ps1'
    echo $var
The problem is that test.ps1 tries to load loadvariables.ps1 as if it were located beside test.ps1, because the relative path in config.ps1 is resolved against the caller's current directory rather than against config.ps1's own location.
How can I solve this?
The easiest way to manage a collection of scripts with inter-dependencies is to convert them to modules. This feature is only available in PowerShell 2.0 and later, but it allows you to separate a group of scripts into independent components with declared dependencies.
Here is a link to a tutorial on getting modules up and running:
https://learn.microsoft.com/en-us/powershell/scripting/developer/module/how-to-write-a-powershell-script-module
As Jared said, modules are the way to go. But since you may dot-source even inside your modules, it is best to use full paths (which can still be calculated at run time), like so:
## Inside modules, you can refer to the module's location like so
. "$PSScriptRoot\loadvariables.ps1"
## Outside a module, you can do this
$ScriptRoot = Split-Path $MyInvocation.MyCommand.Path
. "$ScriptRoot\loadvariables.ps1"
I just published my first Perl program, unifdef+ (code::unifdefplus, v0.5.3), but I'm not sure I've done it properly. The program is broken into two parts: a script (script/unifdef+.pl) and a module (lib/unifdefplus.pm). The script is basically a wrapper for the module. It is supposed to act as a command-line utility (which is, in reality, what I wanted to publish).
The README file I included documents the script, not the module. CPAN also seems to be taking the version from the module rather than from the script (whose version is undefined at the moment).
So, my question is: if I want this to be indexed as a script rather than a module, do I need to do anything differently? Also, I take it I should write some documentation for the module as well; in which case, I'm assuming it should be a README file in the lib directory?
Again, I apologize, but this is the first time I've done this, and I want to make sure I've done it right.
Right off the bat, please read On the naming of modules from the PAUSE admins. If you still have questions, or you're still unsure, reach out to modules <at> perl.org.
The simplest way is to use a name in the App:: namespace, such as App::MyMod.
Typically, I'd keep the script and module documentation in their separate files, but near the top of the module documentation, clearly link to the script's documentation and state that most users will want to read that for normal use.
To build the README from the script documentation:
pod2readme bin/my_script
Likewise, if you change your mind and want README to reference the module instead:
pod2readme lib/App/MyMod.pm
Assuming you're using ExtUtils::MakeMaker for your builds, you can ensure that the script is installed by adding a directive:
EXE_FILES => [
    'bin/my_script',
],
with your script, of course, in the top-level bin directory of your distribution. Other build systems have similar directives.
We've accumulated a bunch of scripts, each of which looks and feels like a cmdlet: it has a set of declared params and then immediately calls a Main function that does the work, calling private sub-functions within.
An example is Remove-ContentLine.ps1, which just spits out the contents of a file or piped input, except for lines matching some pattern.
So they're like little "function-scripts".
Is there any way I can aggregate these scripts into a module while also keeping them exactly as they are in files?
Edit
If your hunch is that it's easier to just copy, paste, and refactor them into a .psm1, then just say so ;)
You ask:
Is there any way I can aggregate these scripts into a module while
also keeping them exactly as they are in files?
But I am certain that is not what you really want. If it were, all of your code would immediately execute when you load the module! Rather, I think what you want is that each of your scripts should be contained within a function; that group of functions is then loaded when you import the module; and you can then execute any of your functions on demand.
The process is very straightforward, and I have written an extensive article on just how to do that (Further Down the Rabbit Hole: PowerShell Modules and Encapsulation) but I will summarize here:
(1) Edit each file to wrap the entire contents into a function and conclude with exporting the function. I would suggest name the function based on the file name. Thus, Remove-ContentLine.ps1 should now look like this:
function Remove-ContentLine {
    # original content of Remove-ContentLine.ps1 here
}
Export-ModuleMember Remove-ContentLine
(2) Decide on a name for your module and create a directory of that name. Let's call it MyModule. Within the MyModule directory, create a subdirectory to place all your .ps1 files; let's call that ScriptCmdlets.
(3) Create a module file MyModule.psm1 within MyModule whose contents will be exactly this:
Resolve-Path $PSScriptRoot\ScriptCmdlets\*.ps1 |
? { -not ($_.ProviderPath.Contains(".Tests.")) } |
% { . $_.ProviderPath }
Yes, every module (.psm1) file I write contains that identical code!
(4) Create a module manifest MyModule.psd1 within MyModule using the New-ModuleManifest cmdlet.
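For example, a minimal sketch with placeholder values (on PowerShell 2.0 the parameter is -ModuleToProcess rather than -RootModule):

New-ModuleManifest -Path .\MyModule\MyModule.psd1 -RootModule MyModule.psm1 -ModuleVersion '1.0.0'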
Then to use your module, just use Import-Module. But I urge you to review my article for more details to gain a better understanding of the process.
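Putting it together, assuming the MyModule directory sits somewhere under $env:PSModulePath (the Remove-ContentLine parameters here are hypothetical):

Import-Module MyModule
Remove-ContentLine -Path .\input.txt -Pattern 'foo'   # hypothetical parameters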
I doubt you can if the scripts already execute something (a "main"). If they just expose a function, like Remove-ContentLine for Remove-ContentLine.ps1, you could dot-source all the scripts in a single script to aggregate them, or use the ScriptsToProcess = @() section when working with a module manifest.
I think it would be best to refactor the functions from within each .ps1 into a proper module. It should essentially just be copy/pasting the scripts into a single .psm1 file and creating a .psd1 for it. Be sure to check for and properly handle anything that is set in the script or global scopes, and make sure there are no naming conflicts between functions.
If you have Sapien PowerShell Studio, there is a 'New Module from Functions' option in the File menu which would help automate the bulk of this for you.
Dumb question: I was wondering whether I can put a variable in my dot-source path when pulling in a function. I have a couple of scripts that I add to others to ensure that I have common variables and so forth, but I always assume someone is going to put them on the C: drive. How can I make sure that if they put them on F: or D:, PowerShell will still be able to find them? For example...
. C:\CI\scripts\variables.ps1
Function StopOrStartServices {
    Param (
        $ServiceName,
        $Remoteserver,
        $StopOrStart
    )
If I change the above lines to the following...
Function StopOrStartServices {
    Param (
        $ServiceName,
        $Remoteserver,
        $StopOrStart,
        $baseDir
    )
    . $baseDir\CI\scripts\variables.ps1
Will that still work?
From my understanding, you have to have your dot-source as one of the first lines in your script? Or am I confusing that with something else?
Yes, PowerShell will expand that variable for you before calling the .ps1 script. The only restriction I can think of that is similar to what you are referring to is the need to dot-source a script before using its contents.
Since PowerShell is an interpreted (not compiled) language, it runs from the top down, and you need to dot-source any includes or import any modules before invoking what they define.
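For illustration, a call might then look like this (the service and server names are invented; $baseDir just needs to point at the drive or folder that contains CI\scripts):

StopOrStartServices -ServiceName 'Spooler' -Remoteserver 'server01' -StopOrStart 'Stop' -baseDir 'F:'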
What is the best and correct way to run a PowerShell script from another one?
I have a script a.ps1 from which I want to call b.ps1 which does different task.
Let me know your suggestions. Is dot-sourcing the best option here?
Dot-sourcing will run the second script as if it were part of the caller: all script-scope changes will affect the caller. If this is what you want, then dot-source.
However, it is more usual to call the other script as if it were a function (a script can use param and function-level attributes just like a function can). In many ways, a script is a PowerShell function, with the name of the file replacing the name of the function.
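To illustrate the difference, here is a minimal sketch; b.ps1's param block is invented for the example:

## b.ps1
param($Name)
$greeting = "Hello, $Name"
$greeting

## a.ps1 -- call b.ps1 as a command: $greeting stays in b.ps1's scope
& "$PSScriptRoot\b.ps1" -Name 'World'

## a.ps1 -- dot-source b.ps1: $greeting is now visible here as well
. "$PSScriptRoot\b.ps1" -Name 'World'
$greeting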
Dot-sourcing also makes it easier to convert your script(s) into a module at a later stage; you won't have to change the script(s) into functions.
Another advantage of dot-sourcing is that you can add the functions to your shell by adding the file that holds them to Microsoft.PowerShell_profile.ps1, meaning you have them available at all times (eliminating the need to worry about paths, etc.).
I have a short Write-Host at the top of each dot-sourced file with the name of the function and its common parameters, and I dot-source the functions in my profile. Each time I open PowerShell, the list of functions in my profile scrolls by. (If, like me, you frequently forget the exact names of your functions/files, you'll appreciate this as the number of functions starts to pile up.)
Old but still relevant.
I work with modules using Import-Module, which imports the module into the current PowerShell session.
To avoid stale cached definitions and to always pick up the latest changes to the module, I first run Get-Module | Remove-Module, which removes all loaded modules from the current session.
Get-Module | Remove-Module
Import-Module '.\IIS\Functions.psm1'
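Alternatively, Import-Module's -Force switch re-imports a module even if it is already loaded, which avoids removing every module in the session:

Import-Module '.\IIS\Functions.psm1' -Force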
I am currently creating a Perl module for in-house use. I used ExtUtils::ModuleMaker to generate a build script and skeleton for my Perl module. I would like to include the .ini config files that my core modules need to run properly. Where do I put these files so they are installed with my module? What path do I need to use to access these config files from the main module and sub-modules?
P.S. This is the directory layout:
|-lib
|---Main.pm
|---Main
|-----subModule1.pm
|-----subModule2.pm
|-----subModule3.pm
|-scripts
|-t
If you are using Module::Install, you can use Module::Install::Share and File::ShareDir.
If you are using Module::Build, you may want to use its config_data tool and a *::ConfigData module.
Taking a look at the generated Makefile, I would bet the better place to put it is under lib/Main; you can then direct your module to look at ~/.modulerc first, then at PERLLIB/Main/modulerc.ini, or something like that.
You could also embed the defaults in your module in a way that, in absence of ~/.modulerc, the module works using the default data.
To find the home directory, see File::HomeDir. You'll not want to use ~ (since that's a shell thing anyway).
I would suggest having your module work without the rc file as much as possible. If it doesn't exist, the code should fall back to defaults. This should be true even if the file exists but a particular flag is missing: it should fall back to the default in that case, too.
You may want to look at Config::Any while you're at it. No point reinventing that wheel.