How to manage dependencies between scripts in a multi-file PowerShell module? - powershell

I have a large amount of PowerShell code that I've written over the course of a long project; these scripts perform a wide variety of functions, and most of them depend in some way on others within the scope of the project. Right now, the work is made up of a couple of files containing many functions each. Originally, in order to work with these scripts, all the script files were haphazardly dot-sourced into the environment.
However, I've learned recently that PowerShell 2.0 introduces modules, and I would like to deploy these scripts together that way. Since a module's contents are all loaded together, I would like to split my files apart so that each script has its own file, in order to aid source control. However, I'm a little unclear about the connections between the scripts now.
I've done some testing, and it seems that it's OK to move the Export-ModuleMember command for each function into the individual .ps1 files; this feels like functions declaring their own visibility, like public and private scoping in C#. However, after doing that, my .psm1 file contains nothing but this:
Get-ChildItem -recurse $psScriptRoot | where { $_.Extension -eq ".ps1" } | foreach { . $_.FullName }
Does that seem right? All the scripts are being dot-sourced, and all the scripts refer to each other under that assumption. Should they instead refer to each other using their locations relative to $psScriptRoot?
Is there a different approach than either of these? Can anyone offer advice? I don't know much about modules yet.

I've seen a similar technique, where each .ps1 file contains one function and the functions are dot-sourced in the .psm1 file, used in the WPK and PSRemoteRegistry modules.
This line is from the PSRemoteRegistry module:
Get-ChildItem -Path $PSScriptRoot\*.ps1 | Foreach-Object{ . $_.FullName }
I prefer this technique over having one giant script file of functions.
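To illustrate the per-file pattern (a minimal sketch; the file and function names are made up), each .ps1 holds one function and declares its own visibility, and the .psm1 just dot-sources everything:
# Get-Widget.ps1 -- one function per file (hypothetical example)
function Get-Widget
{
    [CmdletBinding()]
    param([string]$Name)
    # ...implementation...
}
# Exporting here keeps the public/private decision next to the function;
# omit this line to keep the function private to the module.
Export-ModuleMember -Function Get-Widget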

You could also look at creating a manifest (I don't actually know whether you need a .psm1 alongside a .psd1).
Here was my use of a manifest:
New-ModuleManifest `
    -Path Fiddler.psd1 `
    -Author "Niklas Goude" `
    -CompanyName "http://www.powershell.nu/" `
    -ModuleVersion 1.0 `
    -Description "Module from http://www.powershell.nu/2011/03/14/fiddler/ - psd1 created by Matt @ amonskeysden.tumblr.com" `
    -FormatsToProcess @() `
    -RequiredAssemblies @("Fiddler.dll") `
    -NestedModules @() `
    -Copyright "" `
    -ModuleToProcess "Fiddler.psm1" `
    -TypesToProcess @() `
    -FileList @("Fiddler.psm1","Fiddler.dll")
I think the answer to your question would be to include your file list in the FileList parameter there.
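Once the manifest exists, you can validate and load it (paths assumed relative to the module folder):
Test-ModuleManifest -Path .\Fiddler.psd1   # verifies that the manifest entries resolve
Import-Module .\Fiddler.psd1               # loads the module via its manifest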
I wrote some of my findings (including links to MS resources) here:
http://amonkeysden.tumblr.com/post/5127684898/powershell-and-fiddler

Related

powershell - read all .sql files in a folder and save them all into a single .sql file without changing line ends or line feeds

I manage database servers and often I have to apply scripts into different servers or databases.
Sometimes these scripts are all saved in a directory and need to be opened and run in the target server\database.
As I have been looking at automating this task, I came across Run All PowerShell Scripts In A Directory and also How can I execute a set of .SQL files from within SSMS?, and that is exactly what I needed. However, I stumbled over a few issues:
I don't know the file names, so I can't hard-code them the way the SQLCMD approach below requires:
:setvar path "c:\Path_to_scripts\"
:r $(path)\file1.sql
:r $(path)\file2.sql
I tried to combine all the .sql files into one big one, but when I copied the output from PowerShell into SQL, the lines got mangled in many of the procedures that had long lines:
cls
$Radhe = Get-Content 'D:\apply all scripts to SQLPRODUCTION\*.sql' -Raw
$Radhe.Count
$Radhe.LongLength
$Radhe
If I could read all the files in that specific folder and save them all into a single the_scripts_to_run.sql file, without changing the line endings, that would be perfect.
I don't need to use Get-Content or any command in particular; I just want to get all my scripts into one big single script with everything in it, unchanged.
How can I achieve that?
I even found Merge multiple SQL files into a single SQL file, but I want to get it done via PowerShell.
This should work fine. I'm not sure what you mean by not needing to use Get-Content; you could use [System.IO.File]::ReadAllLines() or [System.IO.File]::ReadAllText() instead, but this should work fine too. Try it and let me know if it works.
$path = "c:\Path_to_scripts"
$scripts = (Get-ChildItem "$path\*.sql" -Recurse -File).FullName
$merged = [system.collections.generic.list[string[]]]::new()
foreach($script in $scripts)
{
$merged.Add((Get-Content $script))
}
$merged | Out-File "$path\mergedscripts.sql"
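For reference, a minimal sketch of the [System.IO.File]::ReadAllText() variant mentioned above (paths are illustrative); ReadAllText preserves each file's original line endings verbatim, which is what the question asks for:
$path = "c:\Path_to_scripts"
$sb = [System.Text.StringBuilder]::new()
foreach ($script in (Get-ChildItem "$path\*.sql" -File).FullName)
{
    # append each file's raw text, line endings untouched
    [void]$sb.Append([System.IO.File]::ReadAllText($script))
}
[System.IO.File]::WriteAllText("$path\the_scripts_to_run.sql", $sb.ToString())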
This is actually much simpler than the proposed solutions. Get-Content takes a list of paths and supports wildcards, so no loop is required.
$path = 'c:\temp\sql'
Set-Content -Path "$path\the_scripts_to_run.sql" -Value (Get-Content -Path "$path\*.sql" -Raw)
Looks like @Santiago and I had the same idea:
Get-ChildItem -Path "$path" -Filter "*.sql" | ForEach-Object -Process {
Get-Content $_.FullName | Out-File $Path\stuff.txt -Append utf8
}

Is it possible to make a search and replace in file-content on multiple network locations?

I need to search for a string and then replace it with another in multiple files. Sounds easy, but the hard part is that it's multiple files on multiple network locations. I've tried connecting to all of the locations at once with VS Code and then using the built-in search and replace function. This almost works, except that when I get to big searches it seems to hang.
I'm now looking for another, more stable way to do this. Anybody got any ideas? I thought PowerShell could be a good candidate, but unfortunately I'm not that used to working with it.
I found this guide and it's a bit like what I want, except I need to do it on multiple files at multiple locations at once.
https://mcpmag.com/articles/2018/08/08/replace-text-with-powershell.aspx
I would settle for running one script per location, since there are fewer than 20 locations to scan. But it needs to include subfolders.
Any tips are appreciated, thanks! :)
Edit 1:
The folder structure differs from location to location, so it's hard to say how it looks. But I can say that no location has a folder structure deeper than 15 levels. The text I'm replacing consists of certificate thumbprints stored in .config files. The files are between 100 and 1000 characters long, and the thumbprints I'm replacing look something like this: d2e8c58e5b34021671f2121483572f03f54ab9ae
This is assuming that the different network locations are in trusted domains or at least part of the WinRM TrustedHosts list. PowerShell remoting will also need to be enabled on all computers involved; run Enable-PSRemoting -Force (in an elevated PowerShell session) to enable it.
$command = {
    Get-ChildItem -Path C:\Test\ -Include *.config -Recurse | ForEach-Object {
        $configContent = Get-Content -Path $_.FullName -Raw
        $configContent.Replace("Old Value", "New Value") | Out-File -FilePath $_.FullName -Force
    }
}
Invoke-Command -ComputerName "TestServer1", "TestServer2", "etc..." -ScriptBlock $command
If you are not part of the domain but have a domain/server login, you will need to use the -Credential parameter of Invoke-Command. This will basically find all files that have the .config extension in any subfolder of the path, get the current content of each .config file, replace your value, and finally overwrite the existing config file. WATCH OUT THOUGH: this will get EVERY .config file in that path. If a file doesn't contain the string, it will just be rewritten unchanged.
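For example, a minimal sketch (server names are placeholders):
$cred = Get-Credential   # prompt for the domain/server login
Invoke-Command -ComputerName "TestServer1", "TestServer2" -Credential $cred -ScriptBlock $command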
Without seeing an example of the folder structures and files, it's quite hard to give a thorough answer. However, I would probably build a series of ForEach segments; see the sketch after the example below. For example:
ForEach ($Server in $Servers)
{
    ForEach ($File in $Files)
    {
        # Select-String only locates the matches; a replacement step would still be needed
        Select-String -Path $File -Pattern "$ExampleString"
    }
}
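Building on that idea, here is a minimal sketch that performs the actual replacement across network shares (the share list and the new thumbprint are placeholders; the old thumbprint is taken from the question):
$locations = '\\server1\share\app', '\\server2\share\app'   # placeholder UNC paths
$oldThumb  = 'd2e8c58e5b34021671f2121483572f03f54ab9ae'
$newThumb  = '0000000000000000000000000000000000000000'      # placeholder new thumbprint
foreach ($location in $locations)
{
    Get-ChildItem -Path $location -Filter *.config -Recurse -File | ForEach-Object {
        $content = Get-Content -Path $_.FullName -Raw
        if ($content -match $oldThumb)
        {
            # only rewrite files that actually contain the old thumbprint
            $content.Replace($oldThumb, $newThumb) | Set-Content -Path $_.FullName -NoNewline
        }
    }
}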

How to Do MSBuild's GetDirectoryNameOfFileAbove in PowerShell?

In MSBuild, there's the GetDirectoryNameOfFileAbove property function.
How do I achieve the same with PowerShell?
It should preferably have compact syntax, because it's what you have to paste into every entry-point script to find its includes.
The idea of this question:
There's a large solution in source code control. Some of its parts are relatively autonomous.
It has a location for shared scripts and reusable functions, at a known folder under the root.
There are numerous entry-point scripts (files which you explicitly execute) scattered around the project, all of them including the shared scripts.
What's the convenient way for locating the shared scripts from the entry-point-script?
Relative paths turn out to work badly because they look like "../../../../../../scripts/smth" and are hard to write and maintain.
Registering modules is not a good option because (a) you're getting this from SCC, not by installing (b) you usually have different versions in different disc locations, all at the same time and (c) this is an excess dependency on the environment when technically just local info is enough.
The MSBuild way of doing this (since v4) is as follows: drop a marker file (say, root.here or whatever), get the absolute path to that folder with GetDirectoryNameOfFileAbove, et voilà, you have the local root to build paths from.
Maybe it's not the right way to go with PowerShell, so I'd be grateful for such directions as well.
You can access the current folder thus:
$invoker = Split-Path -Parent $MyInvocation.MyCommand.Path
So the parent of that one is:
$parent = Split-Path -Parent $MyInvocation.MyCommand.Path | Split-Path -Parent
A quick and dirty solution looks like this:
function GetDirectoryNameOfFileAbove($markerfile)
{
    $result = ""
    $path = $MyInvocation.ScriptName
    while (($path -ne "") -and ($path -ne $null) -and ($result -eq ""))
    {
        if (Test-Path (Join-Path $path $markerfile)) { $result = $path }
        $path = Split-Path $path
    }
    if ($result -eq "") { throw "Could not find marker file $markerfile in parent folders." }
    return $result
}
It could be compacted into a single line for planting into scripts, but it's still too C#-ish, and I think it might be shortened down with some PS pipes/LINQ style magic.
UPD: I've edited the script; it turned out that $MyInvocation.MyCommand.Path is often NULL when the script is called from the command line with dot-sourcing (at any context level), so the current hypothesis is $MyInvocation.ScriptName.
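For reference, a rough compaction of the same marker-file walk (behavior under dot-sourcing not exhaustively tested; 'root.here' is the marker name from above):
# walk up from the calling script until a directory containing the marker file is found
$root = $MyInvocation.ScriptName
while ($root -and -not (Test-Path (Join-Path $root 'root.here'))) { $root = Split-Path $root }
if (-not $root) { throw "Could not find marker file in parent folders." }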

Clean up a remote machine's temporary directory: How do I identify files that are in use? (Powershell)

I have a number of remote machines whose temporary directories get full. (They are Selenium/WebDriver grid remotes.) I have a PowerShell script that identifies the files and directories that need to be cleaned. The command in use looks something like this (excluding the complexities of the various machines and directories):
gci $env:TEMP -Recurse | Remove-Item -ErrorAction Continue -Recurse
The problem is that this takes far too long when some files are in use. Locally, I could join against the output of handle.exe (parsing would be a little ugly), but that would be more complicated on a remote machine. Among other things, I'd need to verify that WinRM was configured correctly, that handle.exe was on the PATH, etc.
Is there a simpler way to identify that a file is in use? Ideally one that can be filtered on via Powershell (which includes .NET). I'm familiar with a variety of other scripting languages (ruby, python, perl).
The best tool I've found for listing open files is the SysInternals tool handle.exe, e.g.:
$openFiles = @(handle $env:TEMP | Foreach {($_ -split ": ")[3]} | Select -Unique)
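If you'd rather avoid an external tool, one pure .NET approach is to try opening each file exclusively and treat any failure as "in use". A minimal sketch (Test-FileLocked is a made-up helper name):
function Test-FileLocked([string]$Path)
{
    try
    {
        # opening with FileShare 'None' fails if any other process holds the file
        $fs = [System.IO.File]::Open($Path, 'Open', 'ReadWrite', 'None')
        $fs.Close()
        return $false
    }
    catch
    {
        return $true   # treat any failure to open as 'in use'
    }
}

# delete only the files that are not currently in use
Get-ChildItem $env:TEMP -Recurse -File |
    Where-Object { -not (Test-FileLocked $_.FullName) } |
    Remove-Item -ErrorAction Continue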

How to get the current directory of the cmdlet being executed

This should be a simple task, but I have seen several attempts to get the path to the directory where the executed cmdlet is located, with mixed success. For instance, when I execute C:\temp\myscripts\mycmdlet.ps1, which has a settings file at C:\temp\myscripts\settings.xml, I would like to be able to store C:\temp\myscripts in a variable within mycmdlet.ps1.
This is one solution which works (although a bit cumbersome):
$invocation = (Get-Variable MyInvocation).Value
$directorypath = Split-Path $invocation.MyCommand.Path
$settingspath = $directorypath + '\settings.xml'
Another answer suggested this solution, which only works in our test environment:
$settingspath = '.\settings.xml'
I like the latter approach a lot and prefer it to having to pass the file path as a parameter each time, but I can't get it to work in my development environment. What should I do? Does it have something to do with how PowerShell is configured?
Yes, that should work. But if you need to see the absolute path, this is all you need:
(Get-Item .).FullName
The reliable way to do this is, just like you showed, $MyInvocation.MyCommand.Path.
Relative paths are resolved against $pwd in PowerShell, against the current directory for an application, or against the current working directory for a .NET API.
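A minimal demo of the difference (the path is hypothetical; save it as a script and run it from a different working directory):
# save as C:\temp\myscripts\whereami.ps1 (hypothetical), then run it from elsewhere
"Script directory : $PSScriptRoot"   # where the script file lives (PowerShell 3.0+)
"Current location : $pwd"            # where the shell is currently sitting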
PowerShell v3+:
Use the automatic variable $PSScriptRoot.
The easiest method seems to be to use the following predefined variable:
$PSScriptRoot
about_Automatic_Variables and about_Scripts both state:
In PowerShell 2.0, this variable is valid only in script modules (.psm1). Beginning in PowerShell 3.0, it is valid in all scripts.
I use it like this:
$MyFileName = "data.txt"
$filebase = Join-Path $PSScriptRoot $MyFileName
You can also use:
(Resolve-Path .\).Path
The part in brackets returns a PathInfo object.
(Available since PowerShell 2.0.)
Try:
(Get-Location).path
or:
($pwd).path
$MyInvocation.MyCommand.Path is often null. This function is safer.
function Get-ScriptDirectory
{
    $Invocation = (Get-Variable MyInvocation -Scope 1).Value
    if ($Invocation.PSScriptRoot)
    {
        $Invocation.PSScriptRoot
    }
    elseif ($Invocation.MyCommand.Path)
    {
        Split-Path $Invocation.MyCommand.Path
    }
    else
    {
        $Invocation.InvocationName.Substring(0, $Invocation.InvocationName.LastIndexOf("\"))
    }
}
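Typical usage from inside a script, tying back to the settings.xml example above:
$scriptDir    = Get-ScriptDirectory
$settingspath = Join-Path $scriptDir 'settings.xml'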
Get-Location will return the current location:
$Currentlocation = Get-Location
I like the one-line solution :)
$scriptDir = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent
Try this:
$WorkingDir = Convert-Path .
In PowerShell 3 and above you can simply use
$PSScriptRoot
If you just need the name of the current directory, you could do something like this:
((Get-Location) | Get-Item).Name
Assuming you are working from C:\Temp\Location\MyWorkingDirectory>
Output
MyWorkingDirectory
Most answers don't work when debugging in the following IDEs:
PS-ISE (PowerShell ISE)
VS Code (Visual Studio Code)
Because in those, $PSScriptRoot is empty and Resolve-Path .\ (and similar) will result in incorrect paths.
Freakydinde's answer is the only one that resolves those situations, so I up-voted it, but I don't think the Set-Location in that answer is really what is desired. So I fixed that and made the code a little clearer:
$directorypath = if ($PSScriptRoot) { $PSScriptRoot } `
elseif ($psise) { split-path $psise.CurrentFile.FullPath } `
elseif ($psEditor) { split-path $psEditor.GetEditorContext().CurrentFile.Path }
For what it's worth, as a single-line solution, the below works for me.
$currFolderName = (Get-Location).Path.Substring((Get-Location).Path.LastIndexOf("\")+1)
The +1 at the end skips past the final backslash, so only the folder name remains.
Thanks to the posts above that use the Get-Location cmdlet.
This function will set the prompt location to the script path, handling the different ways of getting the script path across VS Code, the PowerShell ISE, and the plain console:
function Set-CurrentLocation
{
    $currentPath = $PSScriptRoot                                                                                                 # Azure DevOps, PowerShell
    if (!$currentPath) { $currentPath = Split-Path $psEditor.GetEditorContext().CurrentFile.Path -ErrorAction SilentlyContinue } # VS Code
    if (!$currentPath) { $currentPath = Split-Path $psISE.CurrentFile.FullPath -ErrorAction SilentlyContinue }                   # PowerShell ISE
    if ($currentPath) { Set-Location $currentPath }
}
You would think that using '.\' as the path means it's the invocation path. But not always: for example, if you use it inside a job ScriptBlock, it might point to %USERPROFILE%\Documents.
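A quick way to see this for yourself (a minimal demo):
# inside a background job the working directory is not inherited from the caller's prompt;
# on Windows it typically defaults to the user's home/Documents folder
Start-Job -ScriptBlock { (Get-Location).Path } | Receive-Job -Wait -AutoRemoveJob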
This is what I came up with. It's an array of multiple methods of finding a path, falling back to the current location; it filters out null/empty results and returns the first non-null value.
@((
    ($MyInvocation.MyCommand.Module.ModuleBase),
    ($PSScriptRoot),
    (Split-Path -Parent -Path $MyInvocation.MyCommand.Definition -ErrorAction SilentlyContinue),
    (Get-Location | Select-Object -ExpandProperty Path)
) | Where-Object { $_ })[0]
To only get the current folder name, you can also use:
(Split-Path -Path (Get-Location) -Leaf)
To expand on @Cradle's answer: you could also write a multi-purpose function that will get you the same result per the OP's question:
Function Get-AbsolutePath {
    [CmdletBinding()]
    Param(
        [Parameter(
            Mandatory=$false,
            ValueFromPipeline=$true
        )]
        [String]$relativePath = ".\"
    )
    if (Test-Path -Path $relativePath) {
        return (Get-Item -Path $relativePath).FullName -replace "\\$", ""
    } else {
        Write-Error -Message "'$relativePath' is not a valid path" -ErrorId 1 -ErrorAction Stop
    }
}
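Brief usage examples:
Get-AbsolutePath                        # absolute path of the current directory
Get-AbsolutePath "..\scripts"           # resolve a relative path (the path must exist)
".\settings.xml" | Get-AbsolutePath     # pipeline input works too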
I had similar problems, and they caused me a lot of trouble, since I make programs written in PowerShell (full end-user GUI applications) and have a lot of files and resources I need to load from disk.
From my experience, using . to represent the current directory is unreliable. It should represent the current working directory, but it often does not.
PowerShell appears to save the location from which it was invoked inside ..
To be more precise, when PowerShell is first started, it starts, by default, inside your home user directory. That is usually your user account's directory, something like C:\Users\YourUserName.
After that, PowerShell changes directory either to the directory from which you invoked it, or to the directory where the script you are executing is located, before presenting you with the PowerShell prompt or running the script. But that happens after the PowerShell application itself originally starts inside your home user directory.
And . represents that initial directory in which PowerShell started. So . only represents the current directory if you invoked PowerShell from the wanted directory. If you later change directory in PowerShell code, the change does not appear to be reflected in . in every case.
In some cases . represents the current working directory, and in others the directory from which PowerShell (itself, not the script) was invoked, which can lead to inconsistent results.
For this reason I use an invoker script: a PowerShell script with a single command inside:
powershell
That ensures PowerShell is invoked from the wanted directory and thus makes . represent the current directory. But it only works if you do not change directory later in the PowerShell code.
In the case of a script, I use an invoker script similar to the last one I mentioned, except it contains the -File option:
powershell -File "Drive:\Path\Script Name.ps1"
That ensures PowerShell is started inside the current working directory.
Simply clicking a script invokes PowerShell from your home user directory, no matter where the script is located.
That results in the current working directory being the directory where the script is located, but the PowerShell invocation directory being C:\Users\YourUserName, with . returning one of these two directories depending on the situation, which is absurd.
But to avoid all this fuss with invoker scripts, you can simply use either $PWD or $PSScriptRoot instead of . to represent the current directory, depending on whether you want the current working directory ($PWD) or the directory where the script itself is located ($PSScriptRoot).
And if you, for some reason, want to retrieve the other of the two directories that . returns, you can use $HOME.
I personally just keep an invoker script inside the root directory of the apps I develop with PowerShell, which invokes my main app script, and I simply remember never to change the current working directory in my app's source code. That way I never have to worry about this, and I can use . to represent the current directory and support relative file addressing in my applications without any problems.
This should work in newer versions of PowerShell (newer than version 2).