Microsoft's Consistency in PowerShell Cmdlet Parameter Naming

Let's say I wrote a PowerShell script that includes this command:
Get-ChildItem -Recurse
But instead I wrote:
Get-ChildItem -Re
To save time. Suppose that, after some time passed and I upgraded my PowerShell version, Microsoft decided to add a parameter to Get-ChildItem called "-Return" that, for example, returns True or False depending on whether any items are found.
In that hypothetical scenario, do I have to edit all my former scripts to ensure that they will function as expected? I understand Microsoft's attempt to save me typing time, but this is my concern, and therefore I will probably always write the complete parameter name.
Unless of course you know something I don't. Thank you for your insight!

This sounds more like a rant than a question, but to answer:
In that hypothetical scenario, do I have to edit all my former scripts to ensure that they will function as expected?
Yes!
You should always use the full parameter names in scripts (or any other snippet of reusable code).
Automatic resolution of partial parameter names, aliases, and other shortcuts is great for convenience when using PowerShell interactively. It lets us fire up powershell.exe and do:
ls -re *.ps1|% FullName
when we want to find the paths of all scripts in the profile directory. Great for exploration!
But if I were to incorporate that functionality into a script I would do:
Get-ChildItem -Path $Home -Filter *.ps1 -Recurse |Select-Object -ExpandProperty FullName
not just for the reasons you mentioned, but also for consistency and readability: if a colleague who isn't familiar with the shortcuts I'm using comes along, they'll still be able to discern the meaning and expected output of the pipeline.
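To make the risk concrete, here is a minimal sketch (the function and parameter names are hypothetical) of how a prefix that is unambiguous today can start failing after an upgrade:
function Test-Demo { param([switch]$Recurse) "Recurse: $Recurse" }
Test-Demo -Re    # fine today: -Re resolves to -Recurse
# ...until a later version adds a second parameter sharing the prefix:
function Test-Demo { param([switch]$Recurse, [switch]$Return) 'ok' }
Test-Demo -Re    # error: parameter name 'Re' is ambiguous; possible matches: -Recurse, -Return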
Note: There are currently three open issues on GitHub to add warning rules for this in PSScriptAnalyzer - I'm sure the project maintainers would love a hand with this :-)

Related

PowerShell Get-VHD "is not an existing virtual hard disk file"

When creating a new VM in Hyper-V, to keep things organized, I use a particular naming convention for the associated VHDX files. The naming convention is the VM's FQDN, followed by the SCSI controller attachment point, followed by what the drive is called or used for inside of the VM. I encapsulate the SCSI and Name parameters inside smooth and square brackets respectively. I find this tends to make it a little easier, from a human perspective, to match the VHDX files in Hyper-V to what the VM sees internally when needing to do maintenance tasks. It has also helped with scripting in the past. An example file name would look as follows...
servername.example.com(0-0)[OS].vhdx
This has worked well for quite some time, but recently I tried to run some PowerShell commands against the VHDX files and ran across a problem. Apparently the square brackets for the internal VM name are being parsed as RegEx or something inside of the PowerShell cmdlet (I'm honestly just guessing on this). When I try to use Get-VHD on a file with the above naming convention, it spits out an error as follows:
Get-VHD : 'E:\Hyper-V\servername.example.com\Virtual Hard Disks\servername.example.com(0-0)[OS].vhdx' is not an existing virtual hard disk file.
At line:1 char:12
+ $VhdPath | Get-VHD
+ ~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Get-VHD], VirtualizationException
+ FullyQualifiedErrorId : InvalidParameter,Microsoft.Vhd.PowerShell.Cmdlets.GetVHD
If I simply rename the VHDX file to exclude the "[OS]" portion of the naming convention, the command works properly. The smooth brackets for the SCSI attachment point don't seem to bother it. I've tried doing a replace command to add a backtick (`) in front of the brackets to escape them, but the same error results. I've also tried double backticks to see if passing in a backtick helped... that at least showed a single backtick in the error it spat out. Suspecting RegEx, I tried the backslash as an escape character too... which had the interesting effect of converting all the backslashes in the file path into double backslashes in the error message. I tried defining the path variable via single and double quotes without success. I've also tried a couple of different ways of obtaining it via pipeline, such as this example...
((Get-VM $ComputerName).HardDrives | Select -First 1).Path | Get-VHD
And, for what it's worth, given how many VMs I am attempting to process... I need to be able to run this via the pipeline or some other scriptable automation method rather than hand-coding a reference to each VHDX file.
Still thinking it may be something with RegEx, I attempted to escape the variable string with the following to no avail:
$VhdPathEscaped = [System.Text.RegularExpressions.Regex]::Escape($VhdPath)
Quite frankly, I'm out of ideas.
I first ran across this problem when I tried to compact a VHDX file with PowerShell. But, since the single VM I was working with needed to be offline for that function to run anyway, rather than fight the error with the VHDX name, I simply renamed it, compacted it, and set the name back. However, for the work I'm trying to do now, I can't afford to take the VM offline, as this script is going to run against a whole fleet of live VMs. So, I need to know how to properly escape those characters so the Get-VHD cmdlet will accept those file names.
tl;dr:
A design limitation of Get-VHD prevents it from properly recognizing VHD paths that contain [ and ] (see bottom section for details).
Workaround: Use short (8.3) file paths, assuming the file system supports them:
$fso = New-Object -ComObject Scripting.FileSystemObject
# GetFile().ShortPath yields the short (8.3) form of each path, which contains no [ or ]
$VhdPath |
  ForEach-Object { $fso.GetFile((Convert-Path -LiteralPath $_)).ShortPath } |
  Get-VHD
Otherwise (as you report, in your case the VHDs are located on a ReFS file system, which does not support short names), your only options are:
Rename your files (and folders, if applicable) to not contain [ or ].
Alternatively, if you can assume that your VHDs are attached to VMs, you can provide the VM(s) to which the VHD(s) of interest are attached as input to Get-VHD, via Get-VM (you may have to filter the output down to only the VHDs of interest):
(Get-VM $vmName).Id | Get-VHD
Background information:
Get-VHD has only a -Path parameter, not also a -LiteralPath parameter, which looks like a design flaw:
Having both parameters is customary for file-processing cmdlets (e.g. Get-ChildItem):
-Path accepts wildcard expressions to match potentially multiple files by a pattern.
-LiteralPath is used to pass literal (verbatim) paths, to be used as-is.
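For example, with Get-ChildItem (the file name here is hypothetical), the difference plays out like this:
Get-ChildItem -Path 'file[1].txt'        # wildcard: [1] is a character set, so this matches a file named file1.txt
Get-ChildItem -LiteralPath 'file[1].txt' # literal: matches only a file actually named file[1].txt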
What you have is a literal path that happens to look like a wildcard expression, due to the use of the metacharacters [ and ]. In wildcard contexts, these metacharacters must normally be escaped - as `[ and `] - in order to be treated as literals, which the following (regex-based) -replace operation ensures[1] (even with arrays as input).
Unfortunately, this appears not to be enough for Get-VHD. (Though you can verify that it works in principle by piping to Get-Item instead, which also binds to -Path).
Even double `-escaping (-replace '[][]', '``$&') doesn't work (which is unexpectedly required in some cases; see GitHub issue #7999).
# !! SHOULD work, but DOES NOT
# !! Ditto for -replace '[][]', '``$&'
$VhdPath -replace '[][]', '`$&' | Get-VHD
Note: Normally, a robust way to ensure that a cmdlet's -LiteralPath parameter is bound by pipeline input is to pipe the output from Get-ChildItem or Get-Item to it.[2] Given that Get-VHD lacks -LiteralPath, this is not an option, however:
# !! DOES NOT HELP, because Get-VHD has no -LiteralPath parameter.
Get-Item -LiteralPath $VhdPath | Get-VHD
[1] See this regex101.com page for an explanation of the regex ($0 is an alias of $& and refers to the text captured by the match at hand, i.e. either [ or ]). Alternatively, you could pass each path individually to the [WildcardPattern]::Escape() method (e.g., [WildcardPattern]::Escape('a[0].txt') yields a`[0`].txt).
[2] See this answer for the specifics of how this binding, which happens via the provider-supplied .PSPath property, works.
OK... so, I couldn't get the escape characters to be accepted by Get-VHD, whether by hand or programmatically. I gave passing it on the pipeline using Get-ChildItem a go too, without success. However, I did manage to find an alternative for my particular use case. In addition to a path to a VHDX file, the Get-VHD command will also accept VMId and DiskNumber as parameters. So, while it's not the way I wanted to go about obtaining what I need (because this method spits out info on all the attached drives), I can still manage to accomplish the task at hand by using the following example:
Get-VM $ComputerName | Select-Object -Property VMId | Get-VHD
By referencing them in this manner, the Get-VHD cmdlet is happy. This works for today's problem only because the VHDX files in question are attached to VMs. However, I'll still need to figure out how to reference unattached files at some point in the future, which may ultimately require a slow and painful renaming of all the VHDX files to not use square brackets in their names.

Get all references to a given PowerShell module

Is there a way to find a list of script files that reference a given module (.psm1)? In other words, get all files that, in the script code, use at least 1 of the cmdlets defined in the module.
Obviously, because of module auto-loading in PowerShell 3.0 and above, most of my script files don't have an explicit Import-Module MODULE_NAME anywhere in the code, so I can't use that text to search on.
I know I can use Get-ChildItem -Path '...' -Recurse | Select-String 'TextToSearchFor' to search for a particular string inside of files, but that's not the same as searching for any reference to any cmdlet of a module. I could do a search for every single cmdlet in my module, but I was wondering if there is a better way.
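For reference, that brute-force approach might look something like this sketch (the module name and search path are placeholders):
# Collect the module's exported command names and search all scripts for any of them
$names   = (Get-Command -Module MyModule).Name
$pattern = ($names | ForEach-Object { [regex]::Escape($_) }) -join '|'
Get-ChildItem -Path 'C:\Scripts' -Filter *.ps1 -Recurse |
    Select-String -Pattern $pattern -List |
    Select-Object -ExpandProperty Path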
Clarification: I'm only looking inside of a controlled environment where I have all the scripts in one file location.
Depending on the scenario, the call stack could be interesting to play around with. In that case you need to modify the functions you want to find out about so that they gather information about the call stack at runtime and log it somewhere. Over time you might collect enough logs to make some good assumptions.
function yourfunction {
    $stack = Get-PSCallStack
    if ($stack.Count -gt 1) {
        $stack[1] # log this to a file or whatever you need
    }
}
This might not work at all in your scenario, but I thought I'd throw it in there as an option.

preplog.exe ran in foreach log file

I have a folder with x amount of web log files and I need to prep them for bulk import to SQL
For that I have to run preplog.exe against each one of them.
I want to create a PowerShell script to do this for me. The problem I'm having is that preplog.exe has to be run in CMD, and I need to enter the input path and the output path.
For Example:
D:\> preplog c:\blah.log > out.log
I've been playing with Foreach but I haven't had any luck.
Any pointers will be much appreciated
I would guess...
Get-ChildItem "C:\Folder\MyLogFiles" | Foreach-Object { preplog $_.FullName | Out-File "preplog.log" -Append }
FYI, it is good practice on this site to post your non-working code, so at least we have some context. Here I assume you're logging to the current directory, into one file.
Additionally you've said you need to run in CMD but you've tagged PowerShell - it pays to be specific. I've assumed PowerShell because it's a LOT easier to script.
I've also had to assume that the folder contains ONLY your log files, otherwise you will need to include a Where statement to filter the items.
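If you instead want one prepped output file per input log, a variant along these lines might do it (the .prepped.log naming is just an assumption):
Get-ChildItem "C:\Folder\MyLogFiles" -Filter *.log | ForEach-Object {
    preplog $_.FullName | Out-File "$($_.DirectoryName)\$($_.BaseName).prepped.log"
}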
In short I've made a lot of assumptions that means this may not be an accurate answer, so keep all this in mind for your next question =)

PowerShell guidelines for -Confirm, -Force, and -WhatIf

Are there any official guidelines from Microsoft about when to add -Confirm, -Force, and -WhatIf parameters to custom PowerShell cmdlets? There doesn't seem to be a clear consensus about when/how to use these parameters; see, for example, this issue.
In the absence of formal guidelines, is there a best practice or rule of thumb to use? Here is some more background, with my current (possibly flawed) understanding:
-WhatIf
The -WhatIf flag displays what the cmdlet would do without actually performing any action. This is useful for a dry run of a potentially destabilizing operation, to see what the actual results would be. The parameter is automatically added if the cmdlet's Cmdlet attribute has the SupportsShouldProcess property set to true.
It seems (but I'd love to see more official guidance here) that you should add -WhatIf if you are ever adding or removing resources (e.g., deleting files). Operations that update existing resources probably wouldn't benefit from it. Right?
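For reference, opting in looks something like this minimal sketch (the function name is hypothetical):
function Remove-Widget {
    [CmdletBinding(SupportsShouldProcess)]
    param([string]$Name)
    if ($PSCmdlet.ShouldProcess($Name, 'Remove widget')) {
        # the destructive work would happen here
    }
}
Remove-Widget -Name 'w1' -WhatIf   # prints "What if: ..." and makes no change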
-Force
The -Force switch is used to declare "I know what I'm doing, and I'm sure I want to do this". For example, when copying a file (Copy-Item), the -Force parameter means:
Allows the cmdlet to copy items that cannot otherwise be changed, such as copying over a read-only file or alias.
So it seems to me (again, I'd love some official guidance here) that you should add an optional -Force parameter when you have a situation where the cmdlet would otherwise fail, but can be convinced to complete the action.
For example, if you are creating a new resource that will clobber an existing one with the same name. The default behavior of the cmdlet would report an error and fail. But if you add -Force it will continue (and overwrite the existing resource). Right?
-Confirm
The -Confirm flag gets automatically added, like -WhatIf, if the cmdlet has SupportsShouldProcess set to true. In a cmdlet, if you call ShouldProcess, the user will be prompted to perform the action. And if the -Confirm flag is added, there will be no prompt (i.e., the confirmation is supplied via the cmdlet invocation).
So -Confirm should be available whenever a cmdlet has a big impact on the system. Just like -WhatIf this should be added whenever a resource is added or removed.
With my potentially incorrect understanding in mind, here are some of the questions I'd like a concrete answer to:
When should it be necessary to add -WhatIf/-Confirm?
When should it be necessary to add -Force?
Does it ever make sense to support both -Confirm and -Force?
I haven't researched whether the documentation is this detailed, but the following are based on my observations:
You should use -WhatIf for anything that makes a change. Updates are changes that can benefit from -WhatIf (e.g., what if you want to make a lot of updates?).
-Force means "force overwrite of an existing item" or "override a read-only file system attribute". In either case the success of the action depends on the user having permission.
-Confirm and -Force are not mutually exclusive. For example, you can confirm an action to write a file, but the file might be protected with the read-only attribute. In this case the action would fail unless you also specify -Force.
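For instance, something along these lines combines the two (the path is hypothetical):
# -Confirm prompts before the write; -Force lets it succeed even if the
# file has the read-only attribute set:
Set-Content -Path 'C:\temp\readonly.txt' -Value 'new contents' -Force -Confirm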
If you want to validate that your implementation of these common parameters is compliant with the guidelines (for example, Set-Xxx cmdlets should have -Confirm and -WhatIf), you can use the excellent PSScriptAnalyzer module (which is based on static code analysis).
Make sure the module is installed:
PS E:\> Install-Module -Name 'PSScriptAnalyzer'
Then run PowerShell Code Analysis as follows:
PS E:\> Invoke-ScriptAnalyzer -Path . | FL
RuleName : PSUseShouldProcessForStateChangingFunctions
Severity : Warning
Line : 78
Column : 10
Message : Function 'Update-something' has verb that could change system state.
Therefore, the function has to support 'ShouldProcess'.
Documentation (and sources) can be found on GitHub:
https://github.com/PowerShell/PSScriptAnalyzer
As an added observation, -Force should not overrule -WhatIf. Or in other words: -WhatIf has priority over -Force.
If you use:
Get-ChildItem -Recurse | Remove-Item -Recurse -Force -WhatIf
it will result in the following output:
What if: Performing the operation "Remove Directory" on target "E:\some directory\".
It will not actually remove the items, even when -Force is specified.
This means that you should never write:
if ($Force -or $PSCmdlet.ShouldProcess(...)) {
    ...
}
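Instead, a sketch of the pattern implied above (the names are hypothetical): let ShouldProcess alone drive the -WhatIf/-Confirm handling, and evaluate -Force separately inside the confirmed action:
function Set-Widget {
    [CmdletBinding(SupportsShouldProcess)]
    param([string]$Name, [switch]$Force)
    if ($PSCmdlet.ShouldProcess($Name, 'Set widget')) {
        if ($Force) {
            # e.g., clear a read-only attribute before making the change
        }
        # make the actual change here
    }
}
This way, -WhatIf short-circuits everything, including whatever -Force would have overridden.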

Change path separator in Windows PowerShell

Is it possible to get PowerShell to always output / instead of \? For example, I'd like the output of get-location to be C:/Documents and Settings/Administrator.
Update
Thanks for the examples of using replace, but I was hoping for this to happen globally (e.g., tab completion, etc.). Based on Matt's observation that the separator is defined by System.IO.Path.DirectorySeparatorChar, which appears, both in practice and from the documentation, to be read-only, I'm guessing this isn't possible.
It's a good question. The underlying .NET framework surfaces this as System.IO.Path.DirectorySeparatorChar, and it's a read/write property, so I figured you could do this:
[IO.Path]::DirectorySeparatorChar = '/'
... and that appears to succeed, except if you then type this:
[IO.Path]::DirectorySeparatorChar
... it tells you that it's still '\'. It's like it's not "taking hold". Heck, I'm not even sure that PowerShell honours that particular value even if it were changing.
I thought I'd post this (at the risk of it not actually answering your question) in case it helps someone else find the real answer. I'm sure it would be something to do with that DirectorySeparatorChar field.
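One way to see why the assignment doesn't stick, at least on Windows PowerShell / .NET Framework (a sketch based on reflection): despite appearing writable, the member is actually an init-only (read-only) static field, so the assignment can't change it:
[IO.Path].GetField('DirectorySeparatorChar').IsInitOnly   # True: the field is read-only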
Replace "\" with "/".
PS C:\Users\dance2die> $path = "C:\Documents and Settings\Administrator"
PS C:\Users\dance2die> $path.Replace("\", "/")
C:/Documents and Settings/Administrator
You could create a filter (or function) that you can pipe your paths to:
PS C:\> filter replace-slash {$_ -replace "\\", "/"}
PS C:\> Get-Location | replace-slash
C:/