I've created a proxy function for Remove-Item that sends items to the Recycle Bin instead of deleting them permanently (I'm using a proxy so that I can seamlessly replace the rm alias without breaking third-party scripts).
However, it doesn't work when a file is piped into the function. The heart of the proxy function is this:
if ($PSBoundParameters['DeletePermanently'] -or $PSBoundParameters['LiteralPath'] -or $PSBoundParameters['Filter'] -or $PSBoundParameters['Include'] -or $PSBoundParameters['Exclude'] -or $PSBoundParameters['Recurse'] -or $PSBoundParameters['Force'] -or $PSBoundParameters['Credential']) {
    if ($PSBoundParameters['DeletePermanently']) { $PSBoundParameters.Remove('DeletePermanently') | Out-Null }
    $scriptCmd = { & $wrappedCmd @PSBoundParameters }
} else {
    $scriptCmd = { & Recycle-Item -Path $PSBoundParameters['Path'] }
}
So, my custom Recycle-Item function is only called if Path is the only parameter. Something like Get-ChildItem .\temp\ | rm -DeletePermanently works just fine, but Get-ChildItem .\temp\ | rm fails because the Path passed to Recycle-Item is $null.
I've tried passing $Path instead of $PSBoundParameters['Path'], and tried splatting @PSBoundParameters like the call to $wrappedCmd above, but none of it appears to do much good. I've copied the params from this function to Recycle-Item, to ensure that it is expecting input from the pipeline, but that doesn't seem to help either. Some of those changes appear to pass along the file name, but not the full path, so I don't know if there's some magic inside Remove-Item that I need to replicate to handle a file object from the pipeline.
Recycle-Item is just a basic function:
function Recycle-Item($Path) {
    $item = Get-Item $Path
    $directoryPath = Split-Path $item -Parent
    $shell = New-Object -ComObject "Shell.Application"
    $shellFolder = $shell.Namespace($directoryPath)
    $shellItem = $shellFolder.ParseName($item.Name)
    $shellItem.InvokeVerb("delete")
}
As mentioned in the comments, the provider cmdlets usually bind on LiteralPath when you pipe objects between them. That way, Path can support wildcard globbing without the risk of passing ambiguous item paths between cmdlets.
Remove-Item has only two parameter sets, and they are named after their mandatory parameters, Path and LiteralPath.
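You can confirm this from the command metadata in your own session; the following should list just those two sets:
(Get-Command Remove-Item).ParameterSets | Select-Object Name, IsDefault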
To solve your problem, simply check for all defined parameters that are not one of these two, then pass the appropriate value to Remove-Item based on the $PSCmdlet.ParameterSetName value:
if (@($PSBoundParameters.Keys | Where-Object { @('DeletePermanently','Filter','Include','Exclude','Recurse','Force','Credential') -contains $_ }).Count -ge 1) {
    # a parameter other than Path/LiteralPath or the common parameters was specified; default to Remove-Item
    if ($PSBoundParameters['DeletePermanently']) {
        $PSBoundParameters.Remove('DeletePermanently') | Out-Null
    }
    $scriptCmd = { & $wrappedCmd @PSBoundParameters }
} else {
    # apart from common parameters, only Path/LiteralPath was specified; go for Recycle-Item
    $scriptCmd = { & Recycle-Item -Path $PSBoundParameters[$PSCmdlet.ParameterSetName] }
}
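If you want to observe the binding for yourself, Trace-Command can show which parameter the piped FileInfo objects bind to; a diagnostic sketch (run it against a throwaway folder; -WhatIf keeps it non-destructive):
Trace-Command -Name ParameterBinding -PSHost -Expression {
    Get-ChildItem .\temp\ | Remove-Item -WhatIf
}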
I'm attempting to add a wallpaper, along with certain parameters, to each user on a computer. It's been hit and miss, working on some computers and not others. On the ones that fail, I get the error "Method invocation failed because [System.Management.Automation.PSObject] does not contain a method named 'op_Addition'."
The variables $WallpaperPath and $Style are coming from another source within Automation Manager (using N-Central).
# Get each user profile SID and Path to the profile
$UserProfiles = Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\*" | Where {$_.PSChildName -match "S-1-5-21-(\d+-?){4}$" } | Select-Object @{Name="SID"; Expression={$_.PSChildName}}, @{Name="UserHive";Expression={"$($_.ProfileImagePath)\NTuser.dat"}}
# Add in the .DEFAULT User Profile
$DefaultProfile = "" | Select-Object SID, UserHive
$DefaultProfile.SID = ".DEFAULT"
$DefaultProfile.Userhive = "C:\Users\Public\NTuser.dat"
$UserProfiles += $DefaultProfile
# Loop through each profile on the machine
Foreach ($UserProfile in $UserProfiles) {
    # Load User ntuser.dat if it's not already loaded
    If (($ProfileWasLoaded = Test-Path Registry::HKEY_USERS\$($UserProfile.SID)) -eq $false) {
        Start-Process -FilePath "CMD.EXE" -ArgumentList "/C REG.EXE LOAD HKU\$($UserProfile.SID) $($UserProfile.UserHive)" -Wait -WindowStyle Hidden
    }
    # Write to the registry
    $key = "Registry::HKEY_USERS\$($UserProfile.SID)\Control Panel\Desktop"
    Set-ItemProperty -Path $key -name Wallpaper -value "$WallpaperPath"
    Set-ItemProperty -Path $key -name TileWallpaper -value "0"
    Set-ItemProperty -Path $key -name WallpaperStyle -value "$Style" -Force
    # Unload NTuser.dat
    If ($ProfileWasLoaded -eq $false) {
        [gc]::Collect()
        Start-Sleep 1
        Start-Process -FilePath "CMD.EXE" -ArgumentList "/C REG.EXE UNLOAD HKU\$($UserProfile.SID)" -Wait -WindowStyle Hidden | Out-Null
    }
}
I'm looking for this to load a temporary HKU hive for each user that's not currently logged in, and has an NTuser.dat file, and write the registry entries specified. It should then unload any hive for users it added.
Instead of $UserProfiles = ..., use either [array] $UserProfiles = ... or $UserProfiles = @(...) in order to ensure that $UserProfiles always contains an array, even if the command happens to return just one object.
That way, your += operation is guaranteed to work as intended, namely to (loosely speaking) append an element to the array.[1]
Note that PowerShell's pipeline has no concept of an array, just a stream of objects. When such a stream is collected, a single object is captured as itself; only two or more objects are captured in an array ([object[]]) - see this answer for more information.
A simple demonstration:
2, 1 | ForEach-Object {
    $result = Get-ChildItem / | Select-Object Name -First $_
    try {
        $result += [pscustomobject] @{ Name = 'another name' }
        "`$result has $($result.Count) elements."
    } catch {
        Write-Warning "+= operation failed: $_"
    }
}
In the first iteration, 2 objects are returned, which are
stored in an array. += is then used to "append" another element.
In the second iteration, only 1 object is returned and stored as such.
Since [pscustomobject], which is the type of object returned by Select-Object, doesn't define a + operation (which would have
to be implemented via an op_Addition() method at the .NET level), the error you saw occurs.
Using an [array] type constraint or @(...), the array-subexpression operator, avoids this problem:
2, 1 | ForEach-Object {
    # Note the use of @(...)
    # Alternatively:
    #   [array] $result = Get-ChildItem / | Select-Object Name -First $_
    $result = @(Get-ChildItem / | Select-Object Name -First $_)
    $result += [pscustomobject] @{ Name = 'another name' }
    "`$result has $($result.Count) elements."
}
As noted, [array] $result = Get-ChildItem / | Select-Object Name -First $_ works too, though there are subtle differences between the two approaches - see this answer.
As an aside:
To synchronously execute console applications or batch files and capture their output, call them directly (c:\path\to\some.exe ... or & $exePath ...), do not use Start-Process (or the System.Diagnostics.Process API it is based on) - see this answer. GitHub docs issue #6239 provides guidance on when use of Start-Process is and isn't appropriate.
That is, you can just make calls such as the following:
REG.EXE LOAD "HKU\$($UserProfile.SID)" "$($UserProfile.UserHive)"
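A related benefit of direct invocation is that the exit code is then available via the automatic $LASTEXITCODE variable, so you can react to failures; a minimal sketch:
if ($LASTEXITCODE -ne 0) { Write-Warning "REG.EXE LOAD failed with exit code $LASTEXITCODE." }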
Also, it's easier and more efficient to construct [pscustomobject] instances with their literal syntax (v3+; see the conceptual about_PSCustomObject help topic):
$UserProfiles += [pscustomobject] @{
    SID      = ".DEFAULT"
    Userhive = "C:\Users\Public\NTuser.dat"
}
[1] Technically, a new array must be created behind the scenes, given that arrays are fixed-size data structures. While += is convenient, it is therefore inefficient, which matters in loops - see this answer.
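If you do need to grow a collection repeatedly in a loop, a mutable list type avoids the repeated re-allocation; a sketch (PSv5+ ::new() syntax; the names and values are purely illustrative):
$profiles = [System.Collections.Generic.List[object]]::new()
foreach ($i in 1..3) {
    # .Add() appends in place instead of rebuilding the whole collection each time
    $profiles.Add([pscustomobject] @{ SID = "S-$i"; UserHive = "C:\Users\User$i\NTuser.dat" })
}
$profiles.Count  # 3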
I am trying to add ShouldProcess logic to a script that deletes files on a remote server so I can use the -WhatIf parameter, but it is returning an error. Here is the function:
function testshouldprocess {
    [CmdletBinding(SupportsShouldProcess = $true)]
    param(
        $server
    )
    invoke-command $server {
        Get-ChildItem c:\temp\ | ForEach-Object {
            if ($pscmdlet.ShouldProcess($Server)) {
                remove-item $_.fullname
            }
        }
    }
}
testshouldprocess 'Server1' -WhatIf
When the script is run, it returns the error
InvalidOperation: You cannot call a method on a null-valued expression.
as each file passes through the pipeline. If I change the code to
if ($pscmdlet.ShouldProcess($server)) {
    invoke-command $server {
        Get-ChildItem c:\temp\ | ForEach-Object {
            remove-item $_.fullname
        }
    }
}
it works, but the WhatIf only executes one time for the entire directory listing. If I change the code to
Get-ChildItem \\$server\c$\temp\ | ForEach-Object {
    if ($pscmdlet.ShouldProcess($server)) {
        remove-item $_.fullname
    }
}
it works, but I would much prefer to use Invoke-Command.
Is ShouldProcess not compatible with Invoke-Command?
Any insight is appreciated.
Hazrelle's answer provides the crucial pointer regarding the need to use the $using: scope in order for the remotely executing script block to have access to values from the caller's scope.
To fully support your scenario - both -WhatIf and -Confirm functionality, which are implied by turning SupportsShouldProcess on - you must:
Make your remotely executing script block an advanced one too, with its own [CmdletBinding(SupportsShouldProcess)] attribute above the param() block, and therefore its own $PSCmdlet instance.
Refer to the what-if/confirm-relevant values from the caller's scope via $using:WhatIfPreference and $using:ConfirmPreference
Note that for advanced functions and scripts PowerShell translates the -WhatIf and -Confirm switches into the equivalent preference-variable values, using function-local variables; that is, passing -WhatIf creates a function-local $WhatIfPreference variable with value $true, and passing -Confirm creates a function-local $ConfirmPreference variable with value Low.
function testshouldprocess {
    [CmdletBinding(SupportsShouldProcess)]
    param(
        $server
    )
    Invoke-Command $server {
        [CmdletBinding(SupportsShouldProcess)]
        param()
        # Use the caller's WhatIf / Confirm preferences.
        $WhatIfPreference = $using:WhatIfPreference
        $ConfirmPreference = $using:ConfirmPreference
        Get-ChildItem c:\temp\ | ForEach-Object {
            if ($pscmdlet.ShouldProcess($using:server, "delete file: $($_.FullName)")) {
                Remove-Item $_.FullName
            }
        }
    }
}
testshouldprocess 'Server1' -WhatIf
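With -WhatIf, each file then produces its own message, along the lines of (illustrative output):
What if: Performing the operation "delete file: C:\temp\example.txt" on target "Server1".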
The remote server knows only the command you execute, not the values from the calling session. Try remove-item $_.fullname -WhatIf:$($using:pscmdlet.ShouldProcess($server)). See Remote variables.
Another option is to set $WhatIfPreference on the remote server and use that in the subsequent statements:
$WhatIfPreference = $using:pscmdlet.ShouldProcess($server);
Then remove-item $_.fullname -WhatIf:$WhatIfPreference
1. Code Description, a.k.a. how it is intended to work
User enters a path to a directory in PowerShell. Code checks if any folder within the declared directory contains no data at all. If so, the path of any empty folder will be shown on the prompt to the user and eventually removed from the system.
2. The Issue, a.k.a. what I am struggling with
The code I just wrote doesn't count the depth of a folder hierarchy as I would expect (the column in the output table is blank). Besides that, the program works okay - I've still got to fix the issue where my code removes empty parent directories first and child directories later, which of course will cause an error in PowerShell; for instance, take
C:\Users\JohnMiller\Desktop\Homework
where Homework consists of Homework\Math\School Project and Homework\Computer Science\PowerShell Code. Note that all directories are supposed to be empty with the exception of PowerShell Code, the folder containing this script. (Side note: A folder is considered empty when no file dwells inside. At least that's what my code is based on for now.)
3. The Code
# Delete all empty (sub)folders in [$path]
[Console]::WriteLine("`n>> Start script for deleting all empty (sub)folders.")
$path = Read-Host -prompt ">> Specify a path"
if (test-path $path)
{
    $allFolders = Get-ChildItem $path -recurse | Where {$_.PSisContainer -eq $True}
    $allEmptyFolders = $allFolders | Where-Object {$_.GetFiles().Count -eq 0}
    $allEmptyFolders | Select-Object FullName,@{Name = "FolderDepth"; Expression = {$_.DirectoryName.Split('\').Count}} | Sort-Object -descending FolderDepth,FullName
    [Console]::WriteLine("`n>> Do you want to remove all these directories? Validate with [True] or [False].")
    $answer = Read-Host -prompt ">> Answer"
    if ([System.Convert]::ToBoolean($answer) -eq $True)
    {
        $allEmptyFolders | Remove-Item -force -recurse
    }
    else
    {
        [Console]::WriteLine(">> Termination confirmed.`n")
        exit
    }
}
else
{
    [Console]::WriteLine(">> ERROR: [$($path)] is an invalid directory. Program terminates.`n")
    exit
}
The depth-count problem:
Your code references a .DirectoryName property in the calculated property passed to Select-Object, but the [System.IO.DirectoryInfo] instances output by Get-ChildItem have no such property. Use the .FullName property instead:
$allEmptyFolders |
    Select-Object FullName,@{Name='FolderDepth'; Expression={$_.FullName.Split('\').Count}} |
    Sort-Object -descending FolderDepth,FullName
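For instance, a path's depth is then simply its number of backslash-separated segments (illustrative path):
'C:\Users\JohnMiller\Desktop\Homework\Math'.Split('\').Count   # -> 6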
Eliminating nested empty subfolders:
To recap your problem with a simple example:
If c:\foo is empty (no files) but has empty subdir. c:\foo\bar, your code outputs them both, and if you then delete c:\foo first, deleting c:\foo\bar next fails (because deleting c:\foo also removed c:\foo\bar).
If you eliminate all nested empty subdirs. up front, you not only declutter what you present to the user, but you can then safely iterate over the output and delete the directories one by one.
With your approach you'd need a 2nd step to eliminate the nested empty dirs., but here's a depth-first recursive function that omits nested empty folders. To make it behave the same way as your code with respect to hidden files, pass -Force.
function Get-RecursivelyEmptyDirectories {
    [cmdletbinding()]
    param(
        [string] $LiteralPath = '.',
        [switch] $Force,
        [switch] $DoNotValidatePath
    )
    $ErrorActionPreference = 'Stop'
    if (-not $DoNotValidatePath) {
        $dir = Get-Item -LiteralPath $LiteralPath
        if (-not $dir.PSIsContainer) { Throw "Not a directory path: $LiteralPath" }
        $LiteralPath = $dir.FullName
    }
    $haveFiles = [bool] (Get-ChildItem -LiteralPath $LiteralPath -File -Force:$Force | Select-Object -First 1)
    $emptyChildDirCount = 0
    $emptySubDirs = $null
    if ($childDirs = Get-ChildItem -LiteralPath $LiteralPath -Directory -Force:$Force) {
        $emptySubDirs = New-Object System.Collections.ArrayList
        foreach ($childDir in $childDirs) {
            if ($childDir.LinkType -eq 'SymbolicLink') {
                Write-Verbose "Ignoring symlink: $LiteralPath"
            } else {
                Write-Verbose "About to recurse on $($childDir.FullName)..."
                try { # If .AddRange() fails due to exceeding the array list's capacity, we must fail too.
                    $emptySubDirs.AddRange(@(Get-RecursivelyEmptyDirectories -DoNotValidatePath -LiteralPath $childDir.FullName -Force:$Force))
                } catch {
                    Throw
                }
                # If the last entry added is the child dir. at hand, that child dir.
                # is by definition itself empty.
                if ($emptySubDirs[-1] -eq $childDir.FullName) { ++$emptyChildDirCount }
            }
        } # foreach ($childDir ...
    } # if ($childDirs = ...)
    if (-not $haveFiles -and $emptyChildDirCount -eq $childDirs.Count) {
        # There are no child files and all child dirs., if any, are themselves
        # empty, so we only output the input path at hand, as the highest
        # directory in this subtree that is empty (save for empty descendants).
        $LiteralPath
    } else {
        # This directory is not itself empty, so output the (highest-level)
        # descendants that are empty.
        $emptySubDirs
    }
}
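A usage sketch that mirrors your original flow - review the output first, then pipe it to Remove-Item (the -WhatIf pass is a dry run; the path is illustrative):
Get-RecursivelyEmptyDirectories -LiteralPath 'C:\Users\JohnMiller\Desktop\Homework' |
    Remove-Item -Recurse -Force -WhatIf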
Tips regarding your code:
Get-ChildItem -Directory is available in PSv3+; it is not only shorter but also more efficient than Get-ChildItem | Where-Object { $_.PSIsContainer -eq $True }.
Use Write-Host instead of [Console]::WriteLine.
[System.Convert]::ToBoolean($answer) only works with the culture-invariant string literals 'True' and 'False' ([bool]::TrueString and [bool]::FalseString), although case variations and leading/trailing whitespace are allowed. A sketch applying these tips follows below.
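Applied to your script, the first tip plus a simple y/n prompt that sidesteps the ToBoolean gotcha might look like this (a sketch; the prompt wording is illustrative):
$allEmptyFolders = Get-ChildItem $path -Recurse -Directory | Where-Object { $_.GetFiles().Count -eq 0 }
$answer = Read-Host -Prompt '>> Remove all these directories? [y/n]'
if ($answer -eq 'y') {
    $allEmptyFolders | Remove-Item -Force -Recurse
}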
I am struggling with my script - for some reason, the PSDrive that my script creates is not accessible to Resolve-Path.
In general, the script contains a "Start-RDP" function which starts RDP with preloaded credentials (autologon) and then checks whether the PowerShell profile on the target host is up to date (by comparing the file hashes). However, in order for the script to access the remote filesystem, I need to mount it as a PSDrive.
Here is the offending part of the script. All the variables are set properly at that point, earlier in the script.
New-PSDrive -name "$computername" -Root "\\$computername\c$" -Credential $CurrentCred -PSProvider FileSystem | out-null
Start-Sleep -Seconds 10
while (!(Test-Path -Path ${Computername}:\$Userpath\$Documents\)) { Write-host "UserDir not created yet!" ; start-sleep -Seconds 5 }
if (Test-Path -Path ${Computername}:\$Userpath\$Documents\WindowsPowerShell) {
    $ProfileHash = Get-FileHash $Profile.CurrentUserAllHosts
    if (!(Test-Path "${computername}:\$Userpath\$Documents\WindowsPowerShell\profile.ps1")) { Copy-Item -Force -Path "$env:userprofile\WindowsPowershell\profile.ps1" -Destination "${computername}:\$Userpath\$Documents\WindowsPowerShell\" }
    $RemoteProfileHash = Get-FileHash "${computername}:\$Userpath\$Documents\WindowsPowerShell\profile.ps1"
    if ($ProfileHash -ne $RemoteProfileHash) { Copy-Item -Force -Path "$env:userprofile\$Documents\WindowsPowershell\profile.ps1" -Destination "${computername}:\$userpath\$Documents\WindowsPowerShell\" }
}
The error I am getting is at the second Test-Path (where I check whether the WindowsPowerShell directory exists).
Resolve-Path : Cannot find drive. A drive with the name 'server01' does not exist.
At C:\windows\system32\windowspowershell\v1.0\Modules\Microsoft.PowerShell.Utility\Microsoft.PowerShell.Utility.psm1:35 char:32
+ $pathsToProcess += Resolve-Path $Path | Foreach-Object ProviderPath
+ ~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (server01:String) [Resolve-Path], DriveNotFoundException
+ FullyQualifiedErrorId : DriveNotFound,Microsoft.PowerShell.Commands.ResolvePathCommand
I am unable to trace down the specific reason this error occurs. The drive is there (I checked using PSBreakpoint)
I'm kind of stuck at this for some time now, do you have any ideas on that one?
I see what you did there.
The problem is that you are using the variable $Profile.CurrentUserAllHosts, which PowerShell is trying to resolve as a complete variable name. $Profile is a string, which has no property called CurrentUserAllHosts. To fix it, use the following:
$ProfileHash = Get-FileHash "${Profile}.CurrentUserAllHosts"
After some more investigation, I found this snippet on a blog
commands like Resolve-Path and $PSCmdlet.GetUnresolvedProviderPathFromPSPath() don’t normalize UNC paths properly, even when the FileSystem provider handles them.
Which then links to the Get-NormalizedFileSystemPath script on technet.
Since Get-FileHash is a system-provided cmdlet, you'll want to run the path through Get-NormalizedFileSystemPath before passing it to Get-FileHash.
And for posterity's sake, here's the script:
function Get-NormalizedFileSystemPath
{
    <#
    .Synopsis
       Normalizes file system paths.
    .DESCRIPTION
       Normalizes file system paths. This is similar to what the Resolve-Path cmdlet does, except Get-NormalizedFileSystemPath also properly handles UNC paths and converts 8.3 short names to long paths.
    .PARAMETER Path
       The path or paths to be normalized.
    .PARAMETER IncludeProviderPrefix
       If this switch is passed, normalized paths will be prefixed with 'FileSystem::'. This allows them to be reliably passed to cmdlets such as Get-Content, Get-Item, etc, regardless of Powershell's current location.
    .EXAMPLE
       Get-NormalizedFileSystemPath -Path '\\server\share\.\SomeFolder\..\SomeOtherFolder\File.txt'

       Returns '\\server\share\SomeOtherFolder\File.txt'
    .EXAMPLE
       '\\server\c$\.\SomeFolder\..\PROGRA~1' | Get-NormalizedFileSystemPath -IncludeProviderPrefix

       Assuming you can access the c$ share on \\server, and PROGRA~1 is the short name for "Program Files" (which is common), returns:

       'FileSystem::\\server\c$\Program Files'
    .INPUTS
       String
    .OUTPUTS
       String
    .NOTES
       Paths passed to this command cannot contain wildcards; these will be treated as invalid characters by the .NET Framework classes which do the work of validating and normalizing the path.
    .LINK
       Resolve-Path
    #>

    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true, ValueFromPipelineByPropertyName = $true)]
        [Alias('PSPath', 'FullName')]
        [string[]]
        $Path,

        [switch]
        $IncludeProviderPrefix
    )

    process
    {
        foreach ($_path in $Path)
        {
            $_resolved = $_path

            if ($_resolved -match '^([^:]+)::')
            {
                $providerName = $matches[1]

                if ($providerName -ne 'FileSystem')
                {
                    Write-Error "Only FileSystem paths may be passed to Get-NormalizedFileSystemPath. Value '$_path' is for provider '$providerName'."
                    continue
                }

                $_resolved = $_resolved.Substring($matches[0].Length)
            }

            if (-not [System.IO.Path]::IsPathRooted($_resolved))
            {
                $_resolved = Join-Path -Path $PSCmdlet.SessionState.Path.CurrentFileSystemLocation -ChildPath $_resolved
            }

            try
            {
                $dirInfo = New-Object System.IO.DirectoryInfo($_resolved)
            }
            catch
            {
                $exception = $_.Exception
                while ($null -ne $exception.InnerException)
                {
                    $exception = $exception.InnerException
                }

                Write-Error "Value '$_path' could not be parsed as a FileSystem path: $($exception.Message)"
                continue
            }

            $_resolved = $dirInfo.FullName

            if ($IncludeProviderPrefix)
            {
                $_resolved = "FileSystem::$_resolved"
            }

            Write-Output $_resolved
        }
    } # process
} # function Get-NormalizedFileSystemPath
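For the scenario above, the remote hash could then be computed along these lines (server and path are illustrative):
$remoteProfile = Get-NormalizedFileSystemPath -Path '\\server01\c$\Users\SomeUser\Documents\WindowsPowerShell\profile.ps1'
$RemoteProfileHash = Get-FileHash -Path $remoteProfile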
Well I've struggled long enough with this one. I have a project to compare two folders, one on each of two servers. We are comparing files on the source server with those on the target server and will create a list of the files from the source that will need to be refreshed once an update is completed on the target server.
Here's my script (many thanks to http://quickanddirtyscripting.wordpress.com for the original) :
param ([string] $src, [string] $dst)

function get-DirHash()
{
    begin
    {
        $ErrorActionPreference = "silentlycontinue"
    }
    process
    {
        dir -Recurse $_ | where { $_.PsIsContainer -eq $false -and ($_.Name -like "*.js" -or $_.Name -like "*.css") } | select Name,FullName,@{Name="SHA1 Hash"; Expression={get-hash $_.FullName -algorithm "sha1" }}
    }
    end
    {
    }
}

function get-hash
{
    param([string] $file = $(throw 'a filename is required'), [string] $algorithm = 'sha256')
    try
    {
        $fileStream = [system.io.file]::openread((resolve-path $file));
        $hasher = [System.Security.Cryptography.HashAlgorithm]::create($algorithm);
        $hash = $hasher.ComputeHash($fileStream);
        $fileStream.Close();
    }
    catch
    {
        write-host $_
    }
    return $hash
}

Compare-Object $($src | get-DirHash) $($dst | get-DirHash) -property @("Name", "SHA1 Hash")
Now for some reason if I run this against local paths say c:\temp\test1 c:\temp\test2 it works fine, but when I run it using UNC paths between two servers I get
Exception calling "OpenRead" with "1" argument(s): "The given path's format is not supported."
Any help with this would be greatly appreciated. The end result should be a list of files, but for some reason it doesn't like the UNC path.
The script name is compare_js_css.ps1 and is called as such:
.\compare_js_css.ps1 c:\temp\test1 c:\temp\test2 <-- This works
.\compare_js_css.ps1 \\devserver1\c$\websites\site1\website \\devserver2\c$\websites\site1\website <-- Returns the aforementioned exception.
Why?
This gives the path you are after without the Microsoft.PowerShell.Core\FileSystem:: prefix:
(Resolve-Path $file).ProviderPath
No need to use a string replace.
OpenRead supports UNC paths. Resolve-Path returns an object. Use (Resolve-Path MyFile.txt).Path.Replace('Microsoft.PowerShell.Core\FileSystem::', '') as the argument for OpenRead. The path returned by Resolve-Path for a UNC path includes PowerShell's fully qualified provider prefix, which the OpenRead method does not understand, so it has to be stripped.
Use the Convert-Path cmdlet, which will provide you with the path in the 'regular' UNC form. This will be required any time you use any shell commands, or need to pass an entire path to a .Net method etc...
See https://technet.microsoft.com/en-us/library/ee156816.aspx
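For example, in the get-hash function above, the OpenRead call could become (a sketch):
$fileStream = [System.IO.File]::OpenRead((Convert-Path -Path $file))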