If I have an example function ...
function foo()
{
# get a list of files matched pattern and timestamp
$fs = Get-Item -Path "C:\Temp\*.txt" |
Where-Object {$_.lastwritetime -gt "11/01/2009"}
if ( $fs -ne $null ) # $fs may be empty, check it first
{
foreach ($o in $fs)
{
# new bak file
$fBack = "C:\Temp\test\" + $o.Name + ".bak"
# Exception here Get-Item! See following msg
# Exception thrown only Get-Item cannot find any files this time.
# If there is any matched file there, it is OK
$fs1 = Get-Item -Path $fBack
....
}
}
}
The exception message is ... The WriteObject and WriteError methods cannot be called after the pipeline has been closed. Please contact Microsoft Support Services.
Basically, I cannot use Get-Item again within the function or loop to get a list of files in a different folder.
Can anyone explain this, and what is the correct way to fix it?
By the way I am using PS 1.0.
This is just a minor variation of what has already been suggested, but it uses some techniques that make the code a bit simpler ...
function foo()
{
# Get a list of files matched pattern and timestamp
$fs = @(Get-Item C:\Temp\*.txt | Where {$_.lastwritetime -gt "11/01/2009"})
foreach ($o in $fs) {
# new bak file
$fBack = "C:\Temp\test\$($o.Name).bak"
if (!(Test-Path $fBack))
{
Copy-Item $o.FullName $fBack
}
$fs1 = Get-Item -Path $fBack
....
}
}
For more info on the issue with foreach and scalar null values check out this blog post.
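To illustrate the scalar-null issue that the @(...) wrapper guards against, here is a minimal sketch (the path is just an example): in PowerShell 1.0/2.0, foreach happily iterates once over a scalar $null, whereas an empty array skips the loop body entirely.
$fs = Get-Item C:\Temp\nomatch*.txt -ErrorAction SilentlyContinue  # no matches -> $fs is $null
foreach ($o in $fs) { 'this body runs once anyway' }               # the PS 1.0/2.0 quirk
$fs = @(Get-Item C:\Temp\nomatch*.txt -ErrorAction SilentlyContinue)  # no matches -> empty array
foreach ($o in $fs) { 'this body never runs' }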
I modified the above code slightly to create the backup file, but I am able to use the Get-Item within the loop successfully, with no exceptions being thrown. My code is:
function foo()
{
# get a list of files matched pattern and timestamp
$files = Get-Item -Path "C:\Temp\*.*" | Where-Object {$_.lastwritetime -gt "11/01/2009"}
foreach ($file in $files)
{
$fileBackup = [string]::Format("{0}{1}{2}", "C:\Temp\Test\", $file.Name , ".bak")
Copy-Item $file.FullName -destination $fileBackup
# Test that backup file exists
if (!(Test-Path $fileBackup))
{
Write-Host "$fileBackup does not exist!"
}
else
{
$fs1 = Get-Item -Path $fileBackup
...
}
}
}
I am also using PowerShell 1.0.
Related
I want to create a 0-filesize mirror image of a set of folders, but while robocopy is really good, it doesn't save all of the information that I would like:
robocopy D:\documents E:\backups\documents_$(Get-Date -format "yyyyMMdd_HHmm")\ /mir /create
The /create switch makes each file in the duplicate folder have zero-size, and that is good, but I would like each file in the duplicate folder to have [size] appended to the end of the name with the size in KB or MB or GB, and the create / last modified time on every file to exactly match the original file. This way, I will have a zero-size duplicate of the folder that I can archive, but which contains all of the relevant information for the files in that directory, showing the size of each and the exact create / last modified times.
Are there good / simple ways to iterate through a tree in PowerShell, and for each item create a zero size file with all relevant information like this?
This would be one way to implement the copy command using the approach I mentioned in the comments. It should give you something to pull ideas from. I didn't intend to spend as much time on it as I did, but I ran it on several directories, found some problems, and debugged each one I encountered. It's a pretty solid example at this point.
function Copy-FolderZeroSizeFiles {
[CmdletBinding()]
param( [Parameter(Mandatory)] [string] $FolderPath,
[Parameter(Mandatory)] [string] $DestinationPath )
$dest = New-Item $DestinationPath -Type Directory -Force
Push-Location -LiteralPath $FolderPath
try {
foreach ($item in Get-ChildItem '.' -Recurse) {
$relPath = Resolve-Path -LiteralPath $item -Relative
$type = if ($item.Attributes -match 'Directory') { 'Directory' } else { 'File' }
$destItem = New-Item "$dest\$relPath" -Type $type -Force
$destItem.Attributes = $item.Attributes
$destItem.LastWriteTime = $item.LastWriteTime
}
} finally {
Pop-Location
}
}
Note: the above implementation is simplistic and represents anything that isn't a directory as a file. That means symbolic links and the like will be created as plain files with no information about what they linked to.
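If links matter for your use case, one possible refinement (a sketch, assuming the links are represented as NTFS reparse points) is to skip them inside the foreach loop, before the New-Item call:
if ($item.Attributes -band [System.IO.FileAttributes]::ReparsePoint) {
    continue  # skip symbolic links, junctions, etc. instead of recreating them as plain files
}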
Here's a function to get the conversion from number of bytes to N.N B/K/M/G format. To get more decimal places, just add 0's to the end of the format strings.
function ConvertTo-FriendlySize($NumBytes) {
switch ($NumBytes) {
{$_ -lt 1024} { "{0,7:0.0}B" -f ($NumBytes) ; break }
{$_ -lt 1048576} { "{0,7:0.0}K" -f ($NumBytes / 1024) ; break }
{$_ -lt 1073741824} { "{0,7:0.0}M" -f ($NumBytes / 1048576) ; break }
default { "{0,7:0.0}G" -f ($NumBytes / 1073741824); break }
}
}
Often, people get these conversions wrong. For instance, it's a common error to use 1024 * 1000 to get Megabytes (which is mixing the base10 value for 1K with the base2 value for 1K) and follow that same logic to get GB and TB.
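For example, the function above gives (easy to verify by hand):
PS> ConvertTo-FriendlySize 1048576
    1.0M
PS> ConvertTo-FriendlySize (1024 * 1000)  # the mistaken "base10 megabyte" is really ~1000K
 1000.0K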
Here is what I came up with, including the additional parts in the question; change $src / $dst as required (D:\VMs is where I keep a lot of virtual machines). I have included setting all of CreationTime, LastWriteTime, and LastAccessTime so that the backup location with zero-size files is a perfect representation of the source. Since I want to use this for archival purposes, I have finally zipped things up and included a date-time stamp in the zip file name.
# Copy-FolderZeroSizeFiles
$src = "D:\VMs"
$dst = "D:\VMs-Backup"
function ConvertTo-FriendlySize($NumBytes) {
switch ($NumBytes) {
{$_ -lt 1024} { "{0:0.0}B" -f ($NumBytes) ; break } # Change {0: to {0,7: to align to 7 characters
{$_ -lt 1048576} { "{0:0.0}K" -f ($NumBytes / 1024) ; break }
{$_ -lt 1073741824} { "{0:0.0}M" -f ($NumBytes / 1048576) ; break }
default { "{0:0.0}G" -f ($NumBytes / 1073741824); break }
}
}
function Copy-FolderZeroSizeFiles($FolderPath, $DestinationPath) {
Push-Location $FolderPath
if (!(Test-Path $DestinationPath)) { New-Item $DestinationPath -Type Directory }
foreach ($item in Get-ChildItem $FolderPath -Recurse -Force) {
$relPath = Resolve-Path $item.FullName -Relative
if ($item.Attributes -match 'Directory') {
$new = New-Item "$DestinationPath\$relPath" -ItemType Directory -Force -ErrorAction SilentlyContinue
}
else {
$fileBaseName = [System.IO.Path]::GetFileNameWithoutExtension($item.Name)
$fileExt = [System.IO.Path]::GetExtension($item.Name)
$fileSize = ConvertTo-FriendlySize($item.Length)
$new = New-Item "$DestinationPath\$(Split-Path $relPath)\$fileBaseName ($fileSize)$fileExt" -ItemType File
}
"$($new.Name) : creation $($item.CreationTime), lastwrite $($item.CreationTime), lastaccess $($item.LastAccessTime)"
$new.CreationTime = $item.CreationTime
$new.LastWriteTime = $item.LastWriteTime
$new.LastAccessTime = $item.LastAccessTime
$new.Attributes = $item.Attributes # Must be set after the creation/write/access times, or read-only files raise an error
}
Pop-Location
}
Copy-FolderZeroSizeFiles $src $dst
$dateTime = Get-Date -Format "yyyyMMdd_HHmm"
$zipName = "$([System.IO.Path]::GetPathRoot($dst))\$([System.IO.Path]::GetFileName($dst))_$dateTime.zip"
Add-Type -AssemblyName System.IO.Compression.FileSystem
[IO.Compression.ZipFile]::CreateFromDirectory($dst, $zipName)
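As an optional sanity check (a small sketch reusing the same .NET types already loaded above), you can list a few entries from the resulting archive to confirm the names and timestamps survived:
$zipCheck = [IO.Compression.ZipFile]::OpenRead($zipName)
$zipCheck.Entries | Select-Object -First 5 FullName, LastWriteTime
$zipCheck.Dispose()  # release the file handle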
I am trying to use .NET classes instead of the native Compress-Archive cmdlet to zip multiple directories (each containing sub-directories and files), as Compress-Archive gives me an occasional OutOfMemoryException.
Some articles tell me the .NET classes make for a more optimal approach.
My tools directory $toolsDir = 'C:\Users\Public\LocalTools' has more than one directory that needs to be zipped (note that everything in it is a directory, not a file) - whichever directories match the regex pattern in the code.
Below is my code:
$cmpname = $env:computername
$now = $(Get-Date -Format yyyyMMddmmhhss)
$pattern = '^(19|[2-9][0-9])\d{2}\-(0?[1-9]|1[012])\-(0[1-9]|[12]\d|3[01])T((?:[01]\d|2[0-3])\;[0-5]\d\;[0-5]\d)\.(\d{3}Z)\-' + [regex]::Escape($cmpname)
$toolsDir = 'C:\Users\Public\LocalTools'
$destPathZip = "C:\Users\Public\ToolsOutput.zip"
Add-Type -AssemblyName System.IO.Compression
Add-Type -AssemblyName System.IO.Compression.FileSystem
$CompressionLevel = [System.IO.Compression.CompressionLevel]::Optimal
$IncludeBaseDirectory = $false
$stream = New-Object System.IO.FileStream($destPathZip, [System.IO.FileMode]::OpenOrCreate)
$zip = New-Object System.IO.Compression.ZipArchive($stream, 'update')
$res = Get-ChildItem "${toolsDir}" | Where-Object {$_.Name -match "${pattern}"}
if ($res -ne $null) {
foreach ($dir in $res) {
$source = "${toolsDir}\${dir}"
[System.IO.Compression.ZipFileExtensions]::CreateEntryFromFile($destPathZip, $source, (Split-Path $source -Leaf), $CompressionLevel)
}
}
else {
Write-Host "Nothing to Archive!"
}
The above code gives me an error.
When I researched [System.IO.Compression.ZipFileExtensions]::CreateEntryFromFile, I found it is used to add files to a zip file that has already been created. Is this the reason I am getting the error that I get?
I also tried [System.IO.Compression.ZipFile]::CreateFromDirectory($source, $destPathZip, $CompressionLevel, $IncludeBaseDirectory) instead of [System.IO.Compression.ZipFileExtensions]::CreateEntryFromFile($destPathZip, $source, (Split-Path $source -Leaf), $CompressionLevel).
That gives me a "The file 'C:\Users\Public\ToolsOutput.zip' already exists" error.
How do I change the code in order to add multiple directories to the zip file?
There are 3 problems with your code currently:
The first argument passed to CreateEntryFromFile() must be a ZipArchive object in which to add the new entry - in your case you'll want to pass the $zip which you've already created for this purpose.
CreateEntryFromFile() only creates 1 entry for 1 file per call - to recreate a whole directory substructure you need to calculate the correct entry path for each file, e.g. subdirectory/subsubdirectory/file.exe
You need to properly dispose of both the ZipArchive and the underlying file stream instances in order for the data to be persisted on disk. For this, you'll need a try/finally statement.
Additionally, there's no need to create the file if there are no files to archive :)
$cmpname = $env:computername
$pattern = '^(19|[2-9][0-9])\d{2}\-(0?[1-9]|1[012])\-(0[1-9]|[12]\d|3[01])T((?:[01]\d|2[0-3])\;[0-5]\d\;[0-5]\d)\.(\d{3}Z)\-' + [regex]::Escape($cmpname)
$toolsDir = 'C:\Users\Public\LocalTools'
$destPathZip = "C:\Users\Public\ToolsOutput.zip"
Add-Type -AssemblyName System.IO.Compression
Add-Type -AssemblyName System.IO.Compression.FileSystem
$CompressionLevel = [System.IO.Compression.CompressionLevel]::Optimal
$res = Get-ChildItem -LiteralPath $toolsDir | Where-Object { $_.Name -match $pattern }
if ($res) {
try {
# Create file + zip archive instances
$stream = New-Object System.IO.FileStream($destPathZip, [System.IO.FileMode]::OpenOrCreate)
$zip = New-Object System.IO.Compression.ZipArchive($stream, [System.IO.Compression.ZipArchiveMode]::Update)
# Discover all files to archive
foreach ($file in $res | Get-ChildItem -File -Recurse) {
$source = $file.FullName
# calculate correct relative path to the archive entry
$relativeFilePath = [System.IO.Path]::GetRelativePath($toolsDir, $source)
$entryName = $relativeFilePath.Replace('\', '/')
# Make sure the first argument to CreateEntryFromFile is the ZipArchive object
[System.IO.Compression.ZipFileExtensions]::CreateEntryFromFile($zip, $source, $entryName, $CompressionLevel)
}
}
finally {
# Clean up in reverse order
$zip, $stream | Where-Object { $_ -is [System.IDisposable] } | ForEach-Object Dispose
}
}
else {
Write-Host "Nothing to Archive!"
}
Calling Dispose() on $zip will cause it to flush any un-written modifications to the underlying file stream and free any additional file handles it might have acquired, whereas calling Dispose() on the underlying file stream flushes those changes to the disk and closes the file handle.
I wanted to write a script that merges files into one, IF they were modified later than the one that is the destination. My script looks like this:
Function UnifyConfigs { param ( $destination = "C:\temp\all.txt", [Parameter()] $files )
foreach ($config in $files) {
If((Get-ChildItem $config ).LastWriteTime -gt (Get-Item $destination).LastWriteTime)
{
Clear-Content -path $destination
Set-Content -path $destination -value (Get-Content $config)
}
else {
break
}
}
}
My main problem is that the $destination file is ALWAYS modified. As far as I understand it, the file should be changed only if the modification date of $config is greater than the modification date of $destination. But now it is overwritten each time I run the script. What is wrong?
$destination as defined in your param block is a string - you need to resolve the corresponding item in the file system provider to get to the LastWriteTime value - here using Get-Item:
if((Get-Item $config).LastWriteTime -gt (Get-Item $destination).LastWriteTime)
{
# ...
}
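Applied to your function, a minimal corrected sketch would look like this (I've also dropped the redundant Clear-Content, since Set-Content overwrites the file anyway, and removed the break so every file gets checked):
Function UnifyConfigs {
    param (
        $destination = "C:\temp\all.txt",
        [Parameter()] $files
    )
    foreach ($config in $files) {
        # Compare the file system items' timestamps, not the raw path strings
        if ((Get-Item $config).LastWriteTime -gt (Get-Item $destination).LastWriteTime) {
            Set-Content -Path $destination -Value (Get-Content $config)
        }
    }
}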
My problem is that the replacement string needs to change according to the folder depth at which the designated file is located, and I don't have a clue how to get that info. I need to work with relative addresses.
I want the script to be run from 2 folder levels above the folder where all the files that need correcting are. So I've set the $path in line 1. That folder is supposed to be 'depth 0'. Here, the replacement string needs to be in its native form -> stylesheet.css.
For files in the folders one level below 'depth 0', the string for replacement needs to be prefixed with ../ once -> ../stylesheet.css.
For files in the folders two levels below 'depth 0', the string for replacement needs to be prefixed with ../ twice -> ../../stylesheet.css.
...and so on...
I'm stuck here:
$depth = $file.getDepth($path) #> totally clueless here
I need $depth to contain the number of folders under the root $path.
How can I get this? Here's the rest of my code:
$thisLocation = Get-Location
$path = Join-Path -path $thisLocation -childpath "\Files\depth0"
$match = "findThisInFiles"
$fragment = "stylesheet.css" #> string to be prefixed n times
$prefix = "../" #> prefix n times according to folder depth starting at $path (depth 0 -> don't prefix)
$replace = "" #> this will replace $match in files
$depth = 0
$htmlFiles = Get-ChildItem $path -Filter index*.html -recurse
foreach ($file in $htmlFiles)
{
$depth = $file.getDepth($path) #> totally clueless here
$replace = ""
for ($i=0; $i -lt $depth; $i++){
$replace = $replace + $prefix
}
$replace = $replace + $fragment
(Get-Content $file.PSPath) |
Foreach-Object { $_ -replace $match, $replace } |
Set-Content $file.PSPath
}
Here's a function I've written that uses Split-Path recursively to determine the depth of a path:
Function Get-PathDepth ($Path) {
$Depth = 0
While ($Path) {
Try {
$Parent = $Path | Split-Path -Parent
}
Catch {}
if ($Parent) {
$Depth++
$Path = $Parent
}
else {
Break
}
}
Return $Depth
}
Example usage:
$MyPath = 'C:\Some\Example\Path'
Get-PathDepth -Path $MyPath
Returns 3.
Unfortunately, I had to wrap Split-Path in a Try..Catch because it throws an error if you pass it the root path. This is unfortunate because it means genuine errors won't cause an exception to occur, but I can't see a way around this at the moment.
The advantage of working using Split-Path is that you should get a consistent count regardless of whether a trailing \ is used or not.
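For example, a trailing separator doesn't change the result:
Get-PathDepth -Path 'C:\Some\Example\Path\'   # still returns 3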
Here is a way to get the depth in the folder structure for all files in a location. Hope this helps point you in the right direction.
New-Item -Path "C:\Logs\Once\Test.txt" -Force
New-Item -Path "C:\Logs\Twice\Folder_In_Twice\Test.txt" -Force
$Files = Get-ChildItem -Path "C:\Logs\" -Recurse -Include *.* | Select-Object FullName
foreach ($File in $Files) {
[System.Collections.ArrayList]$Split_File = $File.FullName -split "\\"
Write-Output ($File.FullName + " -- Depth is " + $Split_File.Count)
}
The output (just for illustration) is:
C:\Logs\Once\Test.txt -- Depth is 4
C:\Logs\Twice\Folder_In_Twice\Test.txt -- Depth is 5
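To turn that raw segment count into a depth relative to a root folder (which is what the prefix logic needs), you could subtract the base path's segment count - an illustrative tweak, with $base standing in for your root:
$base = "C:\Logs"
$baseCount = ($base -split '\\').Count   # 2 segments: 'C:' and 'Logs'
foreach ($File in $Files) {
    # subtract the base segments, and 1 more for the file name itself
    $depth = ($File.FullName -split '\\').Count - $baseCount - 1
    Write-Output ("{0} -- relative depth {1}" -f $File.FullName, $depth)
}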
I've written a PowerShell function that creates a custom object and stores it in a hashtable. The issue I'm facing is retrieving that object. I need to retrieve it because it contains an array; I need to loop through that array and write it to a text file.
function removeItem {
<#Mandatory Parameters for function: it takes the path to the files/folders
to clean up and path to the hashtable.#>
Param([Parameter(Mandatory=$True)]
[string]$path,
[string]$writetoText,
[hashtable] $hashWrite=@{}
)
<#Begin if statement: Test if Path Exists#>
if (Test-Path ($path)) {
<#Begin if statement: Check if file is Directory#>
if ((Get-Item $path) -is [System.IO.DirectoryInfo]) {
$pathObj = [pscustomobject]@{
pathName = $path
Wipe = (Get-ChildItem -Path $path -Recurse)
Count = (Get-ChildItem -Path $path -Recurse | Measure-Object).Count
}
# Write-Output $pathObj.Wipe
#Add Data to Hashtable
$hashWrite.Add($pathObj.pathName,$pathObj)
foreach ($h in $hashWrite.GetEnumerator()) {
Write-Host "$($h.Name): $($h.Value)"
}
<#
[string[]]$view = $pathObj.Wipe
for ($i=0; $i -le $view.Count; $i++){
Write-Output $view[$i]
}
#>
$pathObj.pathName = $pathObj.pathName + "*"
}<#End if statement:Check if file is Directory #>
}
}
My function takes 3 arguments: a path, the text file path, and a hashtable. I create a custom object and store the path, the files/folders contained in that path, and the count. My issue is that I want to retrieve that custom object from my hashtable so that I can loop through the Wipe variable, because it's an array, and write it to the text file. If I print the hashtable to the screen, it shows the Wipe variable as System.Object[].
How do I retrieve my custom object from the hash table so I can loop through the Wipe Variable?
Possible Solution:
$pathObj = [pscustomobject]@{
pathName = $path
Wipe = (Get-ChildItem -Path $path -Recurse)
Count = (Get-ChildItem -Path $path -Recurse | Measure-Object).Count
}
#Add Data to Hashtable
$hashWrite.Add($pathObj.pathName,$pathObj)
foreach ($h in $hashWrite.GetEnumerator()) {
$read= $h.Value
[string[]]$view = $read.Wipe
for ($i=0; $i -lt $view.Count; $i++) {
Write-Output $view[$i]
}
}
Is this the ideal way of doing it?
There are uses for GetEnumerator(), but in your case you're better off just looping over the keys of the hashtable:
$hashWrite.Keys | % {
$hashWrite[$_].Wipe
} | select -Expand FullName
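For instance, to write those full paths out to the text file (assuming $writetoText holds the target path, as in your param block):
$hashWrite.Keys | ForEach-Object {
    $hashWrite[$_].Wipe
} | Select-Object -ExpandProperty FullName | Set-Content -Path $writetoText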